Sample records for reliability levels assumed

  1. Comparison of ELCAP data with lighting and equipment load levels and profiles assumed in regional models

    Taylor, Z.T.; Pratt, R.G.


    The analysis in this report was driven by two primary objectives: to determine whether and to what extent the lighting and miscellaneous equipment electricity consumption measured by metering in real buildings differs from the levels assumed in the various prototypes used in power forecasting; and to determine the reasons for those differences if, in fact, differences were found. 13 refs., 47 figs., 4 tabs.

  2. Reliability-Centric High-Level Synthesis

    Tosun, S; Arvas, E; Kandemir, M; Xie, Yuan


    The importance of addressing soft errors in both safety-critical applications and commercial consumer products is increasing, mainly due to ever-shrinking geometries, higher-density circuits, and the use of power-saving techniques such as voltage scaling and component shut-down. As a result, it is becoming necessary to treat reliability as a first-class citizen in system design. In particular, reliability decisions taken early in system design can bring significant benefits in terms of design quality. Motivated by this observation, this paper presents a reliability-centric high-level synthesis approach that addresses the soft error problem. The proposed approach maximizes the reliability of the design while observing bounds on area and performance, and makes use of our reliability characterization of hardware components such as adders and multipliers. We implemented the proposed approach, performed experiments with several designs, and compared the results with those obtained by a prior proposal.

  3. Reliability-Based Optimization and Optimal Reliability Level of Offshore Wind Turbines

    Tarp-Johansen, N.J.; Sørensen, John Dalsgaard


    Different formulations relevant for the reliability-based optimization of offshore wind turbines are presented, including different reconstruction policies in case of failure. Illustrative examples are presented and, as a part of the results, optimal reliability levels for the different failure m...

  4. Reliability-Based Optimization and Optimal Reliability Level of Offshore Wind Turbines

    Sørensen, John Dalsgaard; Tarp-Johansen, N.J.


    Different formulations relevant for the reliability-based optimization of offshore wind turbines are presented, including different reconstruction policies in case of failure. Illustrative examples are presented and, as a part of the results, optimal reliability levels for the different failure...

  5. Fast wafer-level detection and control of interconnect reliability

    Foley, Sean; Molyneaux, James; Mathewson, Alan


    Many of the technological advances in the semiconductor industry have led to dramatic increases in device density and performance in conjunction with enhanced circuit reliability. As reliability improves, the time taken to characterize particular failure modes with traditional test methods grows substantially longer. Furthermore, semiconductor customers expect low product cost and fast time-to-market. The limits of traditional reliability testing philosophies are being reached, and new approaches need to be investigated to enable the next generation of highly reliable products to be tested. This is especially true in the area of IC interconnect, where significant challenges are predicted for the next decade. A number of fast, wafer-level test methods exist for interconnect reliability evaluation. The relative abilities of four such methods to detect the quality and reliability of IC interconnect over very short test times are evaluated in this work. Four different test structure designs are also evaluated and the results are benchmarked against conventional package-level Median Time to Failure results. The Isothermal test method combined with SWEAT-type test structures is shown to be the most suitable combination for defect detection and interconnect reliability control over very short test times.

  6. Risk-based Optimization and Reliability Levels of Coastal Structures

    Sørensen, John Dalsgaard; Burcharth, Hans F.


    Identification of optimum reliability levels for coastal structures is considered. A class of breakwaters is considered where no human injuries can be expected in cases of failure. The optimum reliability level is identified by minimizing the total costs over the service life of the structure, including building costs, maintenance and repair costs, downtime costs and decommission costs. Different formulations are considered. Stochastic models are presented for the main failure modes for rubble mound breakwaters without superstructures, typically used for outer protection of basins. The influence on the minimum-cost reliability levels is investigated for different values of the real rate of interest, the service lifetime, the downtime costs due to malfunction and the decommission costs...
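The cost-minimization idea in this abstract can be sketched numerically. The following is a hypothetical illustration, not the authors' model: building cost is assumed linear in a reliability index beta, the annual failure probability is taken as Phi(-beta), and failure costs are discounted over the service life.

```python
import math

def total_cost(beta, rate=0.03, life=50, c0=10.0, c1=2.0, c_fail=100.0):
    """Expected lifetime cost for a target reliability index beta.

    Building cost is assumed linear in beta; the annual failure
    probability is Pf = Phi(-beta); failure costs are discounted
    over the service life. All constants are illustrative.
    """
    pf = 0.5 * math.erfc(beta / math.sqrt(2))        # Phi(-beta)
    build = c0 + c1 * beta                           # a safer design costs more
    discount = sum((1 + rate) ** -t for t in range(1, life + 1))
    return build + c_fail * pf * discount

# Coarse scan for the minimum-cost reliability level.
betas = [b / 10 for b in range(10, 51)]
best = min(betas, key=total_cost)
```

The trade-off is the one the abstract describes: too low a reliability level inflates expected failure costs, too high a level inflates building costs, and the optimum sits in between.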

  7. Risk-based Optimization and Reliability Levels of Coastal Structures

    Sørensen, John Dalsgaard; Burcharth, H. F.

    Identification of optimum reliability levels for coastal structures is considered. A class of breakwaters is considered where no human injuries can be expected in cases of failure. The optimum reliability level is identified by minimizing the total costs over the service life of the structure, including building costs, maintenance and repair costs, downtime costs and decommission costs. Different formulations are considered. Stochastic models are presented for the main failure modes for rubble mound breakwaters without superstructures, typically used for outer protection of basins. The influence on the minimum-cost reliability levels is investigated for different values of the real rate of interest, the service lifetime, the downtime costs due to malfunction and the decommission costs...

  8. Manipulative physiotherapists can reliably palpate nominated lumbar spinal levels.

    Downey, B J; Taylor, N F; Niere, K R


    Palpating a nominated spinal level is a prerequisite to more complex tasks such as palpating the level most likely to be the source of the patient's symptoms. The aim of this study was to investigate the reliability of physiotherapists with a post-graduate qualification in manipulation (manipulative physiotherapists) in palpating the lumbar spines of patients in a clinical setting. Three pairs of manipulative physiotherapists palpated the randomly nominated lumbar spinal levels of 20 patients presenting to their practices for treatment of low-back pain. Each therapist marked the skin overlying the spinous process of the nominated spinal level with an ultraviolet pen and these marks were transcribed onto transparencies for analysis. The therapists obtained an overall weighted kappa of 0.92, indicating almost perfect agreement for locating the nominated spinal level. The results of this study indicate that manipulative physiotherapists can reliably palpate nominated lumbar spinal levels, suggesting that further training in spinal therapy enhances the palpatory skills of physiotherapists.

  9. Web life: If We Assume


    The title If We Assume refers to physicists' habit of making back-of-the-envelope calculations, but do not let the allusion to assumptions fool you: there are precious few spherical cows rolling around frictionless surfaces in this corner of the Internet.

  10. Partial Safety Factors and Target Reliability Level in Danish Structural Codes

    Sørensen, John Dalsgaard; Hansen, J. O.; Nielsen, T. A.


    The partial safety factors in the newly revised Danish structural codes have been derived using a reliability-based calibration. The calibrated partial safety factors result in the same average reliability level as in the previous codes, but a much more uniform reliability level has been obtained...

  11. The Reliability of Highly Elevated CA 19-9 Levels

    B. R. Osswald


    CA 19-9 is used as a tumour marker of the upper gastrointestinal tract. However, extremely elevated CA 19-9 levels are also found in patients with benign diseases. Cholestasis was present in 97.1% of patients with highly elevated CA 19-9, independent of their primary disease. 50% of patients with non-malignant diseases and increased CA 19-9 levels showed liver cirrhosis, cholecystitis, pancreatitis and/or hepatitis. In 8.8% no explanation was found for the extremely high CA 19-9 level. The results provide evidence of different factors influencing the CA 19-9 level.

  12. Assessing Reliability of Two Versions of Vocabulary Levels Tests in Iranian Context

    Bayazidi, Aso; Saeb, Fateme


    This study examined the equivalence and reliability of the two versions of the Vocabulary Levels Test in an Iranian context. This study was motivated by the fact that the Vocabulary Levels test is increasingly being used in Iran for both research and pedagogical purposes without having been checked for validity and reliability in this context. The…

  13. Reliability of Level Three Valuations and Credit Crisis

    Arber Hoti


    This research paper evaluates level-three valuations under FAS 157 and their impact on investors, auditors' work, and valuation. The objective of this research is to demonstrate that fair value measurements should not be suspended: the standards provide for measurement of fair value in all market conditions. Level 3 measurement, or mark-to-model, is therefore an answer for many issuers that are unsure how to measure their assets and liabilities at fair value. The paper concludes that fair value measurement has not caused the current crisis and has no pro-cyclical effect, and it suggests several recommendations for policy makers and regulators.

  14. System-level Reliability Assessment of Power Stage in Fuel Cell Application

    Zhou, Dao; Wang, Huai; Blaabjerg, Frede


    the lifetime model and the stress levels, the Weibull distribution of the power semiconductors lifetime can be obtained by using Monte Carlo analysis. Afterwards, the reliability block diagram can further be adopted to evaluate the reliability of the power stage based on the estimated power semiconductor...

  15. Urine suPAR levels compared with plasma suPAR levels as predictors of post-consultation mortality risk among individuals assumed to be TB-negative: a prospective cohort study

    Rabna, Paulo; Andersen, Andreas; Wejse, Christian;


    -suPAR), thereby exploring the possibility of replacing the blood sample with an easy obtainable urine sample. We enrolled 1,007 adults, older than 15 years of age, with a negative TB diagnosis between April 2004 and December 2006. Levels of U-suPAR and P-suPAR were available in 863 individuals. U......-suPAR was measured using a commercial ELISA (suPARnostic®). We found that U-suPAR carried significant prognostic information on mortality for HIV-infected subjects with an area under the ROC curve of 0.75. For HIV-negative individuals, little or no prognostic effect was observed. However, in both HIV positives...... and negatives, the predictive effect of U-suPAR was found to be inferior to that of P-suPAR....

  16. A level set method for reliability-based topology optimization of compliant mechanisms


    Based on the level set model and the reliability theory, a numerical approach of reliability-based topology optimization for compliant mechanisms with multiple inputs and outputs is presented. A multi-objective topology optimal model of compliant mechanisms considering uncertainties of the loads, material properties, and member geometries is developed. The reliability analysis and topology optimization are integrated in the optimal iterative process. The reliabilities of the compliant mechanisms are evaluated by using the first order reliability method. Meanwhile, the problem of structural topology optimization is solved by the level set method which is flexible in handling complex topological changes and concise in describing the boundary shape of the mechanism. Numerical examples show the importance of considering the stochastic nature of the compliant mechanisms in the topology optimization process.

  17. Implementation and Analysis of Probabilistic Methods for Gate-Level Circuit Reliability Estimation

    WANG Zhen; JIANG Jianhui; YANG Guang


    The development of VLSI technology has dramatically improved the performance of integrated circuits. However, it also brings new reliability challenges: integrated circuits have become more susceptible to soft errors. It is therefore imperative to study circuit reliability under soft errors. This paper implements three probabilistic methods (two-pass, error propagation probability, and probabilistic transfer matrix) for estimating gate-level circuit reliability on a PC. The functions and performance of these methods are compared in experiments using ISCAS85 and 74-series circuits.
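The probabilistic transfer matrix (PTM) method named in this abstract can be illustrated on a toy circuit. This is a minimal sketch, not the paper's implementation: each gate is assumed to produce its correct output with probability q, and the PTM of a two-gate chain is the product of the gate PTMs.

```python
def gate_ptm(itm_bits, q):
    """PTM of a gate that gives its correct output with probability q.
    itm_bits[i] is the ideal output bit for input combination i."""
    return [[q if out == bit else 1 - q for out in (0, 1)] for bit in itm_bits]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

q = 0.99
nand = gate_ptm([1, 1, 1, 0], q)   # inputs 00, 01, 10, 11
inv = gate_ptm([1, 0], q)          # inputs 0, 1
circuit = matmul(nand, inv)        # NAND followed by an inverter = AND

# Circuit reliability: probability of the ideal (AND) output,
# averaged over uniformly distributed inputs.
ideal_and = [0, 0, 0, 1]
rel = sum(circuit[i][ideal_and[i]] for i in range(4)) / 4
```

For this chain the result reduces to q² + (1−q)²: the circuit is correct when both gates are correct or both flip.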

  18. Assessing Reliability of Two Versions of Vocabulary Levels Tests in Iranian Context

    Aso Bayazidi


    This study examined the equivalence and reliability of the two versions of the Vocabulary Levels Test in an Iranian context. This study was motivated by the fact that the Vocabulary Levels Test is increasingly being used in Iran for both research and pedagogical purposes without having been checked for validity and reliability in this context. The equivalence and reliability of the two versions of the test were examined through the parallel-form approach to reliability in Classical True Score theory. Seventy-five intermediate learners of English as a foreign language at the Iran Language Institute took the two versions of the test, with a one-week interval between the two administrations, in a counterbalanced fashion. To examine the equivalence of the two versions, the means and variances of the scores obtained for the two tests were compared using a paired-sample t-test and one-way ANOVA, respectively. The results of the analyses indicated that the difference between the means of the two versions was significant, and the two versions cannot be considered parallel forms. To assess the reliability of the two versions, the correlation between the scores obtained from them was estimated using the Pearson product-moment correlation. The results of the analyses showed that the two versions are highly correlated and are reliable tests. It is concluded that the two versions should not be treated as equivalent in longitudinal and gain-score studies.
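The parallel-form approach described here boils down to comparing the means and correlating the two sets of scores. A minimal sketch with invented scores (the study's data are not reproduced here):

```python
def pearson(x, y):
    """Pearson product-moment correlation, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Invented scores of the same learners on two test versions.
v1 = [18, 22, 25, 30, 14, 27, 21, 24]
v2 = [20, 25, 27, 33, 17, 30, 23, 28]

r = pearson(v1, v2)  # parallel-form reliability estimate
mean_diff = sum(b - a for a, b in zip(v1, v2)) / len(v1)  # systematic shift
```

These invented data mimic the study's outcome: a high correlation (reliable tests) alongside a systematic mean difference (not parallel forms).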

  19. System-level Reliability Assessment of Power Stage in Fuel Cell Application

    Zhou, Dao; Wang, Huai; Blaabjerg, Frede


    High-efficiency, low-pollution fuel cell stacks are emerging as strong candidates for the power solution used in mobile base stations. In backup power applications, availability and reliability hold the highest priority. This paper considers reliability metrics from the component level to the system level for the power stage used in a fuel cell application. It starts with an estimation of the annual accumulated damage for the key power electronic components according to the real mission profile of the fuel cell system. Then, considering the parameter variations in both... reliability. In a case study of a 5 kW fuel cell power stage, the parameter variations of the lifetime model prove that the exponential factor of the junction temperature fluctuation is the most sensitive parameter. Besides, if a 5-out-of-6 redundancy is used, it is concluded that both the B10 and the B1 system...
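The 5-out-of-6 redundancy mentioned at the end can be quantified with the standard k-out-of-n reliability formula, assuming independent, identically reliable modules (an assumption the paper's mission-profile analysis refines). The module reliability below is illustrative:

```python
from math import comb

def k_out_of_n(k, n, r):
    """Reliability of a k-out-of-n system of independent components,
    each with reliability r: at least k of the n must survive."""
    return sum(comb(n, i) * r**i * (1 - r) ** (n - i) for i in range(k, n + 1))

# A 5-out-of-6 power stage built from 0.95-reliable modules.
r_sys = k_out_of_n(5, 6, 0.95)
```

With these numbers the redundant stage is more reliable than any single module, which is the point of the redundancy.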

  20. Determining the optimum length of a bridge opening with a specified reliability level of water runoff

    Evdokimov Sergey


    Current trends in construction are aimed at ensuring the reliability and safety of engineering facilities. According to the latest government regulations for construction, a scientific approach to engineering research, design, construction and operation of construction projects is a key priority. The reliability of a road depends on a great number of factors and on the statistical composition of their connections (sequential and parallel). A part of a road with such man-made structures as a bridge or a pipe is considered as a system with sequential element connection; the overall reliability is the product of the reliabilities of these elements. The parameters of engineering structures defined by analytical dependences are highly volatile because of the inaccuracy of the defining factors. Each physical parameter is statistically variable, as evaluated by the coefficient of variation of its values, and this causes fluctuations in the parameters of engineering structures. Their study may result in changes to general and particular design rules in order to increase reliability. The paper gives the grounds for these changes by the example of a bridge, allowing its optimum length to be calculated with a specified reliability level of water runoff under the bridge.
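The series-system argument in this abstract (overall reliability as the product of element reliabilities) takes only a few lines; the element values below are invented for illustration:

```python
from functools import reduce

def series_reliability(elements):
    """Series system: every element must work, so reliabilities multiply."""
    return reduce(lambda a, b: a * b, elements, 1.0)

# Road section with a bridge and a pipe (illustrative element reliabilities).
r_road = series_reliability([0.99, 0.97, 0.995])
```

The product is always below the weakest element's reliability, which is why the bridge opening dominates the design.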


    Erhan F.


    This article analyzes the influence of the short-circuit current level on electrical equipment. A mathematical model is designed that allows modeling the operational reliability of electrical equipment installed in the nodes of power systems.

  2. Assumed PDF modeling in rocket combustor simulations

    Lempke, M.; Gerlinger, P.; Aigner, M.


    In order to account for the interaction between turbulence and chemistry, a multivariate assumed PDF (probability density function) approach is used to simulate a model rocket combustor with finite-rate chemistry. The reported test case is the PennState preburner combustor with a single shear coaxial injector. Experimental data for the wall heat flux are available for this configuration. Unsteady RANS (Reynolds-averaged Navier-Stokes) simulation results with and without the assumed PDF approach are analyzed and compared with the experimental data. Both calculations show good agreement with the experimental wall heat flux data. Significant changes due to the assumed PDF approach can be observed in the radicals, e.g., the OH mass fraction distribution, while the effect on the wall heat flux is insignificant.

  3. Improving the AODV Protocol to Satisfy the Required Level of Reliability for Home Area Networks

    Hossein Jafari Pozveh


    For decades, the structure of existing power grids has not changed. It is an old structure that depends heavily on fossil fuel as an energy source, and in the future this is likely to be critical in the field of energy. To solve these problems and to make optimal use of energy resources, a new concept is proposed, called the Smart Grid. The Smart Grid is an electric power distribution automation system which can provide a two-way flow of electricity and information between power plants and consumers. The Smart Grid communications infrastructure consists of different network components, such as the Home Area Network (HAN), Neighborhood Area Network (NAN) and Wide Area Network (WAN). Achieving the required level of reliability in the transmission of information to all sections, including the HAN, is one of the main objectives in the design and implementation of the Smart Grid. This study offers a routing protocol that, by improving the AODV routing protocol under the parameters and constraints of the HAN, achieves the level of reliability required for data transmission in this network. The improvements include making the AODV routing protocol table-driven, extending it to compute multiple paths in a single route discovery, simplifying it, and accounting for the effect of HAN parameters. The results of the NS2 simulation indicate that applying this improved routing protocol in the HAN satisfies the required level of reliability of the network, which is over 98%.

  4. The Yo-Yo intermittent recovery test level 1 is reliable in young high-level soccer players.

    Deprez, D; Fransen, J; Lenoir, M; Philippaerts, Rm; Vaeyens, R


    The aim of the study was to investigate test reliability of the Yo-Yo intermittent recovery test level 1 (YYIR1) in 36 high-level youth soccer players, aged between 13 and 18 years. Players were divided into three age groups (U15, U17 and U19) and completed three YYIR1 in three consecutive weeks. Pairwise comparisons were used to investigate test reliability (for distances and heart rate responses) using technical error (TE), coefficient of variation (CV), intra-class correlation (ICC) and limits of agreement (LOA) with Bland-Altman plots. The mean YYIR1 distances for the U15, U17 and U19 groups were 2024 ± 470 m, 2404 ± 347 m and 2547 ± 337 m, respectively. The results revealed that the TEs varied between 74 and 172 m, CVs between 3.0 and 7.5%, and ICCs between 0.87 and 0.95 across all age groups for the YYIR1 distance. For heart rate responses, the TEs varied between 1 and 6 bpm, CVs between 0.7 and 4.8%, and ICCs between 0.73 and 0.97. The small ratio LOA revealed that any two YYIR1 performances in one week will not differ by more than 9 to 28% due to measurement error. In summary, the YYIR1 performance and the physiological responses have proven to be highly reliable in a sample of Belgian high-level youth soccer players, aged between 13 and 18 years. The demonstrated high level of intermittent endurance capacity in all age groups may be used for comparison of other prospective young soccer players.
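The reliability statistics used in this study (technical error and coefficient of variation) are easy to reproduce. A minimal sketch with invented distances, not the study's data:

```python
from statistics import mean, stdev

def technical_error(trial1, trial2):
    """Typical (technical) error between two trials: the sample SD of the
    pairwise differences divided by sqrt(2)."""
    diffs = [b - a for a, b in zip(trial1, trial2)]
    return stdev(diffs) / 2 ** 0.5

# Invented YYIR1 distances (m) for five players on two test days.
day1 = [2040, 2400, 2160, 2560, 1960]
day2 = [2080, 2320, 2240, 2600, 2000]

te = technical_error(day1, day2)
cv = 100 * te / mean(day1 + day2)  # coefficient of variation, %
```

A CV of a few percent, as here, is in the range the study reports for YYIR1 distance.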




    Statement of the problem. The essence of contractual obligations is their timely performance while ensuring the intended use of the investment funds provided for in the contract. The longer the works under the contract take, the higher the probability that these terms will be violated. Analysis of construction projects over the past decade has shown that the situation has not changed significantly: according to [8], contemporary data on the construction of a number of objects show that the larger the object and, accordingly, the longer the construction period, the greater the deviation of the actual construction terms from the planned ones, up to 50...100% in some cases. Comparison of these data shows that the problem of ensuring reliable operation of a construction company at the stage of implementing a specific project remains relevant. Analysis of recent research. Analysis of research in the field of the rational justification of organizational and technological reliability values shows that its range lies between 0.35 and 0.9, which indicates the absence of a well-founded approach to this issue. Of course, more reliable implementation of the plan requires a certain amount of material and financial resources, but the management process involves another important resource that the subject of management must possess: information. The purpose and objectives of the work. The aim of this work is to study the rational level of organizational and technological reliability (OTR) based on an analysis of the need for this information. To achieve this goal, the following tasks were set and solved: to establish the relationship between OTR and the required amount of information; to determine the influence of the accuracy of determining the current state of the controlled parameter on the level of information; to justify the

  6. Reliability of Physician-Level Measures of Patient Experience in Primary Care.

    Fenton, Joshua J; Jerant, Anthony; Kravitz, Richard L; Bertakis, Klea D; Tancredi, Daniel J; Magnan, Elizabeth M; Franks, Peter


    Patient experience measures are widely used to compare performance at the individual physician level. To assess the impact of unmeasured patient characteristics on visit-level patient experience measures and the sample sizes required to reliably measure patient experience at the primary care physician (PCP) level. Repeated cross-sectional design. Academic family medicine practice in California. One thousand one hundred forty-one adult patients attending 1319 visits with 56 PCPs (including 45 resident and 11 faculty physicians). Post-visit patient experience surveys including patient measures used for standard adjustment as recommended by the Consumer Assessment of Healthcare Providers and Systems (CAHPS) Consortium and additional patient characteristics used for expanded adjustment (including attitudes toward healthcare, global life satisfaction, patient personality, current symptom bother, and marital status). The amount of variance in patient experience explained doubled with expanded adjustment for patient characteristics compared with standard adjustment (R(2) = 20.0% vs. 9.6%, respectively). With expanded adjustment, the amount of variance attributable to the PCP dropped from 6.1% to 3.4% and the required sample size to achieve a reliability of 0.90 in the physician-level patient experience measure increased from 138 to 255 patients per physician. After ranking of the 56 PCPs by average patient experience, 8 were reclassified into or out of the top or bottom quartiles of average experience with expanded as compared to standard adjustment [14.3% (95% CI: 7.0-25.2%)]. Widely used methods for measuring PCP-level patient experience may not account sufficiently for influential patient characteristics. If methods were adapted to account for these characteristics, patient sample sizes for reliable between-physician comparisons may be too large for most practices to obtain.
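The reported sample sizes are consistent with the Spearman-Brown prophecy formula applied to the physician-level variance shares; the sketch below assumes that is the calculation behind them:

```python
def patients_needed(icc, target=0.90):
    """Spearman-Brown prophecy: patients per physician needed so that the
    physician-level mean reaches the target reliability."""
    return target / (1 - target) * (1 - icc) / icc

n_standard = patients_needed(0.061)  # 6.1% of variance at the PCP level
n_expanded = patients_needed(0.034)  # 3.4% after expanded adjustment
```

With the abstract's variance shares this lands at roughly 138 and 256 patients per physician, close to the reported 138 and 255.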

  7. The Disquietude of Duty Assuming Kant

    Max Maureira Pacheco


    For Kant, the moral duty is determined universally, that is, on account of its form, in the moral norm. However, the moral norm is opposed by particularity, determined by what is not the norm itself, which is the origin of singularity. The singularized norm is opposed, in experience, by its negation in individual cases. To assume Kant demands the reconciliation of the singular, manifested in cases, with the universal. This article deals with this question, demonstrating, above all, the practical difficulties linked to moral experience in its totality.

  8. Resource allocation: sequential data collection for reliability analysis involving systems and component level data

    Anderson-Cooke, Christine M. [Los Alamos National Laboratory]


    In analyzing the reliability of complex systems, several types of data, from full-system tests to component-level tests, are commonly available and used. After a preliminary analysis, additional resources may be available to collect new data. The goal of resource allocation is to identify the best new data to collect to maximally improve the prediction of system reliability. While several definitions of 'maximally improve' are possible, we focus on reducing the uncertainty, or the width of the uncertainty interval, for the prediction of system reliability at a user-specified age(s). In this paper, we present an algorithm that allows us to estimate the anticipated improvement to the analysis with the addition of new data, based on current understanding of all of the statistical model parameters. This quantitative assessment of the anticipated improvement can be helpful to justify the benefits of collecting new data. Additionally, by comparing different potential allocations, it is possible to determine what new data should be collected to improve our understanding of the response. This optimization takes into account the relative cost of different data types and can be based on flexible allocation options, or subject to logistical constraints.

  9. Stability and Reliability of Plasma Level of Lipid Biomarkers and Their Correlation with Dietary Fat Intake

    Sang-Ah Lee


    The reliability and stability of plasma lipid biomarkers and their association with dietary fat intake were evaluated among 48 subjects who were randomly chosen from the participants of a validation study of the population-based cohort, the Shanghai Men's Health Study (SMHS). Four spot blood samples, one taken each season, were measured for total cholesterol, triglyceride, HDL-cholesterol, and LDL-cholesterol levels. The reliability and stability of these measurements were assessed by intraclass correlation coefficients (ICC) and by the correlations between a randomly chosen measurement and the mean of measurements across seasons using a bootstrap approach. The median levels for total cholesterol, triglycerides, HDL-cholesterol, and LDL-cholesterol were 177.5, 164.5, 41.0, and 102.5 mg/dl, respectively. The ICCs of the biomarkers ranged from 0.58 (LDL-cholesterol) to 0.83 (HDL-cholesterol). The correlations between randomly chosen spot measurements and the mean measurement were 0.91, 0.86, 0.93, and 0.83 for total cholesterol, triglycerides, HDL-cholesterol, and LDL-cholesterol, respectively. The correlations of lipid biomarkers with dietary fat intake and other lifestyle factors were comparable to previous reports. In conclusion, this study suggests that measurements of lipid biomarkers from a single spot blood sample are a good representation of the average blood levels of these biomarkers in the study population and could be a useful tool for epidemiological studies.

  10. Reliability and Characteristics of Wafer-Level Chip-Scale Packages under Current Stress

    Chen, Po-Ying; Kung, Heng-Yu; Lai, Yi-Shao; Hsiung Tsai, Ming; Yeh, Wen-Kuan


    In this work, we present a novel approach and method for elucidating the characteristics of wafer-level chip-scale packages (WLCSPs) in electromigration (EM) tests. The die in a WLCSP is directly attached to the substrate via a soldered interconnect. As the die area available for power shrinks, the solder bump volume also shrinks, which increases the electron density in the interconnect. The bump current density now approaches 10⁶ A/cm², at which point EM becomes a significant reliability issue. As is known, EM failure depends on numerous factors, including the working temperature and the under-bump metallization (UBM) thickness. A new interconnection geometry has been adopted extensively, with moderate success, to overcome larger mismatches between the displacements of components during current and temperature changes. Both the environments and the testing parameters for WLCSPs are increasingly demanding. Although failure mechanisms are considered to have been eliminated or at least made manageable, new package technologies are again challenging process, integrity and reliability. WLCSP technology was developed to eliminate the need for encapsulation and to ensure compatibility with surface-mount technology (SMT). The package has good handling properties but now faces serious reliability problems. In this work, we investigated the reliability of a WLCSP subjected to different accelerated current stressing conditions at a fixed ambient temperature of 125 °C. A very strong correlation exists between the mean time to failure (MTTF) of the WLCSP test vehicle and the mean current density carried by a solder joint. A series of current densities was applied to the WLCSP architecture; Black's power law was employed in a failure mode simulation. Additionally, scanning electron microscopy (SEM) was used to determine the differences between high- and low-current-density failure modes.
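Black's power law, cited here for the MTTF analysis, is compact enough to show directly. A sketch with illustrative constants (the prefactor A, current exponent n and activation energy Ea are fitted per technology and are assumptions here):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def black_mttf(j, temp_k, a=1.0, n=2.0, ea=0.9):
    """Black's power law: MTTF = A * j**(-n) * exp(Ea / (k*T)).
    A, n and Ea are fitted constants; the values here are illustrative."""
    return a * j ** -n * math.exp(ea / (K_B * temp_k))

# With n = 2, doubling the current density at 125 degC (398.15 K)
# cuts the MTTF by a factor of 4.
ratio = black_mttf(1e6, 398.15) / black_mttf(2e6, 398.15)
```

The same function also captures the temperature dependence the abstract mentions: raising the working temperature shortens the MTTF through the Arrhenius term.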

  11. A reliable procedure for the analysis of multiexponential transients that arise in deep level transient spectroscopy

    Hanine, M.; Masmoudi, M.; Marcon, J. [Laboratoire Electronique Microtechnologie et Instrumentation (LEMI), University of Rouen, 76821 Mont Saint Aignan, France]


    In this paper, a reliable procedure, which allows a fine as well as a robust analysis of the deep defects in semiconductors, is detailed. In this procedure where capacitance transients are considered as multiexponential and corrupted with Gaussian noise, our new method of analysis, the Levenberg-Marquardt deep level transient spectroscopy (LM-DLTS) is associated with two other high-resolution techniques, i.e. the Matrix Pencil which provides an approximation of exponential components contained in the capacitance transients and Prony's method recently revised by Osborne in order to set the initial parameters.

  12. Rorschach intercoder reliability for protocol-level comprehensive system variables in an international sample.

    Sahly, Jennifer; Shaffer, Thomas W; Erdberg, Philip; O'Toole, Siobhan


    This study examines the intercoder reliability of Rorschach Comprehensive System (CS; Exner, 2001) protocol-level variables. A large international sample was combined to obtain intercoder agreement for 489 Rorschach protocols coded using the CS. Intercoder agreement was calculated using an Iota coefficient, a statistical coefficient similar to kappa that is corrected for chance. Iota values for the variables analyzed ranged from .31 to 1.00, with 2 in the poor range of agreement, 4 in the fair range, 25 in the good range, and 116 in the excellent range of agreement. Discrepancies between variables are discussed.

  13. Improving reliability of non-volatile memory technologies through circuit level techniques and error control coding

    Yang, Chengen; Emre, Yunus; Cao, Yu; Chakrabarti, Chaitali


    Non-volatile resistive memories, such as phase-change RAM (PRAM) and spin-transfer torque RAM (STT-RAM), have emerged as promising candidates because of their fast read access, high storage density, and very low standby power. Unfortunately, in scaled technologies, high storage density comes at the price of lower reliability. In this article, we first study in detail the causes of errors for PRAM and STT-RAM. We see that while for multi-level cell (MLC) PRAM the errors are due to resistance drift, in STT-RAM they are due to process variations and variations in the device geometry. We develop error models to capture these effects and propose techniques based on tuning circuit-level parameters to mitigate some of these errors. Circuit-level techniques alone are not sufficient for reliable memory operation, however, and so we propose error control coding (ECC) techniques that can be used on top of them. We show that for STT-RAM, a combination of voltage boosting and write-pulse-width adjustment at the circuit level, followed by a BCH-based ECC scheme, can reduce the block failure rate (BFR) to 10^-8. For MLC PRAM, a combination of threshold resistance tuning and a BCH-based product code ECC scheme can achieve the same target BFR of 10^-8. The product code scheme is flexible; it allows migration to a stronger code to guarantee the same target BFR when the raw bit error rate increases with the number of programming cycles.
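
The way ECC strength drives the block failure rate can be sketched with binomial arithmetic; the block length and correction capability below are hypothetical, not the code parameters used in the article:

```python
from math import comb

def block_failure_rate(n_bits, t_correctable, raw_ber):
    """Probability that a block of n_bits contains more than t_correctable
    bit errors (i.i.d. errors at rate raw_ber), i.e. the ECC decoder fails."""
    p_ok = sum(comb(n_bits, i) * raw_ber**i * (1 - raw_ber)**(n_bits - i)
               for i in range(t_correctable + 1))
    return 1.0 - p_ok

# Hypothetical example: a 512-bit block protected by a 1-error- vs a
# 4-error-correcting BCH code at the same raw bit error rate.
bfr_weak = block_failure_rate(512, 1, 1e-4)
bfr_strong = block_failure_rate(512, 4, 1e-4)
```

Migrating to a stronger code (larger t) at a higher raw BER is exactly the flexibility the product-code scheme exploits.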

  14. Measuring the anaesthesia clinical learning environment at the department level is feasible and reliable.

    Castanelli, D J; Smith, N A


    The learning environment describes the context and culture in which trainees learn. In order to establish the feasibility and reliability of measuring the anaesthetic learning environment in individual departments we implemented a previously developed instrument in hospitals across New South Wales. We distributed the instrument to trainees from 25 anaesthesia departments and supplied summarized results to individual departments. Exploratory and confirmatory factor analyses were performed to assess internal structure validity and generalizability theory was used to calculate reliability. The number of trainees required for acceptable precision in results was determined using the standard error of measurement. We received 172 responses (59% response rate). Suitable internal structure validity was confirmed. Measured reliability was acceptable (G-coefficient 0.69) with nine trainees per department. Eight trainees were required for a 95% confidence interval of plus or minus 0.25 in the mean total score. Eight trainees as assessors also allow a 95% confidence interval of approximately plus or minus 0.3 in the subscale mean scores. Results for individual departments varied, with scores below the expected level recorded on individual subscales, particularly the 'teaching' subscale. Our results confirm that, using this instrument, individual departments can obtain acceptable precision in results with achievable trainee numbers. Additionally, with the exception of departments with few trainees, implementation proved feasible across a training region. Repeated use would allow departments or accrediting bodies to monitor their individual learning environment and the impact of changes such as the introduction of new curricular elements, or local initiatives to improve trainee experience.
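
The link between the number of trainee raters and the confidence-interval width rests on standard-error arithmetic. The score standard deviation below is an assumed placeholder, not the study's value:

```python
import math

def raters_needed(sd, halfwidth, z=1.96):
    """Smallest n with z * sd / sqrt(n) <= halfwidth, i.e. enough raters
    for a 95% CI of +/- halfwidth around the mean score."""
    return math.ceil((z * sd / halfwidth) ** 2)

# Hypothetical spread of trainee scores on the instrument's scale.
n_for_quarter_point = raters_needed(sd=0.35, halfwidth=0.25)
```

Loosening the required precision (a wider half-width) reduces the number of trainees needed, which is why subscale means tolerate slightly fewer raters than the total score at the same nominal precision.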

  15. Reliable groundwater levels: failures and lessons learned from modeling and monitoring studies

    Van Lanen, Henny A. J.


    Adequate management of groundwater resources requires an a priori assessment of the impacts of intended groundwater abstractions. Usually, groundwater flow modeling is used to simulate the influence of the planned abstraction on groundwater levels, and model performance is tested using observed groundwater levels. Where a multi-aquifer system occurs, groundwater levels in the different aquifers have to be monitored through observation wells with filters at different depths, i.e. above the impermeable clay layer (phreatic water level) and beneath it (artesian aquifer level). A reliable artesian level can only be measured if the space between the outer wall of the borehole (vertical narrow shaft) and the observation well is refilled with impermeable material at the correct depth (post-drilling phase) to prevent a vertical hydraulic connection between the artesian and phreatic aquifers. We encountered a case of improper refilling, which made it impossible to monitor reliable artesian aquifer levels: at the location of the artesian observation well, a freely overflowing spring was observed, which implied that water leakage from the artesian aquifer affected the artesian groundwater level. Careful checking of the monitoring sites in a study area is therefore a prerequisite for using observations in model performance assessment. After model testing, the groundwater model is forced with proposed groundwater abstractions (sites, extraction rates). The abstracted groundwater volume is compensated by a reduction of groundwater flow to the drainage network, and the model simulates the associated groundwater tables. The drawdown of the groundwater level is calculated by comparing the simulated groundwater levels with and without groundwater abstraction. In lowland areas, such as vast areas of the Netherlands, the groundwater model has to consider a variable drainage network, which means that small streams only carry water during the wet winter season and run dry during the summer. The main streams drain groundwater

  16. Reliability level III method in design of square pillar resting on weak floor stratum

    Pytel, W.M. (Southern Illinois University, Carbondale, IL (USA). Dept. of Mining Engineering)


    Current methods of designing pillars resting on weak floor strata involve only a deterministic, conventional safety factor calculation, based on material parameters treated as mean values taken from observations. Where high parameter variability occurs, these methods may lead to fatal design errors resulting in excessive settlement and roof falls. Therefore, to account for the influence of parameter quality, a new approach based on the reliability level III method was developed. Consideration was given to identifying the relative importance of the system parameters, and to the density function for the safety factor treated as a random variable. A design procedure involving the floor probability of failure is illustrated by numerical examples. 17 refs., 8 figs., 1 tab.
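
A level III (fully probabilistic) reliability calculation of the kind described treats the safety factor as a random variable and evaluates the probability of failure directly. A minimal Monte Carlo sketch, with hypothetical bearing-capacity and load distributions (not the paper's floor-strata model):

```python
import random
from statistics import NormalDist

def prob_failure_mc(mu_cap, sd_cap, mu_load, sd_load, n=200_000, seed=42):
    """Level III reliability: estimate P(safety factor < 1) = P(capacity < load)
    by sampling full parameter distributions instead of using mean values."""
    rng = random.Random(seed)
    fails = sum(rng.gauss(mu_cap, sd_cap) < rng.gauss(mu_load, sd_load)
                for _ in range(n))
    return fails / n

def prob_failure_exact(mu_cap, sd_cap, mu_load, sd_load):
    """Closed form for the normal/normal case: P(C - L < 0) = Phi(-beta)."""
    beta = (mu_cap - mu_load) / (sd_cap**2 + sd_load**2) ** 0.5
    return NormalDist().cdf(-beta)

pf_mc = prob_failure_mc(10.0, 2.0, 6.0, 1.0)
pf_exact = prob_failure_exact(10.0, 2.0, 6.0, 1.0)
```

A deterministic design using only the means (safety factor 10/6 ≈ 1.67) would report no problem, while the probabilistic view exposes a non-negligible failure probability once parameter scatter is included.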

  17. Development of microsatellite markers for the rapid and reliable genotyping of Brettanomyces bruxellensis at strain level.

    Albertin, Warren; Panfili, Aurélie; Miot-Sertier, Cécile; Goulielmakis, Aurélie; Delcamp, Adline; Salin, Franck; Lonvaud-Funel, Aline; Curtin, Chris; Masneuf-Pomarede, Isabelle


    Although many yeasts are useful for food production and beverage, some species may cause spoilage with important economic loss. This is the case of Dekkera/Brettanomyces bruxellensis, a contaminant species that is mainly associated with fermented beverages (wine, beer, cider and traditional drinks). To better control Brettanomyces spoilage, rapid and reliable genotyping methods are necessary to determine the origins of the spoilage, to assess the effectiveness of preventive treatments and to develop new control strategies. Despite several previously published typing methods, ranging from classical molecular methods (RAPD, AFLP, REA-PFGE, mtDNA restriction analysis) to more engineered technologies (infrared spectroscopy), there is still a lack of a rapid, reliable and universal genotyping approach. In this work, we developed eight polymorphic microsatellites markers for the Brettanomyces/Dekkera bruxellensis species. Microsatellite typing was applied to the genetic analysis of wine and beer isolates from Europe, Australia and South Africa. Our results suggest that B. bruxellensis is a highly disseminated species, with some strains isolated from different continents being closely related at the genetic level. We also focused on strains isolated from two Bordeaux wineries on different substrates (grapes, red wines) and for different vintages (over half a century). We showed that all B. bruxellensis strains within a cellar are strongly related at the genetic level, suggesting that one clonal population may cause spoilage over decades. The microsatellite tool now paves the way for future population genetics research of the B. bruxellensis species. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Effect of merging levels of locomotion scores for dairy cows on intra- and interrater reliability and agreement

    Schlageter-Tello, A.; Bokkers, E.A.M.; Groot Koerkamp, P.W.G.; Hertem, van T.; Viazzi, S.; Romanini, C.E.B.; Halachmi, I.; Bahr, C.; Berckmans, D.; Lokhorst, K.


    Locomotion scores are used for lameness detection in dairy cows. In research, locomotion scores with 5 levels are used most often. Analysis of scores, however, is done after transformation of the original 5-level scale into a 4-, 3-, or 2-level scale to improve reliability and agreement. The objecti

  19. MEMS reliability

    Hartzell, Allyson L; Shea, Herbert R


    This book focuses on the reliability and manufacturability of MEMS at a fundamental level. It demonstrates how to design MEMS for reliability and provides detailed information on the different types of failure modes and how to avoid them.

  20. Reliable identification at the species level of Brucella isolates with MALDI-TOF-MS

    Lista Florigio


    Full Text Available Abstract Background The genus Brucella contains highly infectious species that are classified as biological threat agents. The timely detection and identification of the microorganism involved is essential for an effective response not only to biological warfare attacks but also to natural outbreaks. Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS is a rapid method for the analysis of biological samples. The advantages of this method, compared to conventional techniques, are rapidity, cost-effectiveness, accuracy and suitability for the high-throughput identification of bacteria. Discrepancies between taxonomy and genetic relatedness on the species and biovar level complicate the development of detection and identification assays. Results In this study, the accurate identification of Brucella species using MALDI-TOF-MS was achieved by constructing a Brucella reference library based on multilocus variable-number tandem repeat analysis (MLVA data. By comparing MS-spectra from Brucella species against a custom-made MALDI-TOF-MS reference library, MALDI-TOF-MS could be used as a rapid identification method for Brucella species. In this way, 99.3% of the 152 isolates tested were identified at the species level, and B. suis biovar 1 and 2 were identified at the level of their biovar. This result demonstrates that for Brucella, even minimal genomic differences between these biovars translate to specific proteomic differences. Conclusions MALDI-TOF-MS can be developed into a fast and reliable identification method for genetically highly related species when potential taxonomic and genetic inconsistencies are taken into consideration during the generation of the reference library.

  1. InfoDROUGHT: Technical reliability assessment using crop yield data at the Spanish-national level

    Contreras, Sergio; Garcia-León, David; Hunink, Johannes E.


    Drought monitoring (DM) is a key component of risk-centered drought preparedness plans and drought policies. InfoDROUGHT is a site- and user-tailored, fully integrated DM system which combines functionalities for: a) operational satellite-based weekly 1-km tracking of the severity and spatial extent of drought impacts, and b) interactive, fast querying and delivery of drought information through a web-mapping service. InfoDROUGHT has a flexible and modular structure. The calibration (threshold definitions) and validation of the system are performed by combining expert knowledge with auxiliary impact assessments and datasets. Different technical solutions (basic or advanced versions) or deployment options (open-standard or restricted-authenticated) can be purchased by end-users and customers according to their needs. In this analysis, the technical reliability of InfoDROUGHT and its performance in detecting drought impacts on agriculture were evaluated for the 2003-2014 period by exploring and quantifying the relationships between the drought severity indices reported by InfoDROUGHT and the annual yield anomalies observed for different rainfed crops (maize, wheat, barley) in Spain. We hypothesize a positive relationship between the crop anomalies and the drought severity level detected by InfoDROUGHT. Annual yield anomalies were computed at the province administrative level as the difference between the annual yield reported by the Spanish Annual Survey of Crop Acreages and Yields (ESYRCE database) and the mean annual yield estimated during the study period. Yield anomalies were finally compared against greenness-based and thermal-based drought indices (VCI and TCI, respectively) to check the coherence of the outputs with the stated hypothesis. InfoDROUGHT has been partly funded by the Spanish Ministry of Economy and Competitiveness through a Torres-Quevedo grant, and by the H2020-EU project "Bridging the Gap for Innovations in

  2. The reliability of plantar pressure assessment during barefoot level walking in children aged 7-11 years

    Cousins Stephen D


    Full Text Available Abstract Background Plantar pressure assessment can provide information pertaining to the dynamic loading of the foot, as well as information specific to each region in contact with the ground. Few studies have considered the reliability of plantar pressure data, and therefore the purpose of this study was to investigate the reliability of plantar pressure variables in a group of typically developing children during barefoot level walking. Methods Forty-five participants, aged 7 to 11 years, were recruited from local primary and secondary schools in East London. Data from three walking trials were collected at both an initial and a re-test session, taken one week apart, to determine both the within- and between-session reliability of selected plantar pressure variables. The variables of peak pressure, peak force, pressure-time and force-time integrals were extracted for analysis in the following seven regions of the foot: lateral heel, medial heel, midfoot, 1st metatarsophalangeal joint, 2nd-5th metatarsophalangeal joints, hallux and the lesser toes. Reliability of the data was explored using intraclass correlation coefficients (ICC 3,1 and 3,2) and variability with coefficients of variation (CoVs). Results The measurements demonstrated moderate to good levels of within-session reliability across all segments of the foot (0.69-0.93), except the lesser toes, which demonstrated poor reliability (0.17-0.50). CoVs across the three repeated trials ranged from 10.12-19.84% for each of the measured variables across all regions of the foot, except the lesser toes, which demonstrated the greatest variability within trials (27.15-56.08%). The between-session results demonstrated good levels of reliability across all foot segments (0.79-0.99) except the lesser toes, with moderate levels of reliability reported at this region of the foot (0.58-0.68). The CoVs between sessions demonstrated that the midfoot (16.41-36.23%) and lesser
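
An ICC of the kind reported here, ICC(3,1), falls out of a two-way ANOVA decomposition of a subjects-by-sessions rating matrix. A self-contained sketch with invented peak-pressure readings (not the study's data):

```python
def icc_3_1(ratings):
    """ICC(3,1): two-way mixed effects, consistency, single measure.
    ratings: list of subjects, each a list of k scores (one per rater/session)."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n * k)
    subj_means = [sum(row) / k for row in ratings]
    rater_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_rater = n * sum((m - grand) ** 2 for m in rater_means)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ms_subj = ss_subj / (n - 1)
    ms_err = (ss_total - ss_subj - ss_rater) / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)

# Illustrative peak-pressure readings from two sessions (made-up numbers).
sessions = [[210.0, 214.0], [180.0, 176.0], [250.0, 255.0], [199.0, 204.0]]
icc = icc_3_1(sessions)
```

Because ICC(3,1) measures consistency, a constant offset between sessions does not lower it; only disagreement in the ranking/spacing of subjects does.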

  3. Reliable Transmission of Audio Streams in Lossy Channels Using Application Level Data Hiding

    Parag Agarwal


    Full Text Available The paper improves the reliability of audio streams in a lossy channel. The mechanism groups audio data samples into source and carrier sets. The carrier set carry the information about the source set which is encoded using data hiding methodology - quantization index modulation. At the receiver side, a missing source data sample can be reconstructed using the carrier set and the remaining source set. Based on reliability constraints a hybrid design combining interleaving and data hiding is presented. Experiments show an improved reliability as compared to forward error correction and interleaving.
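
The quantization index modulation step at the heart of this scheme can be sketched in a few lines; the quantization step size and sample values below are illustrative, not parameters from the paper:

```python
def qim_embed(sample, bit, delta=0.5):
    """Embed one bit by quantizing the sample onto one of two
    interleaved lattices offset from each other by delta/2."""
    offset = 0.0 if bit == 0 else delta / 2.0
    return round((sample - offset) / delta) * delta + offset

def qim_extract(value, delta=0.5):
    """Recover the bit by finding which lattice the value lies closer to."""
    d0 = abs(value - round(value / delta) * delta)
    d1 = abs(value - (round((value - delta / 2) / delta) * delta + delta / 2))
    return 0 if d0 <= d1 else 1

# Round-trip: in a noise-free channel every embedded bit is recovered,
# and perturbations smaller than delta/4 still decode correctly.
carrier = [0.13, -1.72, 0.88, 2.41]
bits = [1, 0, 1, 1]
recovered = [qim_extract(qim_embed(s, b)) for s, b in zip(carrier, bits)]
```

In the paper's setting the carrier samples hold such quantized values describing the source samples, so a lost source sample can be re-synthesized from what survives.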

  4. Extreme storms, sea level rise, and coastal change: implications for infrastructure reliability in the Gulf of Mexico

    Anarde, K.; Kameshwar, S.; Irza, N.; Lorenzo-Trueba, J.; Nittrouer, J. A.; Padgett, J.; Bedient, P. B.


    Predicting coastal infrastructure reliability during hurricane events is important for risk-based design and disaster planning, such as delineating viable emergency response routes. Previous research has focused on either infrastructure vulnerability to coastal flooding or the impact of changing sea level and landforms on surge dynamics. Here we investigate the combined impact of sea level, morphology, and coastal flooding on the reliability of highway bridges - the only access points between barrier islands and mainland communities - during future extreme storms. We forward model coastal flooding for static projections of geomorphic change using ADCIRC+SWAN. First-order parameters that are adjusted include sea level and elevation. These are varied for each storm simulation to evaluate relative impact on the reliability of bridges surrounding Freeport, TX. Simulated storms include both synthetic and historical events, which are classified by intensity using the storm's integrated kinetic energy, a metric for surge generation potential. Reliability is estimated through probability of failure - given wave and surge loads - and time inundated. Findings include that: 1) bridge reliability scales inversely with surge height, and 2) sea level rise reduces bridge reliability due to a monotonic increase in surge height. The impact of a shifting landscape on bridge reliability is more complex: barrier island rollback can increase or decrease inundation times for storms of different intensity due to changes in wind-setup and back-barrier bay interactions. Initial storm surge readily inundates the coastal landscape during large intensity storms, however the draining of inland bays following storm passage is significantly impeded by the barrier. From a coastal engineering standpoint, we determine that to protect critical infrastructure, efforts now implemented that nourish low-lying barriers may be enhanced by also armoring back-bay coastlines and elevating bridge approach

  5. A Reliable Energy-Efficient Multi-Level Routing Algorithm for Wireless Sensor Networks Using Fuzzy Petri Nets

    Yu, Zhenhua; Fu, Xiao; Cai, Yuanli; Vuran, Mehmet C.


    A reliable energy-efficient multi-level routing algorithm in wireless sensor networks is proposed. The proposed algorithm considers the residual energy, number of the neighbors and centrality of each node for cluster formation, which is critical for well-balanced energy dissipation of the network. In the algorithm, a knowledge-based inference approach using fuzzy Petri nets is employed to select cluster heads, and then the fuzzy reasoning mechanism is used to compute the degree of reliability in the route sprouting tree from cluster heads to the base station. Finally, the most reliable route among the cluster heads can be constructed. The algorithm not only balances the energy load of each node but also provides global reliability for the whole network. Simulation results demonstrate that the proposed algorithm effectively prolongs the network lifetime and reduces the energy consumption. PMID:22163802

  6. Validity and Reliability Study of the Scale for Determining the Civic-Mindedness Levels of Teaching Staff

    Yesil, Rüstü


    The purpose of this study was to develop a valid and reliable scale that can be used in determining the civic-mindedness levels of teaching staff working at universities. The study group of the research consisted of 758 students, 256 of whom were male and 524 were female. The item list, which was based on the literature and expert opinions, was…

  7. Extending Failure Modes and Effects Analysis Approach for Reliability Analysis at the Software Architecture Design Level

    Sozer, Hasan; Tekinerdogan, Bedir; Aksit, Mehmet; Lemos, de Rogerio; Gacek, Cristina


    Several reliability engineering approaches have been proposed to identify and recover from failures. A well-known and mature approach is the Failure Mode and Effect Analysis (FMEA) method that is usually utilized together with Fault Tree Analysis (FTA) to analyze and diagnose the causes of failures.

  8. Wafer level reliability monitoring strategy of an advanced multi-process CMOS foundry

    Scarpa, Andrea; Tao, Guoqiao; Kuper, F.G.


    In an advanced multi-process CMOS foundry it is strategically important to make use of an optimum reliability monitoring strategy, in order to be able to run well controlled processes. Philips Semiconductors Business Unit Foundries wafer fab MOS4YOU has developed an end-of-line ultra-fast

  10. Moving to a Higher Level for PV Reliability through Comprehensive Standards Based on Solid Science (Presentation)

    Kurtz, S.


    PV reliability is a challenging topic because of the desired long life of PV modules, the diversity of use environments and the pressure on companies to rapidly reduce their costs. This presentation describes the challenges, examples of failure mechanisms that we know or don't know how to test for, and how a scientific approach is being used to establish international standards.


    Kushnir N. V.


    Full Text Available The article presents a mathematical model of control over a dynamic hierarchical system. The model is proposed for systems of assumed order in the technical problem of predicting destruction as a function of the number of defects at different scale levels. The problem of keeping the hierarchical system as close as possible to a given point over its service life is solved, and an example of approach control over a given time interval is given. The problem is one of mathematical programming: formulating multi-parameter vector optimization criteria (improvement with its own hierarchy) and formally carrying out multi-criteria optimization of the model parameters. The research clarifies the conditions under which the structure is preserved. Managing the sustainable development of a system with a given level of the hierarchy for technical systems can only be achieved in keeping

  12. Reliability and Safety Evaluation of the Syn Gas 2nd Interstage Separator Level Control System at PT. Petrokimia Gresik

    Dewi Nur Rahmawati


    Full Text Available An evaluation of the reliability and safety of the separator level control system has been carried out. The aim of this final project was to evaluate the reliability calculations and the safety integrity level (SIL) applicable to the separator level control system, using a quantitative method. The synthesis gas compressor is a plant that raises pressure from 30 kg/cm2 to 180 kg/cm2. It consists of 4 stages, each containing a cooler and a separator. The separator is a pressure vessel used to separate gas from water; no water should remain in the separator, because water can cause vibration in the compressor. The evaluation shows that the lowest reliability, 0.58574 over 8760 hours, belongs to component LV 1159. In terms of safety, the components of the separator level control system are at SIL 1; however, for component LV 1159 the probability of failure on demand (PFD) was reduced by a redundancy method from 0.05220 to 0.00892, raising it to SIL 2. Based on a reference reliability threshold of 0.8 for preventive maintenance, component LV 1159 has a preventive maintenance interval of 1900 hours (2.5 months), LT 1159 of 13,900 hours (19 months), and LIC 1159 of 17,300 hours (2 years), with a total preventive maintenance cost for all components of Rp. 516,120.00 per year.
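
The SIL assignment from an average probability of failure on demand follows the standard IEC 61508 low-demand bands, and the maintenance-interval idea can be sketched with a simple exponential reliability model. This is a sketch only; it does not attempt to reproduce the paper's exact interval calculations:

```python
import math

def sil_from_pfd(pfd_avg):
    """Map an average PFD (low-demand mode) to its IEC 61508 SIL band."""
    if 1e-2 <= pfd_avg < 1e-1:
        return 1
    if 1e-3 <= pfd_avg < 1e-2:
        return 2
    if 1e-4 <= pfd_avg < 1e-3:
        return 3
    if 1e-5 <= pfd_avg < 1e-4:
        return 4
    return 0  # outside the rated bands

def time_to_reliability(r0_at_t0, t0_hours, r_target):
    """Assuming an exponential model R(t) = exp(-lambda*t), find the time
    at which reliability falls to r_target (e.g. a 0.8 maintenance threshold)."""
    lam = -math.log(r0_at_t0) / t0_hours
    return -math.log(r_target) / lam

sil_before = sil_from_pfd(0.05220)  # PFD reported for LV 1159
sil_after = sil_from_pfd(0.00892)   # after adding redundancy
```

Under these bands the reported PFD reduction for LV 1159 moves it from SIL 1 to SIL 2, consistent with the abstract.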

  13. Reliability Level in the Production Schedule Design in the Construction Company

    Velina Yordanova


    A reliable production schedule is an important prerequisite for the achievement of good economic results by a construction enterprise and for ensuring rhythmical and effective work in its operation. In this respect, the present article considers the peculiarities of the production schedule that we believe must be taken into account during its development. Taking into consideration the characteristics of construction production, an attempt is made at propos...

  14. What range of trait levels can the Autism-Spectrum Quotient (AQ) measure reliably? An item response theory analysis.

    Murray, Aja Louise; Booth, Tom; McKenzie, Karen; Kuenssberg, Renate


    It has previously been noted that inventories measuring traits that originated in a psychopathological paradigm can often reliably measure only a very narrow range of trait levels that are near and above clinical cutoffs. Much recent work has, however, suggested that autism spectrum disorder traits are on a continuum of severity that extends well into the nonclinical range. This implies a need for inventories that can capture individual differences in autistic traits from very high levels all the way to the opposite end of the continuum. The Autism-Spectrum Quotient (AQ) was developed based on a closely related rationale, but there has, to date, been no direct test of the range of trait levels that the AQ can reliably measure. To assess this, we fit a bifactor item response theory model to the AQ. Results suggested that AQ measures moderately low to moderately high levels of a general autistic trait with good measurement precision. The reliable range of measurement was significantly improved by scoring the instrument using its 4-point response scale, rather than dichotomizing responses. These results support the use of the AQ in nonclinical samples, but suggest that items measuring very low and very high levels of autistic traits would be beneficial additions to the inventory. (PsycINFO Database Record
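
The notion of a "reliable range of trait levels" can be made concrete with item information. In a 2PL sketch (deliberately simpler than the bifactor graded-response model the authors fit), measurement is precise wherever total test information stays above a threshold; the item parameters below are invented for illustration:

```python
import math

def item_info_2pl(theta, a, b):
    """Fisher information of a 2PL item at trait level theta:
    I(theta) = a^2 * P * (1 - P), peaking at theta = b."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def reliable_range(items, info_floor=2.5):
    """Trait levels (on a coarse grid) where total test information
    exceeds info_floor, i.e. where scores are estimated precisely."""
    grid = [x / 10.0 for x in range(-40, 41)]  # theta from -4 to 4
    covered = [t for t in grid
               if sum(item_info_2pl(t, a, b) for a, b in items) >= info_floor]
    return (min(covered), max(covered)) if covered else None

# Hypothetical item bank with difficulties skewed upward, as is typical
# for inventories derived from a clinical paradigm.
items = [(1.5, b / 2.0) for b in range(-2, 7)]  # difficulties -1.0 .. 3.0
lo, hi = reliable_range(items)
```

Shifting item difficulties downward (adding items measuring low trait levels) widens the left end of this range, which is essentially the addition the authors recommend for the AQ.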

  15. Reliability of the discrete choice experiment at the input and output level in patients with rheumatoid arthritis

    Skjoldborg, Ulla Slothuus; Lauridsen, Jørgen; Junker, Peter


    agents. Respondents participated in three face-to-face interviews over a period of 4 months. Reliability was measured both at the input level, where the consistency of matches made by respondents to the Discrete Choice Experiment (DCE) question between replications was determined, and at the output level......, where the parameters of the conjoint model were estimated and tested for joint significance and willingness to pay (WTP) confidence intervals were calculated. RESULTS: Input level: Of the 1661 choices made in survey 1, 1316 were repeated in survey 2. Based on the observed number of consistently repeated...

  16. The clubfoot assessment protocol (CAP): description and reliability of a structured multi-level instrument for follow-up

    Jarnlo Gun-Britt


    Full Text Available Abstract Background In most clubfoot studies, the outcome instruments used are designed to evaluate classification or long-term cross-sectional results. Variables deal mainly with factors at the body function/structure level. Wide scoring intervals and total sum scores increase the risk that important changes and information are not detected. Studies of the reliability, validity and responsiveness of these instruments are sparse. The lack of an instrument for longitudinal follow-up led the investigators to develop the Clubfoot Assessment Protocol (CAP). The aim of this article is to introduce and describe the CAP and evaluate the items' inter- and intra-rater reliability in relation to patient age. Methods The CAP was created from 22 items divided between body function/structure (three subgroups) and activity (one subgroup) levels according to the International Classification of Functioning, Disability and Health (ICF). The focus is on item and subgroup development. Two experienced examiners assessed 69 clubfeet in 48 children who had a median age of 2.1 years (range, 0 to 6.7 years). Both treated and untreated feet with different grades of severity were included. Three age groups were constructed for studying the influence of age on reliability. The intra-rater study included 32 feet in 20 children who had a median age of 2.5 years (range, 4 months to 6.8 years). Unweighted kappa statistics, percentage observer agreement, and the number of categories defined how reliability was to be interpreted. Results The inter-rater reliability was assessed as moderate to good for all but one item. Eighteen items had kappa values > 0.40. Three items varied from 0.35 to 0.38. The mean percentage observed agreement was 82% (range, 62 to 95%). Different age groups showed sufficient agreement. Intra-rater: all items had kappa values > 0.40 (range, 0.54 to 1.00) and a mean percentage agreement of 89.5%. Categories varied from 3 to 5. Conclusion The CAP contains more detailed

  17. Penalty for Fuel Economy - System Level Perspectives on the Reliability of Hybrid Electric Vehicles During Normal and Graceful Degradation Operation


    the issue of system-level reliability in hybrid electric vehicles from a quantitative point of view. It also introduces a quantitative meaning to the...internal combustion engine based vehicle and later transition of those to hybrid electric vehicles. The paper intends to drive home the point that in HEV...Generally, people tend to think only in terms of fuel economy and the additional cost premium on vehicle price when discussing hybrid electric

  18. Inter-rater reliability of cyclic and non-cyclic task assessment using the hand activity level in appliance manufacturing.

    Paulsen, Robert; Schwatka, Natalie; Gober, Jennifer; Gilkey, David; Anton, Dan; Gerr, Fred; Rosecrance, John


    This study evaluated the inter-rater reliability of the American Conference of Governmental Industrial Hygienists (ACGIH(®)) hand activity level (HAL), an observational ergonomic assessment method used to estimate physical exposure to repetitive exertions during task performance. Video recordings of 858 cyclic and non-cyclic appliance manufacturing tasks were assessed by sixteen pairs of raters using the HAL visual-analog scale. A weighted Pearson product-moment correlation coefficient was used to evaluate the agreement between the HAL scores recorded by each rater pair, and the mean weighted correlation coefficients for cyclic and non-cyclic tasks were calculated. Results indicated that the HAL is a reliable exposure assessment method for cyclic (r̄w = 0.69) and non-cyclic work tasks (r̄w = 0.68). When the two reliability scores were compared using a two-sample Student's t-test, no significant difference in reliability (p = 0.63) between these work task categories was found. This study demonstrated that the HAL may be a useful measure of exposure to repetitive exertions during cyclic and non-cyclic tasks.
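The weighted Pearson product-moment statistic used above is straightforward to compute. The sketch below is a minimal illustration with hypothetical rater scores and weights; the actual HAL data and weighting scheme are not given in the record:

```python
import numpy as np

def weighted_pearson(x, y, w):
    """Weighted Pearson product-moment correlation between two raters."""
    x, y, w = (np.asarray(v, dtype=float) for v in (x, y, w))
    mx, my = np.average(x, weights=w), np.average(y, weights=w)
    cov = np.average((x - mx) * (y - my), weights=w)
    sx = np.sqrt(np.average((x - mx) ** 2, weights=w))
    sy = np.sqrt(np.average((y - my) ** 2, weights=w))
    return cov / (sx * sy)

# Hypothetical HAL ratings (0-10 visual-analog scale) from one rater pair;
# weights could reflect, e.g., how many video segments each task contributed.
rater_a = [2, 4, 5, 7, 8, 3]
rater_b = [3, 4, 6, 6, 8, 2]
weights = [1, 1, 2, 1, 1, 1]
r_w = weighted_pearson(rater_a, rater_b, weights)  # about 0.91 here
```

With unit weights this reduces to the ordinary Pearson correlation, so the weighted form is a strict generalization.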

  19. Fusion of multi-sensory NDT data for reliable detection of surface cracks: Signal-level vs. decision-level

    Heideklang, René; Shokouhi, Parisa


    We present and compare two different approaches for NDT multi-sensor data fusion at signal (low) and decision (high) levels. Signal-level fusion is achieved by applying simple algebraic rules to strategically post-processed images. This is done in the original domain or in the domain of a suitable signal transform. The importance of signal normalization for low-level fusion applications is emphasized in regard to heterogeneous NDT data sets. For fusion at decision level, we develop a procedure based on assembling a joint kernel density estimate (KDE). The procedure involves calculating KDEs for individual sensor detections and aggregating them by applying certain combination rules. The underlying idea is that if the detections from more than one sensor fall spatially close to one another, they are likely to result from the presence of a defect. On the other hand, single-sensor detections are more likely to be structural noise or false alarm indications. To this end, we design the KDE combination rules such that they prevent single-sensor domination and allow data-driven scaling to account for the influence of individual sensors. We apply both fusion rules to a three-sensor dataset consisting of ET, MFL/GMR and TT data collected on a specimen with built-in surface discontinuities. The performance of the fusion rules in defect detection is quantitatively evaluated and compared against those of the individual sensors. Both classes of data fusion rules result in a fused image of fewer false alarms and thus improved defect detection. Finally, we discuss the advantages and disadvantages of low-level and high-level NDT data fusion with reference to our experimental results.
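The decision-level idea can be sketched with plain Gaussian KDEs. The detection positions, the bandwidth, and the geometric-mean combination rule below are all illustrative assumptions; the abstract only summarizes the paper's actual rules:

```python
import numpy as np

def gaussian_kde(points, grid, bw):
    """1-D Gaussian kernel density estimate of detection positions on a grid."""
    pts = np.asarray(points, dtype=float)[:, None]
    k = np.exp(-0.5 * ((grid[None, :] - pts) / bw) ** 2)
    return k.sum(axis=0) / (len(points) * bw * np.sqrt(2.0 * np.pi))

grid = np.linspace(0.0, 100.0, 1001)
# Hypothetical 1-D detection positions (mm) from three sensors: all three
# agree near x = 20; ET and TT each have one isolated single-sensor hit.
det_et  = [20.1, 20.4, 55.0]
det_mfl = [19.8, 20.3]
det_tt  = [20.0, 80.2]

densities = [gaussian_kde(d, grid, bw=2.0) for d in (det_et, det_mfl, det_tt)]

# One plausible combination rule: the geometric mean of the per-sensor KDEs.
# A location scores highly only when several sensors place detections nearby,
# so single-sensor hits (likely noise or false alarms) are suppressed.
fused = np.prod(densities, axis=0) ** (1.0 / len(densities))
peak = grid[np.argmax(fused)]  # near 20, where the sensors agree
```

The stray hits at 55 and 80 contribute essentially nothing to the fused density, which is the qualitative behaviour the abstract describes.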

  20. A Level of Care Instrument for Children's Systems of Care: Construction, Reliability and Validity

    Fallon, Theodore, Jr.; Pumariega, Andres; Sowers, Wesley; Klaehn, Robert; Huffine, Charles; Vaughan, Thomas, Jr.; Winters, Nancy; Chenven, Mark; Marx, Larry; Zachik, Albert; Heffron, William; Grimes, Katherine


    The Child and Adolescent Level of Care System/Child and Adolescent Service Intensity Instrument (CALOCUS/CASII) is designed to help determine the intensity of services needed for a child served in a mental health system of care. The instrument contains eight dimensions that are rated following a comprehensive clinical evaluation. The dimensions…

  1. Digoxin Therapy of Fetal Superior Ventricular Tachycardia: Are Digoxin Serum Levels Reliable?

    Saad, Antonio F; Monsivais, Luis; Pacheco, Luis D


    Despite its rarity, fetal tachycardia can lead to poor fetal outcomes including hydrops and fetal death. Management can be challenging and may result in maternal adverse effects secondary to the high serum drug levels required to achieve effective transplacental antiarrhythmic drug therapy. A 33-year-old woman at 33 weeks of gestation with a diagnosis of sustained fetal superior ventricular tachycardia developed chest pain, shortness of breath, and bigeminy on electrocardiogram secondary to digoxin toxicity despite subtherapeutic serum drug levels. She required supportive care with repletion of the corresponding electrolyte abnormalities. After resolution of the cardiac manifestations of digoxin toxicity, the patient was discharged home. The newborn was discharged at day 9 of life on maintenance amiodarone. We describe a case of cardiac manifestations of digoxin toxicity despite subtherapeutic serum drug levels. This case report emphasizes the significance of establishing an early diagnosis of digoxin toxicity during pregnancy, based not only on serum drug levels but also on clinical presentation. In cases refractory to supportive care, administration of digoxin Fab fragment antibodies should be considered. With timely diagnosis and treatment, excellent maternal and perinatal outcomes can be achieved.

  2. Routes to improving the reliability of low level DNA analysis using real-time PCR

    Burns Malcolm J


    Full Text Available Abstract Background Accurate quantification of DNA using quantitative real-time PCR at low levels is increasingly important for clinical, environmental and forensic applications. At low concentration levels (here referring to under 100 target copies) DNA quantification is sensitive to losses during preparation, and suffers from appreciable valid non-detection rates for sampling reasons. This paper reports studies on a real-time quantitative PCR assay targeting a region of the human SRY gene over a concentration range of 0.5 to 1000 target copies. The effects of different sample preparation and calibration methods on quantitative accuracy were investigated. Results At very low target concentrations of 0.5–10 genome equivalents (g.e.), eliminating any replicates within each DNA standard concentration with no measurable signal (non-detects) compromised calibration. Improved calibration could be achieved by eliminating all calibration replicates for any calibration standard concentration with non-detects ('elimination by sample'). Test samples also showed positive bias if non-detects were removed prior to averaging; less biased results were obtained by converting to concentration, including non-detects as zero concentration, and averaging all values. Tube plastic proved to have a strongly significant effect on DNA quantitation at low levels (p = 1.8 × 10⁻⁴). At low concentrations (under 10 g.e.), results for assays prepared in standard plastic were reduced by about 50% compared to the low-retention plastic. Preparation solution (carrier DNA or stabiliser) was not found to have a significant effect in this study. Detection probabilities were calculated using logistic regression. Logistic regression over large concentration ranges proved sensitive to non-detected replicate reactions due to amplification failure at high concentrations; the effect could be reduced by regression against log(concentration) or, better, by eliminating invalid responses
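The two non-detect treatments compared above (dropping non-detects before averaging vs. counting them as zero concentration) can be illustrated with hypothetical replicate values:

```python
import math

# Hypothetical replicate qPCR results (estimated target copies) for a nominal
# 2-copy sample; None marks a valid non-detect caused by sampling statistics.
replicates = [3.1, None, 1.8, 2.6, None, 1.9]

# Removing non-detects before averaging biases the estimate upwards at low
# copy number, because only the replicates that happened to contain template
# are kept.
detected = [r for r in replicates if r is not None]
biased_mean = sum(detected) / len(detected)

# Less biased: treat each non-detect as zero concentration and average all.
mean_all = sum(r if r is not None else 0.0 for r in replicates) / len(replicates)

# Poisson sampling alone predicts exp(-lambda) valid non-detects: at a mean of
# 2 copies per reaction, about 13.5% of replicates detect nothing even with a
# perfect assay.
p_nondetect = math.exp(-2.0)
```

Here the detect-only mean (2.35 copies) overstates the all-replicate mean (about 1.57 copies), mirroring the positive bias the study reports.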

  3. Reliable Design Versus Trust

    Berg, Melanie; LaBel, Kenneth A.


    This presentation focuses on reliability and trust for the user's portion of the FPGA design flow. It is assumed that the manufacturer tests the FPGA's internal components prior to hand-off to the user. The objective is to present the challenges of creating reliable and trusted designs. The following will be addressed: What makes a design vulnerable to functional flaws (reliability) or attackers (trust)? What are the challenges for verifying a reliable design versus a trusted design?

  4. Analysis of the volleyball serve in the female juvenile category in terms of the level of risk assumed and its efficacy Análisis del saque de voleibol en categoría juvenil femenina en función del nivel de riesgo asumido y su eficacia

    J. A. Valladares


    Full Text Available

    The serve, one of the most important actions in volleyball, has been conditioned by the evolution this sport has undergone over the past decades, which has brought important changes both in the execution technique of the serve and in the tactical systems used by teams. Following an observational model, a methodological proposal developed in five phases was applied to analyse and assess the level of risk that juvenile-category volleyball players assume in the serve with respect to the efficacy obtained. Within that methodology, the variables that affect the serve were identified and established, in order subsequently to quantify and examine the level of risk in terms of the type of serve and the trajectory it describes. The data obtained (2,237 cases) were analysed statistically, leading to the conclusion that players in this category do not show sufficient command and control of the serve to evidence a clear tactical intentionality, that is, players assuming a voluntary level of risk adapted to the match situation.
    KEY WORDS: volleyball, service, risk level, efficacy.


    The evolution that volleyball has undergone over recent decades has largely conditioned one of the most important game actions, the serve, bringing notable changes in serving technique as well as in the tactical systems employed by teams. Following an observational model, a methodological proposal developed in five phases was applied to analyse and assess the level of risk that juvenile-category female volleyball players assume in the serve with respect to the efficacy obtained. Within that methodology, the variables affecting the serve were identified and established, in order subsequently to quantify and

  5. Autism detection in early childhood (ADEC): reliability and validity data for a Level 2 screening tool for autistic disorder.

    Nah, Yong-Hwee; Young, Robyn L; Brewer, Neil; Berlingeri, Genna


    The Autism Detection in Early Childhood (ADEC; Young, 2007) was developed as a Level 2 clinician-administered autistic disorder (AD) screening tool that was time-efficient, suitable for children under 3 years, easy to administer, and suitable for persons with minimal training and experience with AD. A best estimate clinical Diagnostic and Statistical Manual of Mental Disorders (4th ed., text rev.; DSM-IV-TR; American Psychiatric Association, 2000) diagnosis of AD was made for 70 children using all available information and assessment results, except for the ADEC data. A screening study compared these children on the ADEC with 57 children with other developmental disorders and 64 typically developing children. Results indicated high internal consistency (α = .91). Interrater reliability and test-retest reliability of the ADEC were also adequate. ADEC scores reliably discriminated different diagnostic groups after controlling for nonverbal IQ and Vineland Adaptive Behavior Composite scores. Construct validity (using exploratory factor analysis) and concurrent validity using performance on the Autism Diagnostic Observation Schedule (Lord et al., 2000), the Autism Diagnostic Interview-Revised (Le Couteur, Lord, & Rutter, 2003), and DSM-IV-TR criteria were also demonstrated. Signal detection analysis identified the optimal ADEC cutoff score, with the ADEC identifying all children who had an AD (N = 70, sensitivity = 1.0) but overincluding children with other disabilities (N = 13, specificity ranging from .74 to .90). Together, the reliability and validity data indicate that the ADEC has potential to be established as a suitable and efficient screening tool for infants with AD.
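The signal-detection step described above, choosing the highest cutoff that keeps sensitivity at 1.0 while accepting reduced specificity, can be sketched with hypothetical scores; the real ADEC scoring range and data are not given in the record:

```python
# Hypothetical ADEC-style total scores: higher = more AD-like behaviour.
ad_scores = [14, 16, 11, 19, 12, 15]    # children with an AD diagnosis
other_scores = [4, 7, 10, 5, 12, 3, 6]  # other disabilities / typical peers

def sens_spec(cutoff):
    """Sensitivity and specificity when scores >= cutoff screen positive."""
    sens = sum(s >= cutoff for s in ad_scores) / len(ad_scores)
    spec = sum(s < cutoff for s in other_scores) / len(other_scores)
    return sens, spec

# For a screener, choose the highest cutoff that still catches every AD case
# (sensitivity = 1.0), accepting that some non-AD children screen positive.
best = max(c for c in range(0, 21) if sens_spec(c)[0] == 1.0)
```

With these illustrative numbers the optimal cutoff is 11: every AD case screens positive, while one non-AD child (score 12) is over-included, the same trade-off the abstract reports.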

  6. Improving Reliability of Subject-Level Resting-State fMRI Parcellation with Shrinkage Estimators

    Mejia, Amanda F.; Nebel, Mary Beth; Shou, Haochang; Crainiceanu, Ciprian M.; Pekar, James J.; Mostofsky, Stewart; Caffo, Brian; Lindquist, Martin A.


    A recent interest in resting state functional magnetic resonance imaging (rsfMRI) lies in subdividing the human brain into anatomically and functionally distinct regions of interest. For example, brain parcellation is often a necessary step for defining the network nodes used in connectivity studies. While inference has traditionally been performed on group-level data, there is a growing interest in parcellating single subject data. However, this is difficult due to the inherent low signal-to...

  7. Demonstrating Reliable High Level Waste Slurry Sampling Techniques to Support Hanford Waste Processing

    Kelly, Steven E.


    The Hanford Tank Operations Contractor (TOC) and the Hanford Waste Treatment and Immobilization Plant (WTP) contractor are both engaged in demonstrating mixing, sampling, and transfer system capability using simulated Hanford High-Level Waste (HLW) formulations. This work represents one of the remaining technical issues with the high-level waste treatment mission at Hanford. The TOC must demonstrate the ability to adequately mix and sample high-level waste feed to meet the WTP Waste Acceptance Criteria and Data Quality Objectives. The sampling method employed must support both TOC and WTP requirements. To facilitate information transfer between the two facilities, the mixing and sampling demonstrations are led by the One System Integrated Project Team. The One System team, through the Waste Feed Delivery Mixing and Sampling Program, has developed a full-scale sampling loop to demonstrate sampler capability. This paper discusses the full-scale sampling loop's ability to meet precision and accuracy requirements, including lessons learned during testing. Results of the testing showed that the Isolok(R) sampler chosen for implementation provides precise, repeatable results. As tested, the Isolok(R) sampler's accuracy did not meet the test success criteria. Review of the test data and the test platform following testing by a sampling expert identified several issues regarding the sampler used to provide the reference material against which the Isolok's accuracy was judged. Recommendations were made to obtain new data to evaluate the sampler's accuracy using a reference sampler that follows good sampling protocol.

  8. Undamped critical speeds of rotor systems using assumed modes

    Nelson, H. D.; Chen, W. J.


    A procedure is presented to reduce the DOF of a discrete rotordynamics model by utilizing an assumed-modes Rayleigh-Ritz approximation. Many possibilities exist for the assumed modes and any reasonable choice will yield a reduced-order model with adequate accuracy for most applications. The procedure provides an option which can be implemented with relative ease and may prove beneficial for many applications where computational efficiency is particularly important.
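The assumed-modes Rayleigh-Ritz reduction can be sketched for a simple lumped shaft model. Everything below (the chain model, the sine basis, the parameter values) is an illustrative assumption, not the authors' model:

```python
import numpy as np

# Hypothetical full-order model: a uniform shaft lumped into n stations,
# i.e. a chain of equal masses coupled by equal stiffnesses (pinned ends).
n, m, k = 20, 1.0, 1.0e4
M = m * np.eye(n)
K = np.zeros((n, n))
for i in range(n):
    K[i, i] = 2.0 * k
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -k

# Assumed modes: the first r pinned-pinned sine shapes sampled at the
# stations, a natural Rayleigh-Ritz basis for this model.
r = 3
x = np.arange(1, n + 1) / (n + 1)
Phi = np.column_stack([np.sin(j * np.pi * x) for j in range(1, r + 1)])

# Reduced-order mass/stiffness matrices: n DOF -> r DOF
Mr = Phi.T @ M @ Phi
Kr = Phi.T @ K @ Phi

# Undamped natural frequencies (rad/s); the lowest sets the first critical speed.
w_full = np.sqrt(np.sort(np.linalg.eigvalsh(K)) / m)
w_red = np.sqrt(np.sort(np.real(np.linalg.eigvals(np.linalg.solve(Mr, Kr)))))
```

Because the sine shapes happen to be exact eigenvectors of this chain, the three reduced frequencies match the three lowest full-order frequencies; with a less ideal basis the reduced values would be upper bounds, which is the usual Rayleigh-Ritz behaviour.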

  9. System Verification Through Reliability, Availability, Maintainability (RAM) Analysis & Technology Readiness Levels (TRLs)

    Emmanuel Ohene Opare, Jr.; Charles V. Park


    The Next Generation Nuclear Plant (NGNP) Project, managed by the Idaho National Laboratory (INL), was authorized by the Energy Policy Act of 2005 to research, develop, design, construct, and operate a prototype fourth-generation nuclear reactor to meet the needs of the 21st century. A section in this document proposes that the NGNP will provide heat for process heat applications. As with all large projects developing and deploying new technologies, the NGNP is expected to meet high performance and availability targets relative to current state-of-the-art systems and technology. One requirement for the NGNP is to provide heat for the generation of hydrogen at large scale, and this process heat application is required to be at least 90% available relative to other technologies currently on the market. To reach this goal, a RAM Roadmap was developed highlighting the actions to be taken to ensure that various milestones in system development and maturation concurrently meet the required availability targets. Integral to the RAM Roadmap was the use of a RAM analytical/simulation tool, which was used to estimate the availability of the system when deployed, based on the current design configuration and the maturation level of the system.

  10. Effect of Interfacial Reactions on the Reliability of Lead-Free Assemblies after Board Level Drop Tests

    Xia, Yanghua; Lu, Chuanyan; Xie, Xiaoming


    The reliability of lead-free electronic assemblies after board level drop tests was investigated. Thin small outline package (TSOP) components with 42 FeNi alloy leads were reflow soldered on FR4 printed circuit boards (PCBs) with Sn3.0Ag0.5Cu (wt%) solder. The effects of different PCB finishes [organic solderability preservative (OSP) and electroless nickel immersion gold (ENIG)], multiple reflow (once and three times), and isothermal aging (500 h at 125°C after one time reflow) were studied. The ENIG finish showed better performance than its OSP counterparts. With the OSP finish, solder joints reflowed three times showed obvious improvement compared to those of the sample reflowed once, while aging led to apparent degradation. The results showed that intermetallic compound (IMC) types, IMC microstructure and solder microstructure compete with each other, all playing very important roles in the solder joint lifetime. The results also showed that it is important to specify adequate conditions for a given reliability assessment program, to allow meaningful comparison between results of different investigators.

  11. Bayesian modeling growth curves for quail assuming skewness in errors

    Robson Marcelo Rossi


    Full Text Available Assuming normal distributions in data analysis is common in many areas of knowledge. However, other distributions that can model a skewness parameter are available for situations that require modeling data with tails heavier than the normal. This article presents alternatives to the assumption of normality in the errors by introducing asymmetric distributions. A Bayesian approach is proposed to fit nonlinear models when the errors are not normal; to this end, the t, skew-normal and skew-t distributions are adopted. The methodology is applied to different growth curves for quail body weights. It was found that the Gompertz model assuming skew-normal errors and skew-t errors, for males and females respectively, gave the best fit to the data.

  12. Reliability Improvement of a T-Type Three-Level Inverter With Fault-Tolerant Control Strategy

    Choi, Uimin; Blaabjerg, Frede; Lee, Kyo-Beum


    This paper proposes a fault-tolerant control strategy for a T-type three-level inverter when an open-circuit fault occurs. The proposed method is explained by dividing the fault into two cases: the faulty condition of half-bridge switches and of neutral-point switches. For an open-circuit fault in a neutral-point switch, two methods are proposed and compared based on thermal analysis and neutral-point voltage oscillation. The reliability of T-type inverter systems is improved considerably by the proposed algorithm when a switch fails, and the proposed method does not require any additional components. Simulation and experimental results verify the validity and feasibility of the proposed fault-tolerant control strategy.


    ZHANG Can-hui; HUANG Qian; FENG Wei


    New methods to determine the zero-energy deformation modes in hybrid elements and the zero-energy stress modes in their assumed stress fields are presented using the natural deformation modes of the elements. The formula for the additional element deformation rigidity caused by adding a mode to the assumed stress field is also derived. Based on this, it is concluded in theory that zero-energy stress modes cannot suppress the zero-energy deformation modes, but instead add extra rigidity to the nonzero-energy deformation modes of the element. Therefore they should not be employed in the assumed stress field. In addition, parasitic stress modes produce spurious parasitic energy and make the element behave with excessive rigidity. Thus, they should not be used in the assumed stress field either, even though they can suppress the zero-energy deformation modes of the element. The numerical examples show the performance of elements that include zero-energy stress modes or parasitic stress modes.

  14. Statistical motor number estimation assuming a binomial distribution.

    Blok, J.H.; Visser, G.H.A.; Graaf, S.S.N. de; Zwarts, M.J.; Stegeman, D.F.


    The statistical method of motor unit number estimation (MUNE) uses the natural stochastic variation in a muscle's compound response to electrical stimulation to obtain an estimate of the number of recruitable motor units. The current method assumes that this variation follows a Poisson distribution.
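Under the Poisson assumption mentioned above, the unit size can be recovered as the variance-to-mean ratio of the compound responses, and an estimate of the number of units follows by dividing the maximal response by that size. The sketch below simulates this; the unit size, firing rate, and 10 mV maximal response are hypothetical numbers, not values from the study:

```python
import math
import random

random.seed(7)

def poisson(lam):
    """Poisson sampler (Knuth's method, stdlib only)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Simulated compound responses under the Poisson variant of statistical MUNE:
# each stimulus recruits N ~ Poisson(lam) units of identical size a_true (mV).
a_true, lam, n_stim = 0.05, 9.0, 20000
responses = [a_true * poisson(lam) for _ in range(n_stim)]

mean = sum(responses) / n_stim
var = sum((x - mean) ** 2 for x in responses) / n_stim
a_est = var / mean    # Poisson property: variance/mean recovers the unit size
mune = 10.0 / a_est   # estimated number of recruitable motor units
```

With these inputs the estimate lands near the true value of 200 units; the point of the sketch is only the variance-to-mean identity that the Poisson assumption buys.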

  15. Review of the Constellation Level II Safety, Reliability, and Quality Assurance (SR&QA) Requirements Documents during Participation in the Constellation Level II SR&QA Forum

    Cameron, Kenneth D.; Gentz, Steven J.; Beil, Robert J.; Minute, Stephen A.; Currie, Nancy J.; Scott, Steven S.; Thomas, Walter B., III; Smiles, Michael D.; Schafer, Charles F.; Null, Cynthia H.


    At the request of the Exploration Systems Mission Directorate (ESMD) and the Constellation Program (CxP) Safety, Reliability, and Quality Assurance (SR&QA) Requirements Director, the NASA Engineering and Safety Center (NESC) participated in the CxP SR&QA Requirements Forum. The Requirements Forum was held June 24-26, 2008, at GRC's Plum Brook Facility. The forum's purpose was to gather all stakeholders into a focused meeting to help the CxP complete the process of refining its Level II SR&QA requirements and defining project-specific requirements tailoring. Element prime contractors had raised specific questions about the wording and intent of many requirements in areas they felt were driving costs without adding commensurate value. NESC was asked to provide an independent and thorough review, through active participation in the forum, of the requirements that contractors believed were driving Program costs. This document contains information from the forum.

  16. Modeling turbulent/chemistry interactions using assumed pdf methods

    Gaffney, R. L, Jr.; White, J. A.; Girimaji, S. S.; Drummond, J. P.


    Two assumed probability density functions (pdfs) are employed for computing the effect of temperature fluctuations on chemical reaction. The pdfs assumed for this purpose are the Gaussian and the beta densities of the first kind. The pdfs are first used in a parametric study to determine the influence of temperature fluctuations on the mean reaction-rate coefficients. Results indicate that temperature fluctuations significantly affect the magnitude of the mean reaction-rate coefficients of some reactions depending on the mean temperature and the intensity of the fluctuations. The pdfs are then tested on a high-speed turbulent reacting mixing layer. Results clearly show a decrease in the ignition delay time due to increases in the magnitude of most of the mean reaction rate coefficients.
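The effect described above is easy to demonstrate with an assumed Gaussian temperature PDF. The Arrhenius parameters below are hypothetical; the point is only that averaging k(T) over a fluctuating temperature can differ substantially from evaluating k at the mean temperature:

```python
import math

def k_arr(T, A=1.0e8, Ta=15000.0):
    """Arrhenius-form rate coefficient k(T) = A * exp(-Ta / T).
    A and the activation temperature Ta are hypothetical values."""
    return A * math.exp(-Ta / T)

def mean_rate_gaussian(Tmean, Tstd, n=4000):
    """Mean rate coefficient under an assumed (clipped) Gaussian temperature
    PDF, integrated numerically over +/- 4 standard deviations."""
    lo = Tmean - 4.0 * Tstd
    dT = 8.0 * Tstd / n
    total = wsum = 0.0
    for i in range(n + 1):
        T = lo + i * dT
        w = math.exp(-0.5 * ((T - Tmean) / Tstd) ** 2)
        total += w * k_arr(T)
        wsum += w
    return total / wsum

k_nofluc = k_arr(1000.0)                    # rate at the mean temperature
k_fluc = mean_rate_gaussian(1000.0, 100.0)  # with 10% rms fluctuations
```

Because k(T) is strongly convex in T, the fluctuating-temperature mean exceeds the mean-temperature value (Jensen's inequality), roughly doubling it for these illustrative numbers.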

  17. Automated Assume-Guarantee Reasoning by Abstraction Refinement

    Pasareanu, Corina S.; Giannakopoulou, Dimitra


    Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counter-examples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves similar or better performance than a previous learning-based implementation.

  18. Chemically reacting supersonic flow calculation using an assumed PDF model

    Farshchi, M.


    This work is motivated by the need to develop accurate models for chemically reacting compressible turbulent flow fields that are present in a typical supersonic combustion ramjet (SCRAMJET) engine. In this paper the development of a new assumed probability density function (PDF) reaction model for supersonic turbulent diffusion flames and its implementation into an efficient Navier-Stokes solver are discussed. The application of this model to a supersonic hydrogen-air flame will be considered.

  19. Assume-Guarantee Synthesis for Digital Contract Signing

    Chatterjee, Krishnendu


    We study the automatic synthesis of fair non-repudiation protocols, a class of fair exchange protocols, used for digital contract signing. First, we show how to specify the objectives of the participating agents and the trusted third party (TTP) as path formulas in LTL and prove that the satisfaction of these objectives imply fairness and abuse-freeness; properties required of fair exchange protocols. We then show that weak (co-operative) co-synthesis and classical (strictly competitive) co-synthesis fail, whereas assume-guarantee synthesis (AGS) succeeds. We demonstrate the success of assume-guarantee synthesis as follows: (a) any solution of assume-guarantee synthesis is attack-free; no subset of participants can violate the objectives of the other participants; (b) the Asokan-Shoup-Waidner (ASW) certified mail protocol that has known vulnerabilities is not a solution of AGS; (c) The Garay-Jakobsson-MacKenzie (GJM) protocol, while fair and abuse-free, is not attack-free by our definition and is hence not a ...

  20. Simple, reliable and fast spectrofluorometric method for determination of plasma Verteporfin (Visudyne) levels during photodynamic therapy for choroidal neovascularization.

    Aquaron, R; Forzano, O; Murati, J L; Fayet, G; Aquaron, C; Ridings, B


    Photodynamic therapy with Verteporfin, a potent photosensitizer dye, is a very effective treatment for age-related macular degeneration due to choroidal neovascularization. Photodynamic therapy offers the potential for selective tissue injury, in part attributable to preferential localization of Verteporfin, administered by intravenous infusion, to the choroidal neovascularization complex; irradiation of the complex with nonthermal laser light at 690 nm results in at least temporary thrombosis and vessel closure. Verteporfin is a benzoporphyrin derivative monoacid ring A formulated as a unilamellar liposome. In the blood, Verteporfin is associated with lipoprotein fractions and is rapidly cleared via a receptor-mediated uptake mechanism due to the high expression of LDL receptors in neovascular tissues. Verteporfin was undetectable in plasma 24 hr after infusion of the recommended dose of 6 mg/m2 of body surface area. The main side effect is photosensitivity of the skin, which is usually short-lived (24-48 hr) with a low incidence (2.3%). As skin photosensitivity depends on circulating rather than tissue drug levels, we investigated the possibility of developing a simple, fast and reliable spectrofluorometric method to measure plasma Verteporfin levels. The fluorescence emission spectrum (550-750 nm) of 1:10 saline-diluted plasma with λexc = 430 nm showed a characteristic emission peak at 692 nm, its height being proportional to the Verteporfin level. The sensitivity is around 100 ng/ml, and the pharmacokinetics of Verteporfin was studied from 0 to 5 hr after infusion in six patients older than 65 years with age-related macular degeneration.

  1. Ultra-Reliable Communication in a Factory Environment for 5G Wireless Networks: Link Level and Deployment Study

    Singh, Bikramjit; Lee, Zexian; Tirkkonen, Olav


    The focus of this paper is on mission-critical communications in a 5G cellular communication system. Technologies to provide ultra-reliable communication, with 99.999% availability, in a factory environment are studied. We have analysed the feasibility requirements for ultra-reliable communication...

  2. Assessment of torque-steadiness reliability at the ankle level in healthy young subjects: implications for cerebral palsy

    Bandholm, Thomas; Rose, Martin Høyer; Sonne-Holm, Stig;


    It was the primary objective of this study to investigate whether quantifying fluctuations in dorsi and plantarflexor torque during submaximal isometric contractions is a reliable measurement in young healthy subjects. A secondary objective was to investigate the reliability of the associated mus...

  3. A New Assumed Interaction. Experiments and Manifestations in Astrophysics

    Baurov, Yu A


    Results are presented from experimental investigations of a new assumed interaction in nature carried out with the aid of high-current magnets, torsion and piezoresonance balances, a high-precision gravimeter, fluctuations in the intensity of beta-decay of radioactive elements, and plasma devices, together with manifestations in astrophysics. A possible explanation of the results, based on a hypothesis of global anisotropy of physical space caused by the existence of a cosmological vectorial potential A_g, is given. It is shown that the vector A_g has the following coordinates in the second equatorial coordinate system: right ascension α = 293 ± 10, declination δ = 36 ± 10.

  4. Valid and reliable instruments for arm-hand assessment at ICF activity level in persons with hemiplegia: a systematic review


    Background Loss of arm-hand performance due to a hemiparesis as a result of stroke or cerebral palsy (CP), leads to large problems in daily life of these patients. Assessment of arm-hand performance is important in both clinical practice and research. To gain more insight in e.g. effectiveness of common therapies for different patient populations with similar clinical characteristics, consensus regarding the choice and use of outcome measures is paramount. To guide this choice, an overview of available instruments is necessary. The aim of this systematic review is to identify, evaluate and categorize instruments, reported to be valid and reliable, assessing arm-hand performance at the ICF activity level in patients with stroke or cerebral palsy. Methods A systematic literature search was performed to identify articles containing instruments assessing arm-hand skilled performance in patients with stroke or cerebral palsy. Instruments were identified and divided into the categories capacity, perceived performance and actual performance. A second search was performed to obtain information on their content and psychometrics. Results Regarding capacity, perceived performance and actual performance, 18, 9 and 3 instruments were included respectively. Only 3 of all included instruments were used and tested in both patient populations. The content of the instruments differed widely regarding the ICF levels measured, assessment of the amount of use versus the quality of use, the inclusion of unimanual and/or bimanual tasks and the inclusion of basic and/or extended tasks. Conclusions Although many instruments assess capacity and perceived performance, a dearth exists of instruments assessing actual performance. In addition, instruments appropriate for more than one patient population are sparse. 
For actual performance, new instruments have to be developed, with specific focus on the usability in different patient populations and the assessment of quality of use as well as

  5. Valid and reliable instruments for arm-hand assessment at ICF activity level in persons with hemiplegia: a systematic review

    Lemmens Ryanne JM


    Background Loss of arm-hand performance due to a hemiparesis as a result of stroke or cerebral palsy (CP) leads to large problems in the daily life of these patients. Assessment of arm-hand performance is important in both clinical practice and research. To gain more insight into, e.g., the effectiveness of common therapies for different patient populations with similar clinical characteristics, consensus regarding the choice and use of outcome measures is paramount. To guide this choice, an overview of available instruments is necessary. The aim of this systematic review is to identify, evaluate and categorize instruments, reported to be valid and reliable, that assess arm-hand performance at the ICF activity level in patients with stroke or cerebral palsy. Methods A systematic literature search was performed to identify articles containing instruments assessing arm-hand skilled performance in patients with stroke or cerebral palsy. Instruments were identified and divided into the categories capacity, perceived performance and actual performance. A second search was performed to obtain information on their content and psychometrics. Results Regarding capacity, perceived performance and actual performance, 18, 9 and 3 instruments were included, respectively. Only 3 of all included instruments were used and tested in both patient populations. The content of the instruments differed widely regarding the ICF levels measured, assessment of the amount of use versus the quality of use, the inclusion of unimanual and/or bimanual tasks, and the inclusion of basic and/or extended tasks. Conclusions Although many instruments assess capacity and perceived performance, a dearth exists of instruments assessing actual performance. In addition, instruments appropriate for more than one patient population are sparse. 
For actual performance, new instruments have to be developed, with specific focus on the usability in different patient populations and the assessment of

  6. Statistical motor number estimation assuming a binomial distribution.

    Blok, Joleen H; Visser, Gerhard H; de Graaf, Sándor; Zwarts, Machiel J; Stegeman, Dick F


    The statistical method of motor unit number estimation (MUNE) uses the natural stochastic variation in a muscle's compound response to electrical stimulation to obtain an estimate of the number of recruitable motor units. The current method assumes that this variation follows a Poisson distribution. We present an alternative that instead assumes a binomial distribution. Results of computer simulations and of a pilot study on 19 healthy subjects showed that the binomial MUNE values are considerably higher than those of the Poisson method, and in better agreement with the results of other MUNE techniques. In addition, simulation results predict that the performance in patients with severe motor unit loss will be better for the binomial than Poisson method. The adapted method remains closer to physiology, because it can accommodate the increase in activation probability that results from rising stimulus intensity. It does not need recording windows as used with the Poisson method, and is therefore less user-dependent and more objective and quicker in its operation. For these reasons, we believe that the proposed modifications may lead to significant improvements in the statistical MUNE technique.
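    The contrast between the two distributional assumptions can be illustrated with a toy simulation. This is a sketch under strong simplifications that are ours, not the authors' procedure: equal unit amplitudes, a fixed and known activation probability p, and no baseline noise (the real statistical MUNE technique scans stimulus intensities, where p varies). All function names are hypothetical.

```python
import random
import statistics

def simulate_cmap(n_units=100, p=0.3, unit_amp=1.0, trials=20000, seed=7):
    """Simulate compound responses: each of n_units fires independently
    with probability p and contributes unit_amp to the response."""
    rng = random.Random(seed)
    return [sum(unit_amp for _ in range(n_units) if rng.random() < p)
            for _ in range(trials)]

def mune_poisson(responses):
    # Poisson assumption: mean = lambda*a, var = lambda*a^2,
    # so the moment estimate of the number of contributing units
    # is mean^2 / var.
    m = statistics.fmean(responses)
    v = statistics.pvariance(responses)
    return m * m / v

def mune_binomial(responses, p):
    # Binomial assumption: mean = N*p*a, var = N*p*(1-p)*a^2,
    # which solves to N = mean^2 * (1-p) / (p * var).
    m = statistics.fmean(responses)
    v = statistics.pvariance(responses)
    return m * m * (1.0 - p) / (p * v)

responses = simulate_cmap()
print(round(mune_poisson(responses)))         # Poisson-style estimate
print(round(mune_binomial(responses, p=0.3)))  # near the true 100 units
```

In this toy, the Poisson moment estimator returns roughly Np/(1-p), well below the true unit count, while the binomial estimator recovers a value near 100, mirroring the abstract's finding that binomial MUNE values are considerably higher than Poisson ones.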

  7. Uniform convergence and a posteriori error estimation for assumed stress hybrid finite element methods

    Yu, Guozhu; Carstensen, Carsten


    Assumed stress hybrid methods are known to improve the performance of standard displacement-based finite elements and are widely used in computational mechanics. The methods are based on the Hellinger-Reissner variational principle for the displacement and stress variables. This work analyzes two existing 4-node hybrid stress quadrilateral elements due to Pian and Sumihara [Int. J. Numer. Meth. Engng, 1984] and due to Xie and Zhou [Int. J. Numer. Meth. Engng, 2004], which behave robustly in numerical benchmark tests. For the finite elements, the isoparametric bilinear interpolation is used for the displacement approximation, while different piecewise-independent 5-parameter modes are employed for the stress approximation. We show that the two schemes are free from Poisson-locking, in the sense that the error bound in the a priori estimate is independent of the relevant Lamé constant $\lambda$. We also establish the equivalence of the methods to two assumed enhanced strain schemes. Finally, we derive reliable ...

  8. Impact of changes to T and R 5-5A on jack-up system reliability levels

    Morandi, A.C.


    This report aims to develop a robust methodology providing quantified guidance on the impact on reliability of changes to T and R 5-5A presently being considered by the industry. This will facilitate the industry consensus necessary for such changes to be fully incorporated into jack-up site assessment. The report covers Phase I of the work. Previous pushover and reliability analyses are reviewed, and areas where modelling improvements are recommended are identified. Factors that affect jack-up system reliability are identified, and a premise for performing reliability analyses of jack-ups using state-of-the-art modelling is outlined. Two rig/location cases were selected that were as close as possible to the T and R 5-5A calibration cases, and the effects of changes to T and R 5-5A on unity checks were evaluated based on these cases. (author)

  9. Characterization of System Level Single Event Upset (SEU) Responses using SEU Data, Classical Reliability Models, and Space Environment Data

    Berg, Melanie; Label, Kenneth; Campola, Michael; Xapsos, Michael


    We propose a method for the application of single event upset (SEU) data towards the analysis of complex systems using transformed reliability models (from the time domain to the particle fluence domain) and space environment data.

  10. Plasma expansion into vacuum assuming a steplike electron energy distribution.

    Kiefer, Thomas; Schlegel, Theodor; Kaluza, Malte C


    The expansion of a semi-infinite plasma slab into vacuum is analyzed with a hydrodynamic model implying a steplike electron energy distribution function. Analytic expressions for the maximum ion energy and the related ion distribution function are derived and compared with one-dimensional numerical simulations. The choice of the specific non-Maxwellian initial electron energy distribution automatically ensures the conservation of the total energy of the system. The estimated ion energies may differ by an order of magnitude from the values obtained with an adiabatic expansion model supposing a Maxwellian electron distribution. Furthermore, good agreement with data from experiments using laser pulses of ultrashort durations τ(L) is found when the steplike rather than a Maxwellian electron distribution is assumed.

  11. Examining roles pharmacists assume in disasters: a content analytic approach.

    Ford, Heath; Dallas, Cham E; Harris, Curt


    Numerous practice reports recommend roles pharmacists may adopt during disasters. This study examines the peer-reviewed literature for factors that explain the roles pharmacists assume in disasters and the differences in roles and disasters when stratified by time. Quantitative content analysis was used to gather data consisting of words and phrases from peer-reviewed pharmacy literature regarding pharmacists' roles in disasters. Negative binomial regression and Kruskal-Wallis nonparametric models were applied to the data. Pharmacists' roles in disasters have not changed significantly since the 1960s. Pharmaceutical supply remains their preferred role, while patient management and response integration roles decrease in context of common, geographically widespread disasters. Policy coordination roles, however, significantly increase in nuclear terrorism planning. Pharmacists' adoption of nonpharmaceutical supply roles may represent a problem of accepting a paradigm shift in nontraditional roles. Possible shortages of personnel in future disasters may change the pharmacists' approach to disaster management.

  12. Beyond an assumed mother–child symbiosis in nutritional guidelines

    Nielsen, Annemette; Michaelsen, Kim F.; Holm, Lotte


    Researchers question the implications of the way in which “motherhood” is constructed in public health discourse. Current nutritional guidelines for Danish parents of young children are part of this discourse. They are shaped by an assumed symbiotic relationship between the nutritional needs of the child and the interest and focus of the mother. The aim of this qualitative study was to explore mothers’ concerns and feeding practices in the context of everyday life. A total of 45 mothers with children either seven months old or 13 months old participated. The results showed that the need to find practical solutions for the whole family in a busy everyday life, to socialise the child into the family and society at large, and to create personal relief from the strain small children put on time and energy all served as socially acceptable reasons for knowingly departing from nutritional ...

  13. Tracing of the 1st IEC Secretariat Assumed by China


    Introduction The IEC Central Office announced in 7/543/AC that the secretariat of TC 7 would be taken over by the Chinese National Committee on January 10, 2003, and subsequently confirmed in 7/544/AC that the secretariat of TC 7 had been taken over by the Chinese National Committee, which appointed a secretary at the Shanghai Electric Cable Research Institute, as no objection had been raised by the Standardization Management Board members. It is the first IEC secretariat assumed by China, a commitment of great significance; as the media commented, it clearly indicates that China is to play a much more active and important role in the world, especially after its entry into the World Trade Organization and given the trend of global economic integration.

  14. The R and M 2000 Process and Reliability and Maintainability Management: Attitudes of Senior Level Managers in Aeronautical Systems Division


    ...maintain an Air Force Center of Excellence for Reliability and Maintainability (CERM). The Center's charter is to develop R&M concepts, theory, and... AFIT's role as the Air Force's CERM and, since ASD is located at Wright-Patterson along with AFIT, it seemed the natural choice for keeping the scope of...

  15. Estimation of the critical effect level for pollution prevention based on oyster embryonic development toxicity test: the search for reliability.

    da Cruz, A C S; Couto, B C; Nascimento, I A; Pereira, S A; Leite, M B N L; Bertoletti, E; Zagatto, P


    In spite of the consideration that toxicity testing is a reduced approach to measure the effects of pollutants on ecosystems, early-life-stage (ELS) tests have evident ecological relevance because they reflect the possible reproductive impairment of natural populations. The procedure and validation of the Crassostrea rhizophorae embryonic development test have shown that it meets the same precision as other U.S. EPA tests, where the EC50 is generally used as a toxicological endpoint. However, the recognition that the EC50 is not the best endpoint to assess contaminant effects led the U.S. EPA to recently suggest the EC25 as an alternative to estimate xenobiotic effects for pollution prevention. To provide reliability to the toxicological test results on C. rhizophorae embryos, the present work aimed to establish the critical effect level for this test organism, based on its reaction to reference toxicants, by using the statistical method proposed by Norberg-King (Inhibition Concentration, version 2.0). Oyster embryos were exposed to graded series of reference toxicants (ZnSO4·7H2O, AgNO3, KCl, CdCl2·H2O, phenol, 4-chlorophenol and dodecyl sodium sulphate). Based on the obtained results, the critical value for the C. rhizophorae embryonic development test was estimated as the EC15. The present research enhances the emerging consensus that ELS test data would be adequate for estimating the chronic safe concentrations of pollutants in receiving waters. Based on recommended criteria and on the results of the present research, zinc sulphate and 4-chlorophenol have been pointed out, among the inorganic and organic compounds tested, as the best reference toxicants for the C. rhizophorae ELS test.

  16. Assumed Probability Density Functions for Shallow and Deep Convection

    Steven K Krueger


    The assumed joint probability density function (PDF) between vertical velocity and conserved temperature and total water scalars has been suggested as a relatively computationally inexpensive and unified subgrid-scale (SGS) parameterization for boundary layer clouds and turbulent moments. This paper analyzes the performance of five families of PDFs using large-eddy simulations of deep convection, shallow convection, and a transition from stratocumulus to trade wind cumulus. Three of the PDF families are based on the double Gaussian form and the remaining two are the single Gaussian and a Double Delta Function (analogous to a mass flux model). The assumed PDF method is tested for grid sizes as small as 0.4 km to as large as 204.8 km. In addition, studies are performed for PDF sensitivity to errors in the input moments and for how well the PDFs diagnose some higher-order moments. In general, the double Gaussian PDFs more accurately represent SGS cloud structure and turbulence moments in the boundary layer compared to the single Gaussian and Double Delta Function PDFs for the range of grid sizes tested. This is especially true for small SGS cloud fractions. While the most complex PDF, Lewellen-Yoh, better represents shallow convective cloud properties (cloud fraction and liquid water mixing ratio) compared to the less complex Analytic Double Gaussian 1 PDF, there appears to be no advantage in implementing Lewellen-Yoh for deep convection. However, the Analytic Double Gaussian 1 PDF better represents the liquid water flux, is less sensitive to errors in the input moments, and diagnoses higher order moments more accurately. Between the Lewellen-Yoh and Analytic Double Gaussian 1 PDFs, it appears that neither family is distinctly better at representing cloudy layers. However, due to the reduced computational cost and fairly robust results, it appears that the Analytic Double Gaussian 1 PDF could be an ideal family for SGS cloud and turbulence
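    For the single-Gaussian member of the PDF families discussed above, the SGS cloud fraction has a simple closed form: it is the fraction of the assumed Gaussian distribution of total water lying above saturation. A minimal sketch follows; the variable names and numbers are ours, and real schemes diagnose this from the saturation deficit of the joint PDF rather than from total water alone.

```python
import math

def cloud_fraction_single_gaussian(qt_mean, qt_sigma, q_sat):
    """Diagnose SGS cloud fraction from a single-Gaussian assumed PDF
    of total water qt: the probability mass exceeding saturation q_sat."""
    s = (q_sat - qt_mean) / qt_sigma
    return 0.5 * math.erfc(s / math.sqrt(2.0))

# When the grid mean sits exactly at saturation, half the PDF is cloudy.
print(cloud_fraction_single_gaussian(8.0, 0.5, 8.0))  # 0.5
```

The same construction extends to the double Gaussian families by summing the saturated mass of each Gaussian component, weighted by its mixture fraction.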

  17. Reliability of the TekScan MatScan® system for the measurement of plantar forces and pressures during barefoot level walking in healthy adults

    Munteanu Shannon E


    Background Plantar pressure systems are increasingly being used to evaluate foot function in both research settings and in clinical practice. The purpose of this study was to investigate the reliability of the TekScan MatScan® system in assessing plantar forces and pressures during barefoot level walking. Methods Thirty participants were assessed for the reliability of measurements taken one week apart for the variables maximum force, peak pressure and average pressure. The following seven regions of the foot were investigated: heel, midfoot, 3rd-5th metatarsophalangeal joints, 2nd metatarsophalangeal joint, 1st metatarsophalangeal joint, hallux and the lesser toes. Results Reliability was assessed using both the mean and the median values of three repeated trials. The system displayed moderate to good reliability of mean and median calculations for the three analysed variables across all seven regions, as indicated by intra-class correlation coefficients ranging from 0.44 to 0.95 for the mean and 0.54 to 0.97 for the median, and coefficients of variation ranging from 5 to 20% for the mean and 3 to 23% for the median. Selecting the median value of three repeated trials yielded slightly more reliable results than the mean. Conclusions These findings indicate that the TekScan MatScan® system demonstrates generally moderate to good reliability.

  18. Optimal Control for TB disease with vaccination assuming endogeneous reactivation and exogeneous reinfection

    Anggriani, N.; Wicaksono, B. C.; Supriatna, A. K.


    Tuberculosis (TB) is one of the deadliest infectious diseases in the world; it is caused by Mycobacterium tuberculosis. The disease spreads through the air via droplets from infectious persons when they cough. The World Health Organization (WHO) has paid special attention to TB by providing solutions, for example the BCG vaccine, which prevents an infected person from developing active infectious TB. In this paper we develop a mathematical model of the spread of TB which assumes endogenous reactivation and exogenous reinfection factors. We also assume that some of the susceptible population are vaccinated. Furthermore, we investigate the optimal vaccination level for the disease.
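    The role vaccination plays in such a model can be sketched with a generic compartmental toy. This is an illustrative S-V-I-R system, not the authors' TB model, which additionally includes latent classes, endogenous reactivation and exogenous reinfection; all parameter names and values here are ours.

```python
def simulate_svir(beta=0.3, gamma=0.1, nu=0.05, sigma=0.1,
                  s0=0.99, i0=0.01, dt=0.1, steps=2000):
    """Forward-Euler integration of a toy S-V-I-R model in which
    susceptibles are vaccinated at rate nu, and vaccinees can still
    be infected at the reduced rate sigma*beta (imperfect protection)."""
    s, v, i, r = s0, 0.0, i0, 0.0
    for _ in range(steps):
        inf_s = beta * s * i            # new infections of susceptibles
        inf_v = sigma * beta * v * i    # breakthrough infections of vaccinees
        s, v, i, r = (s + dt * (-inf_s - nu * s),
                      v + dt * (nu * s - inf_v),
                      i + dt * (inf_s + inf_v - gamma * i),
                      r + dt * (gamma * i))
    return s, v, i, r
```

Raising nu in this sketch lowers the epidemic peak at the cost of vaccinating more people; an optimal-control treatment, as in the paper, would instead make the vaccination level a time-varying control chosen to minimize a cost functional balancing infections against vaccination effort.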

  19. Reliability of a Shuttle Run Test for Children with Cerebral Palsy Who Are Classified at Gross Motor Function Classification System Level III

    Verschuren, Olaf; Bosma, Liesbeth; Takken, Tim


    For children and adolescents with cerebral palsy (CP) classified as Gross Motor Function Classification System (GMFCS) level III there is no running-based field test available to assess their cardiorespiratory fitness. The current study investigated whether a shuttle run test can be reliably (test-retest) performed in a group of children with…

  20. Reliability and Validity of the Therapy Intensity Level Scale : Analysis of Clinimetric Properties of a Novel Approach to Assess Management of Intracranial Pressure in Traumatic Brain Injury

    Zuercher, Patrick; Groen, Justus L.; Aries, Marcel J. H.; Steyerberg, Ewout W.; Maas, Andrew I. R.; Ercole, Ari; Menon, David K.


    We aimed to assess the reliability and validity of the Therapy Intensity Level scale (TIL) for intracranial pressure (ICP) management. We reviewed the medical records of 31 patients with traumatic brain injury (TBI) in two European intensive care units (ICUs). The ICP TIL was derived over a 4-day

  1. Improvement of level-1 PSA computer code package - Modeling and analysis for dynamic reliability of nuclear power plants

    Lee, Chang Hoon; Baek, Sang Yeup; Shin, In Sup; Moon, Shin Myung; Moon, Jae Phil; Koo, Hoon Young; Kim, Ju Shin [Seoul National University, Seoul (Korea, Republic of); Hong, Jung Sik [Seoul National Polytechnology University, Seoul (Korea, Republic of); Lim, Tae Jin [Soongsil University, Seoul (Korea, Republic of)


    The objective of this project is to develop a methodology of dynamic reliability analysis for NPPs. The first year's research focused on developing a procedure for analyzing failure data of running components and a simulator for estimating the reliability of series-parallel structures. The second year's research concentrated on estimating the lifetime distribution and PM effect of a component from its failure data in various cases, and the lifetime distribution of a system with a particular structure. Computer codes for performing these jobs were also developed. The objectives of the third year's research are to develop models for analyzing special failure types (CCFs, standby redundant structures) that were not considered in the first two years, and to complete a methodology of dynamic reliability analysis for nuclear power plants. The analysis of component failure data and the related research supporting the simulator must come first, to provide proper input to the simulator. Thus this research is divided into three major parts. 1. Analysis of the time-dependent life distribution and the PM effect. 2. Development of a simulator for system reliability analysis. 3. Related research supporting the simulator: accelerated simulation, an analytic approach using PH-type distributions, and analysis of dynamic repair effects. 154 refs., 5 tabs., 87 figs. (author)

  2. Reliability and Construct Validity of the 6-Minute Racerunner Test in Children and Youth with Cerebral Palsy, GMFCS Levels III and IV.

    Bolster, Eline A M; Dallmeijer, Annet J; de Wolf, G Sander; Versteegt, Marieke; Schie, Petra E M van


    To determine the test-retest reliability and construct validity of a novel 6-Minute Racerunner Test (6MRT) in children and youth with cerebral palsy (CP) classified as Gross Motor Function Classification System (GMFCS) levels III and IV. The racerunner is a step-propelled tricycle. The participants were 38 children and youth with CP (mean age 11 y 2 m, SD 3 y 7 m; GMFCS III, n = 19; IV, n = 19). Racerunner capability was determined as the distance covered during the 6MRT on three occasions. The intraclass correlation coefficient (ICC), standard error of measurement (SEM), and smallest detectable differences (SDD) were calculated to assess test-retest reliability. The ICC for tests 2 and 3 were 0.89 (SDD 37%; 147 m) for children in level III and 0.91 for children in level IV (SDD 52%; 118 m). When the average of two separate test occasions was used, the SDDs were reduced to 26% (104 m; level III) and 37% (118 m; level IV). For tests 1 to 3, the mean distance covered increased from 345 m (SD 148 m) to 413 m (SD 137 m) for children in level III, and from 193 m (SD 100 m) to 239 m (SD 148 m) for children in level IV. Results suggest high test-retest reliability. However, large SDDs indicate that a single 6MRT measurement is only useful for individual evaluation when large improvements are expected, or when taking the average of two tests. The 6MRT discriminated the distance covered between children and youth in levels III and IV, supporting construct validity.
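    The reliability indices reported above are linked by standard formulas: SEM = SD × √(1 − ICC) over the between-subject SD, and SDD = 1.96 × √2 × SEM; averaging two test occasions divides the SEM by √2, which is why the SDD shrinks when two tests are averaged. A small sketch (the input numbers are illustrative, not the study's raw data):

```python
import math

def sem_and_sdd(sd_between, icc):
    """Standard error of measurement and smallest detectable difference
    derived from a test-retest ICC and the between-subject SD."""
    sem = sd_between * math.sqrt(1.0 - icc)
    sdd = 1.96 * math.sqrt(2.0) * sem
    return sem, sdd

# Illustrative values roughly in the range of the 6MRT data:
sem, sdd = sem_and_sdd(sd_between=170.0, icc=0.89)
print(round(sem, 1), round(sdd, 1))
```

Expressing the SDD as a percentage of the group mean distance then yields relative SDDs like the 37% and 52% quoted in the abstract.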

  3. Reliability and validity of the Yo-Yo intermittent recovery test level 1 in young soccer players.

    Deprez, Dieter; Coutts, Aaron James; Lenoir, Matthieu; Fransen, Job; Pion, Johan; Philippaerts, Renaat; Vaeyens, Roel


    The present study investigated the test-retest reliability of the Yo-Yo IR1 (distance covered and heart rate responses) and the ability of the Yo-Yo IR1 to differentiate between elite and non-elite youth soccer players. A total of 228 youth soccer players (11-17 years) participated: 78 non-elite players to examine the test-retest reliability within 1 week, plus 150 elite players to investigate the construct validity. The main finding was that the distance covered was adequately reproducible in the youngest age groups (U13 and U15) and highly reproducible in the oldest age group (U17). Also, the physiological responses were highly reproducible in all age groups. Moreover, the Yo-Yo IR1 test had a high discriminative ability to distinguish between elite and non-elite young soccer players. Furthermore, the age-related standards for the Yo-Yo IR1 established for elite and non-elite groups in this study may be used for comparison with other young soccer players.

  4. Reliability of the Late Life Function and Disability Instrument (LLFDI Brazilian Portuguese version in a sample of senior citizens with high educational level

    Adnaldo Paulo Cardoso


    Background: The Late-Life Function and Disability Instrument (LLFDI), translated into Brazilian Portuguese, presents an innovative framework that incorporates components of functionality and disability to evaluate community-dwelling elderly people. Since the quality of an instrument is determined by its measurement properties, including reliability, it is advisable to investigate such properties after the instrument's process of translation and cultural adaptation. Objectives: To evaluate the intra- and inter-examiner reliability of the LLFDI Brazilian Portuguese version. Methods: Indexes of intra-class correlation (ICC) and conformity (CCC) were used to test the intra- and inter-examiner reliability by administering the instrument to a sample of 45 volunteers (average age 70.13 ± 6.88 years) living in Belo Horizonte, Minas Gerais state. Results: High levels of intra-examiner (ICC = 0.91 and ICC = 0.97) and inter-examiner (CCC = 0.87 and CCC = 0.92) reliability were observed, respectively, in the Disability (full limitation) and Function (full function) components of the instrument. Conclusion: The LLFDI Brazilian Portuguese translated version presented stability in both instrument components, and is therefore suitable for use in Brazil.

  5. A reliable low cost integrated wireless sensor network for water quality monitoring and level control system in UAE

    Abou-Elnour, Ali; Khaleeq, Hyder; Abou-Elnour, Ahmad


    In the present work, a wireless sensor network and a real-time controlling and monitoring system are integrated for efficient water quality monitoring for environmental and domestic applications. The proposed system has three main components: (i) the sensor circuits, (ii) the wireless communication system, and (iii) the monitoring and controlling unit. LabView software has been used in the implementation of the monitoring and controlling system, while ZigBee and myRIO wireless modules have been used to implement the wireless system. The water quality parameters are accurately measured by the present computer-based monitoring system, and the measurement results are instantaneously transmitted and published with minimum infrastructure costs and maximum flexibility in terms of distance or location. The mobility and durability of the proposed system are further enhanced by fully powering it via a photovoltaic system. The reliability and effectiveness of the system are evaluated under realistic operating conditions.

  6. C-reactive protein levels in patients on maintenance hemodialysis: reliability and reflection on the utility of single measurements.

    Stigant, Caroline E; Djurdjev, Ognjenka; Levin, Adeera


    Single C-reactive protein (CRP) values have been associated with death and cardiovascular disease in dialysis patients. We prospectively obtained multiple CRP values in stable patients, hypothesizing that values would remain stable in the absence of disease and that a single CRP value would be a reliable marker of risk. Four CRP values per week for three consecutive weeks were obtained in 10 clinically stable patients receiving conventional HD. Using prespecified cutoffs of 2.2 and 4.4 mg/l, the frequency of risk misclassification relative to the lowest CRP value obtained was determined. Within and between patient variability was also calculated. The median age was 54 years, and the average duration of dialysis was 41 months. Nine out of ten patients had at least one abnormal CRP value (>2.2 mg/l), six had all values elevated, and seven had an abnormal median CRP. The overall coefficient of reliability was 0.63 (95% CI 0.42-0.87). The misclassification rate varied with cutoff, and ranged from 0-83% and 0-58% using upper limit of normal (ULN) and twice ULN, respectively. The within patient variability was 0.37 for the entire cohort, and 0.33 when three patients with intercurrent acute inflammation were excluded. CRP exhibits short term variability in HD patients, resulting in a risk of misclassification depending on sampling time and chosen cutoff point. Single CRP values must be interpreted with caution, and multiple measurements, or use of other biomarkers, should be considered.

  7. Surprisal analysis of transcripts expression levels in the presence of noise: a reliable determination of the onset of a tumor phenotype.

    Ayelet Gross

    Towards a reliable identification of the onset in time of a cancer phenotype, changes in transcription levels in cell models were tested. Surprisal analysis, an information-theoretic approach grounded in thermodynamics, was used to characterize the expression level of mRNAs as time changed. Surprisal analysis provides a very compact representation of the measured expression levels of many thousands of mRNAs in terms of very few - three, four - transcription patterns. The patterns, each a collection of transcripts that respond together, can be assigned a definite biological phenotypic role. We identify a transcription pattern that is a clear marker of eventual malignancy. The weight of each transcription pattern is determined by surprisal analysis. The weight of this pattern changes with time; it is never strictly zero, but it is very low at early times and then rises rather suddenly. We suggest that the low weights at early time points are primarily due to experimental noise. We develop the necessary formalism to determine at what point in time the value of that pattern becomes reliable. Beyond the point in time when a pattern is deemed reliable, the data show that the pattern remains reliable. We suggest that this allows a determination of the presence of a cancer forewarning. We apply the same formalism to the weight of the transcription patterns that account for healthy cell pathways, such as apoptosis, that need to be switched off in cancer cells. We show that their weight eventually falls below the threshold. Lastly, we discuss patient heterogeneity as an additional source of fluctuation and show how to incorporate it within the developed formalism.

  8. The Multilevel Latent Covariate Model: A New, More Reliable Approach to Group-Level Effects in Contextual Studies

    Ludtke, Oliver; Marsh, Herbert W.; Robitzsch, Alexander; Trautwein, Ulrich; Asparouhov, Tihomir; Muthen, Bengt


    In multilevel modeling (MLM), group-level (L2) characteristics are often measured by aggregating individual-level (L1) characteristics within each group so as to assess contextual effects (e.g., group-average effects of socioeconomic status, achievement, climate). Most previous applications have used a multilevel manifest covariate (MMC) approach,…

  9. Assessing the Reliability and Validity of a Physical Therapy Functional Measurement Tool--the Modified Iowa Level of Assistance Scale--in Acute Hospital Inpatients.

    Kimmel, Lara A; Elliott, Jane E; Sayer, James M; Holland, Anne E


    Functional outcome measurement tools exist for individual diagnoses (eg, stroke), but no prospectively validated mobility measure is available for physical therapists' use across the breadth of acute hospital inpatients. The modified Iowa Level of Assistance Scale (mILOA), a scale measuring assistance required to achieve functional tasks, has demonstrated functional change in inpatients with orthopedic conditions and trauma, although its psychometric properties are unknown. The aim of this study was to assess interrater reliability, known-groups validity, and responsiveness of the mILOA in acute hospital inpatients. This was a cohort, measurement-focused study. Patients at a large teaching hospital in Melbourne, Australia, were recruited. One hundred fifty-two inpatients who were functionally stable across 5 clinical groups had an mILOA score calculated during 2 independent physical therapy sessions to assess interrater reliability. Known-groups validity ("ready for discharge"/"not ready for discharge") and responsiveness also were assessed. The mean age of participants in the reliability phase of the study was 62.5 years (SD=17.7). The interrater reliability was excellent (intraclass correlation coefficient [2,1]=.975; 95% confidence interval=.965, .982), with a mean difference between scores of -.270 and limits of agreement of ±5.64. The mILOA score displayed a mean difference between 2 known groups of 15.3 points. Responsiveness was demonstrated with a minimal detectable change of 5.8 points. Participants were included in the study if able to give consent for themselves, thereby limiting generalizability. Construct validity was not assessed due to the lack of a gold standard. The mILOA has excellent interrater reliability and good known-groups validity and responsiveness to functional change across acute hospital inpatients with a variety of diagnoses. It may provide opportunities for physical therapists to collect a functional outcome measure to demonstrate

  10. Software reliability

    Bendell, A


    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth mo

  11. Reliability of the thyroid stimulating hormone receptor antibodies level determination in diagnosing and prognosing of immunogenic hyperthyroidism

    Aleksić Aleksandar Z.


    Full Text Available Background/Aim. Graves disease (GD) is defined as hyperthyroidism with diffuse goiter caused by immunogenic disturbances. Antibodies to the thyroid stimulating hormone (TSH) receptors of the thyroid gland (TRAb) have crucial pathogenetic importance in the development and maintenance of autoimmune hyperthyroidism. The aim of this study was to identify the sensitivity, specificity, and positive and negative predictive values of the TRAb level in sera of patients with GD, as well as to estimate the significance of the TRAb level for remission and for the occurrence of GD relapses. Methods. We studied prospectively and partly retrospectively 149 patients, 109 female and 40 male, 5-78 years old, in the period 1982-2007. There were 96 patients with GD. The control group consisted of 53 patients: 21 with hyperthyroidism of other etiology and 32 patients on amiodarone therapy, with or without thyroid dysfunction. TRAb was measured by radioreceptor assay (TRAK Assay and DYNO Test TRAK Human, Brahms Diagnostica GMBH). Results. According to the results, the sensitivity (Sn) of the TRAb test was 80%, specificity (Sp) 100%, positive predictive value (PP) 100% and negative predictive value (NP) 83%. Also, the Sn of the hTRAb test was 94%, Sp 100%, PP 100% and NP 94%. Our results show that an increased level of TRAb/hTRAb at the beginning of the disease and at the end of medical therapy is associated with an increased number of GD relapses and a shorter remission duration. Conclusion. Detection and measurement of TRAb in serum is a very sensitive method for diagnosing GD and a highly specific in vitro method for the differential diagnosis of various forms of hyperthyroidism. The clinical significance of differentiating various forms of hyperthyroidism with this in vitro assay lies in the adequate therapeutic choice for these entities.
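
    The reported Sn/Sp/PP/NP values are simple ratios over a 2x2 diagnostic table. A hedged sketch (the counts below are hypothetical, chosen only to mirror the reported pattern of Sp = PP = 100%):

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, positive and negative predictive values."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives among the diseased
        "specificity": tn / (tn + fp),  # true negatives among the healthy
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical confusion-matrix counts, not the study's data:
m = diagnostic_metrics(tp=80, fp=0, fn=20, tn=100)
```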

  12. The enzymatic nature of an anonymous protein sequence cannot reliably be inferred from superfamily level structural information alone.

    Roche, Daniel Barry; Brüls, Thomas


    As the largest fraction of any proteome does not carry out enzymatic functions, and in order to leverage 3D structural data for the annotation of increasingly higher volumes of sequence data, we wanted to assess the strength of the link between coarse-grained structural data (i.e., homologous superfamily level) and the enzymatic versus non-enzymatic nature of protein sequences. To probe this relationship, we took advantage of 41 phylogenetically diverse (encompassing 11 distinct phyla) genomes recently sequenced within the GEBA initiative, for which we integrated structural information, as defined by CATH, with enzyme-level information, as defined by Enzyme Commission (EC) numbers. This analysis revealed that only a very small fraction (about 1%) of domain sequences occurring in the analyzed genomes was associated with homologous superfamilies strongly indicative of enzymatic function. Resorting to less stringent criteria to define enzyme- versus non-enzyme-biased structural classes, or excluding highly prevalent folds from the analysis, had only a modest effect on this proportion. Thus, the low genomic coverage by structurally anchored protein domains strongly associated with catalytic activities indicates that, on its own, the power of coarse-grained structural information to infer the general property of being an enzyme is rather limited.

  13. Are the Yo-Yo intermittent recovery test levels 1 and 2 both useful? Reliability, responsiveness and interchangeability in young soccer players.

    Fanchini, Maurizio; Castagna, Carlo; Coutts, Aaron J; Schena, Federico; McCall, Alan; Impellizzeri, Franco M


    The aim of this study was to compare the reliability, internal responsiveness and interchangeability of the Yo-Yo intermittent recovery test level 1 (YY1), level 2 (YY2) and submaximal YY1 (YY1-sub). Twenty-four young soccer players (age 17 ± 1 years; height 177 ± 7 cm; body mass 68 ± 6 kg) completed each test five times within pre- and in-season; distances covered and heart rates (HRs) were measured. Reliability was expressed as typical error of measurement (TEM) and intraclass correlation coefficient (ICC). Internal responsiveness was determined as effect size (ES) and signal-to-noise ratio (ESTEM). Interchangeability was determined from the correlation between training-induced changes. The TEM and ICC for distances in the YY1 and YY2 and for HR in the YY1-sub were 7.3% and 0.78, 7.1% and 0.93, and 2.2% and 0.78, respectively. The ESs and ESTEMs were 0.9 and 1.9 for YY1, 0.4 and 1.2 for YY2, and -0.3 and -0.3 for YY1-sub. Correlations between the YY1 and the YY2 and YY1-sub were 0.56 to 0.84 and -0.36 to -0.81, respectively. Correlations between change scores were 0.29 for YY1 vs. YY2 and -0.21 for YY1 vs. YY1-sub. Peak HR was higher in the YY1 than in the YY2. The YY1 and YY2 showed similar reliability; however, they were not interchangeable. The YY1 was more responsive to training compared with the YY2 and YY1-sub.
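
    The typical error of measurement (TEM) used above is conventionally the standard deviation of the test-retest differences divided by √2, and is often also expressed as a coefficient of variation. A small sketch with hypothetical Yo-Yo distances (not the study's data):

```python
import statistics as st

def typical_error(trial1: list, trial2: list) -> tuple:
    """TEM from two repeated trials: SD of the differences over sqrt(2)."""
    diffs = [b - a for a, b in zip(trial1, trial2)]
    tem = st.stdev(diffs) / 2 ** 0.5
    grand_mean = st.mean(trial1 + trial2)
    return tem, 100 * tem / grand_mean   # absolute TEM and TEM as a CV (%)

# Hypothetical distances (m) covered on two test occasions:
t1 = [1400, 1560, 1320, 1680, 1480]
t2 = [1440, 1520, 1400, 1640, 1520]
tem, cv = typical_error(t1, t2)
```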

  14. Synthesis of Reliable Telecommunication Networks

    Dusan Trstensky


    Full Text Available In many applications, the network designer needs to synthesise a reliable telecommunication network. Assume that a network, denoted Gn,e, has n nodes and e edges, and that the operational probability of each edge is known. The system reliability of the network is defined as the probability that every pair of nodes can communicate with each other. The network synthesis problem considered in this paper is to find a network G*n,e that maximises system reliability over the classes of networks Gn,n-1, Gn,n and Gn,n+1, respectively. In addition, an upper bound on the maximum reliability of networks with n nodes and e edges (e>n+2) is derived in terms of the number of nodes. Computational experiments for the reliability upper bound are also presented. The results show that the proposed reliability upper bound is effective.

  15. LED system reliability

    Driel, W.D. van; Yuan, C.A.; Koh, S.; Zhang, G.Q.


    This paper presents our effort to predict the system reliability of Solid State Lighting (SSL) applications. A SSL system is composed of a LED engine with micro-electronic driver(s) that supplies power to the optic design. Knowledge of system level reliability is not only a challenging scientific ex

  16. Improving the reliability of aquatic toxicity testing of hydrophobic chemicals via equilibrium passive dosing - A multiple trophic level case study on bromochlorophene.

    Stibany, Felix; Ewald, Franziska; Miller, Ina; Hollert, Henner; Schäffer, Andreas


    The main objective of the present study was to improve the reliability and practicability of aquatic toxicity testing of hydrophobic chemicals based upon the model substance bromochlorophene (BCP). Therefore, we adapted a passive dosing format to test the toxicity of BCP at different concentrations and in multiple test systems with aquatic organisms of various trophic levels. At the same time, the method allowed for the accurate determination of exposure concentrations (i.e., in the presence of exposed organisms; Ctest) and freely dissolved concentrations (i.e., without organisms present; Cfree) of BCP in all tested media. We report on the joint adaptation of three ecotoxicity tests - algal growth inhibition, Daphnia magna immobilization, and fish-embryo toxicity - to a silicone O-ring based equilibrium passive dosing format. Effect concentrations derived by passive dosing methods were compared with corresponding effect concentrations derived by standard co-solvent setups. The passive dosing format led to EC50-values in the lower μgL(-1) range for algae, daphnids, and fish embryos, whereas increased effect concentrations were measured in the co-solvent setups for algae and daphnids. This effect once more shows that passive dosing might offer advantages over standard methods like co-solvent setups when it comes to a reliable risk assessment of hydrophobic substances. The presented passive dosing setup offers a facilitated, practical, and repeatable way to test hydrophobic chemicals on their toxicity to aquatic organisms, and is an ideal basis for the detailed investigation of this important group of chemicals.

  17. Technical reliability of geological disposal for high-level radioactive wastes in Japan. The second progress report. Introductory part and summaries



    Based on the Advisory Committee Report on Nuclear Fuel Cycle Backend Policy submitted to the Japanese Government in 1997, JNC documents the progress of research and development program in the form of the second progress report (the first one published in 1992). It summarizes an evaluation of the technical reliability and safety of the geological disposal concept for high-level radioactive wastes (HLW) in Japan and comprises seven chapters. Chapter I briefly describes the importance of HLW management in promoting nuclear energy utilization. According to the long-term program, the HLW separated from spent fuels at reprocessing plants is to be vitrified and stored for a period of 30 to 50 years to allow cooling, then be disposed of in a deep geological formation. Chapter II mainly explains the concepts of geological disposal in Japan. Chapters III to V are devoted to discussions on three important technical elements (the geological environment of Japan, engineering technology and safety assessment of the geological disposal system) which are necessary for reliable realization of the geological disposal concept. Chapter VI demonstrates the technical ground for site selection and for setup of safety standards of the disposal. Chapter VII provides a summary together with plans for future research and development. (Ohno, S.)

  18. Power System Reliability Evaluation in Wind Power Leveling Load

    吴政球; 唐民富; 张旭乐; 王韬


    Abstract: Wind energy affects the reliability of a power system because of its randomness, intermittency and fluctuation. The concept of the Wind Power Leveling Load (WPLL) is proposed to evaluate the reliability of a power system when wind farms are grid-connected. Based on the fact that wind power and load are independent random variables, a combined probability distribution function of wind power and load is derived by the margin-table method, and the WPLL model is thus established. This model is combined with the conventional generator outage capacity model to calculate the reliability indexes, providing a novel method for wind power system reliability assessment. Finally, analysis of a selected sample system illustrates the validity of the proposed model and algorithm.
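
    Because the paper treats wind power and load as independent random variables, their combined (leveled-load) distribution is a discrete convolution of the two probability tables. A toy sketch under that assumption (all numbers hypothetical, not from the paper):

```python
from itertools import product

# Hypothetical discretized distributions (power in MW; probabilities sum to 1):
wind = {0: 0.5, 20: 0.3, 40: 0.2}       # wind-farm output
load = {80: 0.2, 100: 0.5, 120: 0.3}    # system load

# Independence => the leveled load (load minus wind) distribution is the
# convolution of the two tables.
leveled = {}
for (w, pw), (l, pl) in product(wind.items(), load.items()):
    leveled[l - w] = leveled.get(l - w, 0.0) + pw * pl
```

    The resulting table can then be matched against the generator outage capacity model exactly as a conventional load distribution would be.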

  19. Serum Total Tryptase Level Confirms Itself as a More Reliable Marker of Mast Cell Burden in Mast Cell Leukaemia (Aleukaemic Variant)

    P. Savini


    Full Text Available Mast cell leukemia (MCL) is a very rare form of systemic mastocytosis (SM) with a short median survival of 6 months. We describe the case of a 65-year-old woman with the aleukaemic variant of MCL and a very high serum total tryptase level of 2255 μg/L at diagnosis, which occurred following an episode of hypotensive shock. She fulfilled the diagnostic criteria of SM, with a bone marrow smear infiltration of 50-60% atypical mast cells (MCs). She tested negative for the KIT D816V mutation, without any sign of organ damage (no B- or C-findings) and with only few mediator-related symptoms. She was treated with an antihistamine alone and then with imatinib for the appearance of anemia. She maintained a stable tryptase level and a very indolent clinical course for twenty-two months; then she suddenly progressed to acute MCL with a serum tryptase level of up to 12960 μg/L. The patient died of haemorrhagic diathesis twenty-four months after diagnosis. This clinical case may represent an example of the chronic form of mast cell leukemia, described as an unpredictable disease, in which the serum total tryptase level has confirmed itself as a reliable marker of mast cell burden regardless of the presence of other signs or symptoms.

  20. Children's Everyday Learning by Assuming Responsibility for Others: Indigenous Practices as a Cultural Heritage Across Generations.

    Fernández, David Lorente


    This chapter uses a comparative approach to examine the maintenance of Indigenous practices related with Learning by Observing and Pitching In in two generations--parent generation and current child generation--in a Central Mexican Nahua community. In spite of cultural changes and the increase of Western schooling experience, these practices persist, to different degrees, as a Nahua cultural heritage with close historical relations to the key value of cuidado (stewardship). The chapter explores how children learn the value of cuidado in a variety of everyday activities, which include assuming responsibility in many social situations, primarily in cultivating corn, raising and protecting domestic animals, health practices, and participating in family ceremonial life. The chapter focuses on three main points: (1) Cuidado (assuming responsibility for), in the Nahua socio-cultural context, refers to the concepts of protection and "raising" as well as fostering other beings, whether humans, plants, or animals, to reach their potential and fulfill their development. (2) Children learn cuidado by contributing to family endeavors: They develop attention and self-motivation; they are capable of responsible actions; and they are able to transform participation to achieve the status of a competent member of local society. (3) This collaborative participation allows children to continue the cultural tradition and to preserve a Nahua heritage at a deeper level in a community in which Nahuatl language and dress have disappeared, and people do not identify themselves as Indigenous.

  1. Estimating option values of solar radiation management assuming that climate sensitivity is uncertain.

    Arino, Yosuke; Akimoto, Keigo; Sano, Fuminori; Homma, Takashi; Oda, Junichiro; Tomoda, Toshimasa


    Although solar radiation management (SRM) might play a role as an emergency geoengineering measure, its potential risks remain uncertain, and hence there are ethical and governance issues in the face of SRM's actual deployment. By using an integrated assessment model, we first present one possible methodology for evaluating the value arising from retaining an SRM option given the uncertainty of climate sensitivity, and also examine sensitivities of the option value to SRM's side effects (damages). Reflecting the governance challenges on immediate SRM deployment, we assume scenarios in which SRM could be deployed with only a limited degree of cooling (0.5 °C), only after 2050, when climate sensitivity uncertainty is assumed to be resolved, and only when the sensitivity is found to be high (T2x = 4 °C). We conduct a cost-effectiveness analysis with constraining temperature rise as the objective. The SRM option value originates from its rapid cooling capability, which would alleviate the mitigation requirement under climate sensitivity uncertainty and thereby reduce mitigation costs. According to our estimates, the option value during 1990-2049 for a +2.4 °C target (the lowest temperature target level for which there were feasible solutions in this model study) relative to preindustrial levels was in the range between $2.5 and $5.9 trillion, taking into account the maximum level of side effects shown in the existing literature. The result indicates that lower limits of the option values for temperature targets below +2.4 °C would be greater than $2.5 trillion.

  2. Reliability Engineering

    Lazzaroni, Massimo


    This book gives a practical guide for designers and users in Information and Communication Technology context. In particular, in the first Section, the definition of the fundamental terms according to the international standards are given. Then, some theoretical concepts and reliability models are presented in Chapters 2 and 3: the aim is to evaluate performance for components and systems and reliability growth. Chapter 4, by introducing the laboratory tests, puts in evidence the reliability concept from the experimental point of view. In ICT context, the failure rate for a given system can be

  3. 25 CFR 224.65 - How may a tribe assume additional activities under a TERA?


    ... 25 Indians 1 2010-04-01 2010-04-01 false How may a tribe assume additional activities under a TERA... Procedures for Obtaining Tribal Energy Resource Agreements Tera Requirements § 224.65 How may a tribe assume additional activities under a TERA? A tribe may assume additional activities related to the development...

  4. Reliability analysis of ceramic matrix composite laminates

    Thomas, David J.; Wetherhold, Robert C.


    At a macroscopic level, a composite lamina may be considered as a homogeneous orthotropic solid whose directional strengths are random variables. Incorporation of these random variable strengths into failure models, either interactive or non-interactive, allows for the evaluation of the lamina reliability under a given stress state. Using a non-interactive criterion for demonstration purposes, laminate reliabilities are calculated assuming previously established load sharing rules for the redistribution of load as individual laminae fail. The matrix cracking predicted by ACK theory is modeled to allow a loss of stiffness in the fiber direction. The subsequent failure in the fiber direction is controlled by a modified bundle theory. Results using this modified bundle model are compared with previous models which did not permit separate consideration of matrix cracking, as well as with results obtained from experimental data.
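
    For a non-interactive criterion with Weibull-distributed directional strengths, the lamina survives only if every strength direction survives independently. A hedged sketch of that series-system calculation (all parameters hypothetical, not taken from the paper):

```python
import math

def weibull_reliability(stress: float, scale: float, shape: float) -> float:
    """Survival probability of one strength direction under applied stress."""
    return math.exp(-((stress / scale) ** shape))

def lamina_reliability(stresses, scales, shapes) -> float:
    """Non-interactive criterion: directions fail independently (series system)."""
    r = 1.0
    for s, s0, m in zip(stresses, scales, shapes):
        r *= weibull_reliability(s, s0, m)
    return r

# Hypothetical longitudinal / transverse / shear stresses (MPa) and
# Weibull scale/shape parameters per direction:
r = lamina_reliability(stresses=(300, 20, 15),
                       scales=(1200, 50, 70),
                       shapes=(10, 8, 12))
```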

  5. Round-robin evaluation of a solid-phase microextraction-gas chromatographic method for reliable determination of trace level ethylene oxide in sterilized medical devices.

    Harper, Thomas; Cushinotto, Lisa; Blaszko, Nancy; Arinaga, Julie; Davis, Frank; Cummins, Calvin; DiCicco, Michael


    Medical devices that are sterilized with ethylene oxide (EtO) retain small quantities of EtO residuals, which may cause negative systemic and local irritating effects, and must be accurately quantified to ensure non-toxicity. The goal of this round-robin study was to investigate the capability of a novel solid-phase microextraction-gas chromatographic (SPME-GC) method for trace-level EtO residuals analysis: three independent laboratories conducted a guided experiment using this SPME-GC method, assessing method performance, ruggedness, and the feasibility of SPME fibers. All were satisfactory across the independent laboratories over the 0.05-5.00 ppm EtO range. The method was then successfully applied to analyze EtO residuals in several sterilized/aerated medical devices of various polymeric composition, reliably detecting and quantifying the trace levels of EtO residuals present (approximately 0.05 ppm EtO). SPME is a feasible alternative for quantifying trace-level EtO residuals in sterilized medical devices, lowering the limit of quantification (LOQ) by as much as two to three orders of magnitude over the current GC methodology of direct liquid injection.

  6. Reliability and Validity of a Survey of Cat Caregivers on Their Cats’ Socialization Level in the Cat’s Normal Environment

    Margaret Slater


    Full Text Available Stray cats routinely enter animal welfare organizations each year and shelters are challenged with determining the level of human socialization these cats may possess as quickly as possible. However, there is currently no standard process to guide this determination. This study describes the development and validation of a caregiver survey designed to be filled out by a cat’s caregiver so it accurately describes a cat’s personality, background, and full range of behavior with people when in its normal environment. The results from this survey provided the basis for a socialization score that ranged from unsocialized to well socialized with people. The quality of the survey was evaluated based on inter-rater and test-retest reliability, internal consistency, and estimates of construct and criterion validity. In general, our results showed moderate to high levels of inter-rater (median 0.803, range 0.211–0.957) and test-retest agreement (median 0.92, range 0.211–0.999). Cronbach’s alpha showed high internal consistency (0.962). Estimates of validity did not highlight any major shortcomings. This survey will be used to develop and validate an effective assessment process that accurately differentiates cats by their socialization levels towards humans based on direct observation of cats’ behavior in an animal shelter.
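
    Cronbach's alpha, used above for internal consistency, can be computed directly from the per-item variances and the variance of the total score. An illustrative sketch with hypothetical survey data (not the study's):

```python
import statistics as st

def cronbach_alpha(items: list) -> float:
    """items: list of per-item score lists, all over the same respondents."""
    k = len(items)
    item_vars = sum(st.variance(item) for item in items)       # sum of item variances
    totals = [sum(vals) for vals in zip(*items)]               # total score per respondent
    return k / (k - 1) * (1 - item_vars / st.variance(totals))

# Hypothetical 3-item socialization survey scored for 5 cats:
alpha = cronbach_alpha([[3, 4, 2, 5, 4],
                        [2, 4, 2, 5, 3],
                        [3, 5, 1, 4, 4]])
```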

  7. High mortality risk among individuals assumed to be TB-negative can be predicted using a simple test

    Rabna, Paulo; Andersen, Andreas; Wejse, Christian


    OBJECTIVES: To determine mortality among assumed TB-negative (aTBneg) individuals in Guinea-Bissau and to investigate whether plasma levels of the soluble urokinase plasminogen activator receptor (suPAR) can be used to determine post-consultation mortality risk. METHODS: This prospective West-African cohort study included...

  8. Inflection points for network reliability

    Brown, J.I.; Koç, Y.; Kooij, R.E.


    Given a finite, undirected graph G (possibly with multiple edges), we assume that the vertices are operational, but the edges are each independently operational with probability p. The (all-terminal) reliability, Rel(G,p), of G is the probability that the spanning subgraph of operational edges is connected.
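
    For small graphs, the all-terminal reliability Rel(G,p) can be evaluated exactly by enumerating the 2^|E| edge states. A brute-force sketch (the triangle example and probability are illustrative only; the general problem is #P-hard):

```python
from itertools import product

def all_terminal_reliability(n: int, edges: list, p: float) -> float:
    """Rel(G, p): probability that the operational edges leave G connected.
    Enumerates all 2^|E| edge states; tractable for small graphs only."""
    def connected(up):
        adj = {v: [] for v in range(n)}
        for u, v in up:
            adj[u].append(v); adj[v].append(u)
        seen, stack = {0}, [0]
        while stack:                       # DFS from vertex 0
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w); stack.append(w)
        return len(seen) == n

    rel = 0.0
    for states in product([True, False], repeat=len(edges)):
        up = [e for e, s in zip(edges, states) if s]
        prob = 1.0
        for s in states:
            prob *= p if s else (1 - p)
        if connected(up):
            rel += prob
    return rel

# Triangle on 3 nodes: Rel = p^3 + 3 p^2 (1 - p)
r = all_terminal_reliability(3, [(0, 1), (1, 2), (0, 2)], 0.9)
```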

  9. Reliability and Validity of the Therapy Intensity Level Scale: Analysis of Clinimetric Properties of a Novel Approach to Assess Management of Intracranial Pressure in Traumatic Brain Injury.

    Zuercher, Patrick; Groen, Justus L; Aries, Marcel J H; Steyerberg, Ewout W; Maas, Andrew I R; Ercole, Ari; Menon, David K


    We aimed to assess the reliability and validity of the Therapy Intensity Level scale (TIL) for intracranial pressure (ICP) management. We reviewed the medical records of 31 patients with traumatic brain injury (TBI) in two European intensive care units (ICUs). The ICP TIL was derived over a 4-day period for 4-h (TIL4) and 24-h epochs (TIL24). TIL scores were compared with historical schemes for TIL measurement, with each other, and with clinical variables. TIL24 scores in ICU patients with TBI were compared with two control groups: patients with extracranial trauma necessitating intensive care (Trauma_ICU; n = 20) and patients with TBI not needing ICU care (TBI_WARD; n = 19), to further determine the discriminative validity of the TIL for ICP-related ICU interventions. Interrater and intraobserver agreement were excellent for TIL4 and TIL24 (Cohen κ: 0.98-0.99; intraclass correlation coefficient: 0.99-1). The TIL is a reliable and valid measure of the intensity level of ICP management in patients with TBI.
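
    Cohen's κ, the chance-corrected agreement statistic reported above, is straightforward to compute from two raters' category assignments. A sketch with hypothetical ratings (not the study's data):

```python
from collections import Counter

def cohen_kappa(rater_a: list, rater_b: list) -> float:
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[c] * cb[c] for c in ca) / n ** 2  # chance agreement
    return (observed - expected) / (1 - expected)

# Hypothetical TIL category assignments by two independent raters:
k = cohen_kappa([0, 1, 2, 2, 3, 1, 0, 2],
                [0, 1, 2, 2, 3, 1, 1, 2])
```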

  10. Technical reliability of geological disposal for high-level radioactive wastes in Japan. The second progress report. Part 1. Geological environment of Japan



    Based on the Advisory Committee Report on Nuclear Fuel Cycle Backend Policy submitted to the Japanese Government in 1997, JNC documents the progress of research and development program in the form of the second progress report (the first one published in 1992). It summarizes an evaluation of the technical reliability and safety of the geological disposal concept for high-level radioactive wastes (HLW) in Japan. The present document, the part 1 of the progress report, describes first in detail the role of geological environment in high-level radioactive wastes disposal, the features of Japanese geological environment, and programs to proceed the investigation in geological environment. The following chapter summarizes scientific basis for possible existence of stable geological environment, stable for a long period needed for the HLW disposal in Japan including such natural phenomena as volcano and faults. The results of the investigation of the characteristics of bed-rocks and groundwater are presented. These are important for multiple barrier system construction of deep geological disposal. The report furthermore describes the present status of technical and methodological progress in investigating geological environment and finally on the results of natural analog study in Tono uranium deposits area. (Ohno, S.)

  11. Engineering evaluation of alternatives: Managing the assumed leak from single-shell Tank 241-T-101

    Brevick, C.H. [ICF Kaiser Hanford Co., Richland, WA (United States); Jenkins, C. [Westinghouse Hanford Co., Richland, WA (United States)


    At mid-year 1992, the liquid level gage for Tank 241-T-101 indicated that 6,000 to 9,000 gal had leaked. Because of the liquid level anomaly, Tank 241-T-101 was declared an assumed leaker on October 4, 1992. SST liquid level gages have historically been unreliable. False readings can occur because of instrument failures, floating salt cake, and salt encrustation. Gages frequently self-correct and tanks show no indication of leak. Tank levels cannot be visually inspected and verified because of high radiation fields. The gage in Tank 241-T-101 has largely corrected itself since the mid-year 1992 reading. Therefore, doubt exists that a leak has occurred, or that the magnitude of the leak poses any immediate environmental threat. While reluctance exists to use valuable DST space unnecessarily, there is a large safety and economic incentive to prevent or mitigate release of tank liquid waste into the surrounding environment. During the assessment of the significance of the Tank 241-T-101 liquid level gage readings, the Washington State Department of Ecology determined that Westinghouse Hanford Company was not in compliance with regulatory requirements, and directed transfer of the Tank 241-T-101 liquid contents into a DST. Meanwhile, DOE directed WHC to examine reasonable alternatives/options for safe interim management of Tank 241-T-101 wastes before taking action. The five alternatives that could be used to manage waste from a leaking SST are: (1) No-Action, (2) In-Tank Stabilization, (3) External Tank Stabilization, (4) Liquid Retrieval, and (5) Total Retrieval. The findings of these examinations are reported in this study.

  12. 76 FR 4933 - Environmental Review Procedures for Entities Assuming HUD Environmental Review Responsibilities...


    ... Responsibilities; Notice of Proposed Information Collection: Comment Request AGENCY: Office of the Assistant...: Environmental Review Procedures for Entities Assuming HUD Environmental Responsibilities. OMB Control...

  13. Reliability of the TekScan MatScan® system for the measurement of plantar forces and pressures during barefoot level walking in healthy adults

    Munteanu Shannon E; Menz Hylton B; Zammit Gerard V


    Abstract Background Plantar pressure systems are increasingly being used to evaluate foot function in both research settings and in clinical practice. The purpose of this study was to investigate the reliability of the TekScan MatScan® system in assessing plantar forces and pressures during barefoot level walking. Methods Thirty participants were assessed for the reliability of measurements taken one week apart for the variables maximum force, peak pressure and average pressure. The following...

  14. Technical reliability of geological disposal for high-level radioactive wastes in Japan. The second progress report. Part 2. Engineering technology for geological disposal



    Based on the Advisory Committee Report on Nuclear Fuel Cycle Backend Policy submitted to the Japanese Government in 1997, JNC documents the progress of research and development program in the form of the second progress report (the first one published in 1992). It summarizes an evaluation of the technical reliability and safety of the deep geological disposal concept for high-level radioactive wastes (HLW) in Japan. The present document, part 2 of the progress report, concerns engineering aspects with reference to the Japanese geological disposal plan, according to which the vitrified HLW will be disposed of into a deep, stable rock mass with thick containers and surrounding buffer materials at the depth of several hundred meters. It discusses multi-barrier systems consisting of a series of engineered and natural barriers that will isolate radioactive nuclides effectively and retard their migration to the biosphere. Performance of repository components, including specifications of containers for vitrified HLW and their overpacks under design as well as buffer material such as Japanese bentonite to be placed in between, is described, referring also to such possible problems as corrosion arising from the supposed system. It also presents plans and designs for underground disposal facilities, and the presumed management of the underground facilities. (Ohno, S.)

  15. Technical reliability of geological disposal for high-level radioactive wastes in Japan. The second progress report. Part 3. Safety assessment for geological disposal systems



    Based on the Advisory Committee Report on Nuclear Fuel Cycle Backend Policy submitted to the Japanese Government in 1997, JNC documents the progress of research and development program in the form of the second progress report (the first one published in 1992). It summarizes an evaluation of the technical reliability and safety of the geological disposal concept for high-level radioactive wastes (HLW) in Japan. The present document, the part 3 of the progress report, concerns safety assessment for the geological disposal systems introduced in Parts 1 and 2 of this series and consists of 9 chapters. Chapter I concerns the methodology for safety assessment, while Chapter II deals with diversity and uncertainty about the scenario, the adequate model and the required data of the systems above. Chapter III summarizes the components of the geological disposal system. Chapter IV refers to the relationship between radioactive wastes and human life through groundwater, i.e. nuclide migration. Chapter V constructs a reference case that characterizes the geological environment data and the engineered barrier specifications. (Ohno, S.)

  16. Pre-Service Teachers' Personal Epistemic Beliefs and the Beliefs They Assume Their Pupils to Have

    Rebmann, Karin; Schloemer, Tobias; Berding, Florian; Luttenberger, Silke; Paechter, Manuela


    In their workaday life, teachers are faced with multiple complex tasks. How they carry out these tasks is also influenced by their epistemic beliefs and the beliefs they assume their pupils hold. In an empirical study, pre-service teachers' epistemic beliefs and those they assume of their pupils were investigated in the setting of teacher…

  17. Microelectronics Reliability


    This report was cleared for public release. It covers testing for reliability prediction of devices exhibiting multiple failure mechanisms. Also presented was an integrated accelerating and measuring ... (Table 2: T, V, F matrix versus measured FIT).

  18. THM Coupled Modeling in Near Field of an Assumed HLW Deep Geological Disposal Repository

    Shen Zhenyao; Li Guoding; Li Shushen


    One of the most suitable disposal options under study for high-level radioactive waste (HLW) is isolation in deep geological repositories, so it is very important to study the thermo-hydro-mechanical (THM) coupled processes associated with an HLW disposal repository. Non-linear coupled equations, which describe the THM coupled process and are suited to saturated-unsaturated porous media, are presented in this paper. A numerical method for solving these equations is put forward, and a finite element code is developed, suited to plane-strain and axisymmetric problems. This code is then used to simulate the THM coupled process in the near field of an idealized disposal repository. The temperature vs. time, hydraulic head vs. time, and stress vs. time results show that, under the assumed conditions, the thermal impact is very long-lived (over 10 000 a) while the impact of the hydraulic head is short (about 90 d). Since the stress is induced by temperature and hydraulic head under these conditions, the impact time of the stress is the same as that of the temperature. The results show that THM coupled processes are very important in the safety analysis of an HLW deep geological disposal repository.

  19. Wetware, Hardware, or Software Incapacitation: Observational Methods to Determine When Autonomy Should Assume Control

    Trujillo, Anna C.; Gregory, Irene M.


    Control-theoretic modeling of human operators' dynamic behavior in manual control tasks has a long, rich history. There has been significant work on techniques used to identify the pilot model of a given structure. This research attempts to go beyond pilot identification based on experimental data to develop a predictor of pilot behavior. Two methods for predicting pilot stick input during changing aircraft dynamics and for deducing changes in pilot behavior are presented. This approach may also have the capability to detect a change in a subject due to workload, engagement, etc., or the effects of changes in vehicle dynamics on the pilot. With this ability to detect changes in piloting behavior, the possibility now exists to mediate adverse human behaviors, hardware failures, and software anomalies with autonomy that may ameliorate these undesirable effects. However, the appropriate timing of when autonomy should assume control depends on the criticality of actions to safety, the sensitivity of methods to accurately detect these adverse changes, and the effects of changes in levels of automation of the system as a whole.

  20. Artificially introduced aneuploid chromosomes assume a conserved position in colon cancer cells.

    Kundan Sengupta

    BACKGROUND: Chromosomal aneuploidy is a defining feature of carcinomas. For instance, in colon cancer, an additional copy of Chromosome 7 is not only observed in early pre-malignant polyps but is faithfully maintained throughout progression to metastasis. These copy number changes show a positive correlation with average transcript levels of resident genes. An independent line of research has also established that specific chromosomes occupy a well-conserved 3D position within the interphase nucleus. METHODOLOGY/PRINCIPAL FINDINGS: We investigated whether cancer-specific aneuploid chromosomes assume a 3D position similar to that of their endogenous homologues, which would suggest a possible correlation with transcriptional activity. Using 3D-FISH and confocal laser scanning microscopy, we show that Chromosomes 7, 18, or 19 introduced via microcell-mediated chromosome transfer into the parental diploid colon cancer cell line DLD-1 maintain their conserved position in the interphase nucleus. CONCLUSIONS: Our data are therefore consistent with the model that each chromosome has an associated zip code (possibly gene density) that determines its nuclear localization. Whether the nuclear localization determines or is determined by the transcriptional activity of resident genes has yet to be ascertained.

  1. A Concept Analysis: Assuming Responsibility for Self-Care among Adolescents with Type 1 Diabetes

    Hanna, Kathleen M.; Decker, Carol L.


    Purpose This concept analysis clarifies “assuming responsibility for self-care” by adolescents with type 1 diabetes. Methods Walker and Avant’s (2005) methodology guided the analysis. Results Assuming responsibility for self-care was defined as a process specific to diabetes within the context of development. It is daily, gradual, individualized to person, and unique to task. The goal is ownership that involves autonomy in behaviors and decision-making. Practice Implications Adolescents with type 1 diabetes need to be assessed for assuming responsibility for self-care. This achievement has implications for adolescents’ diabetes management, short- and long-term health, and psychosocial quality of life. PMID:20367781

  2. Differential responses of sexual and asexual Artemia to genotoxicity by a reference mutagen: Is the comet assay a reliable predictor of population level responses?

    Sukumaran, Sandhya; Grant, Alastair


    The impact of chronic genotoxicity on natural populations is often questioned because of their reproductive surplus. We used a comet assay to quantify primary DNA damage after exposure to the reference mutagen ethyl methane sulfonate in two species of crustacean with different reproductive strategies (the sexual Artemia franciscana and the asexual Artemia parthenogenetica). We then assessed whether this predicted individual performance and population growth rate over three generations. Artemia were exposed to different chronic concentrations (0.78 mM, 1.01 mM, 1.24 mM and 1.48 mM) of ethyl methane sulfonate from instar 1 onwards for 3 h, 24 h, 7 days, 14 days and 21 days, and percentage tail DNA values were used for comparisons between species. The percentage tail DNA values were consistently elevated up to 7 days and declined from 14 days onwards in A. franciscana, whereas in A. parthenogenetica such a reduction was evident only at the 21-day assessment. The percentage tail DNA values after 21 days were compared with population-level fitness parameters (growth, survival, fecundity and population growth rate) to determine whether primary DNA damage as measured by the comet assay is a reliable biomarker. A substantial increase in tail DNA values was associated with substantial reductions in all the fitness parameters in the parental generation of A. franciscana and in the parental, F1 and F2 generations of A. parthenogenetica, so comet results were more predictive over generations in the asexual species. These results point to the importance of linking biomarker responses to multigenerational consequences, taking life history traits and reproductive strategies into account in ecological risk assessments.

  3. Delivery Time Reliability Model of Logistics Network

    Liusan Wu; Qingmei Tan; Yuehui Zhang


    Natural disasters such as earthquakes and floods can destroy the existing traffic network, usually causing delivery delays or even network collapse. A logistics-network delivery time reliability model defined by a shortest-time entropy is proposed as a means of estimating the actual delivery time reliability: the smaller the entropy, the stronger the delivery time reliability, and vice versa. The shortest delivery time is computed separately based on two different assum...

  4. Preparing for Upheaval in North Korea: Assuming North Korean Regime Collapse


    Not only the defense agreement between North Korea and China but also pro-Chinese North Korean elites' requests for Chinese help are likely to justify Chinese... Master's thesis by Kwonwoo Kim, December 2013; thesis advisor: Wade Huntley, second reader: (not recorded). Report title: Preparing for Upheaval in North Korea: Assuming North Korean Regime Collapse.

  5. Phase field modeling of brittle fracture for enhanced assumed strain shells at large deformations: formulation and finite element implementation

    Reinoso, J.; Paggi, M.; Linder, C.


    Fracture of technological thin-walled components can notably limit the performance of their corresponding engineering systems. With the aim of achieving reliable fracture predictions of thin structures, this work presents a new phase field model of brittle fracture for large deformation analysis of shells relying on a mixed enhanced assumed strain (EAS) formulation. The kinematic description of the shell body is constructed according to the solid shell concept. This enables the use of fully three-dimensional constitutive models for the material. The proposed phase field formulation integrates the use of the (EAS) method to alleviate locking pathologies, especially Poisson thickness and volumetric locking. This technique is further combined with the assumed natural strain method to efficiently derive a locking-free solid shell element. On the computational side, a fully coupled monolithic framework is consistently formulated. Specific details regarding the corresponding finite element formulation and the main aspects associated with its implementation in the general purpose packages FEAP and ABAQUS are addressed. Finally, the applicability of the current strategy is demonstrated through several numerical examples involving different loading conditions, and including linear and nonlinear hyperelastic constitutive models.

  7. The Effects on Tsunami Hazard Assessment in Chile of Assuming Earthquake Scenarios with Spatially Uniform Slip

    Carvajal, Matías; Gubler, Alejandra


    We investigated the effect that the along-dip slip distribution has on near-shore tsunami amplitudes and on coastal land-level changes in the region of central Chile (29°-37°S). Here, and all along the Chilean megathrust, the seismogenic zone extends beneath dry land; thus, tsunami generation and propagation are limited to its seaward portion, where the sensitivity of the initial tsunami waveform to dislocation model inputs, such as slip distribution, is greater. We considered four distributions of earthquake slip in the dip direction: a spatially uniform slip source and three with typical bell-shaped slip patterns that differ in the depth range of slip concentration. We found that a uniform slip scenario predicts much lower tsunami amplitudes, and generally less coastal subsidence, than scenarios that assume bell-shaped distributions of slip. Although the finding that uniform slip scenarios underestimate tsunami amplitudes is not new, it has been largely ignored in tsunami hazard assessment in Chile. Our simulation results also suggest that uniform slip scenarios tend to predict later arrival times of the leading wave than bell-shaped sources. The timing of the largest wave at a specific site also depends on how the slip is distributed in the dip direction; however, other factors, such as local bathymetric configurations and standing edge waves, are also expected to play a role. Arrival time differences are especially critical in Chile, where tsunamis arrive earlier than elsewhere. We believe that the results of this study will be useful to both public and private organizations for mapping tsunami hazard in coastal areas along the Chilean coast and will therefore help reduce the risk of loss and damage caused by future tsunamis.

  8. Reliability of the information about the history of diagnosis and treatment of hypertension. Differences in regard to sex, age, and educational level. The Pró-Saúde Study

    Faerstein Eduardo


    OBJECTIVE: To assess the intraobserver reliability of information about the history of diagnosis and treatment of hypertension. METHODS: A multidimensional health questionnaire, filled out by the interviewees themselves, was administered twice, 2 weeks apart, in July 1999, to 192 employees of the University of the State of Rio de Janeiro (UERJ), stratified by sex, age, and educational level. The intraobserver reliability of the answers provided was estimated by the kappa statistic and by the coefficient of intraclass correlation (CICC). RESULTS: The general kappa (k) statistic was 0.75 (95% CI = 0.73-0.77). Reliability was higher among females (k = 0.88; 95% CI = 0.85-0.91) than among males (k = 0.62; 95% CI = 0.59-0.65), and higher among individuals 40 years of age or older (k = 0.79; 95% CI = 0.73-0.84) than among those aged 18 to 39 years (k = 0.52; 95% CI = 0.45-0.57). Finally, the kappa statistic was higher among individuals with a university education (k = 0.86; 95% CI = 0.81-0.91) than among those with a high school education (k = 0.61; 95% CI = 0.53-0.70) or a middle school education (k = 0.68; 95% CI = 0.64-0.72). The coefficient of intraclass correlation estimated from intraobserver agreement on age at the time of the diagnosis of hypertension was 0.74. Perfect agreement between the 2 answers (k = 1.00) was observed for the 22 interviewees who reported a prior prescription of antihypertensive medication. CONCLUSION: In the population studied, estimates of the reliability of the history of medical diagnosis of hypertension and its treatment ranged from substantial to almost perfect.
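    The kappa statistics quoted above compare observed test-retest agreement with the agreement expected by chance. A minimal sketch of the computation (the 2×2 agreement table below is hypothetical, not the study's data):

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table.

    Rows index the first administration's answers, columns the second's.
    """
    k = len(table)
    n = sum(sum(row) for row in table)
    observed = sum(table[i][i] for i in range(k)) / n           # raw agreement
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]
    expected = sum(row_tot[i] * col_tot[i] for i in range(k)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical yes/no question answered twice by 200 respondents:
# 80 yes/yes, 10 yes/no, 5 no/yes, 105 no/no.
print(round(cohens_kappa([[80, 10], [5, 105]]), 2))
```

    Values above roughly 0.61 are conventionally read as "substantial" and above 0.81 as "almost perfect" agreement, which is the scale the abstract's conclusion uses.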

  9. High reliability organizations

    Gallis, R.; Zwetsloot, G.I.J.M.


    High Reliability Organizations (HROs) are organizations that constantly face serious and complex (safety) risks yet succeed in realising an excellent safety performance. In such situations acceptable levels of safety cannot be achieved by traditional safety management alone. HROs manage safety

  10. RELAV (Reliability/Availability Analysis Program)


    Bowerman, P. N.


    RELAV (Reliability/Availability Analysis Program) is a comprehensive analytical tool to determine the reliability or availability of any general system which can be modeled as embedded k-out-of-n groups of items (components) and/or subgroups. Both ground and flight systems at NASA's Jet Propulsion Laboratory have utilized this program. RELAV can assess current system performance during the later testing phases of a system design, as well as model candidate designs/architectures or validate and form predictions during the early phases of a design. Systems are commonly modeled as System Block Diagrams (SBDs). RELAV calculates the success probability of each group of items and/or subgroups within the system assuming k-out-of-n operating rules apply for each group. The program operates on a folding basis; i.e. it works its way towards the system level from the most embedded level by folding related groups into single components. The entire folding process involves probabilities; therefore, availability problems are performed in terms of the probability of success, and reliability problems are performed for specific mission lengths. An enhanced cumulative binomial algorithm is used for groups where all probabilities are equal, while a fast algorithm based upon "Computing k-out-of-n System Reliability", Barlow & Heidtmann, IEEE TRANSACTIONS ON RELIABILITY, October 1984, is used for groups with unequal probabilities. Inputs to the program include a description of the system and any one of the following: 1) availabilities of the items, 2) mean time between failures and mean time to repairs for the items from which availabilities are calculated, 3) mean time between failures and mission length(s) from which reliabilities are calculated, or 4) failure rates and mission length(s) from which reliabilities are calculated. The results are probabilities of success of each group and the system in the given configuration. RELAV assumes exponential failure distributions for
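    The folding scheme the abstract describes can be sketched as follows. This is a hedged reconstruction of the two group-level algorithms it names (cumulative binomial for equal item probabilities, and a one-pass dynamic programme in the spirit of the cited Barlow & Heidtmann method for unequal ones), not RELAV's actual code:

```python
from math import comb

def k_out_of_n_equal(k, n, p):
    """P(at least k of n items succeed) when every item has success probability p."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

def k_out_of_n_unequal(k, probs):
    """P(at least k successes) for items with individual success probabilities.

    dp[j] holds P(exactly j successes so far), with j capped at k so that
    the top cell accumulates "k or more".
    """
    dp = [1.0] + [0.0] * k
    for p in probs:
        dp[k] += dp[k - 1] * p              # reaching k successes is absorbing
        for j in range(k - 1, 0, -1):
            dp[j] = dp[j] * (1 - p) + dp[j - 1] * p
        dp[0] *= 1 - p
    return dp[k]

# Folding: collapse each group to one probability, then combine upward.
# Here, a series system of a 2-of-3 group and a 1-of-2 group:
group_a = k_out_of_n_equal(2, 3, 0.9)
group_b = k_out_of_n_unequal(1, [0.95, 0.9])
print(group_a, group_b, group_a * group_b)
```

    Folding a whole SBD then repeats this collapse level by level until only the system-level probability remains.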

  11. Assuming a Pharmacy Organization Leadership Position: A Guide for Pharmacy Leaders.

    Shay, Blake; Weber, Robert J


    Important and influential pharmacy organization leadership positions, such as president, board member, or committee chair, are volunteer positions and require a commitment of personal and professional time. These positions provide excellent opportunities for leadership development, personal promotion, and advancement of the profession. In deciding to assume a leadership position, interested individuals must consider the impact on their personal and professional commitments and relationships, career planning, employer support, current and future department projects, employee support, and personal readiness. This article reviews these factors and also provides an assessment tool that leaders can use to determine their readiness to assume leadership positions. By using an assessment tool, pharmacy leaders can better understand their ability to assume an important and influential leadership position while achieving job and personal goals.

  12. Reliability Sensitivity Analysis for Location Scale Family

    洪东跑; 张海瑞


    Many products operate under various complex environmental conditions. To describe the dynamic influence of environmental factors on their reliability, a method of reliability sensitivity analysis is proposed. In this method, the location parameter is assumed to be a function of the relevant environmental variables, while the scale parameter is assumed to be an unknown positive constant. The location parameter function is then constructed using the method of radial basis functions. Using the varied-environment test data, the log-likelihood function is transformed into a generalized linear expression by describing the indicator as a Poisson variable. With the generalized linear model, the maximum likelihood estimates of the model coefficients are obtained, and with the reliability model, the reliability sensitivity is obtained. An instance analysis shows that the method is feasible for analyzing how reliability varies dynamically with environmental factors and is straightforward to apply in engineering.
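    A minimal sketch of the kind of model the abstract outlines, under stated assumptions: a normal lifetime distribution (the abstract does not name one), a single environment variable, and hypothetical RBF centers and weights. The generalized-linear-model fitting step is omitted, and the sensitivity is taken by finite differences:

```python
import math

def rbf_location(x, centers, weights, h=1.0):
    """Location parameter mu(x) as an RBF expansion over an environment variable
    (centers and weights are hypothetical; the paper's fitting step is omitted)."""
    return sum(w * math.exp(-((x - c) ** 2) / (2 * h * h))
               for c, w in zip(centers, weights))

def reliability(t, x, centers, weights, sigma, h=1.0):
    """R(t) = P(lifetime > t) for a normal location-scale model with
    environment-dependent location mu(x) and constant scale sigma."""
    z = (t - rbf_location(x, centers, weights, h)) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))    # 1 - Phi(z)

def sensitivity(t, x, centers, weights, sigma, h=1.0, eps=1e-6):
    """dR/dx by central difference: how reliability shifts with the environment."""
    hi = reliability(t, x + eps, centers, weights, sigma, h)
    lo = reliability(t, x - eps, centers, weights, sigma, h)
    return (hi - lo) / (2 * eps)
```

    With toy numbers (a single center at 0, weight 10, sigma 2), reliability at t = 10 falls as the environment variable moves away from the RBF center, and the finite-difference sensitivity quantifies that trend.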

  13. Grid reliability

    Saiz, P; Rocha, R; Andreeva, J


    We are offering a system to track the efficiency of different components of the GRID. We can study the performance of both the WMS and the data transfers. At the moment, we have set up different parts of the system for ALICE, ATLAS, CMS and LHCb. None of the components that we have developed are VO-specific, therefore it would be very easy to deploy them for any other VO. Our main goal is to improve the reliability of the GRID. The main idea is to discover the different problems as soon as possible after they happen and to inform those responsible. Since we study the jobs and transfers issued by real users, we see the same problems that users see. In fact, we see even more problems than the end user does, since we are also interested in following up the errors that GRID components can overcome by themselves (for instance, resubmitting a failed job to a different site). This kind of information is very useful to site and VO administrators. They can find out the efficien...


    WANG Jinyan; CHEN Jun; LI Minghui


    In this paper one-point quadrature "assumed strain" mixed element formulation based on the Hu-Washizu variational principle is presented. Special care is taken to avoid hourglass modes and volumetric locking as well as shear locking. The assumed strain fields are constructed so that those portions of the fields which lead to volumetric and shear locking phenomena are eliminated by projection, while the implementation of the proposed URI scheme is straightforward to suppress hourglass modes. In order to treat geometric nonlinearities simply and efficiently, a corotational coordinate system is used. Several numerical examples are given to demonstrate the performance of the suggested formulation, including nonlinear static/dynamic mechanical problems.

  15. Numerical Simulation of Wind Fields Calculated from Assumed Mode S Data Link Inputs.


    Numerical Simulation of Wind Fields Calculated From Assumed Mode S Data Link Inputs. Anthony Carro and R. Craig Goff. Prepared by the FAA Technical Center, Atlantic City Airport, N.J. 08405. Final report DOT/FAA/RD-81/100, January 1982.

  16. How Public High School Students Assume Cooperative Roles to Develop Their EFL Speaking Skills

    Julie Natalie Parra Espinel


    This study describes an investigation we carried out in order to identify how the specific roles that 7th-grade public school students assumed when they worked cooperatively were related to the development of their EFL speaking skills. Data were gathered through interviews, field notes, students' reflections, and audio recordings. The findings revealed that students who were involved in cooperative activities chose and assumed roles taking into account preferences, skills, and personality traits. Likewise, when learners worked together, their roles were affected by one another, and they put into practice some social strategies with the purpose of supporting their embryonic speaking development.

  17. Assumed white blood cell count of 8,000 cells/μL overestimates malaria parasite density in the Brazilian Amazon.

    Eduardo R Alves-Junior

    Quantification of parasite density is an important component in the diagnosis of malaria infection, and the accuracy of this estimation varies according to the method used. The aim of this study was to assess the agreement between the parasite density values obtained with the assumed WBC count of 8,000 cells/μL and those obtained with the automated WBC count. The same comparative analysis was carried out for other assumed WBC values. The study was carried out in Brazil with 403 malaria patients who were infected in different endemic areas of the Brazilian Amazon. The use of a fixed WBC count of 8,000 cells/μL to quantify parasite density in malaria patients led to overestimated parasitemia and resulted in low reliability when compared to the automated WBC count. Assumed values ranging between 5,000 and 6,000 cells/μL, and 5,500 cells/μL in particular, showed higher reliability and more similar values of parasite density when compared between the two methods. The findings show that an assumed WBC count of 5,500 cells/μL could lead to a more accurate estimation of parasite density for malaria patients in this endemic region.
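    The comparison rests on the standard thick-smear formula: parasites are counted against a fixed number of WBCs, then scaled by the WBC count per microlitre. A small sketch (the smear counts below are hypothetical):

```python
def parasite_density(parasites_counted, wbc_counted, wbc_per_ul):
    """Parasites per microlitre from a thick-smear count taken against WBCs."""
    return parasites_counted * wbc_per_ul / wbc_counted

# Hypothetical smear: 120 parasites counted against 200 WBCs.
print(parasite_density(120, 200, 8000))  # conventional assumed WBC count
print(parasite_density(120, 200, 5500))  # assumed count suggested by the study
```

    For the same smear, the 8,000 cells/μL convention yields a density about 1.45 times the one based on 5,500 cells/μL, which is the overestimation the abstract describes.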

  18. Photovoltaic module reliability workshop

    Mrig, L. (ed.)


    The papers and presentations compiled in this volume form the Proceedings of the fourth in a series of Workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986--1990. The reliability of photovoltaic (PV) modules/systems is exceedingly important, along with the initial cost and efficiency of modules, if PV technology is to make a major impact in the power generation market and compete with conventional electricity-producing technologies. The reliability of photovoltaic modules has progressed significantly in the last few years, as evidenced by warranties on commercial modules of as long as 12 years. However, substantial research and testing are still required to improve module field reliability to levels of 30 years or more. Several small groups of researchers are involved in this research, development, and monitoring activity around the world. In the US, PV manufacturers, DOE laboratories, electric utilities, and others are engaged in photovoltaic reliability research and testing. This group of researchers and others interested in this field were brought together under SERI/DOE sponsorship to exchange technical knowledge and field experience related to current work in this important field. The papers presented here reflect this effort.

  19. On the Estimation of Complex Speech DFT Coefficients Without Assuming Independent Real and Imaginary Parts

    Erkelens, J.S.; Hendriks, R.C.; Heusdens, R.


    This letter considers the estimation of speech signals contaminated by additive noise in the discrete Fourier transform (DFT) domain. Existing complex-DFT estimators assume independency of the real and imaginary parts of the speech DFT coefficients, although this is not in line with measurements. In

  20. Application of the Perturbation Method for Determination of Eigenvalues and Eigenvectors for the Assumed Static Strain

    Major Izabela


    The paper presents the perturbation method, which was used for computation of eigenvalues and eigenvectors for the assumed homogeneous state of strain in the hyperelastic Murnaghan material. The calculated values may be used to determine the rate of propagation of unit vectors of wave amplitude for other non-linear




    This study was performed because of observed differences between dye dilution cardiac output and the Fick cardiac output, calculated from estimated oxygen consumption according to LaFarge and Miettinen, and to find a better formula for assumed oxygen consumption. In 250 patients who underwent left a

  2. Partial sums of the Möbius function in arithmetic progressions assuming GRH

    Halupczok, Karin


    We consider Mertens' function M(x,q,a) in arithmetic progressions. Assuming the generalized Riemann hypothesis (GRH), we show an upper bound that is uniform for all moduli which are not too large. For the proof, a former method of K. Soundararajan is extended to L-series.
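    For modest x, M(x, q, a) can be computed directly by sieving the Möbius function and summing it over the progression. The sketch below illustrates the object being bounded (the GRH-conditional bound itself is analytic and not reproduced here):

```python
def mobius_sieve(limit):
    """Möbius function mu(1..limit) via a linear sieve."""
    mu = [1] * (limit + 1)
    is_comp = [False] * (limit + 1)
    primes = []
    for i in range(2, limit + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > limit:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0                # p**2 divides i*p: not squarefree
                break
            mu[i * p] = -mu[i]
    return mu

def mertens_progression(x, q, a):
    """M(x, q, a): sum of mu(n) over n <= x with n congruent to a mod q."""
    mu = mobius_sieve(x)
    return sum(mu[n] for n in range(1, x + 1) if n % q == a % q)

print(mertens_progression(10, 4, 1), mertens_progression(10, 4, 3))
```

    Taking q = 1 recovers the classical Mertens function M(x).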

  3. A Model for Teacher Effects from Longitudinal Data without Assuming Vertical Scaling

    Mariano, Louis T.; McCaffrey, Daniel F.; Lockwood, J. R.


    There is an increasing interest in using longitudinal measures of student achievement to estimate individual teacher effects. Current multivariate models assume each teacher has a single effect on student outcomes that persists undiminished to all future test administrations (complete persistence [CP]) or can diminish with time but remains…

  4. Study on reliability technology of contactor relay

    LIU Guo-jin; ZHAO Jing-ying; WANG Hai-tao; YANG Chen-guang; SUN Shun-li


    In this paper, the reliability of contactor relays is studied, in three main parts covering reliability testing and analysis. First, to characterize the reliability level of contactor relays, failure-rate ranks are established as an index based on the product level. Second, a reliability test method is put forward: the sampling plan for the reliability compliance test is derived from reliability sampling theory, and the failure criterion is determined according to the failure modes of the contactor relay. Third, after the reliability test experiment, a failure physics analysis is performed and the cause of failure is identified.

  5. Reliability practice

    Kuper, F.G.; Fan, X.J.


    The technology trends of Microelectronics and Microsystems are mainly characterized by miniaturization down to the nano-scale, increasing levels of system and function integration, and the introduction of new materials, while the business trends are mainly characterized by cost reduction, shorter-ti

  6. Structural Optimization with Reliability Constraints

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle


    During the last 25 years considerable progress has been made in the fields of structural optimization and structural reliability theory. In classical deterministic structural optimization all variables are assumed to be deterministic. Due to the unpredictability of loads and strengths of actual structures it is now widely accepted that structural problems are non-deterministic. Therefore, some of the variables have to be modelled as random variables/processes and a reliability-based design philosophy should be used; see Cornell [1], Moses [2], Ditlevsen [3] and Thoft-Christensen & Baker [4]. In this paper we consider only structures which can be modelled as systems of elasto-plastic elements, e.g. frame and truss structures. In section 2 a method to evaluate the reliability of such structural systems is presented. Based on a probabilistic point of view a modern structural optimization problem…


    GUZELBEY Ibrahim H.; KANBER Bahattin; AKPOLAT Abdullah


    In this study, the stress based finite element method is coupled with the boundary element method in two different ways. In the first one, the ordinary distribution matrix is used for coupling. In the second one, the stress traction equilibrium is used at the interface line of both regions as a new coupling process. This new coupling procedure is presented without a distribution matrix. Several case studies are solved for the validation of the developed coupling procedure. The results of case studies are compared with the distribution matrix coupling, displacement based finite element method, assumed stress finite element method, boundary element method, ANSYS and analytical results whenever possible. It is shown that the coupling of the stress traction equilibrium with assumed stress finite elements gives as accurate results as those by the distribution matrix coupling.

  8. Frontiers of reliability

    Basu, Asit P; Basu, Sujit K


    This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiments, and mixtures of Weibull

  9. Three-dimensional base isolation system for assumed FBR reactor building

    Tokuda, N.; Kashiwazaki, A.; Omata, I. [Ishikawajima-Harima Heavy Industries Co. Ltd., Yokohama (Japan); Ohnaka, T. [Yokohama Rubber Co. Ltd., Hiratsuka (Japan)


    A three-dimensional base isolation system for an assumed FBR reactor building is proposed, where a horizontally isolated building by laminated rubber bearings is supported by an intermediate slab which is vertically isolated by using air springs of high pressure. From some fundamental investigations on the above system, it is concluded that the system can be sufficiently practical by using the current industrially available techniques. (author). 4 refs., 6 figs., 1 tab.

  10. Reliability engineering theory and practice

    Birolini, Alessandro


    This book shows how to build in, evaluate, and demonstrate reliability and availability of components, equipment, and systems. It presents the state-of-the-art of reliability engineering, both in theory and practice, and is based on the author's more than 30 years' experience in this field, half in industry and half as Professor of Reliability Engineering at the ETH, Zurich. The structure of the book allows rapid access to practical results. This final edition extends and replaces all previous editions. New, in particular, are a strategy to mitigate incomplete coverage, a comprehensive introduction to human reliability with design guidelines and new models, and a refinement of reliability allocation, design guidelines for maintainability, and concepts related to regenerative stochastic processes. The set of problems for homework has been extended. Methods & tools are given in such a way that they can be tailored to cover different reliability requirement levels and be used for safety analysis. Because of the Appendice...

  11. The impact of assumed error variances on surface soil moisture and snow depth hydrologic data assimilation

    Accurate knowledge of antecedent soil moisture and snow depth conditions is often important for obtaining reliable hydrological simulations of stream flow. Data assimilation (DA) methods can be used to integrate remotely-sensed (RS) soil moisture and snow depth retrievals into a hydrology model and...

  12. Reliability of semiology description.

    Heo, Jae-Hyeok; Kim, Dong Wook; Lee, Seo-Young; Cho, Jinwhan; Lee, Sang-Kun; Nam, Hyunwoo


    Seizure semiology is important for classifying patients' epilepsy. Physicians usually get most of the seizure information from observers though there have been few reports on the reliability of the observers' description. This study aims at determining the reliability of observers' description of the semiology. We included 92 patients who had their habitual seizures recorded during video-EEG monitoring. We compared the semiology described by the observers with that recorded on the videotape, and reviewed which characteristics of the observers affected the reliability of their reported data. The classification of seizures and the individual components of the semiology based only on the observer-description was somewhat discordant compared with the findings from the videotape (correct classification, 85%). The descriptions of some ictal behaviors such as oroalimentary automatism, tonic/dystonic limb posturing, and head versions were relatively accurate, but those of motionless staring and hand automatism were less accurate. The specified directions by the observers were relatively correct. The accuracy of the description was related to the educational level of the observers. Much of the information described by well-educated observers is reliable. However, every physician should keep in mind the limitations of this information and use this information cautiously.

  13. On reliability optimization for power generation systems


    The reliability level of a power generation system is an important problem that concerns both electricity producers and electricity consumers. A high reliability level may result in additional utility cost, while a low reliability level may result in additional consumer cost, so the optimum reliability level should be determined such that the total cost reaches its minimum. Four optimization models for power generation system reliability are constructed, and proven efficient solutions for these models are also given.
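The trade-off described above can be sketched numerically; the cost functions and constants below are invented for illustration and are not taken from the paper's four models:

```python
# Hypothetical sketch: total cost of a generation system as the sum of a
# utility cost that grows with the reliability level and an outage
# (consumer) cost that falls with it; the optimum minimizes the total.
def total_cost(r, utility_scale=100.0, outage_scale=50.0):
    """r is the reliability level in (0, 1)."""
    utility_cost = utility_scale * r / (1.0 - r)   # rises steeply near r = 1
    consumer_cost = outage_scale * (1.0 - r) / r   # rises steeply near r = 0
    return utility_cost + consumer_cost

# Coarse grid search for the cost-minimizing reliability level.
levels = [i / 1000 for i in range(1, 1000)]
best = min(levels, key=total_cost)
```

With these particular cost shapes the minimum falls near r ≈ 0.41; any convex utility cost and convex outage cost produce the same qualitative picture the paper argues from.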

  14. Evaluation and Analysis on Reliability Level of Current Design Specification for Power Transmission Tower

    王松涛; 高斐略; 李正良; 范文亮; 徐彬


    In order to make a systematic and accurate evaluation of the reliability level of the design specification for power transmission tower structures, the reliability of the design expressions for the differently stressed components of the transmission tower was studied based on DL/T 5154-2002, Technical regulations for structure design of overhead power transmission line towers. Statistical parameters of the loads and resistance of tower components and their influencing laws were analyzed as well. Considering wind speed, structural importance factors, and the effect of the wind load return period, a refined evaluation of the reliability level of the design expressions was made. In addition, the reliability level of the current specification for towers in China was investigated.

  15. Federal and state management of inland wetlands: Are states ready to assume control?

    Glubiak, Peter G.; Nowka, Richard H.; Mitsch, William J.


    As inland wetlands face increasing pressure for development, both the federal government and individual states have begun reevaluating their respective wetland regulatory schemes. This article focuses first on the effectiveness of the past, present, and proposed federal regulations, most notably the Section 404, Dredge and Fill Permit Program, in dealing with shrinking wetland resources. The article then addresses the status of state involvement in this largely federal area, as well as state preparedness to assume primacy should federal priorities change. Finally, the subject of comprehensive legislation for wetland protection is investigated, and the article concludes with some procedural suggestions for developing a model law.

  16. Phase tuning in Michelson-Morley experiments performed in vacuum, assuming length contraction

    Levy, Joseph


    In agreement with Michelson-Morley experiments performed in vacuum, we show that, assuming the existence of a fundamental aether frame and of a length contraction affecting material bodies in the direction of the Earth's absolute velocity, the light signals travelling along the arms of the interferometer arrive in phase whatever their orientation, a result which answers an objection raised against the non-entrained aether theory. This result constitutes a strong argument in support of length contraction and of the existence of a model of aether non-entrained by the motion of celestial bodies.

  17. Delta-Reliability

    Eugster, P.; Guerraoui, R.; Kouznetsov, P.


    This paper presents a new, non-binary measure of the reliability of broadcast algorithms, called Delta-Reliability. This measure quantifies the reliability of practical broadcast algorithms that, on the one hand, were devised with some form of reliability in mind, but, on the other hand, are not considered reliable according to the ``traditional'' notion of broadcast reliability [HT94]. Our specification of Delta-Reliability suggests a further step towards bridging the gap between theory and...

  18. Reliability computation from reliability block diagrams

    Chelson, P. O.; Eckstein, E. Y.


    A computer program computes system reliability for a very general class of reliability block diagrams. Four factors are considered in calculating the probability of system success: active block redundancy, standby block redundancy, partial redundancy, and the presence of equivalent blocks in the diagram.
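The factors the program considers can be illustrated with a minimal sketch of block-diagram arithmetic; the function names and figures below are ours, not the program's:

```python
from math import comb

def series(blocks):
    """Series structure: all blocks must work, so multiply reliabilities."""
    r = 1.0
    for b in blocks:
        r *= b
    return r

def parallel(blocks):
    """Active redundancy: the system fails only if every block fails."""
    q = 1.0
    for b in blocks:
        q *= (1.0 - b)
    return 1.0 - q

def k_of_n(k, n, r):
    """Partial redundancy: at least k of n identical blocks must work."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

# Two 0.9-reliable blocks in series, backed by an identical parallel branch:
r_sys = parallel([series([0.9, 0.9]), series([0.9, 0.9])])
```

Standby redundancy, the remaining factor, additionally depends on switch reliability and the dormant failure rate, so it does not reduce to a one-line product like these.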

  19. Aseismic Slips Preceding Ruptures Assumed for Anomalous Seismicities and Crustal Deformations

    Ogata, Y.


    If aseismic slip occurs on a fault or its deeper extension, both seismicity and geodetic records around the source should be affected. Such anomalies are revealed to have occurred during the last several years leading up to the October 2004 Chuetsu Earthquake of M6.8, the March 2007 Noto Peninsula Earthquake of M6.9, and the July 2007 Chuetsu-Oki Earthquake of M6.8, which occurred successively in the near-field, central Japan. Seismic zones of negative and positive increments of the Coulomb failure stress, assuming such slips, show seismic quiescence and activation, respectively, relative to the rate predicted by the ETAS model. These are further supported by transient crustal movement around the source preceding the rupture. Namely, time series of the baseline distance records between a number of the permanent GPS stations deviated from the predicted trend, with a trend of different slope that is basically consistent with the horizontal displacements of the stations due to the assumed slips. References Ogata, Y. (2007) Seismicity and geodetic anomalies in a wide area preceding the Niigata-Ken-Chuetsu Earthquake of October 23, 2004, central Japan, J. Geophys. Res. 112, in press.

  20. Perceiving others' personalities: examining the dimensionality, assumed similarity to the self, and stability of perceiver effects.

    Srivastava, Sanjay; Guglielmo, Steve; Beer, Jennifer S


    In interpersonal perception, "perceiver effects" are tendencies of perceivers to see other people in a particular way. Two studies of naturalistic interactions examined perceiver effects for personality traits: seeing a typical other as sympathetic or quarrelsome, responsible or careless, and so forth. Several basic questions were addressed. First, are perceiver effects organized as a global evaluative halo, or do perceptions of different traits vary in distinct ways? Second, does assumed similarity (as evidenced by self-perceiver correlations) reflect broad evaluative consistency or trait-specific content? Third, are perceiver effects a manifestation of stable beliefs about the generalized other, or do they form in specific contexts as group-specific stereotypes? Findings indicated that perceiver effects were better described by a differentiated, multidimensional structure with both trait-specific content and a higher order global evaluation factor. Assumed similarity was at least partially attributable to trait-specific content, not just to broad evaluative similarity between self and others. Perceiver effects were correlated with gender and attachment style, but in newly formed groups, they became more stable over time, suggesting that they grew dynamically as group stereotypes. Implications for the interpretation of perceiver effects and for research on personality assessment and psychopathology are discussed.

  1. Reliability Assessment of Wind Turbines

    Sørensen, John Dalsgaard


    (and safe). In probabilistic design the single components are designed to a level of reliability, which accounts for an optimal balance between failure consequences, cost of operation & maintenance, material costs and the probability of failure. Furthermore, using a probabilistic design basis … but manufactured in series production based on many component tests, some prototype tests and zero-series wind turbines. These characteristics influence the reliability assessment where focus in this paper is on the structural components. Levelized Cost Of Energy is very important for wind energy, especially when … comparing to other energy sources. Therefore much focus is on cost reductions and improved reliability both for offshore and onshore wind turbines. The wind turbine components should be designed to have a sufficient reliability level with respect to both extreme and fatigue loads but also not be too costly …

  2. An Application-Level Semantic Reliable Multicast Architecture for the Internet

    谭焜; 史元春; 廖春元; 徐光祐


    Reliable multicast on the Internet has great application prospects but also faces challenges, mainly because of the heterogeneity of the Internet and because IP Multicast still cannot be deployed over wide areas. This paper proposes an application-level semantically reliable adaptive multicast architecture (application semantics reliable multicast, ASRM). ASRM is not built on IP Multicast; instead it uses a hybrid of IP unicast and multicast to implement multipoint data communication. ASRM names multicast sessions in a simple, natural, and more scalable way, and during forwarding it adapts data transmission according to transformation models, transformation rules, and user preferences, thereby addressing the heterogeneity problem. ASRM is suitable for small-scale reliable multicast applications over wide areas.

  3. Fast and reliable method for As speciation in urine samples containing low levels of As by LC-ICP-MS: Focus on epidemiological studies.

    Carioni, V M O; McElroy, J A; Guthrie, J M; Ngwenyama, R A; Brockman, J D


    The speciation analysis of As in urine samples can provide important information for epidemiological studies. Considering that these studies involve hundreds or thousands of samples, a fast and reliable method using a simple LC system with a short-length mixed-bed ion exchange chromatographic column coupled to ICP-MS for As speciation in human urine samples was developed in this work. Separation of AB+TMAO, DMA, AC, MMA and As(III)+As(V) was accomplished within 5 min with good resolution when ammonium carbonate solutions were used as mobile phases and H2O2 was added to samples to quantitatively convert As(III) to As(V). Repeatability studies yielded RSD values from 2.0% to 4.8% for all species evaluated. Limits of detection (LOD) for As species ranged from 0.003 to 0.051 ng g(-1). Application of the method to human urine samples from a non-contaminated area showed that the sum of species measured corresponded to 62-125% of the total As in the sample. The recovery values for these species in urine SRM 2669 were in the range of 89-112% and demonstrated the suitability of the proposed method for epidemiological studies.

  4. 42 CFR 137.292 - How do Self-Governance Tribes assume environmental responsibilities for construction projects...


    ... 42 Public Health 1 2010-10-01 2010-10-01 false How do Self-Governance Tribes assume environmental...-Governance Tribes assume environmental responsibilities for construction projects under section 509 of the Act ? Self-Governance Tribes assume environmental responsibilities by: (a) Adopting a resolution...

  5. Reliability-Based Code Calibration

    Faber, M.H.; Sørensen, John Dalsgaard


    The present paper addresses fundamental concepts of reliability based code calibration. First basic principles of structural reliability theory are introduced and it is shown how the results of FORM based reliability analysis may be related to partial safety factors and characteristic values. … Thereafter the code calibration problem is presented in its principal decision theoretical form and it is discussed how acceptable levels of failure probability (or target reliabilities) may be established. Furthermore suggested values for acceptable annual failure probabilities are given for ultimate … and serviceability limit states. Finally the paper describes the Joint Committee on Structural Safety (JCSS) recommended procedure - CodeCal - for the practical implementation of reliability based code calibration of LRFD based design codes.
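The link between a target reliability and an acceptable failure probability mentioned above follows the standard FORM relation Pf = Φ(−β), with Φ the standard normal CDF. A minimal sketch, with an illustrative β not taken from the paper:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def failure_probability(beta):
    """FORM relation: Pf = Phi(-beta) for reliability index beta."""
    return phi(-beta)

# An annual reliability index of 4.2 (an illustrative value) corresponds
# to a failure probability of roughly 1.3e-5 per year.
pf = failure_probability(4.2)
```

The same relation runs in reverse when a code committee fixes an acceptable Pf and back-calculates the target β used in calibration.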

  6. Interactive Reliability-Based Optimal Design

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle; Siemaszko, A.


    Interactive design/optimization of large, complex structural systems is considered. The objective function is assumed to model the expected costs. The constraints are reliability-based and/or related to deterministic code requirements. Solution of this optimization problem is divided in four main … be used in interactive optimization.

  7. An assumed pdf approach for the calculation of supersonic mixing layers

    Baurle, R. A.; Drummond, J. P.; Hassan, H. A.


    In an effort to predict the effect that turbulent mixing has on the extent of combustion, a one-equation turbulence model is added to an existing Navier-Stokes solver with finite-rate chemistry. To average the chemical-source terms appearing in the species-continuity equations, an assumed pdf approach is also used. This code was used to analyze the mixing and combustion caused by the mixing layer formed by supersonic coaxial H2-air streams. The chemistry model employed allows for the formation of H2O2 and HO2. Comparisons are made with recent measurements using laser Raman diagnostics. Comparisons include temperature and its rms, and concentrations of H2, O2, N2, H2O, and OH. In general, good agreement with experiment was noted.

  8. Analysis of an object assumed to contain “Red Mercury”

    Obhođaš, Jasmina; Sudac, Davorin; Blagus, Saša; Valković, Vladivoj


    After having been informed about an attempt of illicit trafficking, the Organized Crime Division of the Zagreb Police Authority confiscated in November 2003 a hand-sized metal cylinder suspected to contain "Red Mercury" (RM). The sample assumed to contain RM was analyzed with two nondestructive analytical methods in order to obtain information about the nature of the investigated object: activation analysis with 14.1 MeV neutrons and EDXRF analysis. The activation analysis with 14.1 MeV neutrons showed that the container and its contents were characterized by the following chemical elements: Hg, Fe, Cr and Ni. By using EDXRF analysis, it was shown that the elements Fe, Cr and Ni were constituents of the capsule. Therefore, it was concluded that these three elements were present in the capsule only, while the content of the unknown material was Hg. Antimony as a hypothetical component of red mercury was not detected.

  9. VLSI Reliability in Europe

    Verweij, Jan F.


    Several issues regarding VLSI reliability research in Europe are discussed. Organizations involved in stimulating the activities on reliability by exchanging information or supporting research programs are described. Within one such program, ESPRIT, a technical interest group on IC reliability was …

  10. Reliability, validity, and applicability of isolated and combined sport-specific tests of conditioning capacities in top-level junior water polo athletes.

    Uljevic, Ognjen; Esco, Michael R; Sekulic, Damir


    Standard testing procedures are of limited applicability in water sports, such as water polo. The aim of this investigation was to construct and validate methods for determining water polo-specific conditioning capacities. We constructed 4 combined-capacity tests that were designed to mimic real-game water polo performances: sprint swimming performance, shooting performance, jumping performance, and precision performance. In all cases, combined-capacity tests included a period of standardized exhaustion followed by the performance of the targeted quality (swimming, shooting, jumping, and precision). In the first part of the study, single-capacity tests (sprint swim, in-water jump, drive shoot, and precision performance) were tested and later included in the combined-capacity tests. Study subjects consisted of 54 young male water polo players (15-18 years of age, 185.6 ± 6.7 cm, and 83.1 ± 9.9 kg). Most of the tests evaluated were found to be reliable with Cronbach alpha values ranging from 0.83 to 0.96 and coefficients of variation from 21 to 2% (for the single-capacity tests) and 0.75 to 0.93 test-retest correlation (intraclass correlation coefficients) with Bland-Altman tight limits of agreement (for combined-capacity tests). The combined-capacity tests discriminated qualitative groups of junior water polo players (national squad vs. team athletes) more effectively than single-capacity tests. This is most likely because combined-capacity tests more closely represent the complex fitness capacities required in real game situations. Strength and conditioning practitioners and coaches working with water polo athletes should consider incorporating these validated tests into their assessment protocols.
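Cronbach alpha values like those reported above can be computed directly from a trials-by-items score matrix; a minimal sketch on synthetic data (the study's own scores are not reproduced here):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a list of rows (observations) over items."""
    n_items = len(scores[0])

    def var(xs):
        # Sample variance with n-1 denominator.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[j] for row in scores]) for j in range(n_items)]
    total_var = var([sum(row) for row in scores])
    return n_items / (n_items - 1) * (1 - sum(item_vars) / total_var)

# Four repeated trials of a three-item test (synthetic, strongly consistent).
data = [[10, 11, 10], [12, 13, 12], [8, 9, 9], [14, 15, 14]]
alpha = cronbach_alpha(data)
```

Values in the 0.83-0.96 band reported by the study indicate that most of the observed score variance is shared across items rather than item-specific noise.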

  11. Reliability Analysis of Wind Turbines

    Toft, Henrik Stensgaard; Sørensen, John Dalsgaard


    In order to minimise the total expected life-cycle costs of a wind turbine it is important to estimate the reliability level for all components in the wind turbine. This paper deals with reliability analysis for the tower and blades of onshore wind turbines placed in a wind farm. The limit states … considered are, in the ultimate limit state (ULS), extreme conditions in the standstill position and extreme conditions during operation. For wind turbines, where the magnitude of the loads is influenced by the control system, the ultimate limit state can occur in both cases. In the fatigue limit state (FLS) … the reliability level for a wind turbine placed in a wind farm is considered, and wake effects from neighbouring wind turbines are taken into account. An illustrative example with calculation of the reliability for mudline bending of the tower is considered. In the example the design is determined according...

  12. Reliability of Circumplex Axes

    Micha Strack


    We present a confirmatory factor analysis (CFA) procedure for computing the reliability of circumplex axes. The tau-equivalent CFA variance decomposition model estimates five variance components: general factor, axes, scale-specificity, block-specificity, and item-specificity. Only the axes variance component is used for reliability estimation. We apply the model to six circumplex types and 13 instruments assessing interpersonal and motivational constructs—Interpersonal Adjective List (IAL), Interpersonal Adjective Scales (revised; IAS-R), Inventory of Interpersonal Problems (IIP), Impact Messages Inventory (IMI), Circumplex Scales of Interpersonal Values (CSIV), Support Action Scale Circumplex (SAS-C), Interaction Problems With Animals (IPI-A), Team Role Circle (TRC), Competing Values Leadership Instrument (CV-LI), Love Styles, Organizational Culture Assessment Instrument (OCAI), Customer Orientation Circle (COC), and System for Multi-Level Observation of Groups (behavioral adjectives; SYMLOG)—in 17 German-speaking samples (29 subsamples, grouped by self-report, other report, and metaperception assessments). The general factor accounted for a proportion ranging from 1% to 48% of the item variance, the axes component for 2% to 30%, and scale-specificity for 1% to 28%, respectively. Reliability estimates varied considerably from .13 to .92. An application of the Nunnally and Bernstein formula proposed by Markey, Markey, and Tinsley overestimated axes reliabilities in cases of large scale-specificities but otherwise works effectively. Contemporary circumplex evaluations such as Tracey’s RANDALL are sensitive to the ratio of the axes and scale-specificity components. In contrast, the proposed model isolates both components.

  13. Electronic parts reliability data 1997

    Denson, William; Jaworski, Paul; Mahar, David


    This document contains reliability data on both commercial and military electronic components for use in reliability analyses. It contains failure rate data on integrated circuits, discrete semiconductors (diodes, transistors, optoelectronic devices), resistors, capacitors, and inductors/transformers, all of which were obtained from the field usage of electronic components. At 2,000 pages, the format of this document is the same as RIAC's popular NPRD document which contains reliability data on nonelectronic component and electronic assembly types. Data includes part descriptions, quality level, application environments, point estimates of failure rate, data sources, number of failures, total operating hours, miles, or cycles, and detailed part characteristics.
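The point estimates of failure rate that such a data book tabulates follow directly from field data; a sketch with invented numbers (real entries also carry quality level, environment, and part-characteristic qualifiers):

```python
def failure_rate(failures, operating_hours):
    """Point estimate: observed failures per operating hour."""
    return failures / operating_hours

def to_fits(rate_per_hour):
    """Convert to FITs, the customary unit of failures per 1e9 hours."""
    return rate_per_hour * 1e9

def mtbf(failures, operating_hours):
    """Mean time between failures, the reciprocal of the rate."""
    return operating_hours / failures

# Illustrative field record: 3 failures over 12 million part-hours.
rate = failure_rate(3, 1.2e7)
```

For entries with zero observed failures, handbooks instead quote an upper confidence bound (typically from the chi-square distribution) rather than the raw point estimate, which would be zero.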

  14. Tsunami Waveform Inversion without Assuming Fault Models- Application to Recent Three Earthquakes around Japan

    Namegaya, Y.; Ueno, T.; Satake, K.; Tanioka, Y.


    Tsunami waveform inversion is often used to study the source of tsunamigenic earthquakes. In this method, subsurface fault planes are divided into small subfaults, and the slip distribution, and hence the seafloor deformation, is estimated. However, it is sometimes difficult to judge the actual fault plane for offshore earthquakes such as those along the eastern margin of the Japan Sea. We developed an inversion method to estimate vertical seafloor deformation directly from observed tsunami waveforms. The tsunami source area is divided into many nodes, and the vertical seafloor deformation is calculated around each node by using B-spline functions. The tsunami waveforms are calculated from each node and used as the Green’s functions for inversion. To stabilize the inversion and avoid overestimation of data errors, we introduce smoothing equations like Laplace’s equations. The optimum smoothing strength is estimated from Akaike’s Bayesian information criterion (ABIC). An advantage of this method is that the vertical seafloor deformation can be estimated without assuming a fault plane. We applied the method to three recent earthquakes around Japan: the 2007 Chuetsu-oki, 2007 Noto Hanto, and 2003 Tokachi-oki earthquakes. The Chuetsu-oki earthquake (M6.8) occurred off the Japan Sea coast of central Japan on 16 July 2007. For this earthquake, the complicated aftershock distribution makes it difficult to judge whether the southeast-dipping fault or the northwest-dipping fault was the actual fault plane. The tsunami inversion result indicates that the uplifted area extends about 10 km from the coastline, and there are two peaks of uplift: about 40 cm in the south and about 20 cm in the north. The Noto Hanto earthquake (M6.9) occurred off the Noto peninsula, also along the Japan Sea coast of central Japan, on 25 March 2007. The inversion result indicates that the uplifted area extends about 10 km off the coast, and the largest uplift amount is more than 40 cm. Location of
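The inversion scheme described, nodal values constrained by Laplacian smoothing, is in essence a regularized least-squares problem. A schematic analogue with synthetic Green's functions and a fixed smoothing weight (the paper selects the weight via ABIC, which is not reproduced here):

```python
import numpy as np

# Solve G m = d for node values m, minimizing |G m - d|^2 + w^2 |L m|^2,
# where L is a second-difference (discrete Laplacian) smoother.
rng = np.random.default_rng(0)
n = 20
G = rng.normal(size=(30, n))                  # synthetic Green's functions
m_true = np.sin(np.linspace(0, np.pi, n))     # smooth "deformation" profile
d = G @ m_true + 0.01 * rng.normal(size=30)   # waveform data with noise

L = np.zeros((n - 2, n))
for i in range(n - 2):
    L[i, i:i + 3] = [1.0, -2.0, 1.0]          # 1 -2 1 second-difference stencil

w = 1.0                                        # fixed smoothing weight
A = np.vstack([G, w * L])                      # append smoothing equations
b = np.concatenate([d, np.zeros(n - 2)])
m_est, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Appending the weighted smoothing rows to the design matrix is exactly the "smoothing equations like Laplace's equations" device in the abstract: it penalizes rough node profiles without dictating a fault geometry.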

  15. Reliability and Security - Convergence or Divergence



    Reliability, like every technical field, must adapt to the new demands imposed by reality. Started initially as a field designed to control and ensure the smooth functionality of an element or technical system, reliability has reached the stage where the discussion is about reliability management, similar to the other top-level fields. Security has its own contribution to the reliability of a system; a reliable system is a system with reliable security. In order for a system to be reliable, that is, clear and safe, all its components must be reliable. In the following pages we will discuss the two main facts - reliability and security - to determine both the convergence and the divergence points.

  16. Radial diffusion in Saturn's radiation belts - A modeling analysis assuming satellite and ring E absorption

    Hood, L. L.


    A modeling analysis is carried out of six experimental phase space density profiles for nearly equatorially mirroring protons using methods based on the approach of Thomsen et al. (1977). The form of the time-averaged radial diffusion coefficient D(L) that gives an optimal fit to the experimental profiles is determined under the assumption that simple satellite plus Ring E absorption of inwardly diffusing particles and steady-state radial diffusion are the dominant physical processes affecting the proton data in the L range that is modeled. An extension of the single-satellite model employed by Thomsen et al. to a model that includes multisatellite and ring absorption is described, and the procedures adopted for estimating characteristic satellite and ring absorption times are defined. The results obtained in applying three representative solid-body absorption models to evaluate D(L) in the range where L is between 4 and 16 are reported, and a study is made of the sensitivity of the preferred amplitude and L dependence for D(L) to the assumed model parameters. The inferred form of D(L) is then compared with that which would be predicted if various proposed physical mechanisms for driving magnetospheric radial diffusion are operative at Saturn.
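The modeling approach described rests on the steady-state radial diffusion equation with solid-body losses; a standard form (the paper's exact formulation may differ in detail) is

```latex
0 \;=\; L^{2}\,\frac{\partial}{\partial L}\!\left(\frac{D(L)}{L^{2}}\,\frac{\partial f}{\partial L}\right)\;-\;\frac{f}{\tau(L)},
```

where $f$ is the phase space density, $D(L)$ the radial diffusion coefficient being fitted, and $\tau(L)$ the characteristic absorption lifetime due to the satellites and Ring E.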

  17. Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications

    Chaki, Sagar; Gurfinkel, Arie


    We develop a learning-based automated Assume-Guarantee (AG) reasoning framework for verifying omega-regular properties of concurrent systems. We study the applicability of non-circular (AG-NC) and circular (AG-C) AG proof rules in the context of systems with infinite behaviors. In particular, we show that AG-NC is incomplete when assumptions are restricted to strictly infinite behaviors, while AG-C remains complete. We present a general formalization, called LAG, of the learning-based automated AG paradigm. We show how existing approaches for automated AG reasoning are special instances of LAG. We develop two learning algorithms for a class of systems, called infinite regular systems, that combine finite and infinite behaviors. We show that for infinite regular systems, both AG-NC and AG-C are sound and complete. Finally, we show how to instantiate LAG to do automated AG reasoning for infinite regular, and omega-regular, systems using both AG-NC and AG-C as proof rules.

  18. Cardiovascular Responses during Head-Down Crooked Kneeling Position Assumed in Muslim Prayers

    Adamu Ahmad Rufa’i


    Background: Movement dysfunction may be expressed in terms of symptoms experienced in non-physiological postures, and head-down crooked kneeling (HDCK is a posture frequently assumed by Muslims during prayer activities. The purpose of this study was to investigate the cardiovascular responses in the HDCK posture. Methods: Seventy healthy volunteers, comprising 35 males and 35 females, participated in the study. Cardiovascular parameters of blood pressure and pulse rate of the participants were measured in rested sitting position and then at one and three minutes into the HDCK posture. Two-way ANOVA was used to determine the differences between cardiovascular responses at rest and in the HDCK posture, and the Student t test was utilized to determine gender difference in cardiovascular responses at rest and at one and three minutes into the HDCK posture. Results: The study showed a significant decrease in systolic and diastolic blood pressures at one minute into the HDCK posture and an increase in pulse rate at one and three minutes into the HDCK posture, as compared to the resting values. Rate pressure product also rose at one minute into the HDCK posture, whereas pulse pressure increased at one and three minutes into the HDCK posture, as compared with the resting values. However, no significant change was observed in the mean arterial pressure values. Conclusion: The findings from this study suggest that no adverse cardiovascular event can be expected to occur for the normal duration of this posture during Muslim prayer activities.

  19. Cardiovascular Responses during Head-Down Crooked Kneeling Position Assumed in Muslim Prayers

    Ahmad Rufa’i, Adamu; Hamu Aliyu, Hadeezah; Yunoos Oyeyemi, Adetoyeje; Lukman Oyeyemi, Adewale


    Background: Movement dysfunction may be expressed in terms of symptoms experienced in non-physiological postures, and head-down crooked kneeling (HDCK) is a posture frequently assumed by Muslims during prayer activities. The purpose of this study was to investigate the cardiovascular responses in the HDCK posture. Methods: Seventy healthy volunteers, comprising 35 males and 35 females, participated in the study. Cardiovascular parameters of blood pressure and pulse rate of the participants were measured in rested sitting position and then at one and three minutes into the HDCK posture. Two-way ANOVA was used to determine the differences between cardiovascular responses at rest and in the HDCK posture, and the Student t test was utilized to determine gender difference in cardiovascular responses at rest and at one and three minutes into the HDCK posture. Results: The study showed a significant decrease in systolic and diastolic blood pressures at one minute into the HDCK posture and an increase in pulse rate at one and three minutes into the HDCK posture, as compared to the resting values. Rate pressure product also rose at one minute into the HDCK posture, whereas pulse pressure increased at one and three minutes into the HDCK posture, as compared with the resting values. However, no significant change was observed in the mean arterial pressure values. Conclusion: The findings from this study suggest that no adverse cardiovascular event can be expected to occur for the normal duration of this posture during Muslim prayer activities. PMID:24031108

  20. Reliability Generalization: "Lapsus Linguae"

    Smith, Julie M.


    This study examines the proposed Reliability Generalization (RG) method for studying reliability. RG employs the application of meta-analytic techniques similar to those used in validity generalization studies to examine reliability coefficients. This study explains why RG does not provide a proper research method for the study of reliability,…

  1. Accelerated Wafer-Level Integrated Circuit Reliability Testing for Electromigration in Metal Interconnects with Enhanced Thermal Modeling, Structure Design, Control of Stress, and Experimental Measurements.

    Shih, Chih-Ching

    Wafer-level electromigration tests have been developed recently to fulfill the need for rapid testing in integrated circuit production facilities. We have developed an improved thermal model, TEARS (Thermal Energy Accounts for the Resistance of the System), that supports these tests. Our model is enhanced by treatments for determination of the thermal conductivity of metal, K_{m}, heat sinking effects of the voltage probes and current lead terminations, and thermoelectric power. Our TEARS analysis of multi-element SWEAT (Standard Wafer-level Electromigration Acceleration Test) structures yields design criteria for the length of current injection leads and choice of voltage probe locations to isolate test units from the heat sinking effect of current lead terminations. This also provides greater insight into the current for thermal runaway. From our TEARS model and Black's equation for lifetime prediction, we have developed an algorithm for fast and accurate control of stress in SWEAT tests. We have developed a lookup-table approach for precise electromigration characterization without complicated calculations. It determines the peak temperature in the metal, T_{max}, and the thermal conductivity of the insulator, K_{i}, from an experimental resistance measurement at a given current. We introduce a characteristic temperature, T_{EO}, which is much simpler to use than the conventional temperature coefficient of the electrical resistivity of metal for calibration and transfer of calibration data of metallic films as their own temperature sensors. The use of T_{EO} also allows us to establish system specifications for a desirable accuracy in temperature measurement. Our experimental results are the first to show the effects of series elemental SWEAT units on the system failure distribution, spatial failure distribution in SWEAT structures, and bimodal distributions for straight-line structures. The adaptive approach of our TEARS-based SWEAT test determines the value of Black

  2. Reliability analysis in intelligent machines

    Mcinroy, John E.; Saridis, George N.


    Given an explicit task to be executed, an intelligent machine must be able to find the probability of success, or reliability, of alternative control and sensing strategies. By using concepts from information theory and reliability theory, new techniques for finding the reliability corresponding to alternative subsets of control and sensing strategies are proposed such that a desired set of specifications can be satisfied. The analysis is straightforward, provided that a set of Gaussian random state variables is available. An example problem illustrates the technique, and general reliability results are presented for visual servoing with a computed torque-control algorithm. Moreover, the example illustrates the principle of increasing precision with decreasing intelligence at the execution level of an intelligent machine.

  3. Internal Structure and Mineralogy of Differentiated Asteroids Assuming Chondritic Bulk Composition: The Case of Vesta

    Toplis, M. J.; Mizzon, H.; Forni, O.; Monnereau, M.; Prettyman, T. H.; McSween, H. Y.; McCoy, T. J.; Mittlefehldt, D. W.; DeSanctis, M. C.; Raymond, C. A.; Russell, C. T.


    Bulk composition (including oxygen content) is a primary control on the internal structure and mineralogy of differentiated asteroids. For example, oxidation state will affect core size, as well as Mg# and pyroxene content of the silicate mantle. The Howardite-Eucrite-Diogenite class of meteorites (HED) provides an interesting test case of this idea, in particular in light of results of the Dawn mission which provide information on the size, density and differentiation state of Vesta, the parent body of the HEDs. In this work we explore plausible bulk compositions of Vesta and use mass-balance and geochemical modelling to predict possible internal structures and crust/mantle compositions and mineralogies. Models are constrained to be consistent with known HED samples, but the approach has the potential to extend predictions to thermodynamically plausible rock types that are not necessarily present in the HED collection. Nine chondritic bulk compositions are considered (CI, CV, CO, CM, H, L, LL, EH, EL). For each, relative proportions and densities of the core, mantle, and crust are quantified. Considering that the basaltic crust has the composition of the primitive eucrite Juvinas and assuming that this crust is in thermodynamic equilibrium with the residual mantle, it is possible to calculate how much iron is in metallic form (in the core) and how much in oxidized form (in the mantle and crust) for a given bulk composition. Of the nine bulk compositions tested, solutions corresponding to CI and LL groups predicted a negative metal fraction and were not considered further. Solutions for enstatite chondrites imply significant oxidation relative to the starting materials and these solutions too are considered unlikely. For the remaining bulk compositions, the relative proportion of crust to bulk silicate is typically in the range 15 to 20%, corresponding to crustal thicknesses of 15 to 20 km for a porosity-free Vesta-sized body.
The mantle is predicted to be largely

  4. CR reliability testing

    Honeyman-Buck, Janice C.; Rill, Lynn; Frost, Meryll M.; Staab, Edward V.


    The purpose of this work was to develop a method for systematically testing the reliability of a CR system under realistic daily loads in a non-clinical environment prior to its clinical adoption. Once digital imaging replaces film, it will be very difficult to revert back should the digital system become unreliable. Prior to the beginning of the test, a formal evaluation was performed to set the benchmarks for performance and functionality. A formal protocol was established that included all the 62 imaging plates in the inventory for each 24-hour period in the study. Imaging plates were exposed using different combinations of collimation, orientation, and SID. Anthropomorphic phantoms were used to acquire images of different sizes. Each combination was chosen randomly to simulate the differences that could occur in clinical practice. The tests were performed over a wide range of times with batches of plates processed to simulate the temporal constraints required by the nature of portable radiographs taken in the Intensive Care Unit (ICU). Current patient demographics were used for the test studies so automatic routing algorithms could be tested. During the test, only three minor reliability problems occurred, two of which were not directly related to the CR unit. One plate was discovered to cause a segmentation error that essentially reduced the image to only black and white with no gray levels. This plate was removed from the inventory to be replaced. Another problem was a PACS routing problem that occurred when the DICOM server with which the CR was communicating had a problem with disk space. The final problem was a network printing failure to the laser cameras. Although the units passed the reliability test, problems with interfacing to workstations were discovered. The two issues that were identified were the interpretation of what constitutes a study for CR and the construction of the look-up table for a proper gray scale display.

  5. Structural reliability codes for probabilistic design

    Ditlevsen, Ove Dalager


    difficulties of ambiguity and definition show up when attempting to make the transition from a given authorized partial safety factor code to a superior probabilistic code. For any chosen probabilistic code format there is a considerable variation of the reliability level over the set of structures defined...... considerable variation of the reliability measure as defined by a specific probabilistic code format. Decision theoretical principles are applied to get guidance about which of these different reliability levels of existing practice to choose as target reliability level. Moreover, it is shown that the chosen...... probabilistic code format has not only strong influence on the formal reliability measure, but also on the formal cost of failure to be associated if a design made to the target reliability level is considered to be optimal. In fact, the formal cost of failure can be different by several orders of size for two...

  6. Assuring reliability program effectiveness.

    Ball, L. W.


    An attempt is made to provide simple identification and description of techniques that have proved to be most useful either in developing a new product or in improving reliability of an established product. The first reliability task is obtaining and organizing parts failure rate data. Other tasks are parts screening, tabulation of general failure rates, preventive maintenance, prediction of new product reliability, and statistical demonstration of achieved reliability. Five principal tasks for improving reliability involve the physics of failure research, derating of internal stresses, control of external stresses, functional redundancy, and failure effects control. A final task is the training and motivation of reliability specialist engineers.

  7. The Accelerator Reliability Forum

    Lüdeke, Andreas; Giachino, R


    A high reliability is a very important goal for most particle accelerators. The biennial Accelerator Reliability Workshop covers topics related to the design and operation of particle accelerators with a high reliability. In order to optimize the overall reliability of an accelerator one needs to gather information on the reliability of many different subsystems. While a biennial workshop can serve as a platform for the exchange of such information, the authors aimed to provide a further channel to allow for a more timely communication: the Particle Accelerator Reliability Forum [1]. This contribution will describe the forum and advertise its use in the community.

  8. 25 CFR 224.64 - How may a tribe assume management of development of different types of energy resources?


    ... Requirements § 224.64 How may a tribe assume management of development of different types of energy resources... 25 Indians 1 2010-04-01 2010-04-01 false How may a tribe assume management of development of different types of energy resources? 224.64 Section 224.64 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT...

  9. Validation of Fick cardiac output calculated with assumed oxygen consumption : a study of cardiac output during epoprostenol

    Bergstra, A; van den Heuvel, A F M; Zijlstra, F; Berger, R M F; Mook, G A; van Veldhuisen, D J


    OBJECTIVE: To test the validity of using assumed oxygen consumption for Fick cardiac output during administration of epoprostenol. METHODS: In 24 consecutive patients Fick cardiac output calculated with assumed oxygen consumption according to LaFarge and Miettinen (COLM) and according to Bergstra et

  10. 42 CFR 137.291 - May Self-Governance Tribes carry out construction projects without assuming these Federal...


    ...-Governance Tribes carry out construction projects without assuming these Federal environmental... 42 Public Health 1 2010-10-01 2010-10-01 false May Self-Governance Tribes carry out construction projects without assuming these Federal environmental responsibilities? 137.291 Section 137.291...

  11. An Allocation Scheme for Estimating the Reliability of a Parallel-Series System

    Zohra Benkamra


    We give a hybrid two-stage design which can be useful to estimate the reliability of a parallel-series and/or, by duality, a series-parallel system. When the components' reliabilities are unknown, one can estimate them by sample means of Bernoulli observations. Let T be the total number of observations allowed for the system. When T is fixed, we show that the variance of the system reliability estimate can be lowered by allocation of the sample size T at the components' level. This leads to a discrete optimization problem which can be solved sequentially, assuming T is large enough. First-order asymptotic optimality is proved systematically and validated by Monte Carlo simulation.
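A minimal sketch of the kind of estimator this record describes, assuming independent Bernoulli trials per component; the system layout, component reliabilities, and sample allocation below are invented for illustration:

```python
import random

random.seed(0)

# Hypothetical parallel-series system: two parallel blocks in series.
# True component reliabilities, used here only to simulate Bernoulli trials.
true_p = [[0.9, 0.8], [0.7, 0.85, 0.6]]

def estimate_system_reliability(allocation):
    """Estimate R = prod over blocks of (1 - prod(1 - p_hat)) from
    Bernoulli samples, with allocation[b][c] trials per component."""
    r = 1.0
    for block, n_block in zip(true_p, allocation):
        q = 1.0  # probability that every component in the parallel block fails
        for p, n in zip(block, n_block):
            successes = sum(random.random() < p for _ in range(n))
            q *= 1.0 - successes / n
        r *= 1.0 - q
    return r

# Total budget T = 200 split evenly here; the paper's point is that a
# smarter component-level split reduces the variance of this estimate.
est = estimate_system_reliability([[40, 40], [40, 40, 40]])
print(round(est, 3))
```

The allocation argument is where the paper's sequential optimization would act: the target quantity is unchanged, only the variance of `est` depends on the split.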

  12. Enlightenment on Computer Network Reliability From Transportation Network Reliability

    Hu Wenjun; Zhou Xizhao


    Referring to the transportation network reliability problem, five new computer network reliability definitions are proposed and discussed. They are computer network connectivity reliability, computer network time reliability, computer network capacity reliability, computer network behavior reliability and computer network potential reliability. Finally, strategies are suggested to enhance network reliability.

  13. Supply chain reliability modelling

    Eugen Zaitsev


    Background: Today it is virtually impossible to operate alone on the international level in the logistics business. This promotes the establishment and development of new integrated business entities - logistic operators. However, such cooperation within a supply chain creates also many problems related to the supply chain reliability as well as the optimization of the supplies planning. The aim of this paper was to develop and formulate the mathematical model and algorithms to find the optimum plan of supplies by using economic criterion and the model for the probability evaluating of non-failure operation of supply chain. Methods: The mathematical model and algorithms to find the optimum plan of supplies were developed and formulated by using economic criterion and the model for the probability evaluating of non-failure operation of supply chain. Results and conclusions: The problem of ensuring failure-free performance of goods supply channel analyzed in the paper is characteristic of distributed network systems that make active use of business process outsourcing technologies. The complex planning problem occurring in such systems that requires taking into account the consumer's requirements for failure-free performance in terms of supply volumes and correctness can be reduced to a relatively simple linear programming problem through logical analysis of the structures. The sequence of the operations, which should be taken into account during the process of the supply planning with the supplier's functional reliability, was presented.

  14. A Method for Reviewing the Accuracy and Reliability of a Five-Level Triage Process (Canadian Triage and Acuity Scale) in a Community Emergency Department Setting: Building the Crowding Measurement Infrastructure

    Michael K. Howlett


    Objectives. Triage data are widely used to evaluate patient flow, disease severity, and emergency department (ED) workload, factors used in ED crowding evaluation and management. We defined an indicator-based methodology that can be easily used to review the accuracy of Canadian Triage and Acuity Scale (CTAS) performance. Methods. A trained nurse reviewer (NR) retrospectively triaged two separate months’ ED charts relative to a set of clinical indicators based on CTAS Chief Complaints. Interobserver reliability and accuracy were compared using Kappa and comparative statistics. Results. There were 2838 patients in Trial 1 and 3091 in Trial 2. The rate of inconsistent triage was 14% and 16% (Kappa 0.596 and 0.604). Clinical indicators “pain scale, chest pain, musculoskeletal injury, respiratory illness, and headache” captured 68% and 62% of visits. Conclusions. We have demonstrated a system to measure the levels of process accuracy and reliability for triage over time. We identified five key clinical indicators which captured over 60% of visits. A simple method for quality review uses a small set of indicators, capturing a majority of cases. Performance consistency and data collection using indicators may be important areas to direct training efforts.

  15. Fatigue reliability and calibration of fatigue design factors of wave energy converters

    Ambühl, Simon; Ferri, Francesco; Kofoed, Jens Peter


    Target reliability levels, which are chosen dependent on the consequences in case of structural collapse, are used in this paper to calibrate partial safety factors for structural details of wave energy converters (WECs). The consequences in case of structural failure are similar for WECs...... and offshore wind turbines (no fatalities, low environmental pollution). Therefore, it can be assumed that the target reliability levels for WEC applications can be adopted from offshore wind turbine studies. The partial safety factors cannot be directly transferred from offshore wind turbines because the load...... is considered in order to extend and maintain a certain target safety level. This paper uses the Wavestar prototype located at Hanstholm (DK) as a case study in order to calibrate FDFs for welded and bolted details in steel structures of an offshore bottom-fixed WEC with hydraulic floaters....

  16. Human Reliability Program Overview

    Bodin, Michael


    This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

  17. Power electronics reliability analysis.

    Smith, Mark A.; Atcitty, Stanley


    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
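The second approach mentioned in this record, deriving system reliability from component reliability through a fault tree, can be sketched as follows, assuming independent component failures; the gate structure and probabilities are hypothetical, not taken from the report:

```python
# Minimal fault-tree evaluation: an OR gate fails if any input fails,
# an AND gate fails only if all inputs fail (redundancy).

def or_gate(*p_fail):
    """Failure probability of an OR gate: 1 - prod(1 - p_i)."""
    q = 1.0
    for p in p_fail:
        q *= 1.0 - p
    return 1.0 - q

def and_gate(*p_fail):
    """Failure probability of an AND gate: prod(p_i)."""
    q = 1.0
    for p in p_fail:
        q *= p
    return q

# Hypothetical converter: the system fails if the controller fails OR
# both redundant switching legs fail within the mission time.
p_controller = 0.02
p_leg = 0.05
p_system = or_gate(p_controller, and_gate(p_leg, p_leg))
print(round(p_system, 5))  # 0.02245
```

Nesting these two gate functions reproduces the usual bottom-up fault-tree evaluation; field-data-driven metrics would instead estimate each `p_fail` input directly from maintenance records.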

  18. Reliability Assessment of Wind Turbines

    Sørensen, John Dalsgaard


    Wind turbines can be considered as structures that are in between civil engineering structures and machines since they consist of structural components and many electrical and machine components together with a control system. Further, a wind turbine is not a one-of-a-kind structure...... but manufactured in series production based on many component tests, some prototype tests and zeroseries wind turbines. These characteristics influence the reliability assessment where focus in this paper is on the structural components. Levelized Cost Of Energy is very important for wind energy, especially when...... comparing to other energy sources. Therefore much focus is on cost reductions and improved reliability both for offshore and onshore wind turbines. The wind turbine components should be designed to have sufficient reliability level with respect to both extreme and fatigue loads but also not be too costly...

  20. Reliability Assessment of Wave Energy Devices

    Ambühl, Simon; Kramer, Morten; Kofoed, Jens Peter


    Energy from waves may play a key role in sustainable electricity production in the future. Optimal reliability levels for components used for Wave Energy Devices (WEDs) need to be defined to be able to decrease their cost of electricity. Optimal reliability levels can be found using probabilistic...

  1. Viking Lander reliability program

    Pilny, M. J.


    The Viking Lander reliability program is reviewed with attention given to the development of the reliability program requirements, reliability program management, documents evaluation, failure modes evaluation, production variation control, failure reporting and correction, and the parts program. Lander hardware failures which have occurred during the mission are listed.

  2. Radioiodination of monoclonal antibodies, proteins and peptides for diagnosis and therapy. A review of standardized, reliable and safe procedures for clinical grade levels kBq to GBq in the Goettingen/Marburg experience

    Behr, Th.M.; Gotthardt, M.; Behe, M. [Marburg Univ. (Germany). Dept. of Nuclear Medicine; Becker, W. [Goettingen Univ. (Germany). Dept. of Nuclear Medicine


    Simple and reliable methodologies for radioiodination of proteins and peptides are described. The labeling systems are easy to assemble, capable of radioiodinating any protein or, with slight modifications, also peptides (molecular mass 1000-300,000) from kBq to GBq levels of activity for use in diagnosis and/or therapy. Furthermore, the procedures are feasible in any nuclear medicine department. Gigabecquerel amounts of activity can be handled safely. The most favored iodination methodology relies on the Iodogen system, a mild oxidizing agent without reducing agents. Thus, protein degradation is minimized. Labeling yields are between 60 and 90%, and immunoreactivities remain ≥85%. Other radioiodination methods (chloramine-T, Bolton-Hunter) are described and briefly discussed. (orig.)

  3. Reliability based fatigue design and maintenance procedures

    Hanagud, S.


    A stochastic model has been developed that describes the fatigue process probabilistically by assuming a varying hazard rate. The model can be used to obtain the probability of a crack of a certain length at a given location after a certain number of cycles or a certain time. Quantitative estimation with the developed model is also discussed. Application of the model to develop a procedure for reliability-based, cost-effective fail-safe structural design is presented. This design procedure includes the reliability improvement due to inspection and repair. Methods of obtaining optimum inspection and maintenance schemes are treated.
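A varying hazard rate translates into a failure probability through the cumulative hazard, F(t) = 1 - exp(-∫₀ᵗ h(u) du). A small numeric sketch; the power-law (Weibull-type) hazard and its parameters are assumed for illustration, not taken from the paper:

```python
import math

def failure_probability(t, beta=2.0, eta=1e5):
    """F(t) for an increasing hazard h(u) = (beta/eta) * (u/eta)**(beta - 1).
    The cumulative hazard integrates in closed form to (t/eta)**beta."""
    return 1.0 - math.exp(-((t / eta) ** beta))

# Probability of a fatigue crack by 20,000 and 50,000 load cycles:
for cycles in (2e4, 5e4):
    print(int(cycles), round(failure_probability(cycles), 4))
```

With beta > 1 the hazard grows with accumulated cycles, which is the qualitative behavior a fatigue model needs; inspection and repair would reset or reduce the accumulated hazard.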

  4. Disjoint sum forms in reliability theory

    B. Anrig


    The structure function f of a binary monotone system is assumed to be known and given in a disjunctive normal form, i.e. as the logical union of products of the indicator variables of the states of its subsystems. Based on this representation of f, an improved Abraham algorithm is proposed for generating the disjoint sum form of f. This form is the base for subsequent numerical reliability calculations. The approach is generalized to multivalued systems. Examples are discussed.
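For context, the number that the disjoint sum form ultimately delivers can be checked by brute force on a small system: enumerate all component states and weight the structure function given in disjunctive normal form. The min-path sets and component reliabilities below are invented; an Abraham-style disjoint sum form yields the same value without enumeration:

```python
from itertools import product

# DNF of a bridge-like system: each product is a set of components
# that, when all working, make the system work (min-paths).
paths = [{1, 2}, {3, 4}, {1, 5, 4}, {3, 5, 2}]
p = {1: 0.9, 2: 0.9, 3: 0.8, 4: 0.8, 5: 0.7}  # component reliabilities

def system_reliability(paths, p):
    """Exact reliability: sum, over all component states, of the state
    probability whenever the structure function (union of the path
    indicators) evaluates to 1."""
    comps = sorted(p)
    r = 0.0
    for states in product([0, 1], repeat=len(comps)):
        up = {c for c, s in zip(comps, states) if s}
        if any(path <= up for path in paths):  # some min-path fully up
            prob = 1.0
            for c, s in zip(comps, states):
                prob *= p[c] if s else 1.0 - p[c]
            r += prob
    return r

print(round(system_reliability(paths, p), 5))  # 0.95176
```

The enumeration costs 2^n state evaluations, which is exactly why a disjoint sum form, whose terms can be summed directly, matters for larger systems.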

  5. Self-other agreement and assumed similarity in neuroticism, extraversion, and trait affect: distinguishing the effects of form and content.

    Beer, Andrew; Watson, David; McDade-Montez, Elizabeth


    Trait Negative Affect (NA) and Positive Affect (PA) are strongly associated with Neuroticism and Extraversion, respectively. Nevertheless, measures of the former tend to show substantially weaker self-other agreement, and stronger assumed similarity correlations, than scales assessing the latter. The current study separated the effects of item content versus format on agreement and assumed similarity using two different sets of Neuroticism and Extraversion measures and two different indicators of NA and PA (N = 381 newlyweds). Neuroticism and Extraversion consistently showed stronger agreement than NA and PA; in addition, however, scales with more elaborated items yielded significantly higher agreement correlations than those based on single adjectives. Conversely, the trait affect scales yielded stronger assumed similarity correlations than the personality scales; these coefficients were strongest for the adjectival measures of trait affect. Thus, our data establish a significant role for both content and format in assumed similarity and self-other agreement.

  6. The SE sector of the Middle Weichselian Eurasian Ice Sheet was much smaller than assumed

    Räsänen, Matti E.; Huitti, Janne V.; Bhattarai, Saroj; Harvey, Jerry; Huttunen, Sanna


    Quaternary climatic and glacial history must be known in order to understand future environments. Reconstructions of the last Weichselian glacial cycle 117,000-11,700 years (kyr) ago propose that S Finland, adjacent Russia and the Baltic countries in the SE sector of the Eurasian Ice Sheet (EIS) were glaciated during the Middle Weichselian time [marine isotope stage (MIS) 4, 71-57 kyr ago] and that this glaciation was preceded in S Finland by an Early Weichselian interstadial (MIS 5c, 105-93 kyr ago) with pine forest. We apply glacial sequence stratigraphy to isolated Late Pleistocene onshore outcrop sections and show that these events did not take place. The one Late Weichselian glaciation (MIS 2, 29-11 kyr ago) was preceded in S Finland by a nearly 90 kyr non-glacial period, featuring tundra with permafrost and probably birch forest. Our new Middle Weichselian paleoenvironmental scenario revises the configuration and hydrology of the S part of the EIS and gives a new setting for the evolution of Scandinavian biota. If future development during the coming glacial cycle proves to be similar, the high-level nuclear waste stored in the bedrock of SW Finland should be located deeper than currently planned, i.e. below any possible future permafrost.

  7. Reliability and radiation effects in compound semiconductors

    Johnston, Allan


    This book discusses reliability and radiation effects in compound semiconductors, which have evolved rapidly during the last 15 years. Johnston's perspective in the book focuses on high-reliability applications in space, but his discussion of reliability is applicable to high reliability terrestrial applications as well. The book is important because there are new reliability mechanisms present in compound semiconductors that have produced a great deal of confusion. They are complex, and appear to be major stumbling blocks in the application of these types of devices. Many of the reliability problems that were prominent research topics five to ten years ago have been solved, and the reliability of many of these devices has been improved to the level where they can be used for ten years or more with low failure rates. There is also considerable confusion about the way that space radiation affects compound semiconductors. Some optoelectronic devices are so sensitive to damage in space that they are very difficu...

  8. Measuring reliability under epistemic uncertainty:Review on non-probabilistic reliability metrics

    Kang Rui; Zhang Qingyuan; Zeng Zhiguo; Enrico Zio; Li Xiaoyang


    In this paper, a systematic review of non-probabilistic reliability metrics is conducted to assist the selection of appropriate reliability metrics to model the influence of epistemic uncertainty. Five frequently used non-probabilistic reliability metrics are critically reviewed, i.e., evidence-theory-based reliability metrics, interval-analysis-based reliability metrics, fuzzy-interval-analysis-based reliability metrics, possibility-theory-based reliability metrics (posbist reliability) and uncertainty-theory-based reliability metrics (belief reliability). It is pointed out that a qualified reliability metric that is able to consider the effect of epistemic uncertainty needs to (1) compensate for the conservatism in the estimations of the component-level reliability metrics caused by epistemic uncertainty, and (2) satisfy the duality axiom; otherwise it might lead to paradoxical and confusing results in engineering applications. The five commonly used non-probabilistic reliability metrics are compared in terms of these two properties, and the comparison can serve as a basis for the selection of the appropriate reliability metrics.

  9. Reliability Based Ship Structural Design

    Dogliani, M.; Østergaard, C.; Parmentier, G.;


    with developments of models of load effects and of structural collapse adopted in reliability formulations which aim at calibrating partial safety factors for ship structural design. New probabilistic models of still-water load effects are developed both for tankers and for containerships. New results are presented......This paper deals with the development of different methods that allow the reliability-based design of ship structures to be transferred from the area of research to systematic application in current design. It summarises the achievements of a three-year collaborative research project dealing...... structure of several tankers and containerships. The results of the reliability analysis were the basis for the definition of a target safety level, which was used to assess the partial safety factors suitable for a new design rule format to be adopted in modern ship structural design. Finally...

  10. New Approaches to Reliability Assessment

    Ma, Ke; Wang, Huai; Blaabjerg, Frede


    of energy. New approaches for reliability assessment are being taken in the design phase of power electronics systems based on the physics-of-failure in components. In this approach, many new methods, such as multidisciplinary simulation tools, strength testing of components, translation of mission profiles......Power electronics are facing continuous pressure to be cheaper and smaller, have a higher power density, and, in some cases, also operate at higher temperatures. At the same time, power electronics products are expected to have reduced failures because it is essential for reducing the cost......, and statistical analysis, are involved to enable better prediction and design of reliability for products. This article gives an overview of the new design flow in the reliability engineering of power electronics from the system-level point of view and discusses some of the emerging needs for the technology...

  12. Novel approach for evaluation of service reliability for electricity customers

    KANG ChongQing; GAO Yan; JIANG John N; ZHONG Jin; XIA Qing


    Understanding the value of reliability for electricity customers is important to market-based reliability management. This paper proposes a novel approach to evaluate the reliability for electricity customers by using an indifference curve between economic compensation for power interruption and service reliability of electricity. The indifference curve is formed by calculating different planning schemes of network expansion for different reliability requirements of customers, which reveals information about the economic values of different reliability levels for electricity customers, so that reliability based on a market supply-demand mechanism can be established and economic signals can be provided for reliability management and enhancement.

  13. Reliability of fixed and jack-up structures: a comparative study

    Morandi, A.C.; Frieze, P.A. [MSL Engineering, Sunninghill, Ascot (United Kingdom); Birkinshaw, M. [Health and Safety Executive, London (United Kingdom); Smith, D.; Dixon, A.T. [Health and Safety Executive, Bootle (United Kingdom)


    This paper presents results of a comparison between the structural reliability of a jacket designed to the limit of API RP 2A-LRFD and that of a jack-up designed to the limit of the SNAME T and R Bulletin 5-5A. Both platforms were assumed to be operating in the same location and at the same water depth when evaluating metocean and geotechnical data. Component strength was evaluated on the basis of the latest ISO formulations, and system strength was evaluated by pushover analysis using CAP/SeaStar. Reliability was evaluated using the response surface method. It was found that, for both platforms, the failure probability at system level was about an order of magnitude smaller than at component level. The jack-up critical failure probabilities tended to be about an order of magnitude greater than the corresponding jacket results. (Author)

  14. Reliability and safety engineering

    Verma, Ajit Kumar; Karanki, Durga Rao


    Reliability and safety are core issues that must be addressed throughout the life cycle of engineering systems. Reliability and Safety Engineering presents an overview of the basic concepts, together with simple and practical illustrations. The authors present reliability terminology used in various engineering fields, viz., electronics engineering, software engineering, mechanical engineering, structural engineering and power systems engineering. The book describes the latest applications in the area of probabilistic safety assessment, such as technical specification optimization, risk monitoring and risk-informed in-service inspection. Reliability and safety studies must, inevitably, deal with uncertainty, so the book includes uncertainty propagation methods: Monte Carlo simulation, fuzzy arithmetic, Dempster-Shafer theory and probability bounds. Reliability and Safety Engineering also highlights advances in system reliability and safety assessment, including dynamic system modeling and uncertainty management. Cas...

  15. Optimal Reliability-Based Code Calibration

    Sørensen, John Dalsgaard; Kroon, I. B.; Faber, M. H.


    Calibration of partial safety factors is considered in general, including classes of structures where no code exists beforehand. The partial safety factors are determined such that the difference between the reliability for the different structures in the class considered and a target reliability...... level is minimized. Code calibration on a decision theoretical basis is also considered and it is shown how target reliability indices can be calibrated. Results from code calibration for rubble mound breakwater designs are shown....

  16. Estimation of the Reliability of Distributed Applications

    Marian Pompiliu CRISTESCU; Laurentiu CIOVICA


    In this paper reliability is presented as an important feature for use in mission-critical distributed applications. Certain aspects of distributed systems make the requested level of reliability more difficult to achieve. An obvious benefit of distributed systems is that they serve the global business and social environment in which we live and work. Another benefit is that they can improve the quality of services, in terms of reliability, availability and performance, for complex systems. The ...

  17. Measurement System Reliability Assessment

    Kłos Ryszard


    Full Text Available Decision-making in problem situations is based on up-to-date and reliable information. A great deal of information is subject to rapid change; hence it may be outdated or manipulated, enforcing erroneous decisions. It is crucial to be able to assess the obtained information, and in order to ensure its reliability it is best to obtain it through one's own measurement process. In such a case, assessing the reliability of the measurement system is crucial. The article describes a general approach to assessing the reliability of measurement systems.

  18. Reliable knowledge discovery

    Dai, Honghua; Smirnov, Evgueni


    Reliable Knowledge Discovery focuses on theory, methods, and techniques for RKDD, a new sub-field of KDD. It studies the theory and methods to assure the reliability and trustworthiness of discovered knowledge and to maintain the stability and consistency of knowledge discovery processes. RKDD has a broad spectrum of applications, especially in critical domains like medicine, finance, and military. Reliable Knowledge Discovery also presents methods and techniques for designing robust knowledge-discovery processes. Approaches to assessing the reliability of the discovered knowledge are introduc

  19. Reliability of fluid systems

    Kopáček Jaroslav


    Full Text Available This paper focuses on the importance of determining reliability, especially in complex fluid systems for demanding production technology. The initial criterion for assessing reliability is the failure of an object (element), which is seen as a random variable whose data (values) can be processed using the mathematical methods of probability theory and statistics. The basic indicators of reliability are defined, together with their application in calculations for serial, parallel and backed-up systems. For illustration, calculation examples of reliability indicators are given for various elements of the system and for a selected pneumatic circuit.
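The serial and parallel calculations mentioned in this abstract reduce to products of component reliabilities; a minimal sketch under the usual independence assumption (the component values below are hypothetical illustration numbers, not taken from the paper):

```python
# Reliability of serial and parallel arrangements of independent components.
# Component reliability values here are hypothetical illustration numbers.

def series_reliability(rs):
    """All components must work: R = R1 * R2 * ... * Rn."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel_reliability(rs):
    """System fails only if every component fails: R = 1 - (1-R1)*...*(1-Rn)."""
    fail = 1.0
    for r in rs:
        fail *= (1.0 - r)
    return 1.0 - fail

components = [0.95, 0.99, 0.90]
print(series_reliability(components))    # lower than the weakest component
print(parallel_reliability(components))  # higher than the strongest component
```

A series chain is always weaker than its weakest link, while redundancy (the parallel, backed-up case) raises reliability above the best single component.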

  20. Circuit design for reliability

    Cao, Yu; Wirth, Gilson


    This book presents physical understanding, modeling and simulation, on-chip characterization, layout solutions, and design techniques that are effective to enhance the reliability of various circuit units.  The authors provide readers with techniques for state of the art and future technologies, ranging from technology modeling, fault detection and analysis, circuit hardening, and reliability management. Provides comprehensive review on various reliability mechanisms at sub-45nm nodes; Describes practical modeling and characterization techniques for reliability; Includes thorough presentation of robust design techniques for major VLSI design units; Promotes physical understanding with first-principle simulations.

  1. Bayesian system reliability assessment under fuzzy environments

    Wu, H.-C


    Bayesian system reliability assessment under fuzzy environments is proposed in this paper. In order to apply the Bayesian approach, the fuzzy parameters are assumed to be fuzzy random variables with fuzzy prior distributions. The (conventional) Bayes estimation method is used to create the fuzzy Bayes point estimator of system reliability by invoking the well-known 'Resolution Identity' theorem of fuzzy set theory. We also provide computational procedures to evaluate the membership degree of any given Bayes point estimate of system reliability. To achieve this, we transform the original problem into a nonlinear programming problem, which is then divided into four subproblems to simplify computation. Finally, the subproblems can be solved using any commercial optimizer, e.g. GAMS or LINGO.




    Full Text Available Reliability is the probability that a system, component or device will perform without failure for a specified period of time under specified operating conditions. The concept of reliability is of great importance in the design of various machine members. Conventional engineering design uses a deterministic approach; it disregards the fact that material properties, the dimensions of the components and the externally applied loads are statistical in nature. In conventional design these uncertainties are covered with a factor of safety, which is not always successful. The growing trend towards reducing uncertainty and increasing reliability is to use the probabilistic approach. In the present work, a three-shaft four-speed gear box and a six-speed gear box are designed using reliability principles. For the specified reliability of the system (gear box), the component reliability (gear pair) is calculated by considering the system as a series system. The design is considered safe and adequate if the probability of failure of the gear box is less than or equal to a specified quantity in each of the two failure modes. All the parameters affecting the design are considered as random variables, and all the random variables are assumed to follow a normal distribution. A computer program in C++ is developed to calculate the face widths in the bending and surface failure modes; the larger of the two values is considered. By changing the variations in the design parameters, variations in the face widths are studied.
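The series-system allocation described above (deriving a gear-pair reliability target from a specified gear-box reliability) can be sketched in a few lines; the target value and component count below are hypothetical, not the paper's numbers:

```python
# Allocating a component reliability target from a system target for a
# series system of n identical components (e.g. gear pairs in a gear box).
# The target reliability and component count are hypothetical illustrations.

def component_target(system_reliability, n):
    """In a series system R_sys = R_c**n, so R_c = R_sys**(1/n)."""
    return system_reliability ** (1.0 / n)

R_sys = 0.99   # specified gear-box reliability (assumed)
n = 4          # number of gear pairs treated as a series system (assumed)
R_c = component_target(R_sys, n)
print(round(R_c, 5))  # each component must be more reliable than the system
```

This makes explicit why each gear pair must be designed to a stricter reliability than the gear box as a whole: the component targets multiply back down to the system target.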

  3. Inversion assuming weak scattering

    Xenaki, Angeliki; Gerstoft, Peter; Mosegaard, Klaus


    The study of weak scattering from inhomogeneous media or interface roughness has long been of interest in sonar applications. In an acoustic backscattering model of a stationary field of volume inhomogeneities, a stochastic description of the field is more useful than a deterministic description...... due to the complex nature of the field. A method based on linear inversion is employed to infer information about the statistical properties of the scattering field from the obtained cross-spectral matrix. A synthetic example based on an active high-frequency sonar demonstrates that the proposed...

  4. Structural reliability of existing city bridges

    Hellebrandt, L.; Steenbergen, R.; Vrouwenvelder, T.; Blom, K.


    Full probabilistic reliability analysis may be valuable for assessing existing structures. Measures for increasing the safety level are quite costly for existing structures and may be unnecessary when such a decision is grounded on a conservative analysis for determining the structural reliability.

  5. Principles of Bridge Reliability

    Thoft-Christensen, Palle; Nowak, Andrzej S.

    The paper gives a brief introduction to the basic principles of structural reliability theory and its application to bridge engineering. Fundamental concepts like failure probability and reliability index are introduced. Ultimate as well as serviceability limit states for bridges are formulated...

  6. Improving machinery reliability

    Bloch, Heinz P


    This totally revised, updated and expanded edition provides proven techniques and procedures that extend machinery life, reduce maintenance costs, and achieve optimum machinery reliability. This essential text clearly describes the reliability improvement and failure avoidance steps practiced by best-of-class process plants in the U.S. and Europe.

  7. Hawaii Electric System Reliability

    Loose, Verne William [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Silva Monroy, Cesar Augusto [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)


    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers’ views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers’ views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  9. Chapter 9: Reliability

    Algora, Carlos; Espinet-Gonzalez, Pilar; Vazquez, Manuel; Bosco, Nick; Miller, David; Kurtz, Sarah; Rubio, Francisca; McConnell, Robert


    This chapter describes the accumulated knowledge on CPV reliability, with its fundamentals and qualification. It explains the reliability of solar cells, modules (including optics) and plants. The chapter discusses the relevant statistical distributions, namely the exponential, normal and Weibull distributions. The treatment of solar cell reliability covers the issues in accelerated aging tests of CPV solar cells, types of failure, and failures in real-time operation. The chapter explores accelerated life tests, namely qualitative life tests (mainly HALT) and quantitative accelerated life tests (QALT). It examines other well-proven and well-characterized PV cells and/or semiconductor devices which share similar semiconductor materials, manufacturing techniques or operating conditions, namely III-V space solar cells and light-emitting diodes (LEDs). It addresses each of the identified reliability issues and presents the current state-of-the-art knowledge for their testing and evaluation. Finally, the chapter summarizes the CPV qualification and reliability standards.
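The exponential and Weibull distributions mentioned above have closed-form reliability (survival) functions; a minimal sketch with hypothetical parameter values:

```python
import math

# Reliability (survival) functions for two of the distributions named above.
# All parameter values below are hypothetical illustration numbers.

def exponential_R(t, lam):
    """R(t) = exp(-lambda*t); constant hazard rate lambda."""
    return math.exp(-lam * t)

def weibull_R(t, beta, eta):
    """R(t) = exp(-(t/eta)**beta); beta<1 infant mortality, beta>1 wear-out."""
    return math.exp(-((t / eta) ** beta))

print(exponential_R(1000.0, 1e-4))    # e.g. lambda = 1e-4 failures per hour
print(weibull_R(1000.0, 2.0, 2000.0)) # shape beta = 2, scale eta = 2000 h
```

The Weibull shape parameter beta is what distinguishes infant-mortality, random, and wear-out failure regimes, which is why it is the standard choice in accelerated life testing.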


    Muhammad Ismail


    Full Text Available The present research article deals with an economically efficient, reliability-based group acceptance sampling plan for time-truncated tests based on the total number of failures, assuming that the lifetime of a product follows the family of Pareto distributions. The plan is proposed for the case where multiple products can be observed simultaneously as a group in a tester. The minimum termination time required for a given group size and acceptance number is determined such that the producer and consumer risks are satisfied for a specified quality level, while the number of groups and the number of testers are pre-assumed. Comparison studies are made between the proposed plan and the existing plan on the basis of minimum termination time. Two real examples are also discussed.

  11. Impact of 5% NaCl Salt Spray Pretreatment on the Long-Term Reliability of Wafer-Level Packages with Sn-Pb and Sn-Ag-Cu Solder Interconnects

    Liu, Bo; Lee, Tae-Kyu; Liu, Kuo-Chuan


    Understanding the sensitivity of Pb-free solder joint reliability to various environmental conditions, such as corrosive gases, low temperatures, and high-humidity environments, is a critical topic in the deployment of Pb-free products in various markets and applications. The work reported herein concerns the impact of a marine environment on Sn-Pb and Sn-Ag-Cu interconnects. Both Sn-Pb and Sn-Ag-Cu solder alloy wafer-level packages, with and without pretreatment by 5% NaCl salt spray, were thermally cycled to failure. The salt spray test did not reduce the characteristic lifetime of the Sn-Pb solder joints, but it did reduce the lifetime of the Sn-Ag-Cu solder joints by over 43%. Although both materials showed strong resistance to corrosion, the localized nature of the corroded area at critical locations in the solder joint caused significant degradation in the Sn-Ag-Cu solder joints. The mechanisms leading to these results as well as the extent, microstructural evolution, and dependency of the solder alloy degradation are discussed.

  12. Solid State Lighting Reliability Components to Systems

    Fan, XJ


    Solid State Lighting Reliability: Components to Systems begins with an explanation of the major benefits of solid state lighting (SSL) when compared to conventional lighting systems, including but not limited to long useful lifetimes of 50,000 (or more) hours and high efficacy. When designing effective devices that take advantage of SSL capabilities, the reliability of internal components (optics, drive electronics, controls, thermal design) takes on critical importance. As such, a detailed discussion of reliability is included, from performance at the device level to sub-components, as well as the integrated systems of SSL modules, lamps and luminaires, including various failure modes, reliability testing and reliability performance. This book also: Covers the essential reliability theories and practices for current and future development of Solid State Lighting components and systems; Provides a systematic overview of not only the state of the art, but also the future roadmap and perspectives of Solid State Lighting r...

  13. Defect distribution and reliability assessment of wind turbine blades

    Stensgaard Toft, Henrik; Branner, Kim; Berring, Peter


    In this paper, two stochastic models for the distribution of defects in wind turbine blades are proposed. The first model assumes that the individual defects are completely randomly distributed in the blade. The second model assumes that the defects occur in clusters of different size, based...... on the assumption that one error in the production process tends to trigger several defects. For both models, additional information, such as number, type, and size of the defects, is included as stochastic variables. In a numerical example, the reliability is estimated for a generic wind turbine blade model both...... the reliability for the wind turbine blade using Bayesian statistics....


    Adrian Stere PARIS


    Full Text Available Mechanical reliability uses many statistical tools to find the factors of influence and their levels in the optimization of parameters on the basis of experimental data. Design of Experiments (DOE) techniques enable designers to determine simultaneously the individual and interactive effects of many factors that could affect the output results in any design. The state of the art in the domain implies extended use of software and basic mathematical knowledge, mainly applying ANOVA and the regression analysis of experimental data.

  15. Statistical Degradation Models for Reliability Analysis in Non-Destructive Testing

    Chetvertakova, E. S.; Chimitova, E. V.


    In this paper, we consider the application of statistical degradation models for reliability analysis in non-destructive testing. Such models make it possible to estimate the reliability function (the dependence of the non-failure probability on time) for a fixed critical level, using information from the degradation paths of tested items. The most widely used models are the gamma and Wiener degradation models, in which the gamma or normal distribution, respectively, is assumed for the degradation increments. Using computer simulation, we have analysed the accuracy of the reliability estimates obtained for the considered models. The number of increments can be enlarged by increasing the sample size (the number of tested items) or by increasing the frequency of measuring degradation. It has been shown that the sample size has a greater influence on the accuracy of the reliability estimates than the measuring frequency. Moreover, another important factor influencing the accuracy of reliability estimation is the duration of observation of the degradation process.
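A simplified sketch of the Wiener degradation model described above: with normal increments, accumulated degradation satisfies D(t) ~ N(mu*t, sigma^2*t), so reliability at a critical level can be approximated from the marginal distribution at time t. (The exact first-passage law for a Wiener process is inverse Gaussian; this marginal approximation and all parameter values are assumptions for illustration only.)

```python
import math

# Approximate reliability under a Wiener degradation model:
# degradation D(t) ~ Normal(mu*t, sigma**2 * t), failure when D(t) >= z_crit.
# We approximate R(t) = P(D(t) < z_crit) from the marginal law at time t.
# All parameter values are hypothetical.

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def reliability(t, mu, sigma, z_crit):
    """P(degradation at time t is still below the critical level)."""
    return normal_cdf((z_crit - mu * t) / (sigma * math.sqrt(t)))

mu, sigma, z_crit = 0.02, 0.05, 1.0   # drift, diffusion, critical level
for t in (10.0, 40.0, 60.0):
    print(t, round(reliability(t, mu, sigma, z_crit), 4))
```

Reliability decreases with time because the mean degradation mu*t drifts toward the critical level while the spread sigma*sqrt(t) grows.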

  16. Reliability of single-item ratings of quality in higher education: a replication.

    Ginns, Paul; Barrie, Simon


    Single-item ratings of the quality of instructors or subjects are widely used by higher education institutions, yet such ratings are commonly assumed to have inadequate psychometric properties. Recent research has demonstrated that reliability of such ratings can indeed be estimated, using either the correction for attenuation formula or factor analytic methods. This study replicates prior research on the reliability of single-item ratings of quality of instruction, using a different, more student-focussed approach to teaching and learning evaluation than used by previous researchers. Class average data from 1,097 classes, representing responses from 59,815 students, were analysed. At the "class" level of analysis, both methods of estimation suggested the single item of quality had high reliability: .96 using the correction for attenuation formula, and .94 using the factor analytic method. An alternative method of calculating reliability, which takes into account the hierarchical nature of the data, likewise suggested high estimated reliability (.92) of the single-item rating. These results indicate the suitability of the overall class rating for quality improvement in higher education, with a large sample.

  17. 9 CFR 72.15 - Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all...


    ....15 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE INTERSTATE TRANSPORTATION OF ANIMALS (INCLUDING POULTRY) AND ANIMAL PRODUCTS TEXAS (SPLENETIC) FEVER IN... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Owners assume responsibility;...

  18. 42 CFR 137.286 - Do Self-Governance Tribes become Federal agencies when they assume these Federal environmental...


    ... Self-Governance Tribes are required to assume Federal environmental responsibilities for projects in... performing these Federal environmental responsibilities, Self-Governance Tribes will be considered the... 42 Public Health 1 2010-10-01 2010-10-01 false Do Self-Governance Tribes become Federal...

  19. 42 CFR 137.300 - Since Federal environmental responsibilities are new responsibilities, which may be assumed by...


    ... Federal environmental responsibilities assumed by the Self-Governance Tribe. ... 42 Public Health 1 2010-10-01 2010-10-01 false Since Federal environmental responsibilities are... additional funds available to Self-Governance Tribes to carry out these formerly inherently...

  20. Photovoltaic system reliability

    Maish, A.B.; Atcitty, C. [Sandia National Labs., NM (United States)]; Greenberg, D. [Ascension Technology, Inc., Lincoln Center, MA (United States)]; and others


    This paper discusses the reliability of several photovoltaic projects including SMUD's PV Pioneer project, various projects monitored by Ascension Technology, and the Colorado Parks project. System times-to-failure range from 1 to 16 years, and maintenance costs range from 1 to 16 cents per kilowatt-hour. Factors contributing to the reliability of these systems are discussed, and practices are recommended that can be applied to future projects. This paper also discusses the methodology used to collect and analyze PV system reliability data.

  1. Structural Reliability Methods

    Ditlevsen, Ove Dalager; Madsen, H. O.

    The structural reliability methods quantitatively treat the uncertainty of predicting the behaviour and properties of a structure given the uncertain properties of its geometry, materials, and the actions it is supposed to withstand. This book addresses the probabilistic methods for evaluation...... of structural reliability, including the theoretical basis for these methods. Partial safety factor codes under current practice are briefly introduced and discussed. A probabilistic code format for obtaining a formal reliability evaluation system that catches the most essential features of the nature......

  2. Becoming a high reliability organization.

    Christianson, Marlys K; Sutcliffe, Kathleen M; Miller, Melissa A; Iwashyna, Theodore J


    Aircraft carriers, electrical power grids, and wildland firefighting, though seemingly different, are exemplars of high reliability organizations (HROs)--organizations that have the potential for catastrophic failure yet engage in nearly error-free performance. HROs commit to safety at the highest level and adopt a special approach to its pursuit. High reliability organizing has been studied and discussed for some time in other industries and is receiving increasing attention in health care, particularly in high-risk settings like the intensive care unit (ICU). The essence of high reliability organizing is a set of principles that enable organizations to focus attention on emergent problems and to deploy the right set of resources to address those problems. HROs behave in ways that sometimes seem counterintuitive--they do not try to hide failures but rather celebrate them as windows into the health of the system, they seek out problems, they avoid focusing on just one aspect of work and are able to see how all the parts of work fit together, they expect unexpected events and develop the capability to manage them, and they defer decision making to local frontline experts who are empowered to solve problems. Given the complexity of patient care in the ICU, the potential for medical error, and the particular sensitivity of critically ill patients to harm, high reliability organizing principles hold promise for improving ICU patient care.

  3. Reliability of power electronic converter systems

    Chung, Henry Shu-hung; Blaabjerg, Frede; Pecht, Michael


    This book outlines current research into the scientific modeling, experimentation, and remedial measures for advancing the reliability, availability, system robustness, and maintainability of Power Electronic Converter Systems (PECS) at different levels of complexity.

  4. Optimal, Reliability-Based Code Calibration

    Sørensen, John Dalsgaard


    acceptable levels of failure probability (or target reliabilities) may be established. Furthermore suggested values for acceptable annual failure probabilities are given for the ultimate and the serviceability limit states. Finally the paper describes a procedure for the practical implementation...

  5. High-efficiency, High-reliability Dual Buck Three-level Grid-connected Inverters Without Leakage Currents

    洪峰; 刘周成; 万运强; 尹培培; 赵鑫; 王成华


    Leakage currents reduce the reliability and the security of non-isolated grid-connected inverters. Three-level topologies based on the bipolar-modulated full-bridge inverter or the half-bridge inverter were the existing methods to resolve this problem. This paper proposes a new approach based on three-level dual buck inverters, focusing on the three-level dual buck full-bridge circuit. The topology is obtained by replacing the input voltage-divider capacitors of the dual buck half-bridge inverter with a bridge leg switching at line frequency, so that the voltage across the parasitic capacitance to ground remains unchanged within each half period, suppressing the leakage current to an almost negligible level. At the same time, the topology retains the features of dual buck circuits, with no shoot-through path in the bridge legs and no body-diode conduction, while also reducing the voltage stress on the devices and making the bridge-leg output a unipolar modulated waveform. A comprehensive comparison with other leakage-current-free structures shows that, apart from using one more power switch than the H5 structure, the topology has advantages in the number of active devices, the number of devices carrying the on-state current, the number of high-frequency switching devices, and the absence of voltage-balancing control, which helps reduce system complexity and improve reliability and conversion efficiency. Simulation and experimental results verify the analysis.

  6. Reliable Electronic Equipment

    N. A. Nayak


    Full Text Available The reliability aspects of electronic equipment are discussed. To obtain optimum results, close cooperation between the components engineer, the design engineer and the production engineer is suggested.

  7. Reliability prediction techniques

    Whittaker, B.; Worthington, B.; Lord, J.F.; Pinkard, D.


    The paper demonstrates the feasibility of applying reliability assessment techniques to mining equipment. A number of techniques are identified and described, and examples of their use in assessing mining equipment are given. These techniques include reliability prediction, failure analysis, design audit, maintainability, availability and life cycle costing. Specific conclusions regarding the usefulness of each technique are outlined. The choice of technique depends upon both the type of equipment being assessed and its stage of development, with numerical prediction best suited to electronic equipment and fault analysis and design audit suited to mechanical equipment. Reliability assessments involve much detailed and time-consuming work, but it has been demonstrated that the resulting reliability improvements lead to savings in service costs which more than offset the cost of the evaluation.

  8. The rating reliability calculator

    Solomon David J


    Full Text Available Abstract Background Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open-source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program will upload them to the server for calculating the reliability and other statistics describing the ratings. Results When the program is run it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally, the program will estimate the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to provide complete rating data. I would welcome other researchers revising and enhancing the program.
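The Spearman-Brown prophecy formula mentioned in the Results section has a one-line form; a minimal sketch with a hypothetical single-rating reliability:

```python
# Spearman-Brown prophecy formula: reliability of the average of k ratings,
# given the reliability r of a single rating. The input value is hypothetical.

def spearman_brown(r, k):
    """r_k = k*r / (1 + (k-1)*r)."""
    return (k * r) / (1.0 + (k - 1.0) * r)

r_single = 0.60            # assumed reliability of one rating
for k in (1, 2, 5, 10):
    print(k, round(spearman_brown(r_single, k), 3))
# spearman_brown(0.60, 2) -> 0.75
```

Averaging over more judges raises reliability with diminishing returns, which is exactly what the utility reports when it projects the reliability of averaged ratings.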

  9. Reliability of power connections

    BRAUNOVIC Milenko


    Despite the use of various preventive maintenance measures, there are still a number of problem areas that can adversely affect system reliability. Also, economic constraints have pushed the designs of power connections closer to the limits allowed by the existing standards. The major parameters influencing the reliability and life of Al-Al and Al-Cu connections are identified. The effectiveness of various palliative measures is determined and the misconceptions about their effectiveness are dealt with in detail.

  10. Multidisciplinary System Reliability Analysis

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)


    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer and fluid flow disciplines.

  11. Reliability models applicable to space telescope solar array assembly system

    Patil, S. A.


    A complex system may consist of a number of subsystems with several components in series, parallel, or a combination of both series and parallel. In order to predict how well the system will perform, it is necessary to know the reliabilities of the subsystems and the reliability of the whole system. The objective of the present study is to develop mathematical models of reliability which are applicable to complex systems. The models are determined by assuming k failures out of n components in a subsystem. By taking k = 1 and k = n, these models reduce to series and parallel models; hence, the models can be specialized to series, parallel, and combination systems. The models are developed by assuming the failure rates of the components to be functions of time and, as such, can be applied to processes with or without aging effects. The reliability models are further specialized to the Space Telescope Solar Array (STSA) System. The STSA consists of 20 identical solar panel assemblies (SPA's). The reliabilities of the SPA's are determined by the reliabilities of solar cell strings, interconnects, and diodes. The estimates of the reliability of the system for one to five years are calculated by using the reliability estimates of solar cells and interconnects given in ESA documents. Aging effects in relation to breaks in interconnects are discussed.
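
    For identical, independent components, the "k failures out of n" criterion described above reduces to a binomial sum; a minimal sketch (with an assumed component reliability, not the STSA values):

    ```python
    from math import comb

    def subsystem_reliability(p: float, n: int, k: int) -> float:
        """Probability that fewer than k of n identical, independent
        components (each with reliability p) have failed, i.e. the
        subsystem survives a 'k failures out of n' criterion."""
        q = 1.0 - p
        return sum(comb(n, j) * q**j * p**(n - j) for j in range(k))

    p = 0.9  # assumed component reliability, for illustration only
    # k = 1: a single failure is fatal -> series system, p**n
    assert abs(subsystem_reliability(p, 5, 1) - p**5) < 1e-12
    # k = n: all n must fail -> parallel system, 1 - (1 - p)**n
    assert abs(subsystem_reliability(p, 5, 5) - (1 - (1 - p)**5)) < 1e-12
    ```

    Time-dependent failure rates enter by replacing the constant p with p(t) for the mission time of interest.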

  12. Sensitivity Analysis of Component Reliability



    In a system, every component has its unique position within the system and its unique failure characteristics. When a component's reliability is changed, its effect on system reliability is not equal to that of the other components. Component reliability sensitivity is a measure of the effect on system reliability when a component's reliability is changed. In this paper, the definition and relative matrix of component reliability sensitivity are proposed, and some of their characteristics are analyzed. All of these help in analysing and improving system reliability.
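
    As a minimal illustration (assuming a simple series system, not the paper's general formulation), the sensitivity of system reliability to each component is the partial derivative of R_sys with respect to R_i:

    ```python
    # For a series system R_sys = R_1 * R_2 * ... * R_n, the sensitivity
    # dR_sys/dR_i is the product of all the other component reliabilities.
    def series_sensitivities(rels):
        sens = []
        for i in range(len(rels)):
            prod = 1.0
            for j, r in enumerate(rels):
                if j != i:
                    prod *= r
            sens.append(prod)
        return sens

    rels = [0.99, 0.90, 0.70]  # assumed component reliabilities
    print(series_sensitivities(rels))
    # The least reliable component has the largest sensitivity, so
    # improving it moves the system reliability the most.
    ```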

  13. The Reliability Value of Storage in a Volatile Environment

    ParandehGheibi, Ali; Ozdaglar, Asuman; Dahleh, Munther A


    This paper examines the value of storage in securing reliability of a system with uncertain supply and demand, and supply friction. The storage is frictionless as a supply source, but once used, it cannot be filled up instantaneously. The focus application is a power supply network in which the base supply and demand are assumed to match perfectly, while deviations from the base are modeled as random shocks with stochastic arrivals. Due to friction, the random surge shocks cannot be tracked by the main supply sources. Storage, when available, can be used to compensate, fully or partially, for the surge in demand or loss of supply. The problem of optimal utilization of storage with the objective of maximizing system reliability is formulated as minimization of the expected discounted cost of blackouts over an infinite horizon. It is shown that when the stage cost is linear in the size of the blackout, the optimal policy is myopic in the sense that all shocks are compensated by storage up to the available level...

  14. Is it reasonable to assume a uniformly distributed cooling-rate along the microslide of a directional solidification stage?



    It is commonly assumed that the cooling-rate along the microslide of a directional solidification stage is uniformly distributed, an assumption which is typically applied in low cooling-rate studies. A new directional solidification stage has recently been presented, which is specified to achieve high cooling-rates of up to 1.8 × 10⁴ °C min⁻¹, where cooling-rates are still assumed to be uniformly distributed. The current study presents a closed-form solution to the temperature distribution and to the cooling-rate in the microslide. Thermal analysis shows that the cooling-rate is by no means uniformly distributed and can vary by several hundred percent along the microslide in some cases. Therefore, the mathematical solution presented in this study is essential for the experimental planning of high cooling-rate experiments.

  15. Reliability in individual monitoring service.

    Mod Ali, N


    As a laboratory certified to ISO 9001:2008 and accredited to ISO/IEC 17025, the Secondary Standard Dosimetry Laboratory (SSDL)-Nuclear Malaysia has incorporated an overall comprehensive system for technical and quality management in promoting a reliable individual monitoring service (IMS). Faster identification and resolution of issues regarding dosemeter preparation and issuing of reports, personnel enhancement, improved customer satisfaction and overall efficiency of laboratory activities are all results of the implementation of an effective quality system. Review of these measures and responses to observed trends provide continuous improvement of the system. By having these mechanisms, reliability of the IMS can be assured in the promotion of safe behaviour at all levels of the workforce utilising ionising radiation facilities. Upgrading of the reporting program through a web-based e-SSDL marks a major improvement in Nuclear Malaysia's IMS reliability on the whole. The system is a vital step in providing a user-friendly and effective occupational exposure evaluation program in the country. It provides a higher level of confidence in the results generated for occupational dose monitoring of the IMS, thus enhancing the status of the radiation protection framework of the country.

  16. Solving reliability analysis problems in the polar space

    Ghasem Ezzati; Musa Mammadov; Siddhivinayak Kulkarni


    An optimization model that is widely used in engineering problems is Reliability-Based Design Optimization (RBDO). Input data of the RBDO is non-deterministic and constraints are probabilistic. The RBDO aims at minimizing cost ensuring that reliability is at least an accepted level. Reliability analysis is an important step in two-level RBDO approaches. Although many methods have been introduced to apply in reliability analysis loop of the RBDO, there are still many drawbacks in their efficie...

  17. Scaled CMOS Technology Reliability Users Guide

    White, Mark


    The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models is the precursor for further research and experimentation in this relevant field. The effect of semiconductor scaling on microelectronics product reliability is an important aspect to the high-reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data to assess the product for a highly reliable application, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology on how to accomplish this and techniques for deriving the expected product-level reliability on commercial memory products are provided. Competing mechanism theory and the multiple failure mechanism model are applied to the experimental results of scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level of several scaled memory products to assess the performance degradation and product reliability. Acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with Weibull slope (beta)=1, and a main population breakdown with an increasing failure rate. Retention time soft error rates are calculated and a multiple failure mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits for each successive product generation. A normalized soft error failure rate of the memory data retention time in FIT/Gb and FIT/cm2 for several scaled SDRAM generations is
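
    The competing-mechanism picture (a constant-hazard early population plus an increasing-hazard main population) can be sketched as additive Weibull hazards; all parameters below are assumed for illustration and are not the derived SDRAM values:

    ```python
    # Two competing Weibull failure populations: hazards of independent
    # competing mechanisms add. beta = 1 gives a constant hazard (random
    # weak bits); beta > 1 gives an increasing hazard (wear-out).
    def weibull_hazard(t: float, beta: float, eta: float) -> float:
        """Weibull hazard rate h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
        return (beta / eta) * (t / eta) ** (beta - 1)

    def combined_hazard(t: float) -> float:
        early = weibull_hazard(t, beta=1.0, eta=1e7)    # assumed scale
        wearout = weibull_hazard(t, beta=3.0, eta=1e5)  # assumed scale
        return early + wearout

    for t in (1e3, 1e4, 1e5):  # hours, illustrative
        print(t, combined_hazard(t))
    ```

    Early on the constant-rate population dominates; as the wear-out hazard grows, the combined failure rate rises, matching the two-population behavior described above.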

  18. "Reliability Of Fiber Optic Lans"

    Coden, Michael; Scholl, Frederick; Hatfield, W. Bryan


    Fiber optic Local Area Network Systems are being used to interconnect increasing numbers of nodes. These nodes may include office computer peripherals and terminals, PBX switches, process control equipment and sensors, automated machine tools and robots, and military telemetry and communications equipment. The extensive shared base of capital resources in each system requires that the fiber optic LAN meet stringent reliability and maintainability requirements. These requirements are met by proper system design and by suitable manufacturing and quality procedures at all levels of a vertically integrated manufacturing operation. We will describe the reliability and maintainability of Codenoll's passive star based systems. These include LAN systems compatible with Ethernet (IEEE 802.3) and MAP (IEEE 802.4), and software compatible with IBM Token Ring (IEEE 802.5). No single point of failure exists in this system architecture.

  19. Reliability of stiffened structural panels: Two examples

    Stroud, W. Jefferson; Davis, D. Dale, Jr.; Maring, Lise D.; Krishnamurthy, Thiagaraja; Elishakoff, Isaac


    The reliability of two graphite-epoxy stiffened panels that contain uncertainties is examined. For one panel, the effect of an overall bow-type initial imperfection is studied. The size of the bow is assumed to be a random variable. The failure mode is buckling. The benefits of quality control are explored by using truncated distributions. For the other panel, the effect of uncertainties in a strain-based failure criterion is studied. The allowable strains are assumed to be random variables. A geometrically nonlinear analysis is used to calculate a detailed strain distribution near an elliptical access hole in a wing panel that was tested to failure. Calculated strains are used to predict failure. Results are compared with the experimental failure load of the panel.
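
    The benefit of quality control through truncated distributions can be illustrated with a small Monte Carlo sketch; the bow statistics and critical value below are invented for illustration and are not the panel data:

    ```python
    import random

    random.seed(1)

    # Failure model: the panel fails (buckles) when a random bow
    # imperfection e exceeds an assumed critical value. Quality control
    # is modeled by truncating the imperfection distribution.
    def sample_bow(mu=1.0, sigma=0.5, truncate_at=None):
        """Draw a bow size; reject samples beyond truncate_at sigmas."""
        while True:
            e = random.gauss(mu, sigma)
            if truncate_at is None or abs(e - mu) <= truncate_at * sigma:
                return e

    def failure_prob(e_crit=2.2, truncate_at=None, n=100_000):
        fails = sum(1 for _ in range(n)
                    if sample_bow(truncate_at=truncate_at) > e_crit)
        return fails / n

    print("no QC:     ", failure_prob())
    print("2-sigma QC:", failure_prob(truncate_at=2.0))
    ```

    With a 2-sigma truncation no sample can reach the assumed critical bow, so the estimated failure probability drops to zero, which is the qualitative effect of quality control explored in the paper.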


    Tamargazin, O. A.; National Aviation University; Vlasenko, P. O.; National Aviation University


    Airline's operational structure for Reliability program implementation — engineering division, reliability division, reliability control division, aircraft maintenance division, quality assurance division — was considered. Airline's Reliability program structure is shown. Using the Reliability program for reducing costs of aircraft maintenance is proposed. The airline's organizational structure for implementing the Reliability program is considered — the engineering division, the divisions for reliability of avia...

  1. Ultra reliability at NASA

    Shapiro, Andrew A.


    Ultra reliable systems are critical to NASA, particularly as consideration is being given to extended lunar missions and manned missions to Mars. NASA has formulated a program designed to improve the reliability of NASA systems. The long-term goal of the NASA ultra reliability effort is ultimately to improve NASA systems by an order of magnitude. The approach outlined in this presentation involves the steps used in developing a strategic plan to achieve the long-term objective of ultra reliability. Consideration is given to: complex systems, hardware (including aircraft, aerospace craft and launch vehicles), software, human interactions, long-life missions, infrastructure development, and cross-cutting technologies. Several NASA-wide workshops have been held, identifying issues for reliability improvement and providing mitigation strategies for these issues. In addition to representation from all of the NASA centers, experts from government (NASA and non-NASA), universities and industry participated. Highlights of a strategic plan, which is being developed using the results from these workshops, will be presented.

  2. Quality and reliability management and its applications


    Integrating development processes, policies, and reliability predictions from the beginning of the product development lifecycle to ensure high levels of product performance and safety, this book helps companies overcome the challenges posed by increasingly complex systems in today's competitive marketplace. Examining both research on and practical aspects of product quality and reliability management with an emphasis on applications, the book features contributions written by active researchers and/or experienced practitioners in the field, so as to effectively bridge the gap between theory and practice and address new research challenges in reliability and quality management in practice. Postgraduates, researchers and practitioners in the areas of reliability engineering and management, amongst others, will find the book to offer a state-of-the-art survey of quality and reliability management and practices.

  3. Updating the reference population to achieve constant genomic prediction reliability across generations.

    Pszczola, M; Calus, M P L


    The reliability of genomic breeding values (DGV) decays over generations. To keep the DGV reliability at a constant level, the reference population (RP) has to be continuously updated with animals from new generations. Updating the RP may be challenging for economic reasons, especially for novel traits involving expensive phenotyping. Therefore, the goal of this study was to investigate a minimal RP update size to keep the reliability at a constant level across generations. We used a simulated dataset resembling a dairy cattle population. The trait of interest was not itself included in the selection index, but it was affected by selection pressure by being correlated with an index trait that represented the overall breeding goal. The heritability of the index trait was assumed to be 0.25 and for the novel trait the heritability equalled 0.2. The genetic correlation between the two traits was 0.25. The initial RP (n=2000) was composed of cows only, with a single observation per animal. Reliability of DGV using the initial RP was computed by evaluating contemporary animals. Thereafter, the RP was used to evaluate animals which were one generation younger than the reference individuals. The drop in reliability when evaluating younger animals was then assessed and the RP was updated to regain the initial reliability. The update animals were contemporaries of the evaluated animals (EVA). The RP was updated in batches of 100 animals/update. First, the animals most closely related to the EVA were chosen to update the RP. The results showed that approximately 600 animals were needed every generation to maintain the DGV reliability at a constant level across generations. The sum of squared relationships between RP and EVA and the sum of off-diagonal coefficients of the inverse of the genomic relationship matrix for the RP separately explained 31% and 34%, respectively, of the variation in the reliability across generations. Combined, these parameters explained 53% of the

  4. Reliability Centered Maintenance - Methodologies

    Kammerer, Catherine C.


    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  5. Gearbox Reliability Collaborative Update (Presentation)

    Sheng, S.; Keller, J.; Glinsky, C.


    This presentation was given at the Sandia Reliability Workshop in August 2013 and provides information on current statistics, a status update, next steps, and other reliability research and development activities related to the Gearbox Reliability Collaborative.

  6. Is it growing exponentially fast? -- Impact of assuming exponential growth for characterizing and forecasting epidemics with initial near-exponential growth dynamics.

    Chowell, Gerardo; Viboud, Cécile


    The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing models that capture the baseline transmission characteristics in order to generate reliable epidemic forecasts. Improved models for epidemic forecasting could be achieved by identifying signature features of epidemic growth, which could inform the design of models of disease spread and reveal important characteristics of the transmission process. In particular, it is often taken for granted that the early growth phase of different growth processes in nature follow early exponential growth dynamics. In the context of infectious disease spread, this assumption is often convenient to describe a transmission process with mass action kinetics using differential equations and generate analytic expressions and estimates of the reproduction number. In this article, we carry out a simulation study to illustrate the impact of incorrectly assuming an exponential-growth model to characterize the early phase (e.g., 3-5 disease generation intervals) of an infectious disease outbreak that follows near-exponential growth dynamics. Specifically, we assess the impact on: 1) goodness of fit, 2) bias on the growth parameter, and 3) the impact on short-term epidemic forecasts. Designing transmission models and statistical approaches that more flexibly capture the profile of epidemic growth could lead to enhanced model fit, improved estimates of key transmission parameters, and more realistic epidemic forecasts.
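
    The bias described above is easy to reproduce: generate incidence from a sub-exponential (generalized-growth) model, dC/dt = r·C^p with p < 1, and fit a pure exponential to log C. The parameters below are illustrative, not the paper's simulation settings:

    ```python
    import math

    # Generalized growth model with deceleration parameter p < 1:
    # closed-form solution C(t) = (C0**(1-p) + (1-p)*r*t)**(1/(1-p)).
    r, p, C0 = 0.5, 0.8, 1.0  # assumed values for illustration
    ts = list(range(0, 15))
    C = [(C0**(1 - p) + (1 - p) * r * t) ** (1 / (1 - p)) for t in ts]

    # Ordinary least-squares slope of log C vs t = fitted exponential rate.
    logC = [math.log(c) for c in C]
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(logC) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, logC))
             / sum((t - tbar) ** 2 for t in ts))
    print(f"fitted exponential rate {slope:.3f} vs intrinsic r = {r}")
    ```

    The fitted rate conflates r and p, and forecasts extrapolated with it overestimate later incidence because the true relative growth rate keeps decelerating; this is the goodness-of-fit and forecast bias the article quantifies.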

  7. System Reliability Analysis: Foundations.


    Performance formulas for systems subject to preventive maintenance are given. SYSTEM RELIABILITY ANALYSIS: FOUNDATIONS, Richard E... The reliability in this case is the probability h(p) that a source s can communicate with the terminal t (the original report gives the network reliability polynomial in p, garbled here). For undirected networks, the basic reference is A. Satyanarayana and Kevin Wood (1982). For directed networks, the basic reference is Avinash

  8. Comparing nadir and limb observations of polar mesospheric clouds: The effect of the assumed particle size distribution

    Bailey, Scott M.; Thomas, Gary E.; Hervig, Mark E.; Lumpe, Jerry D.; Randall, Cora E.; Carstens, Justin N.; Thurairajah, Brentha; Rusch, David W.; Russell, James M.; Gordley, Larry L.


    Nadir viewing observations of Polar Mesospheric Clouds (PMCs) from the Cloud Imaging and Particle Size (CIPS) instrument on the Aeronomy of Ice in the Mesosphere (AIM) spacecraft are compared to Common Volume (CV), limb-viewing observations by the Solar Occultation For Ice Experiment (SOFIE) also on AIM. CIPS makes multiple observations of PMC-scattered UV sunlight from a given location at a variety of geometries and uses the variation of the radiance with scattering angle to determine a cloud albedo, particle size distribution, and Ice Water Content (IWC). SOFIE uses IR solar occultation in 16 channels (0.3-5 μm) to obtain altitude profiles of ice properties including the particle size distribution and IWC in addition to temperature, water vapor abundance, and other environmental parameters. CIPS and SOFIE made CV observations from 2007 to 2009. In order to compare the CV observations from the two instruments, SOFIE observations are used to predict the mean PMC properties observed by CIPS. Initial agreement is poor with SOFIE predicting particle size distributions with systematically smaller mean radii and a factor of two more albedo and IWC than observed by CIPS. We show that significantly improved agreement is obtained if the PMC ice is assumed to contain 0.5% meteoric smoke by mass, in agreement with previous studies. We show that the comparison is further improved if an adjustment is made in the CIPS data processing regarding the removal of Rayleigh scattered sunlight below the clouds. This change has an effect on the CV PMC, but is negligible for most of the observed clouds outside the CV. Finally, we examine the role of the assumed shape of the ice particle size distribution. Both experiments nominally assume the shape is Gaussian with a width parameter roughly half of the mean radius. We analyze modeled ice particle distributions and show that, for the column integrated ice distribution, Log-normal and Exponential distributions better represent the range

  9. On Integral Upper Limits Assuming Power-law Spectra and the Sensitivity in High-energy Astronomy

    Ahnen, Max L.


    The high-energy non-thermal universe is dominated by power-law-like spectra. Therefore, results in high-energy astronomy are often reported as parameters of power-law fits, or, in the case of a non-detection, as an upper limit assuming the underlying unseen spectrum behaves as a power law. In this paper, I demonstrate a simple and powerful one-to-one relation of the integral upper limit in the two-dimensional power-law parameter space into the spectrum parameter space and use this method to unravel the so-far convoluted question of the sensitivity of astroparticle telescopes.
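
    The underlying conversion is elementary; a hedged sketch (the normalization and spectral index below are invented, and the closed form assumes gamma > 1):

    ```python
    # Integral of a power-law spectrum dN/dE = k * (E/E0)**(-gamma)
    # from the threshold E0 to infinity: F(>E0) = k * E0 / (gamma - 1).
    def integral_above(k: float, E0: float, gamma: float) -> float:
        if gamma <= 1:
            raise ValueError("integral diverges for gamma <= 1")
        return k * E0 / (gamma - 1)

    # Illustrative: k = 1e-12 per (cm^2 s TeV) at E0 = 1 TeV, gamma = 2.5
    print(integral_above(1e-12, 1.0, 2.5))  # ≈ 6.67e-13 per (cm^2 s)
    ```

    An integral upper limit on F(>E0) therefore maps one-to-one onto a constraint curve in the (k, gamma) parameter plane, which is the relation the paper exploits.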

  10. Measurement Practices for Reliability and Power Quality

    Kueck, JD


    This report provides a distribution reliability measurement "toolkit" that is intended to be an asset to regulators, utilities and power users. The metrics and standards discussed range from simple reliability, to power quality, to the new blend of reliability and power quality analysis that is now developing. This report was sponsored by the Office of Electric Transmission and Distribution, U.S. Department of Energy (DOE). Inconsistencies presently exist in commonly agreed-upon practices for measuring the reliability of the distribution systems. However, efforts are being made by a number of organizations to develop solutions. In addition, there is growing interest in methods or standards for measuring power quality, and in defining power quality levels that are acceptable to various industries or user groups. The problems and solutions vary widely among geographic areas and among large investor-owned utilities, rural cooperatives, and municipal utilities; but there is still a great degree of commonality. Industry organizations such as the National Rural Electric Cooperative Association (NRECA), the Electric Power Research Institute (EPRI), the American Public Power Association (APPA), and the Institute of Electrical and Electronics Engineers (IEEE) have made tremendous strides in preparing self-assessment templates, optimization guides, diagnostic techniques, and better definitions of reliability and power quality measures. In addition, public utility commissions have developed codes and methods for assessing performance that consider local needs. There is considerable overlap among these various organizations, and we see real opportunity and value in sharing these methods, guides, and standards in this report. This report provides a "toolkit" containing synopses of noteworthy reliability measurement practices. The toolkit has been developed to address the interests of three groups: electric power users, utilities, and

  11. Reliability engineering theory and practice

    Birolini, Alessandro


    This book shows how to build in and assess reliability, availability, maintainability, and safety (RAMS) of components, equipment, and systems. It presents the state of the art of reliability (RAMS) engineering, in theory and practice, and is based on the author's over 30 years of experience in this field, half in industry and half as Professor of Reliability Engineering at the ETH, Zurich. The book's structure allows rapid access to practical results. Methods and tools are given in a way that they can be tailored to cover different RAMS requirement levels. Thanks to Appendices A6 - A8 the book is mathematically self-contained, and can be used as a textbook or as a desktop reference with a large number of tables (60), figures (210), and examples / exercises. Sustained demand (10,000 per year since 2013) was the motivation for this final edition, the 13th since 1985, including German editions. Extended and carefully reviewed to improve accuracy, it represents the continuous improvement effort to satisfy the reader's needs and confidenc...

  12. Reliability Assessment Based on Design and Manufacturing Tolerances for Control Burst Mechanism of Small Arms

    S.K. Basu


    Full Text Available Very often the specified tolerance is made greater than the process tolerance, depending upon (i) the manufacturing process capability, and (ii) the 'aspiration level' of the designer in effecting a specified tolerance. This applies to multiple components merging into an assembly. In assembly tolerance, errors due to mating are inherent. Common errors arise due to clearance, misalignment in planes and distortion that may cause side stack. Such errors affect the functional performance of the subsystem and consequently become the main cause of failure. The probability distribution of the assembly tolerance and the probability distribution of the stacked-up tolerance of the components in actual practice leave a common zone of interaction, based on which the in-built reliability changes. From the designer's tolerance, one may have an idea about the 'aspiration level' of the assembly tolerance stacking error. Assuming both these parameters, viz., the actual stacking error and the designer's aspiration level of stacking error, to follow the normal probability distribution, it is possible to get the reliability of the product assembly. The paper presents a real-life case study for assessing the reliability of a sub-assembly at the initial stages of development for the control burst mechanism (CBM) of a rifle.
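
    Assuming, as the paper does, that both the actual stacking error and the designer's aspiration level are normally distributed, the in-built reliability is a standard interference probability; the numbers below are illustrative, not the CBM values:

    ```python
    from math import erf, sqrt

    def Phi(x: float) -> float:
        """Standard normal CDF via the error function."""
        return 0.5 * (1 + erf(x / sqrt(2)))

    # Interference model: actual stacking error X ~ N(mu_x, sd_x),
    # aspiration level Y ~ N(mu_y, sd_y), X and Y independent.
    # Reliability = P(X < Y) = Phi((mu_y - mu_x) / sqrt(sd_x^2 + sd_y^2)).
    def assembly_reliability(mu_x, sd_x, mu_y, sd_y):
        return Phi((mu_y - mu_x) / sqrt(sd_x**2 + sd_y**2))

    # Illustrative numbers (mm): mean stacking error 0.10 against a mean
    # aspiration level of 0.16, both with 0.02 standard deviation.
    print(assembly_reliability(0.10, 0.02, 0.16, 0.02))
    ```

    Narrowing the interaction zone, by tightening either distribution, raises the computed in-built reliability.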

  13. Expert system aids reliability

    Johnson, A.T. [Tennessee Gas Pipeline, Houston, TX (United States)


    Quality and reliability are key requirements in the energy transmission industry. Tennessee Gas Co., a division of El Paso Energy, has applied Gensym's G2, an object-oriented expert system programming language, as a standard tool for maintaining and improving quality and reliability in pipeline operation. Tennessee created a small team of gas controllers and engineers to develop a Proactive Controller's Assistant (ProCA) that provides recommendations for operating the pipeline more efficiently, reliably and safely. The controllers' pipeline operating knowledge is recreated in G2 in the form of rules and procedures in ProCA. Two G2 programmers supporting the Gas Control Room add information to the ProCA knowledge base daily. The result is a dynamic, constantly improving system that supports not only the pipeline controllers in their operations, but also the measurement and communications departments' requests for special studies. The Proactive Controller's Assistant development focuses on the following areas: alarm management; pipeline efficiency; reliability; fuel efficiency; and controller development.

  14. Reliability based structural design

    Vrouwenvelder, A.C.W.M.


    According to ISO 2394, structures shall be designed, constructed and maintained in such a way that they are suited for their use during the design working life in an economic way. To fulfil this requirement one needs insight into the risk and reliability under expected and non-expected actions. A ke

  16. The value of reliability

    Fosgerau, Mogens; Karlström, Anders


    We derive the value of reliability in the scheduling of an activity of random duration, such as travel under congested conditions. Using a simple formulation of scheduling utility, we show that the maximal expected utility is linear in the mean and standard deviation of trip duration, regardless...

  17. Parametric Mass Reliability Study

    Holt, James P.


    The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass-such as computer housings, pump casings, and the silicon board of PCBs-typically are the most reliable. Meanwhile components that tend to fail the earliest-such as seals or gaskets-typically have a small mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs to reliability, as well as the mass of ORU subcomponents to reliability.

  18. Avionics Design for Reliability


    Consultant, P.O. Box 181, Hazelwood, Missouri 63042, U.S.A. CONTENTS: LIST OF SPEAKERS; INTRODUCTION AND OVERVIEW - RELIABILITY UNDER... essential, all the more so since in this case the reliability-based selection procedure is rather inefficient. The distribution of failures follows

  19. Wind Energy - How Reliable.


    The reliability of a wind energy system depends on the size of the propeller and the size of the back-up energy storage. Design of the optimum system...speed incidents which generate a significant part of the wind energy . A nomogram is presented, based on some continuous wind speed measurements

  20. The reliability horizon

    Visser, M


    The ``reliability horizon'' for semi-classical quantum gravity quantifies the extent to which we should trust semi-classical quantum gravity, and gives a handle on just where the ``Planck regime'' resides. The key obstruction to pushing semi-classical quantum gravity into the Planck regime is often the existence of large metric fluctuations, rather than a large back-reaction.

  1. Reliability Analysis of a Steel Frame

    M. Sýkora


    A steel frame with haunches is designed according to Eurocodes. The frame is exposed to self-weight, snow, and wind actions. Lateral-torsional buckling appears to represent the most critical criterion, which is therefore considered as a basis for the limit state function. In the reliability analysis, the probabilistic models proposed by the Joint Committee on Structural Safety (JCSS) are used for basic variables. The uncertainty model coefficients take into account the inaccuracy of the resistance model for the haunched girder and the inaccuracy of the action effect model. The time-invariant reliability analysis is based on Turkstra's rule for combinations of snow and wind actions. The time-variant analysis describes snow and wind actions by jump processes with intermittencies. Assuming a 50-year lifetime, the obtained values of the reliability index β vary within the range from 3.95 up to 5.56. The cross-section IPE 330 designed according to Eurocodes seems to be adequate. It appears that the time-invariant reliability analysis based on Turkstra's rule provides considerably lower values of β than those obtained by the time-variant analysis.

  2. Reliability block diagrams to model disease management.

    Sonnenberg, A; Inadomi, J M; Bauerfeind, P


    Studies of diagnostic or therapeutic procedures in the management of any given disease tend to focus on one particular aspect of the disease and ignore the interaction between the multitude of factors that determine its final outcome. The present article introduces a mathematical model that accounts for the joint contribution of various medical and non-medical components to the overall disease outcome. A reliability block diagram is used to model patient compliance, endoscopic screening, and surgical therapy for dysplasia in Barrett's esophagus. The overall probability that a patient with Barrett's esophagus complies with a screening program, is correctly diagnosed with dysplasia, and undergoes successful therapy is 37%. The reduction in the overall success rate, despite the fact that the majority of components are assumed to function with reliability rates of 80% or more, is a reflection of the multitude of serial subsystems involved in disease management. Each serial component influences the overall success rate in a linear fashion. Building multiple parallel pathways into the screening program raises its overall success rate to 91%. Parallel arrangements render systems less sensitive to diagnostic or therapeutic failures. A reliability block diagram provides the means to model the contributions of many heterogeneous factors to disease outcome. Since no medical system functions perfectly, redundancy provided by parallel subsystems assures a greater overall reliability.
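The serial/parallel arithmetic behind a reliability block diagram fits in a few lines of Python; the 80% component reliabilities below are illustrative placeholders, not the study's actual inputs.

```python
from math import prod

def serial(reliabilities):
    # Every block in a serial chain must succeed.
    return prod(reliabilities)

def parallel(reliabilities):
    # A parallel group succeeds if at least one branch succeeds.
    return 1 - prod(1 - r for r in reliabilities)

# Five serial steps at 80% each: the overall success rate collapses to ~33%.
single_path = serial([0.8] * 5)

# Duplicating each step as a parallel pair recovers most of the loss (~82%).
redundant_path = serial([parallel([0.8, 0.8])] * 5)
```

This mirrors the paper's qualitative finding: a chain of individually reliable serial steps can still have a low overall success rate, and parallel redundancy restores most of it.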

  3. Challenges Regarding IP Core Functional Reliability

    Berg, Melanie D.; LaBel, Kenneth A.


    For many years, intellectual property (IP) cores have been incorporated into field programmable gate array (FPGA) and application specific integrated circuit (ASIC) design flows. However, the use of large, complex IP cores was limited in products that required a high level of reliability. This is no longer the case. IP core insertion has become mainstream, including use in highly reliable products. Due to limited visibility and control, challenges exist when using IP cores, and these can compromise product reliability. We discuss the challenges and suggest potential solutions for IP insertion in critical applications.

  4. Reliability-Based Design Optimization Considering Variable Uncertainty

    Lim, Woochul; Jang, Junyoung; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Kim, Jungho; Na, Jongho; Lee, Changkun; Kim, Yongsuk [GM Korea, Incheon (Korea, Republic of)


    Although many reliability analysis and reliability-based design optimization (RBDO) methods have been developed to estimate system reliability, many studies assume the uncertainty of the design variable to be constant. In practice, because uncertainty varies with the design variable's value, this assumption results in inaccurate conclusions about the reliability of the optimum design. Therefore, uncertainty should be treated as variable in RBDO. In this paper, we propose an RBDO method considering variable uncertainty, which adjusts the uncertainty at each design point and thereby yields accurate reliability estimation. Finally, an improved optimum design is obtained using the proposed method with variable uncertainty. A mathematical example and an engine cradle design are presented to verify the proposed method.

  5. Reasons People Surrender Unowned and Owned Cats to Australian Animal Shelters and Barriers to Assuming Ownership of Unowned Cats.

    Zito, Sarah; Morton, John; Vankan, Dianne; Paterson, Mandy; Bennett, Pauleen C; Rand, Jacquie; Phillips, Clive J C


    Most cats surrendered to nonhuman animal shelters are identified as unowned, and the surrender reason for these cats is usually simply recorded as "stray." A cross-sectional study was conducted with people surrendering cats to 4 Australian animal shelters. Surrenderers of unowned cats commonly gave surrender reasons relating to concern for the cat and his/her welfare. Seventeen percent of noncaregivers had considered adopting the cat. Barriers to assuming ownership most commonly related to responsible ownership concerns. Unwanted kittens commonly contributed to the decision to surrender for both caregivers and noncaregivers. Nonowners gave more surrender reasons than owners, although many owners also gave multiple surrender reasons. These findings highlight the multifactorial nature of the decision-making process leading to surrender and demonstrate that recording only one reason for surrender does not capture the complexity of the surrender decision. Collecting information about multiple reasons for surrender, particularly reasons for surrender of unowned cats and barriers to assuming ownership, could help to develop strategies to reduce the number of cats surrendered.

  6. Reliability analysis of the bulk cargo loading system including dependent components

    Blokus-Roszkowska, Agnieszka


    In the paper an innovative approach to the reliability analysis of multistate series-parallel systems assuming their components' dependency is presented. The reliability function of a multistate series system with components dependent according to the local load sharing rule is determined. Linking these results for series systems with results for parallel systems with independent components, we obtain the reliability function of a multistate series-parallel system, assuming dependence of components' departures from the reliability state subsets within each series subsystem and independence between these subsystems. As a particular case, the reliability function of a multistate series-parallel system composed of dependent components having exponential reliability functions is obtained. The theoretical results are applied to the reliability evaluation of a bulk cargo transportation system, whose main task is to load bulk cargo on board ships. The reliability function and other reliability characteristics of the loading system are determined for the case where its components have exponential reliability functions with interdependent departure rates from the subsets of their reliability states. Finally, the obtained results are compared with results for the bulk cargo transportation system composed of independent components.
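For orientation, the independent-component baseline of such a system can be sketched as follows; the local-load-sharing dependency that is the paper's contribution is not reproduced here, and the failure rates are hypothetical.

```python
from math import exp, prod

def series_reliability(lambdas, t):
    # Independent exponential components in series:
    # R(t) = exp(-(lambda_1 + ... + lambda_n) * t)
    return exp(-sum(lambdas) * t)

def parallel_of_series(subsystems, t):
    # Independent series subsystems arranged in parallel.
    return 1 - prod(1 - series_reliability(lams, t) for lams in subsystems)

# Two identical loading lines, each a series of three components (rates in 1/h).
line = [0.01, 0.01, 0.01]
R_line = series_reliability(line, 10.0)              # exp(-0.3), about 0.741
R_system = parallel_of_series([line, line], 10.0)    # about 0.933
```

Introducing load-sharing dependence within a series subsystem, as the paper does, would replace `series_reliability` with a model whose departure rates change as neighboring components degrade.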

  7. Reliability and Levels of Difficulty of Objective Test Items in a Mathematics Achievement Test: A Study of Ten Senior Secondary Schools in Five Local Government Areas of Akure, Ondo State

    Adebule, S. O.


    This study examined the reliability and difficulty indices of multiple choice (MC) and true or false (TF) objective test items in a Mathematics Achievement Test (MAT). The instruments used were two variants of a 50-item mathematics achievement test based on the multiple-choice and true-or-false test formats. A total of five hundred (500)…

  8. The reliability of DSM impact estimates

    Vine, E.L. [Lawrence Berkeley Lab., CA (United States); Kushler, M.G. [Michigan Public Service Commission, Lansing, MI (United States)


    Demand-side management (DSM) critics continue to question the reliability of DSM program savings, and therefore the need for funding such programs. In this paper, the authors examine the issues underlying the discussion of reliability of DSM program savings (e.g., bias and precision) and compare the levels of precision of DSM impact estimates for three utilities. Overall, the precision results from all three companies appear quite similar and, for the most part, demonstrate reasonably good precision levels around DSM savings estimates. They conclude by recommending activities for program managers and evaluators to increase understanding of the factors leading to DSM uncertainty and to reduce the level of DSM uncertainty.

  9. Reliability Degradation Due to Stockpile Aging

    Robinson, David G.


    The objective of this research is the investigation of alternative methods for characterizing the reliability of systems with time dependent failure modes associated with stockpile aging. Reference to 'reliability degradation' has, unfortunately, come to be associated with all types of aging analyses: both deterministic and stochastic. In this research, in keeping with the true theoretical definition, reliability is defined as a probabilistic description of system performance as a function of time. Traditional reliability methods used to characterize stockpile reliability depend on the collection of a large number of samples or observations. Clearly, after the experiments have been performed and the data have been collected, critical performance problems can be identified. A major goal of this research is to identify existing methods and/or develop new mathematical techniques and computer analysis tools to anticipate stockpile problems before they become critical issues. One of the most popular methods for characterizing the reliability of components, particularly electronic components, assumes that failures occur in a completely random fashion, i.e. uniformly across time. This method is based primarily on the use of constant failure rates for the various elements that constitute the weapon system, i.e. it assumes the systems do not degrade while in storage. Experience has shown that predictions based upon this approach should be regarded with great skepticism, since the relationship between the predicted life and the observed life has been difficult to validate. In addition to this fundamental problem, the approach does not recognize that there are time dependent material properties and variations associated with the manufacturing process and the operational environment. To appreciate the uncertainties in predicting system reliability a number of alternative methods are explored in this report. All of the methods are very different from those currently used to assess
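The contrast the report draws, constant failure rates versus time-dependent wear-out, can be made concrete with two standard survival models; the parameters below are hypothetical, chosen only so the two curves coincide at one point.

```python
from math import exp

def r_constant_rate(t, lam):
    # Constant-failure-rate model: no aging, R(t) = exp(-lambda * t).
    return exp(-lam * t)

def r_weibull(t, eta, beta):
    # Weibull model: shape beta > 1 gives an increasing hazard (wear-out).
    return exp(-((t / eta) ** beta))

# With lam = 1/eta the two models agree at t = eta, but for beta > 1 the
# wear-out model predicts sharply lower reliability later in life, which is
# why constant-rate predictions for aging stockpiles invite skepticism.
eta, beta, lam = 100.0, 2.0, 0.01
```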

  10. Reliability in the utility computing era: Towards reliable Fog computing

    Madsen, Henrik; Burtschy, Bernard; Albeanu, G.


    This paper considers current paradigms in computing and outlines the most important aspects concerning their reliability. The Fog computing paradigm, a non-trivial extension of the Cloud, is considered, and the reliability of networks of smart devices is discussed. Combining the reliability requirements of grid and cloud paradigms with the reliability requirements of networks of sensors and actuators, it follows that designing a reliable Fog computing platform is feasible.

  11. Component Reliability Assessment of Offshore Jacket Platforms

    V.J. Kurian


    Oil and gas industry is one of the most important industries contributing to the Malaysian economy. To extract hydrocarbons, various types of production platforms have been developed. The fixed jacket platform is the earliest type of production structure, widely installed in Malaysia's shallow and intermediate waters. To date, more than 60% of these jacket platforms have operated beyond their initial design life, making re-evaluation and reassessment necessary for these platforms to remain in service. In normal engineering practice, the system reliability of a structure is evaluated as its safety parameter. This method is, however, complicated and time consuming. Assessing a component's reliability can be an alternative approach to provide assurance about a structure's condition at an early stage. Design codes such as the Working Stress Design (WSD) and the Load and Resistance Factor Design (LRFD) are well established for component-level assessment. In reliability analysis, a failure function, which consists of strength and load, is used to define the failure event. If the load acting exceeds the capacity of a structure, the structure will fail. Calculation of the stress utilization ratio as given in the design codes is able to predict the reliability of a member and to estimate the extent to which a member is being utilised. The basic idea of this ratio is that if it is more than one, the member has failed, and vice versa. The stress utilization ratio is the ratio of applied stress, which is the output reaction of environmental loadings acting on the structural member, to the design strength that comes from the member's geometric and material properties. Adopting this ratio as the failure event, the reliability of each component is found. This study reviews and discusses the reliability of selected members of three Malaysian offshore jacket platforms. First Order Reliability Method (FORM) was used to generate reliability index and
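The stress utilization check described above reduces to a demand-to-capacity ratio; the stress values in this sketch are made up for illustration.

```python
def utilization_ratio(applied_stress, design_strength):
    # Demand-to-capacity ratio: greater than 1.0 means the member
    # is predicted to fail under the applied loading.
    return applied_stress / design_strength

def member_fails(applied_stress, design_strength):
    return utilization_ratio(applied_stress, design_strength) > 1.0

# Hypothetical member: 180 MPa applied against 240 MPa design strength
# gives a ratio of 0.75, i.e. adequate with a 25% margin.
ratio = utilization_ratio(180.0, 240.0)
```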

  12. Reliability estimation in a multilevel confirmatory factor analysis framework.

    Geldhof, G John; Preacher, Kristopher J; Zyphur, Michael J


    Scales with varying degrees of measurement reliability are often used in the context of multistage sampling, where variance exists at multiple levels of analysis (e.g., individual and group). Because methodological guidance on assessing and reporting reliability at multiple levels of analysis is currently lacking, we discuss the importance of examining level-specific reliability. We present a simulation study and an applied example showing different methods for estimating multilevel reliability using multilevel confirmatory factor analysis and provide supporting Mplus program code. We conclude that (a) single-level estimates will not reflect a scale's actual reliability unless reliability is identical at each level of analysis, (b) 2-level alpha and composite reliability (omega) perform relatively well in most settings, (c) estimates of maximal reliability (H) were more biased when estimated using multilevel data than either alpha or omega, and (d) small cluster size can lead to overestimates of reliability at the between level of analysis. We also show that Monte Carlo confidence intervals and Bayesian credible intervals closely reflect the sampling distribution of reliability estimates under most conditions. We discuss the estimation of credible intervals using Mplus and provide R code for computing Monte Carlo confidence intervals.
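As a reference point for the estimators discussed, single-level coefficient alpha can be computed directly from item scores; this sketch deliberately ignores the multilevel structure that is the paper's focus, and the data are invented.

```python
def cronbach_alpha(items):
    """items: one list of scores per scale item, all of equal length."""
    k = len(items)

    def pvar(xs):  # population variance, applied consistently throughout
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum(pvar(it) for it in items) / pvar(totals))

# Three perfectly parallel items yield alpha = 1.0.
alpha = cronbach_alpha([[1, 2, 3], [1, 2, 3], [1, 2, 3]])
```

The paper's point is precisely that applying such a single-level formula to clustered data conflates within- and between-level reliability.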

  13. Reliability Compliance Testing of Electronic Components for Consumer Electronics

    Peciakowski, E.; Przybyl, E.


    In this paper the organisation of reliability compliance testing of electronic components in Poland is discussed. The aim of the testing is to establish the reliability of the components to the satisfaction of both producer and user. The system described is derived from standard methods and has two aims: (1) to enable periodical checks of production to be made; and (2) to estimate the reliability level of the components produced. Sampling plans are constructed...

  14. Explicit theory of mind is even more unified than previously assumed: belief ascription and understanding aspectuality emerge together in development.

    Rakoczy, Hannes; Bergfeld, Delia; Schwarz, Ina; Fizke, Ella


    Existing evidence suggests that children, when they first pass standard theory-of-mind tasks, still fail to understand the essential aspectuality of beliefs and other propositional attitudes: such attitudes refer to objects only under specific aspects. Oedipus, for example, believes Yocaste (his mother) is beautiful, but this does not imply that he believes his mother is beautiful. In three experiments, 3- to 6-year-olds' (N = 119) understanding of aspectuality was tested with a novel, radically simplified task. In contrast to all previous findings, this task was as difficult as and highly correlated with a standard false belief task. This suggests that a conceptual capacity more unified than previously assumed emerges around ages 4-5, a full-fledged metarepresentational scheme of propositional attitudes.

  15. Attitude assumed by nurses in regards to end of life decisions of people: Case of Costa Rica, 2011

    Jerik Andrade Espinales


    The research problem was to analyze the attitude assumed by nurses in regard to end-of-life decisions of people in Costa Rica during 2011. A quantitative, exploratory, cross-sectional methodology was developed, with a multistage random sample and subsample drawn from the Class A national hospitals, yielding 86 nursing professionals who completed a questionnaire. The data obtained were tabulated using a statistical package. The data showed that although most of the participants were unfamiliar with the concept of end-of-life decisions, they related the concept to respect for the dignity, rights, and autonomy of people, and applied these ethical values when providing care. The research team concluded that the sampled nursing professionals favored the mentioned ethical values over their own personal ethics and morals.

  16. A Proposal for Reconstructing Thermodynamic Theory

    何沛平; 朱顶余


    The "heat death" hypothesis is a cosmological inference from the second law of thermodynamics. Because it touches on major questions such as the future of the universe and the destiny of mankind, it has been debated by the scientific and philosophical communities for more than a hundred years. A well-known scholar asked, in the article "Why not heat death", how thermodynamics should be reconstructed in the presence of gravitation. To study this question, starting from the effect of a gravitational field on the temperature distribution of a medium, we study the laws of thermodynamics for a system in an external force field and propose a scheme for reconstructing thermodynamic theory. The scheme concerns the second law of thermodynamics, the zeroth law, and the law of heat flow. After the reconstruction, the laws of thermodynamics are more universal.

  17. Human Reliability Program Workshop

    Landers, John; Rogers, Erin; Gerke, Gretchen


    A Human Reliability Program (HRP) is designed to protect national security as well as worker and public safety by continuously evaluating the reliability of those who have access to sensitive materials, facilities, and programs. Some elements of a site HRP include systematic (1) supervisory reviews, (2) medical and psychological assessments, (3) management evaluations, (4) personnel security reviews, and (5) training of HRP staff and critical positions. Over the years of implementing an HRP, the Department of Energy (DOE) has faced various challenges and overcome obstacles. During this 4-day activity, participants will examine programs that mitigate threats to nuclear security and the insider threat, including HRP, Nuclear Security Culture (NSC) Enhancement, and Employee Assistance Programs. The focus will be to develop an understanding of the need for a systematic HRP and to discuss challenges and best practices associated with mitigating the insider threat.

  18. Accelerator reliability workshop

    Hardy, L.; Duru, Ph.; Koch, J.M.; Revol, J.L.; Van Vaerenbergh, P.; Volpe, A.M.; Clugnet, K.; Dely, A.; Goodhew, D


    About 80 experts attended this workshop, which brought together all accelerator communities: accelerator-driven systems, X-ray sources, medical and industrial accelerators, spallation source projects (American and European), nuclear physics, etc. With newly proposed accelerator applications such as nuclear waste transmutation and the replacement of nuclear power plants, reliability has now become a number-one priority for accelerator designers. Every part of an accelerator facility, from cryogenic systems to data storage via RF systems, is concerned by reliability. This aspect is now taken into account in the design/budget phase, especially for projects whose goal is to reach no more than 10 interruptions per year. This document gathers the slides but not the proceedings of the workshop.

  19. Reliability and construction control

    Sherif S. AbdelSalam


    The goal of this study was to determine the most reliable and efficient combination of design and construction methods required for vibro piles. For a wide range of static and dynamic formulas, the reliability-based resistance factors were calculated using the EGYPT database, which houses load test results for 318 piles. The analysis was extended to introduce a construction control factor that quantifies the variation between the pile nominal capacities calculated using static versus dynamic formulae. Among the major outcomes, the lowest coefficient of variation is associated with Davisson's criterion, and the resistance factors calculated for the AASHTO method are relatively high compared with other methods. Additionally, the CPT-Nottingham and Schmertmann method provided the most economic design. Recommendations related to a pile construction control factor were also presented, and it was found that utilizing the factor can significantly reduce variations between calculated and actual capacities.

  20. Improving Power Converter Reliability

    Ghimire, Pramod; de Vega, Angel Ruiz; Beczkowski, Szymon


    The real-time junction temperature monitoring of a high-power insulated-gate bipolar transistor (IGBT) module is important to increase the overall reliability of power converters for industrial applications. This article proposes a new method to measure the on-state collector-emitter voltage of a high-power IGBT module during converter operation, which may play a vital role in improving the reliability of the power converters. The measured voltage is used to estimate the module average junction temperature of the high- and low-voltage sides of a half-bridge IGBT separately in every fundamental… is measured in a wind power converter at a low fundamental frequency. To illustrate more, the test method as well as the performance of the measurement circuit are also presented. This measurement is also useful to indicate failure mechanisms such as bond wire lift-off and solder layer degradation…

  1. Power electronics reliability.

    Kaplar, Robert James; Brock, Reinhard C.; Marinella, Matthew; King, Michael Patrick; Stanley, James K.; Smith, Mark A.; Atcitty, Stanley


    The project's goals are: (1) use experiments and modeling to investigate and characterize stress-related failure modes of post-silicon power electronic (PE) devices such as silicon carbide (SiC) and gallium nitride (GaN) switches; and (2) seek opportunities for condition monitoring (CM) and prognostics and health management (PHM) to further enhance the reliability of power electronics devices and equipment. CM - detect anomalies and diagnose problems that require maintenance. PHM - track damage growth, predict time to failure, and manage subsequent maintenance and operations in such a way to optimize overall system utility against cost. The benefits of CM/PHM are: (1) operate power conversion systems in ways that will preclude predicted failures; (2) reduce unscheduled downtime and thereby reduce costs; and (3) pioneering reliability in SiC and GaN.

  2. ATLAS reliability analysis

    Bartsch, R.R.


    Key elements of the 36 MJ ATLAS capacitor bank have been evaluated for individual probabilities of failure. These have been combined to estimate system reliability, which is required to be greater than 95% on each experimental shot. This analysis utilizes Weibull or Weibull-like distributions with failure probability increasing with the number of shots. For transmission line insulation, a minimum thickness is obtained, and for the railgaps, a method for obtaining a maintenance interval from forthcoming life tests is suggested.
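The shot-dependent Weibull reasoning can be sketched as follows; the component parameters here are invented for illustration, not ATLAS data.

```python
from math import exp

def shot_reliability(n_shots, eta, beta):
    # Weibull survival as a function of shot count; beta > 1 means the
    # failure probability grows with accumulated shots, as for aging parts.
    return exp(-((n_shots / eta) ** beta))

def system_reliability(n_shots, components):
    # Independent components in series: multiply survival probabilities.
    r = 1.0
    for eta, beta in components:
        r *= shot_reliability(n_shots, eta, beta)
    return r

# Hypothetical bank subsystems (characteristic life in shots, shape factor).
# A maintenance interval is the largest shot count keeping R >= 0.95.
parts = [(500, 2.0), (800, 1.5), (1200, 3.0)]
```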

  3. A Reliability Evaluation System of Association Rules

    Chen, Jiangping; Feng, Wanshu; Luo, Minghai


    In mining association rules, the evaluation of the rules is highly important because it directly affects the usability and applicability of the mining output. In this paper, the concept of reliability was imported into association rule evaluation. The reliability of association rules was defined as the degree to which the rules accord with the mined data set. This degree comprises three levels of measurement, namely, accuracy, completeness, and consistency of rules. To show its effectiveness, the "accuracy-completeness-consistency" reliability evaluation system was applied to two extremely different data sets, namely, a basket simulation data set and a multi-source lightning data fusion. Results show that the reliability evaluation system works well on both the simulation data set and the actual problem. The three-dimensional reliability evaluation can effectively detect useless rules to be screened out and add missing rules, thereby improving the reliability of mining results. Furthermore, the proposed reliability evaluation system is applicable to many research fields; using the system in analysis can facilitate obtaining more accurate, complete, and consistent association rules.
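As a reference point, the standard support and confidence measures (related to, though simpler than, the paper's accuracy dimension) can be computed directly; the basket data below are invented.

```python
def support(transactions, itemset):
    # Fraction of transactions that contain every item in the itemset.
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(transactions, lhs, rhs):
    # Of the transactions containing lhs, the fraction also containing rhs.
    return support(transactions, set(lhs) | set(rhs)) / support(transactions, lhs)

baskets = [{"bread", "milk"}, {"bread", "butter"},
           {"bread", "milk", "butter"}, {"milk"}]
conf = confidence(baskets, {"bread"}, {"milk"})  # 2 of the 3 bread baskets
```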

  4. Reliability-based congestion pricing model under endogenous equilibrated market penetration and compliance rate of ATIS

    钟绍鹏; 邓卫


    A reliability-based stochastic system optimum congestion pricing (SSOCP) model with endogenous market penetration and compliance rate in an advanced traveler information systems (ATIS) environment was proposed. All travelers were divided into two classes: guided travelers, i.e. equipped travelers who follow ATIS advice, and unguided travelers, i.e. unequipped travelers together with equipped travelers who do not follow ATIS advice (also referred to as non-complying travelers). Travelers were assumed to take travel time, congestion pricing, and travel time reliability into account when making route choice decisions. In order to arrive on time, travelers needed to allow a safety margin for their trip. The market penetration of ATIS was determined by a continuously increasing function of the information benefit, and the ATIS compliance rate of equipped travelers was given as the probability that the actually experienced travel costs of guided travelers were less than or equal to those of unguided travelers. The analysis results can enhance our understanding of the effect of travel demand level and travel time reliability confidence level on the ATIS market penetration and compliance rate, and of the effect of travel time perception variation of guided and unguided travelers on the mean travel cost savings (MTCS) of the equipped travelers, the ATIS market penetration, compliance rate, and the total network effective travel time (TNETT).
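The safety-margin idea, budgeting extra time so that on-time arrival holds at a chosen confidence level, has a compact form if trip duration is assumed normally distributed; this is a common simplification, and the paper's own distributional assumptions may differ.

```python
from statistics import NormalDist

def travel_time_budget(mean, std, on_time_probability):
    # Mean travel time plus a safety margin: the standard deviation
    # scaled by the normal quantile of the desired on-time probability.
    return mean + NormalDist().inv_cdf(on_time_probability) * std

# A 30-minute trip with a 5-minute standard deviation needs roughly
# 38 minutes budgeted to arrive on time 95% of the time.
budget = travel_time_budget(30.0, 5.0, 0.95)
```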

  5. Reliability of Summed Item Scores Using Structural Equation Modeling: An Alternative to Coefficient Alpha

    Green, Samuel B.; Yang, Yanyun


    A method is presented for estimating reliability using structural equation modeling (SEM) that allows for nonlinearity between factors and item scores. Assuming the focus is on consistency of summed item scores, this method for estimating reliability is preferred to those based on linear SEM models and to the most commonly reported estimate of…

  6. Failure Analysis and Reliability of BGA Solder Joints in Board-Level Drop Tests



    Studying the reliability of solder joints under drop and impact is one of the key techniques in studying the reliability of electronic products. This paper tested the drop performance of solder joints based on the JEDEC drop test standard, taking into account factors such as the composition of the solder material, the flux, the pad surface treatment, and the solder ball size. Three common solder materials were selected: Sn-3Ag-0.5Cu (SAC305), Sn-1Ag-0.5Cu (SAC105), and Sn63Pb37; the pads were treated by Ni/Au electroplating and by organic solderability preservative (OSP) coating. The failure modes and reliability of the solder joints were examined using dye-and-pry testing and metallographic cross-section analysis. The results indicate that a low Ag content in the lead-free solder and OSP-coated pads both enhance the reliability of BGA solder joints.

  7. Reliability assessment of wave Energy devices

    Ambühl, Simon; Kramer, Morten; Kofoed, Jens Peter


    Energy from waves may play a key role in sustainable electricity production in the future. Optimal reliability levels for components used in Wave Energy Devices (WEDs) need to be defined in order to decrease their cost of electricity. Optimal reliability levels can be found using probabilistic methods. Extreme loads during normal operation, but also extreme loads simultaneous with failure of mechanical and electrical components as well as the control system, are of importance for WEDs. Furthermore, fatigue loading needs to be assessed. This paper focuses on the Wavestar prototype, which is located…

  8. Ultimately Reliable Pyrotechnic Systems

    Scott, John H.; Hinkel, Todd


    This paper presents the methods by which NASA has designed, built, tested, and certified pyrotechnic devices for high-reliability operation in extreme environments and illustrates the potential applications in the oil and gas industry. NASA's extremely successful application of pyrotechnics is built upon documented procedures and test methods that have been maintained and developed since the Apollo Program. Standards are managed and rigorously enforced for performance margins, redundancy, lot sampling, and personnel safety. The pyrotechnics utilized in spacecraft include such devices as small initiators and detonators with the power of a shotgun shell, detonating cord systems for explosive energy transfer across many feet, precision linear shaped charges for breaking structural membranes, and booster charges to actuate valves and pistons. NASA's pyrotechnics program is one of the more successful in the history of Human Spaceflight. No pyrotechnic device developed in accordance with NASA's Human Spaceflight standards has ever failed in flight use. NASA's pyrotechnic initiators work reliably in temperatures as low as -420 °F. Each of the 135 Space Shuttle flights fired 102 of these initiators, some setting off multiple pyrotechnic devices, with never a failure. The recent landing on Mars of the Curiosity rover fired 174 of NASA's pyrotechnic initiators to complete the famous '7 minutes of terror.' Even after traveling through extreme radiation and thermal environments on the way to Mars, every one of them worked. These initiators have fired on the surface of Titan. NASA's design controls, procedures, and processes produce the most reliable pyrotechnics in the world.
Application of pyrotechnics designed and procured in this manner could enable the energy industry's emergency equipment, such as shutoff valves and deep-sea blowout preventers, to be left in place for years in extreme environments and still be relied upon to function when needed, thus greatly enhancing

  9. Ferrite logic reliability study

    Baer, J. A.; Clark, C. B.


    Development and use of digital circuits called all-magnetic logic are reported. In these circuits the magnetic elements and their windings comprise the active circuit devices in the logic portion of a system. The ferrite logic device belongs to the all-magnetic class of logic circuits. The FLO device is novel in that it makes use of a dual or bimaterial ferrite composition in one physical ceramic body. This bimaterial feature, coupled with its potential for relatively high speed operation, makes it attractive for high reliability applications. (Maximum speed of operation approximately 50 kHz.)

  10. Blade reliability collaborative:

    Ashwill, Thomas D.; Ogilvie, Alistair B.; Paquette, Joshua A.


    The Blade Reliability Collaborative (BRC) was started by the Wind Energy Technologies Department of Sandia National Laboratories and DOE in 2010 with the goal of gaining insight into planned and unplanned O&M issues associated with wind turbine blades. A significant part of BRC is the Blade Defect, Damage and Repair Survey task, which will gather data from blade manufacturers, service companies, operators and prior studies to determine details about the largest sources of blade unreliability. This report summarizes the initial findings from this work.

  11. The canonical Luminous Blue Variable AG Car and its neighbor Hen 3-519 are much closer than previously assumed

    Smith, Nathan


    The strong mass loss of Luminous Blue Variables (LBVs) is thought to play a critical role in the evolution of massive stars, but the physics of their instability and their place in the evolutionary sequence remain uncertain and debated. A key to understanding their peculiar instability is their high observed luminosity, which for Galactic LBVs often depends on an uncertain distance estimate. Here we report direct distances and space motions of four canonical Milky Way LBVs---AG Car, HR Car, HD 168607, and (the LBV candidate) Hen 3-519---whose parallaxes and proper motions have been provided by the Gaia first data release. Whereas the distances of HR Car and HD 168607 are consistent with those previously adopted in the literature within the uncertainty, we find that the distances to Hen 3-519 and AG Car, both at ~2 kpc, are much closer than the 6--8 kpc distances previously assumed. For Hen 3-519, this moves the star far from the locus of LBVs on the HR Diagram. AG Car has been considered a defining exam...

  12. Importance of the habitat choice behavior assumed when modeling the effects of food and temperature on fish populations

    Wildhaber, Mark L.; Lamberson, Peter J.


    Various mechanisms of habitat choice in fishes based on food and/or temperature have been proposed: optimal foraging for food alone; behavioral thermoregulation for temperature alone; and behavioral energetics and discounted matching for food and temperature combined. Along with development of habitat choice mechanisms, there has been a major push to develop and apply to fish populations individual-based models that incorporate various forms of these mechanisms. However, it is not known how the wide variation in observed and hypothesized mechanisms of fish habitat choice could alter fish population predictions (e.g. growth, size distributions, etc.). We used spatially explicit, individual-based modeling to compare predicted fish populations using different submodels of patch choice behavior under various food and temperature distributions. We compared predicted growth, temperature experience, food consumption, and final spatial distribution using the different models. Our results demonstrated that the habitat choice mechanism assumed in fish population modeling simulations was critical to predictions of fish distribution and growth rates. Hence, resource managers who use modeling results to predict fish population trends should be very aware of and understand the underlying patch choice mechanisms used in their models to assure that those mechanisms correctly represent the fish populations being modeled.

  13. Distance determination for RAVE stars using stellar models II: Most likely values assuming a standard stellar evolution scenario

    Zwitter, T; Breddels, M A; Smith, M C; Helmi, A; Munari, U; Bienaymé, O; Bland-Hawthorn, J; Boeche, C; Brown, A G A; Campbell, R; Freeman, K C; Fulbright, J; Gibson, B; Gilmore, G; Grebel, E K; Navarro, J F; Parker, Q A; Seabroke, G M; Siebert, A; Siviero, A; Steinmetz, M; Watson, F G; Williams, M; Wyse, R F G


    The RAdial Velocity Experiment (RAVE) is a spectroscopic survey of the Milky Way. We use the subsample of spectra with spectroscopically determined values of stellar parameters to determine the distances to these stars. The list currently contains 235,064 high quality spectra which show no peculiarities and belong to 210,872 different stars. The numbers will grow as the RAVE survey progresses. The public version of the catalog will be made available through the CDS services along with the ongoing RAVE public data releases. The distances are determined with a method based on the work by Breddels et al. (2010). Here we assume that the star undergoes a standard stellar evolution and that its spectrum shows no peculiarities. The refinements include: the use of either of the three isochrone sets, a better account of the stellar ages and masses, use of more realistic errors of stellar parameter values, and application to a larger dataset. The derived distances of both dwarfs and giants match within ~21% to the astr...

  14. Controlling a transfer trajectory with realistic impulses assuming perturbations in the Sun-Earth-Moon Quasi-Bicircular Problem

    Leiva, A. M.; Briozzo, C. B.

    In a previous work we successfully implemented a control algorithm to stabilize unstable periodic orbits in the Sun-Earth-Moon Quasi-Bicircular Problem (QBCP). Applying the same techniques, in this work we stabilize an unstable trajectory performing fast transfers between the Earth and the Moon in a dynamical system similar to the QBCP but incorporating the gravitational perturbations of the planets Mercury, Venus, Mars, Jupiter, Saturn, Uranus, and Neptune, assumed to move on circular coplanar heliocentric orbits. In the control stage we used as a reference trajectory an unstable periodic orbit of the unperturbed QBCP. We performed 400 numerical experiments integrating the trajectories over time spans of ~40 years, taking random values for the initial positions of the planets in each. In all cases the control impulses applied were larger than 20 cm/s, consistent with realistic implementations. The minimal and maximal yearly mean consumptions were ~10 m/s and ~71 m/s, respectively. FULL TEXT IN SPANISH

  15. Analysis on testing and operational reliability of software

    ZHAO Jing; LIU Hong-wei; CUI Gang; WANG Hui-qiang


    Software reliability was estimated based on NHPP software reliability growth models. Testing reliability and operational reliability may be essentially different. Based on an analysis of the similarities and differences between the testing phase and the operational phase, and using the concepts of operational reliability and testing reliability, different forms of the comparison between the operational failure ratio and the predicted testing failure ratio were conducted, and a detailed mathematical discussion and analysis were performed. Finally, optimal software release was studied using software failure data. The results show that two kinds of conclusions can be derived by applying this method: one is to continue testing to meet the reliability level required by users; the other is to stop testing once the required operational reliability is met, thereby reducing the testing cost.

  16. Will male advertisement be a reliable indicator of paternal care, if offspring survival depends on male care?

    Kelly, Natasha B; Alonzo, Suzanne H


    Existing theory predicts that male signalling can be an unreliable indicator of paternal care, but assumes that males with high levels of mating success can have high current reproductive success, without providing any parental care. As a result, this theory does not hold for the many species where offspring survival depends on male parental care. We modelled male allocation of resources between advertisement and care for species with male care where males vary in quality, and the effect of care and advertisement on male fitness is multiplicative rather than additive. Our model predicts that males will allocate proportionally more of their resources to whichever trait (advertisement or paternal care) is more fitness limiting. In contrast to previous theory, we find that male advertisement is always a reliable indicator of paternal care and male phenotypic quality (e.g. males with higher levels of advertisement never allocate less to care than males with lower levels of advertisement). Our model shows that the predicted pattern of male allocation and the reliability of male signalling depend very strongly on whether paternal care is assumed to be necessary for offspring survival and how male care affects offspring survival and male fitness.
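The multiplicative allocation logic described above can be sketched numerically. The power-law forms below are hypothetical Cobb-Douglas-style stand-ins, not the authors' actual model; they merely illustrate why, with multiplicative fitness, allocation shifts toward whichever trait is more fitness-limiting.

```python
def optimal_advertisement(R, alpha, beta, steps=10000):
    """Grid-search the advertisement allocation a in (0, R) maximizing the
    multiplicative fitness W(a) = a**alpha * (R - a)**beta, where paternal
    care receives the remaining resources c = R - a."""
    best_a, best_w = 0.0, -1.0
    for i in range(1, steps):
        a = R * i / steps
        w = (a ** alpha) * ((R - a) ** beta)
        if w > best_w:
            best_a, best_w = a, w
    return best_a

# With these forms the optimum is a* = R * alpha / (alpha + beta): when care
# is the more fitness-limiting trait (beta > alpha), more resources go to care.
print(round(optimal_advertisement(10.0, 1.0, 3.0), 2))   # 2.5
```

Since the optimal advertisement level rises with total resources R, higher-quality males advertise more while never allocating less to care, which is the sense in which the signal stays reliable.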

  18. Reliability, return periods, and risk under nonstationarity

    Read, Laura K.; Vogel, Richard M.


    Water resources design has widely used the average return period as a concept to inform management and communication of the risk of experiencing an exceedance event within a planning horizon. Even though nonstationarity is often apparent, in practice hydrologic design often mistakenly assumes that the probability of exceedance, p, is constant from year to year, which leads to an average return period T equal to 1/p; this expression is far more complex under nonstationarity. Even for stationary processes, the common application of an average return period is problematic: it does not account for the planning horizon, it is an average value that may not be representative of the time to the next flood, and it is generally not applied in other areas of water planning. We combine existing theoretical and empirical results from the literature to provide the first general, comprehensive description of the probabilistic behavior of the return period and reliability under nonstationarity. We show that under nonstationarity, the underlying distribution of the return period exhibits a more complex shape than the exponential distribution under stationary conditions. Using a nonstationary lognormal model, we document the increased complexity and challenges associated with planning for future flood events over a planning horizon. We compare application of the average return period with the more common concept of reliability and recommend replacing the average return period with reliability as a more practical way to communicate event likelihood in both stationary and nonstationary contexts.
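The contrast the abstract draws can be sketched directly from the waiting-time distribution: with year-dependent exceedance probabilities p_t, P(first exceedance in year t) = p_t · ∏_{s&lt;t}(1 − p_s), and only in the stationary case does the mean reduce to 1/p. The 2%-per-year trend below is a hypothetical illustration, not a value from the paper.

```python
def mean_return_period(p_of_year, horizon=100000):
    """Mean waiting time (in years) until the first exceedance, for a
    sequence of annual exceedance probabilities p_t = p_of_year(t)."""
    survive = 1.0    # probability no exceedance occurred before year t
    mean_t = 0.0
    for t in range(1, horizon + 1):
        p_t = p_of_year(t)
        mean_t += t * survive * p_t   # P(first exceedance in year t)
        survive *= 1.0 - p_t
        if survive < 1e-15:           # remaining tail is negligible
            break
    return mean_t

# Stationary case: constant p = 0.01 recovers the textbook T = 1/p = 100 years.
stationary = mean_return_period(lambda t: 0.01)
# Nonstationary case: a hypothetical 2%-per-year upward trend in p shortens it
# (the exponent is capped only to avoid float overflow once p_t saturates at 1).
nonstationary = mean_return_period(lambda t: min(1.0, 0.01 * 1.02 ** min(t - 1, 500)))
print(round(stationary, 2), round(nonstationary, 2))
```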

  19. Reliability Analysis of Bearing Capacity of Square Footing on Soil with Strength Anisotropy Due to Layered Microstructure

    Kawa Marek


    The paper deals with reliability analysis of a square footing on soil with strength anisotropy. The strength of the soil is described by an identified anisotropic strength criterion dedicated to geomaterials with layered microstructure. The analysis assumes the dip angle α and azimuth angle β, which define the direction of lamination of the structure, to be random variables with given probability density functions. The bearing capacity, being a function of these variables, is approximated from the results of deterministic simulations obtained for a variety of orientations. The weighted regression method of Kaymaz and McMahon, within the framework of the Response Surface Method, is used for the approximation. As a result of the analysis, the global factor of safety that corresponds to an assumed probability of failure is determined. The value of the safety factor denotes the ratio between the design load and the mean value of the bearing capacity which is needed to reduce the probability of failure to the acceptable level. The procedure for calculating the factor is presented for two different cases. In the first case, no information about the lamination direction of the soil is provided, and thus all orientations are assumed to be equally probable (uniform distribution). In the second case, statistical information including the mean, variance, and an assumed probability distribution for both the α and β angles is known. For the latter case, using results obtained for a few different values of the mean of angle α, the influence of strength anisotropy on the value of the global factor of safety is also shown.
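The "global factor of safety" idea can be sketched by Monte Carlo sampling over random lamination angles. The response surface below is a made-up smooth function standing in for the paper's identified anisotropic criterion, and the uniform angle distributions correspond to the first (no-information) case.

```python
import math
import random

def bearing_capacity(alpha, beta, q0=1000.0):
    """Hypothetical anisotropic response surface (illustration only, not the
    identified criterion of the paper): capacity drops for unfavorable
    lamination orientations (alpha = dip angle, beta = azimuth, in radians)."""
    return q0 * (1.0 - 0.3 * math.sin(alpha) ** 2 * abs(math.cos(beta)))

def global_safety_factor(n=100000, p_f=1e-3, seed=7):
    """Global factor of safety = mean capacity / design load, where the design
    load is chosen so that P(capacity < load) equals the target p_f."""
    rng = random.Random(seed)
    q = sorted(bearing_capacity(rng.uniform(0.0, math.pi / 2),
                                rng.uniform(0.0, 2.0 * math.pi))
               for _ in range(n))
    design_load = q[int(p_f * n)]        # empirical p_f-quantile of capacity
    return (sum(q) / n) / design_load

print(round(global_safety_factor(), 2))
```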

  20. Load Control System Reliability

    Trudnowski, Daniel [Montana Tech of the Univ. of Montana, Butte, MT (United States)]


    This report summarizes the results of the Load Control System Reliability project (DOE Award DE-FC26-06NT42750). The original grant was awarded to Montana Tech April 2006. Follow-on DOE awards and expansions to the project scope occurred August 2007, January 2009, April 2011, and April 2013. In addition to the DOE monies, the project also consisted of matching funds from the states of Montana and Wyoming. Project participants included Montana Tech; the University of Wyoming; Montana State University; NorthWestern Energy, Inc.; and MSE. Research focused on two areas: real-time power-system load control methodologies; and power-system measurement-based stability-assessment operation and control tools. The majority of effort was focused on area 2. Results from the research include: development of fundamental power-system dynamic concepts, control schemes, and signal-processing algorithms; many papers (including two prize papers) in leading journals and conferences and leadership of IEEE activities; one patent; participation in major actual-system testing in the western North American power system; prototype power-system operation and control software installed and tested at three major North American control centers; and the incubation of a new commercial-grade operation and control software tool. Work under this grant certainly supported the DOE-OE goals in the area of “Real Time Grid Reliability Management.”

  1. Reliable predictions of waste performance in a geologic repository

    Pigford, T.H.; Chambre, P.L.


    Establishing reliable estimates of long-term performance of a waste repository requires emphasis upon valid theories to predict performance. Predicting rates that radionuclides are released from waste packages cannot rest upon empirical extrapolations of laboratory leach data. Reliable predictions can be based on simple bounding theoretical models, such as solubility-limited bulk-flow, if the assumed parameters are reliably known or defensibly conservative. Wherever possible, performance analysis should proceed beyond simple bounding calculations to obtain more realistic - and usually more favorable - estimates of expected performance. Desire for greater realism must be balanced against increasing uncertainties in prediction and loss of reliability. Theoretical predictions of release rate based on mass-transfer analysis are bounding and the theory can be verified. Postulated repository analogues to simulate laboratory leach experiments introduce arbitrary and fictitious repository parameters and are shown not to agree with well-established theory. 34 refs., 3 figs., 2 tabs.

  2. OSS reliability measurement and assessment

    Yamada, Shigeru


    This book analyses quantitative open source software (OSS) reliability assessment and its applications, focusing on three major topic areas: the Fundamentals of OSS Quality/Reliability Measurement and Assessment; the Practical Applications of OSS Reliability Modelling; and Recent Developments in OSS Reliability Modelling. Offering an ideal reference guide for graduate students and researchers in reliability for open source software (OSS) and modelling, the book introduces several methods of reliability assessment for OSS including component-oriented reliability analysis based on analytic hierarchy process (AHP), analytic network process (ANP), and non-homogeneous Poisson process (NHPP) models, the stochastic differential equation models and hazard rate models. These measurement and management technologies are essential to producing and maintaining quality/reliable systems using OSS.

  3. Reliability and validity in research.

    Roberts, Paula; Priest, Helena

    This article examines reliability and validity as ways to demonstrate the rigour and trustworthiness of quantitative and qualitative research. The authors discuss the basic principles of reliability and validity for readers who are new to research.

  4. Activation of the aryl hydrocarbon receptor by carbaryl: Computational evidence of the ability of carbaryl to assume a planar conformation.

    Casado, Susana; Alonso, Mercedes; Herradón, Bernardo; Tarazona, José V; Navas, José


    It has been accepted that aryl hydrocarbon receptor (AhR) ligands are compounds with two or more aromatic rings in a coplanar conformation. Although general agreement exists that carbaryl is able to activate the AhR, it has been proposed that such activation could occur through alternative pathways without ligand binding. This idea was supported by studies showing a planar conformation of carbaryl as unlikely. The objective of the present work was to clarify the process of AhR activation by carbaryl. In rat H4IIE cells permanently transfected with a luciferase gene under the indirect control of AhR, incubation with carbaryl led to an increase of luminescence. Ligand binding to the AhR was studied by means of a cell-free in vitro system in which the activation of AhR can occur only by ligand binding. In this system, exposure to carbaryl also led to activation of AhR. These results were similar to those obtained with the AhR model ligand beta-naphthoflavone, although this compound exhibited higher potency than carbaryl in both assays. By means of computational modeling (molecular mechanics and quantum chemical calculations), the structural characteristics and electrostatic properties of carbaryl were described in detail, and it was observed that the substituent at C-1 and the naphthyl ring were not coplanar. Assuming that carbaryl would interact with the AhR through a hydrogen bond, this interaction was studied computationally using hydrogen fluoride as a model H-bond donor. Under this situation, the stabilization energy of the carbaryl molecule would permit it to adopt a planar conformation. These results are in accordance with the mechanism traditionally accepted for AhR activation: Binding of ligands in a planar conformation.

  5. The Canonical Luminous Blue Variable AG Car and Its Neighbor Hen 3-519 are Much Closer than Previously Assumed

    Smith, Nathan; Stassun, Keivan G.


    The strong mass loss of Luminous Blue Variables (LBVs) is thought to play a critical role in massive-star evolution, but their place in the evolutionary sequence remains debated. A key to understanding their peculiar instability is their high observed luminosities, which often depend on uncertain distances. Here we report direct distances and space motions of four canonical Milky Way LBVs—AG Car, HR Car, HD 168607, and (candidate) Hen 3-519—from the Gaia first data release. Whereas the distances of HR Car and HD 168607 are consistent with previous literature estimates within the considerable uncertainties, Hen 3-519 and AG Car, both at ∼2 kpc, are much closer than the 6–8 kpc distances previously assumed. As a result, Hen 3-519 moves far from the locus of LBVs on the Hertzsprung–Russell diagram, making it a much less luminous object. For AG Car, considered a defining example of a classical LBV, its lower luminosity would also move it off the S Dor instability strip. Lower luminosities allow both AG Car and Hen 3-519 to have passed through a previous red supergiant phase, lower the mass estimates for their shell nebulae, and imply that binary evolution is needed to account for their peculiarities. These results may also impact our understanding of LBVs as potential supernova progenitors and their isolated environments. Improved distances will be provided in the Gaia second data release, which will include additional LBVs. AG Car and Hen 3-519 hint that this new information may alter our traditional view of LBVs.
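The distance revision rests on the elementary parallax inversion d [kpc] = 1/ϖ [mas]. The sketch below uses placeholder parallax values for illustration, not the actual Gaia DR1 measurements for these stars.

```python
def parallax_to_distance_kpc(parallax_mas, sigma_mas=None):
    """Naive parallax inversion: d [kpc] = 1 / parallax [mas].  Optionally
    returns a first-order uncertainty, sigma_d = sigma_parallax / parallax**2.
    (This inversion is biased at low signal-to-noise, so treat it only as a
    back-of-the-envelope check.)"""
    d = 1.0 / parallax_mas
    if sigma_mas is None:
        return d
    return d, sigma_mas / parallax_mas ** 2

# A 0.5 mas parallax corresponds to 2 kpc; 0.125 mas would correspond to 8 kpc,
# so distinguishing ~2 kpc from 6-8 kpc hinges on sub-0.1 mas precision.
print(parallax_to_distance_kpc(0.5))     # 2.0
```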

  6. Guidelines for Reporting Reliability and Agreement Studies (GRRAS) were proposed

    Kottner, Jan; Audigé, Laurent; Brorson, Stig;


    Results of reliability and agreement studies are intended to provide information about the amount of error inherent in any diagnosis, score, or measurement. The level of reliability and agreement among users of scales, instruments, or classifications is widely unknown. Therefore, there is a need for rigorously conducted interrater and intrarater reliability and agreement studies. Information about sample selection, study design, and statistical analysis is often incomplete. Because of inadequate reporting, interpretation and synthesis of study results are often difficult. Widely accepted criteria, standards, or guidelines for reporting reliability and agreement in the health care and medical field are lacking. The objective was to develop guidelines for reporting reliability and agreement studies.

  7. Reliability and Its Quantitative Measures

    Alexandru ISAIC-MANIU


    This article opens up the topic of software reliability through a wide range of statistical indicators, designed on the basis of information collected during operation or testing (samples). The reliability analysis is also developed for the main reliability laws (exponential, normal, Weibull), which, once validated for a particular system, allow reliability indicators to be calculated with a higher degree of accuracy and trustworthiness.
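For two of the laws mentioned, the standard textbook reliability indicators can be sketched directly; the numerical values below are illustrative, not drawn from the article.

```python
import math

def reliability_exponential(t, lam):
    """Exponential law: R(t) = exp(-lambda*t), constant hazard, MTTF = 1/lambda."""
    return math.exp(-lam * t)

def reliability_weibull(t, beta, eta):
    """Weibull law: R(t) = exp(-(t/eta)**beta); beta < 1 infant mortality,
    beta = 1 exponential, beta > 1 wear-out."""
    return math.exp(-((t / eta) ** beta))

def mttf_weibull(beta, eta):
    """Weibull mean time to failure: eta * Gamma(1 + 1/beta)."""
    return eta * math.gamma(1.0 + 1.0 / beta)

# A component with failure rate 1e-4 /h has ~90.5% reliability at 1000 h:
print(round(reliability_exponential(1000, 1e-4), 4))   # 0.9048
# The Weibull law reduces to the exponential one when beta = 1:
print(math.isclose(reliability_weibull(1000, 1.0, 1e4),
                   reliability_exponential(1000, 1e-4)))   # True
```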

  8. 2017 NREL Photovoltaic Reliability Workshop

    Kurtz, Sarah [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]


    NREL's Photovoltaic (PV) Reliability Workshop (PVRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology -- both critical goals for moving PV technologies deeper into the electricity marketplace.

  9. Testing for PV Reliability (Presentation)

    Kurtz, S.; Bansal, S.


    The DOE SUNSHOT workshop is seeking input from the community about PV reliability and how the DOE might address gaps in understanding. This presentation describes the types of testing that are needed for PV reliability and introduces a discussion to identify gaps in our understanding of PV reliability testing.

  10. Reliable Quantum Computers

    Preskill, J


    The new field of quantum error correction has developed spectacularly since its origin less than two years ago. Encoded quantum information can be protected from errors that arise due to uncontrolled interactions with the environment. Recovery from errors can work effectively even if occasional mistakes occur during the recovery procedure. Furthermore, encoded quantum information can be processed without serious propagation of errors. Hence, an arbitrarily long quantum computation can be performed reliably, provided that the average probability of error per quantum gate is less than a certain critical value, the accuracy threshold. A quantum computer storing about 10^6 qubits, with a probability of error per quantum gate of order 10^-6, would be a formidable factoring engine. Even a smaller, less accurate quantum computer would be able to perform many useful tasks. (This paper is based on a talk presented at the ITP Conference on Quantum Coherence and Decoherence, 15-18 December 1996.)

  11. Reliability prediction from burn-in data fit to reliability models

    Bernstein, Joseph


    This work educates chip and system designers on a method for accurately predicting circuit and system reliability in order to estimate failures that will occur in the field as a function of operating conditions at the chip level. The book combines the knowledge taught in many reliability publications and illustrates how to use the information presented by semiconductor manufacturing companies, in combination with the HTOL end-of-life testing currently performed by chip suppliers as part of their standard qualification procedure, to make accurate reliability predictions.

  12. DOE Nuclear Weapon Reliability Definition: History, Description, and Implementation

    Wright, D.L.; Cashen, J.J.; Sjulin, J.M.; Bierbaum, R.L.; Kerschen, T.J.


    The overarching goal of the Department of Energy (DOE) nuclear weapon reliability assessment process is to provide a quantitative metric that reflects the ability of the weapons to perform their intended function successfully. This white paper is intended to provide insight into the current and long-standing DOE definition of nuclear weapon reliability, which can be summarized as: The probability of achieving the specified yield, at the target, across the Stockpile-To-Target Sequence of environments, throughout the weapon's lifetime, assuming proper inputs.

  13. Reliability with imperfect diagnostics. [flight-maintenance sequence

    White, A. L.


    A reliability estimation method for systems that continually accumulate faults because of imperfect diagnostics is developed and an application for redundant digital avionics is presented. The present method assumes that if a fault does not appear in a short period of time, it will remain hidden until a majority of components are faulty and the system fails. A certain proportion of a component's faults are detected in a short period of time, and a description of their detection is included in the reliability model. A Markov model of failure during flight for a nonreconfigurable five-plex is presented for a sequence of one-hour flights followed by maintenance.
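The fault-accumulation idea described above can be sketched as a small Markov-style computation. This is an illustrative simplification with hypothetical parameters, not the paper's exact model: each flight a component acquires a fault with some probability, a fraction of faults is detected and repaired at maintenance, and the rest stay hidden until a majority of the five components are faulty.

```python
from math import comb

def prob_system_failure(flights, p=1e-4, coverage=0.9):
    """Probability that a five-plex has accumulated >= 3 hidden faults (loss
    of majority voting) after the given number of flight-maintenance cycles.
    Hypothetical parameters: per-flight component fault probability p, of
    which a fraction `coverage` is detected quickly and repaired."""
    p_hidden = p * (1.0 - coverage)      # per-flight prob a fault stays hidden
    dist = [1.0] + [0.0] * 5             # P(k hidden faults), k = 0..5
    for _ in range(flights):
        new = [0.0] * 6
        for k, pk in enumerate(dist):
            healthy = 5 - k
            for j in range(healthy + 1):  # j new hidden faults this flight
                pj = comb(healthy, j) * p_hidden ** j * (1.0 - p_hidden) ** (healthy - j)
                new[k + j] += pk * pj
        dist = new
    return sum(dist[3:])

print(prob_system_failure(1000))
```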

  14. Reliability-based Assessment of Fatigue Life for Bridges

    Toft, Henrik Stensgaard; Sørensen, John Dalsgaard


    The reliability level for bridges is discussed based on a comparison of the reliability levels proposed and used by, e.g., JCSS, ISO, NKB, and the Eurocodes. The influence of reserve capacity, by which failure of a specific detail does not lead to structural collapse, is investigated. The results show ... taking the effect of regular inspections into account. In an illustrative example, the reliability level for a set of typical welded steel details is investigated along with the influence of different inspection methods. The results show that the reliability level can be significantly increased by applying regular inspections. However, the accuracy of the inspection methods has a significant influence on how much the reliability level is increased by an inspection.

  15. Continuous Reliability Enhancement for Wind (CREW) database :

    Hines, Valerie Ann-Peters; Ogilvie, Alistair B.; Bond, Cody R.


    To benchmark the current U.S. wind turbine fleet reliability performance and identify the major contributors to component-level failures and other downtime events, the Department of Energy funded the development of the Continuous Reliability Enhancement for Wind (CREW) database by Sandia National Laboratories. This report is the third annual Wind Plant Reliability Benchmark, which publicly reports CREW findings for the wind industry. The CREW database uses both high-resolution Supervisory Control and Data Acquisition (SCADA) data from operating plants and Strategic Power Systems' ORAPWind (Operational Reliability Analysis Program for Wind) data, which consist of downtime and reserve event records and daily summaries of various time categories for each turbine. Together, these data are used as inputs to CREW's reliability modeling. The results presented here include: the primary CREW Benchmark statistics (operational availability, utilization, capacity factor, mean time between events, and mean downtime); time accounting from an availability perspective; time accounting in terms of the combination of wind speed and generation levels; power curve analysis; and the top system and component contributors to unavailability.

  16. Lower confidence limits for structure reliability

    CHEN Jiading; LI Ji


    For a class of data often arising in engineering, we have developed an approach to compute the lower confidence limit for structural reliability at a given confidence level. In particular, concrete computational methods are presented for the case with no failures and the case with exactly one failure.
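For the simplest pass/fail setting, the classic binomial (Clopper-Pearson-style) lower bound can be sketched as below; the paper's method for its special data class may differ, so this is only the textbook baseline. With zero failures it reduces to the closed form R_L = (1 − C)^(1/n).

```python
from math import comb

def reliability_lcl(n, failures, confidence=0.90):
    """Lower confidence limit R_L for reliability from n pass/fail tests with
    the given number of failures: solves
        sum_{k=0}^{f} C(n,k) * (1-R)**k * R**(n-k) = 1 - confidence
    for R by bisection (one-sided Clopper-Pearson-style binomial bound)."""
    target = 1.0 - confidence

    def tail(R):   # P(at most `failures` failures in n trials), failure prob 1-R
        return sum(comb(n, k) * (1.0 - R) ** k * R ** (n - k)
                   for k in range(failures + 1))

    lo, hi = 0.0, 1.0
    while hi - lo > 1e-12:   # tail(R) is increasing in R, so bisection works
        mid = 0.5 * (lo + hi)
        if tail(mid) < target:
            lo = mid
        else:
            hi = mid
    return lo

# Zero failures in 20 tests at 90% confidence: R_L = 0.1**(1/20) ≈ 0.8913.
print(round(reliability_lcl(20, 0), 4))   # 0.8913
```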

  17. Optimal reserve capacity allocation with consideration of customer reliability requirements

    Najafi, M. [Department of Engineering, Islamic Azad University, Science and Research Branch, Tehran (Iran)]; Ehsan, M.; Fotuhi-Firuzabad, M. [Department of Electrical Engineering, Sharif University of Technology, Tehran (Iran)]; Akhavein, A. [Department of Engineering, Islamic Azad University, Tehran South Branch (Iran)]; Afshar, K. [Department of Electrical Engineering, Imam Khomeini International University, Qazvin (Iran)]


    An algorithm for determining optimal reserve capacity in a power market is presented in this paper. The optimization process in the proposed algorithm is based on a cost-benefit trade-off. Market clearance is executed with consideration of the uncertainties of power system components in an aggregated environment. It is assumed that both generating units and interruptible loads participate in the reserve market. In addition, customers' reliability requirements are considered as constraints on the ISO's decision-making process. The proposed method considers random outages of generating units and transmission lines and scheduled outages of interruptible loads, and employs Monte Carlo Simulation (MCS) for scenario generation. Unlike previous methods, in which a constant value is assumed for the cost of energy not supplied, a flexible value for this parameter is applied, which has an important effect on the evaluation results. The performance of the proposed method has been examined on the IEEE Reliability Test System (IEEE-RTS). (author)

  18. Electronics reliability calculation and design

    Dummer, Geoffrey W A; Hiller, N


    Electronics Reliability-Calculation and Design provides an introduction to the fundamental concepts of reliability. The increasing complexity of electronic equipment has made problems in designing and manufacturing a reliable product more and more difficult. Specific techniques have been developed that enable designers to integrate reliability into their products, and reliability has become a science in its own right. The book begins with a discussion of basic mathematical and statistical concepts, including arithmetic mean, frequency distribution, median and mode, scatter or dispersion of mea

  19. Power Industry Reliability Coordination in Asia in a Market Environment

    Nikolai I. Voropai


    This paper addresses the problems of power supply reliability in a market environment. The specific features of economic interrelations between the power supply organization and consumers in terms of reliability assurance are examined, and the principles of providing power supply reliability are formulated. The economic mechanisms for coordinating the interests of the power supply organization and consumers to provide power supply reliability are discussed. The reliability of China's restructured power industry is introduced and some reliability data are provided; the data show that the reliability level has increased significantly in the past two decades, and more and more measures are being applied to guarantee the reliability of the restructured power systems. The reliability issues and challenges facing the Chinese power industry are considered. The paper then examines the evolution of power grids in India, the establishment of a regulatory framework, and the operational philosophy for reliability in long-, mid- and short-term (operational/outage) planning. Grid security, restoration, and mock trials for black start are considered from the reliability angle. Related issues of islanding operation to improve service reliability in Thailand's electric power system are then analyzed.

  20. Hybrid reliability model for fatigue reliability analysis of steel bridges

    曹珊珊; 雷俊卿


    A hybrid reliability model is presented to solve fatigue reliability problems of steel bridges. The cumulative damage model is one of the models used in fatigue reliability analysis; the parameter characteristics of the model can be described as probabilistic and interval. A two-stage hybrid reliability model is given, with a theoretical foundation and a solving algorithm for the hybrid reliability problems. The theoretical foundation is established from the consistency relationships between the interval reliability model and the probability reliability model with normally distributed variables. The solving process combines the definition of the interval reliability index with the probabilistic algorithm. Taking into account the parameter characteristics of the S-N curve, the cumulative damage model with hybrid variables is given based on standards from different countries. Finally, a steel structure case from the Neville Island Bridge is analyzed to verify the applicability of the hybrid reliability model in fatigue reliability analysis based on the AASHTO specifications.
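
The deterministic core of the cumulative damage model is the Palmgren-Miner rule with an S-N curve, which can be sketched as follows (the S-N constants A and m and the stress spectrum are hypothetical; the paper treats them as probabilistic/interval quantities):

```python
def miner_damage(stress_blocks, A, m):
    """Palmgren-Miner cumulative damage: D = sum n_i / N_i, where
    N_i = A * S_i**(-m) is the cycles-to-failure at stress range S_i
    from an assumed S-N curve. Failure is predicted when D >= 1."""
    return sum(n / (A * s ** (-m)) for s, n in stress_blocks)

# Hypothetical stress spectrum: (stress range in MPa, applied cycles)
blocks = [(100, 2e5), (60, 1e6)]
D = miner_damage(blocks, A=1e12, m=3)
print(round(D, 3))  # 0.416
```

In the hybrid model the same damage sum is evaluated with A, m and the stress ranges taken as random variables or intervals, producing a reliability index rather than a single damage value.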

  1. Mathematical reliability an expository perspective

    Mazzuchi, Thomas; Singpurwalla, Nozer


    In this volume consideration was given to more advanced theoretical approaches and novel applications of reliability to ensure that topics having a futuristic impact were specifically included. Topics like finance, forensics, information, and orthopedics, as well as the more traditional reliability topics were purposefully undertaken to make this collection different from the existing books in reliability. The entries have been categorized into seven parts, each emphasizing a theme that seems poised for the future development of reliability as an academic discipline with relevance. The seven parts are networks and systems; recurrent events; information and design; failure rate function and burn-in; software reliability and random environments; reliability in composites and orthopedics, and reliability in finance and forensics. Embedded within the above are some of the other currently active topics such as causality, cascading, exchangeability, expert testimony, hierarchical modeling, optimization and survival...

  2. On reliability analysis of multi-categorical forecasts

    J. Bröcker


    Reliability analysis of probabilistic forecasts, in particular through the rank histogram or Talagrand diagram, is revisited. Two shortcomings are pointed out: firstly, a uniform rank histogram is only a necessary condition for reliability; secondly, if the forecast is assumed to be reliable, an indication is needed of how far a histogram is expected to deviate from uniformity merely due to randomness. Concerning the first shortcoming, it is suggested that forecasts be grouped or stratified along suitable criteria, and that reliability be analyzed individually for each forecast stratum. A reliable forecast should have uniform histograms for all individual forecast strata, not only for all forecasts as a whole. As to the second shortcoming, instead of the observed frequencies, the probability of the observed frequency is plotted, providing an indication of the likelihood of the result under the hypothesis that the forecast is reliable. Furthermore, a goodness-of-fit statistic is discussed which is essentially the reliability term of the Ignorance score. The discussed tools are applied to medium-range forecasts of 2 m temperature anomalies at several locations and lead times. The forecasts are stratified by the expected ranked probability score. Those forecasts which feature a high expected score turn out to be particularly unreliable.
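
The rank histogram itself is simple to construct: for each forecast, count how many ensemble members fall below the verifying observation. A minimal sketch (ignoring tie-breaking, which a careful implementation randomizes):

```python
def rank_histogram(ensembles, observations):
    """Talagrand/rank histogram: for each forecast, the rank of the
    observation among the m ensemble members; a reliable ensemble
    yields a statistically uniform histogram over the m+1 bins."""
    m = len(ensembles[0])
    counts = [0] * (m + 1)
    for members, obs in zip(ensembles, observations):
        rank = sum(x < obs for x in members)  # ties ignored in this sketch
        counts[rank] += 1
    return counts

# Two 3-member ensemble forecasts and their verifying observations
ens = [[1.0, 2.0, 3.0], [0.5, 1.5, 2.5]]
obs = [2.5, 0.1]
print(rank_histogram(ens, obs))  # [1, 0, 1, 0]
```

The paper's stratification amounts to calling this per forecast stratum and checking each histogram for uniformity, rather than pooling all forecasts into one histogram.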

  3. Availability, reliability and downtime of systems with repairable components

    Kiureghian, Armen Der; Ditlevsen, Ove Dalager; Song, J.


    Closed-form expressions are derived for the steady-state availability, mean rate of failure, mean duration of downtime and lower bound reliability of a general system with randomly and independently failing repairable components. Component failures are assumed to be homogeneous Poisson events in ......, or reducing the mean duration of system downtime. Example applications to an electrical substation system demonstrate the use of the formulas developed in the paper....
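
The simplest of the closed-form steady-state results can be sketched for a single repairable component and an independent series system; the component parameters below are hypothetical, and the paper's general-system expressions are considerably more elaborate:

```python
def availability(mtbf, mttr):
    """Steady-state availability of a repairable component with mean
    time between failures MTBF and mean time to repair MTTR:
    A = MTBF / (MTBF + MTTR)."""
    return mtbf / (mtbf + mttr)

def series_availability(components):
    """A series system is up only if every component is up; with
    independent components the availabilities multiply."""
    a = 1.0
    for mtbf, mttr in components:
        a *= availability(mtbf, mttr)
    return a

# Hypothetical two-component series system (hours)
print(round(series_availability([(1000, 10), (500, 5)]), 4))  # 0.9803
```

Mean failure rate, downtime duration, and the lower-bound reliability in the paper follow from the same Markov steady-state reasoning applied to the full system structure.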

  4. Spatial variations of sea-level rise and impacts: an application of DIVA

    Brown, S; Nicholls, R.J.; Lowe, J A; Hinkel, J.


    Due to the complexity of creating sea-level rise scenarios, impacts of climate-induced sea-level rise are often produced from a limited number of models assuming a globally uniform rise in sea level. A greater number of models, including those with patterns reflecting regional variations, would help to assess reliability and provide a range of projections, indicating where models agree and disagree. This paper determines how nine new pattern-scaled sea-level rise scenarios (plus the uniform and pattern...

  5. The reliability and validity of the PALOC-s: a post-acute level of consciousness scale for assessment of young patients with prolonged disturbed consciousness after brain injury.

    Eilander, H.J.; Wiel, M. van de; Wijers-Rouw, M.J.P.; Heugten, C.M. van; Buljevac, D.; Lavrijsen, J.C.M.; Hoenderdaal, P.L.; Letter-van der Heide, L. de; Wijnen, V.J.; Scheirs, J.G.; Kort, P.L. de; Prevo, A.J.


    The objective of the study was the validation of the Post-Acute Level of Consciousness scale (PALOC-s) for use in assessing levels of consciousness of severely brain-injured patients in a vegetative state or in a minimally conscious state. A cohort of 44 successively admitted patients (between 2 and

  6. Effects of an assumed cosmic ray-modulated low global cloud cover on the Earth's temperature

    Ramirez, J.; Mendoza, B. [Instituto de Geofisica, Universidad Nacional Autonoma de Mexico, Mexico, D.F. (Mexico); Mendoza, V.; Adem, J. [Centro de Ciencias de la Atmosfera, Universidad Nacional Autonoma de Mexico, Mexico, D.F. (Mexico)]


    We have used the Thermodynamic Model of the Climate to estimate the effect of variations in low cloud cover on the surface temperature of the Earth in the Northern Hemisphere during the period 1984-1994. We assume that the variations in low cloud cover are proportional to the variation of the cosmic ray flux measured during the same period. The results indicate that the effect on surface temperature is more significant over the continents, where for July 1991 we have found anomalies of the order of 0.7 degrees Celsius over southeastern Asia and 0.5 degrees Celsius over northeastern Mexico. For an increase of 0.75% in low cloud cover, the surface temperature computed by the model for the Northern Hemisphere decreases by approximately 0.11 degrees Celsius; for a decrease of 0.90% in low cloud cover, the model gives an increase in surface temperature of approximately 0.15 degrees Celsius. These two cases correspond to a climate sensitivity factor comparable to that for forcing by a doubling of atmospheric CO2. These decreases or increases in surface temperature due to increases or decreases in low cloud cover are ten times greater than the overall variability of the non-forced model time series.

  7. Fatigue Reliability of Offshore Wind Turbine Systems

    Marquez-Dominguez, Sergio; Sørensen, John Dalsgaard


    Optimization of the design of offshore wind turbine substructures with respect to fatigue loads is an important issue in offshore wind energy. A stochastic model is developed for assessing the fatigue failure reliability. This model can be used for direct probabilistic design and for calibration...... of appropriate partial safety factors / fatigue design factors (FDF) for steel substructures of offshore wind turbines (OWTs). The fatigue life is modeled by the SN approach. Design and limit state equations are established based on the accumulated fatigue damage. The acceptable reliability level for optimal...

  8. Is quantitative electromyography reliable?

    Cecere, F; Ruf, S; Pancherz, H


    The reliability of quantitative electromyography (EMG) of the masticatory muscles was investigated in 14 subjects without any signs or symptoms of temporomandibular disorders. Integrated EMG activity from the anterior temporalis and masseter muscles was recorded bilaterally by means of bipolar surface electrodes during chewing and biting activities. In the first experiment, the influence of electrode relocation was investigated; no influence of electrode relocation on the recorded EMG signal could be detected. In a second experiment, three sessions of EMG recordings during five different chewing and biting activities were performed: in the morning (I); 1 hour later without intermediate removal of the electrodes (II); and in the afternoon, using new electrodes (III). The method errors for the different time intervals (I-II and I-III errors) were calculated for each muscle and each function. Depending on the time interval between the EMG recordings, the muscles considered, and the function performed, the individual errors ranged from 5% to 63%. The method error increased significantly (P < 0.05) with the length of the time interval. The method error for the masseter (mean 27.2%) was higher than for the temporalis (mean 20.0%). The largest function error was found during maximal biting in intercuspal position (mean 23.1%). Based on these findings, quantitative electromyography of the masticatory muscles seems to have limited value in diagnostics and in the evaluation of individual treatment results.
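
A method error of the kind reported here is conventionally computed with Dahlberg's formula on duplicate measurements; the sketch below uses made-up session values, not data from the study:

```python
from math import sqrt

def dahlberg_error(first, second):
    """Dahlberg's method error for n duplicate measurements:
    ME = sqrt(sum d_i^2 / (2n)), with d_i the difference between
    the two sessions for subject i."""
    n = len(first)
    return sqrt(sum((a - b) ** 2 for a, b in zip(first, second)) / (2 * n))

# Hypothetical integrated EMG values from two recording sessions
s1 = [10.0, 12.0, 11.0, 9.0]
s2 = [11.0, 11.5, 10.0, 9.5]
me = dahlberg_error(s1, s2)
rel = 100 * me / (sum(s1 + s2) / len(s1 + s2))  # error as % of the grand mean
```

Expressing the error as a percentage of the grand mean, as in the last line, is what allows the 5%-63% range reported in the abstract to be compared across muscles and functions.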

  9. Reliability-based design optimization strategies based on FORM: a review

    Lopez, Rafael Holdorf; BECK, André Teófilo


    In deterministic optimization, the uncertainties of the structural system (i.e., dimensions, model, material, loads, etc.) are not explicitly taken into account. Hence, the resulting optimal solutions may lead to reduced reliability levels. The objective of reliability-based design optimization (RBDO) is to optimize structures while guaranteeing that a minimum level of reliability, chosen a priori by the designer, is maintained. Since reliability analysis using the First Order Reliability Method (FORM) is...
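
For the special case of a linear limit state g = R - S with independent normal resistance R and load S, the FORM reliability index is exact and can be sketched in a few lines (the means and standard deviations below are hypothetical):

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def form_linear(mu_r, sig_r, mu_s, sig_s):
    """FORM for g = R - S with independent normal R (resistance) and
    S (load): beta = (mu_R - mu_S) / sqrt(sig_R^2 + sig_S^2),
    failure probability pf = Phi(-beta). Exact for this linear case."""
    beta = (mu_r - mu_s) / sqrt(sig_r ** 2 + sig_s ** 2)
    return beta, phi(-beta)

beta, pf = form_linear(mu_r=500.0, sig_r=50.0, mu_s=300.0, sig_s=40.0)
print(round(beta, 3))  # 3.123
```

For nonlinear limit states or non-normal variables, FORM iterates to the most probable failure point in standard normal space; RBDO then constrains beta (or pf) inside the design optimization loop.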

  10. Reliability Oriented Design of a Grid-Connected Photovoltaic Microinverter

    Shen, Yanfeng; Wang, Huai; Blaabjerg, Frede


    High reliability performance of microinverters in Photovoltaic (PV) systems is a merit to match the lifetime of PV panels, and to reduce the required maintenance efforts and costs. This digest applies a reliability-oriented design method for a grid-connected PV microinverter to achieve a specific...... lifetime requirement. Reliability allocation is performed from the system-level requirement to component-level reliability design targets. Special attention is paid to reliability-critical components, e.g., GaN HEMTs and the dc-link aluminum electrolytic capacitor. A design flow chart, including key steps...... of mission profile based long-term stress analysis, lifetime prediction, and reliability modeling, is presented. A case study of a 300 W two-stage PV microinverter is used to demonstrate the effectiveness of the design method....

  11. Reliability Demands in FTTH Access Networks

    Pedersen, Jens Myrup; Knudsen, Thomas Phillip; Madsen, Ole Brun


    In this paper, reliability and bandwidth demands of existing, new and expected classes of applications running over Fiber To The Home (FTTH) networks to private users and small enterprises are analysed and discussed. Certain applications such as home security and telemedicine are likely to require...... high levels of reliability in the sense that the demands for network availability are high; even short times without connectivity are unacceptable. To satisfy these demands, physical redundancy in the networks is needed. It seems to be the case that - at least in the short term - most reliability......-critical applications do not require much bandwidth. This implies that redundancy does not need to be provided by fiber, but can be ensured by e.g. coax, copper or wireless solutions. However, implementing these solutions needs careful planning to ensure the physical redundancy. In the long term, it is more likely that physical...

  13. Dynamic reliability of digital-based transmitters

    Brissaud, Florent, E-mail: florent.brissaud.2007@utt.f [Institut National de l' Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France) and Universite de Technologie de Troyes - UTT, Institut Charles Delaunay - ICD and UMR CNRS 6279 STMR, 12 rue Marie Curie, BP 2060, 10010 Troyes Cedex (France); Smidts, Carol [Ohio State University (OSU), Nuclear Engineering Program, Department of Mechanical Engineering, Scott Laboratory, 201 W 19th Ave, Columbus OH 43210 (United States); Barros, Anne; Berenguer, Christophe [Universite de Technologie de Troyes (UTT), Institut Charles Delaunay (ICD) and UMR CNRS 6279 STMR, 12 rue Marie Curie, BP 2060, 10010 Troyes Cedex (France)


    Dynamic reliability explicitly handles the interactions between the stochastic behaviour of system components and the deterministic behaviour of process variables. While dynamic reliability provides a more efficient and realistic way to perform probabilistic risk assessment than 'static' approaches, its industrial-level applications are still limited. Factors contributing to this situation are the inherent complexity of the theory and the lack of a generic platform. More recently, the increased use of digital-based systems has also introduced additional modelling challenges related to specific interactions between system components. Typical examples are the 'intelligent transmitters', which are able to exchange information and to perform internal data processing and advanced functionalities. To contribute to solving these challenges, the mathematical framework of dynamic reliability is extended to handle the data and information which are processed and exchanged between system components. Stochastic deviations that may affect system properties are also introduced to enhance the modelling of failures. A formalized Petri net approach is then presented to perform the corresponding reliability analyses using numerical methods. Following this formalism, a versatile model for the dynamic reliability modelling of digital-based transmitters is proposed. Finally, the framework's flexibility and effectiveness is demonstrated on a substantial case study involving a simplified model of a nuclear fast reactor.

  14. Reliability Assessment Of Wind Turbines

    Sørensen, John Dalsgaard


    Reduction of the cost of energy for wind turbines is very important in order to make wind energy competitive compared to other energy sources. Therefore the turbine components should be designed to have sufficient reliability but also not be too costly (and safe). This paper presents models...... for uncertainty modeling and reliability assessment of especially the structural components such as tower, blades, substructure and foundation. But since the function of a wind turbine is highly dependent on many electrical and mechanical components as well as a control system, reliability aspects...... of these components are also discussed, and it is described how their reliability influences the reliability of the structural components. Two illustrative examples are presented considering uncertainty modeling, reliability assessment and calibration of partial safety factors for structural wind turbine components exposed...

  15. Nuclear weapon reliability evaluation methodology

    Wright, D.L. [Sandia National Labs., Albuquerque, NM (United States)


    This document provides an overview of the activities that are normally performed by Sandia National Laboratories to provide nuclear weapon reliability evaluations for the Department of Energy. These reliability evaluations are first provided as a prediction of the attainable stockpile reliability of a proposed weapon design. Stockpile reliability assessments are provided for each weapon type as the weapon is fielded and are continuously updated throughout the weapon's stockpile life. The reliability predictions and assessments depend heavily on data from both laboratory simulation and actual flight tests. An important part of the methodology is the set of review opportunities that occur throughout the entire process, which assure a consistent approach and appropriate use of the data for reliability evaluation purposes.

  16. Lithium battery safety and reliability

    Levy, Samuel C.

    Lithium batteries have been used in a variety of applications for a number of years. As their use continues to grow, particularly in the consumer market, a greater emphasis needs to be placed on safety and reliability. There is a useful technique which can help to design cells and batteries having a greater degree of safety and higher reliability. This technique, known as fault tree analysis, can also be useful in determining the cause of unsafe behavior and poor reliability in existing designs.
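
The fault tree analysis technique mentioned reduces, for independent basic events, to simple AND/OR gate probability formulas; the example tree below (a hypothetical cell-venting scenario with made-up probabilities) is only an illustration of the arithmetic:

```python
def and_gate(probs):
    """Top event occurs only if all independent inputs occur:
    P = prod(p_i)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """Top event occurs if any independent input occurs:
    P = 1 - prod(1 - p_i)."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical tree: venting requires overpressure AND (seal defect OR overheating)
p_top = and_gate([0.01, or_gate([0.001, 0.005])])
print(f"{p_top:.2e}")
```

Real fault tree tools additionally handle shared (common-cause) basic events via minimal cut sets, since the independence assumption behind these gate formulas then no longer holds.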

  17. Quasi-Bayesian software reliability model with small samples

    ZHANG Jin; TU Jun-xiang; CHEN Zhuo-ning; YAN Xiao-guang


    In traditional Bayesian software reliability models, it is assumed that all probabilities are precise. In practical applications, the parameters of the probability distributions are often uncertain due to strong dependence on subjective information from experts' judgments on sparse statistical data. In this paper, a quasi-Bayesian software reliability model is presented that uses interval-valued probabilities to clearly quantify experts' prior beliefs about possible intervals of the parameters of the probability distributions. The model integrates experts' judgments with statistical data to obtain more convincing assessments of software reliability with small samples. For some actual data sets, the presented model yields better predictions than the Jelinski-Moranda (JM) model using maximum likelihood (ML).
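
The JM model used as the comparison baseline can be sketched as follows. In JM, the i-th inter-failure time is exponential with rate phi*(N - i + 1), where N is the initial fault count; the crude grid-search fit below is an illustrative sketch (not the closed-form ML iteration used in practice), and the failure-time data are made up:

```python
from math import log

def jm_loglik(N, phi, times):
    """Jelinski-Moranda log-likelihood: the i-th inter-failure time
    is exponential with rate phi * (N - i + 1)."""
    ll = 0.0
    for i, t in enumerate(times, start=1):
        rate = phi * (N - i + 1)
        ll += log(rate) - rate * t
    return ll

def jm_fit(times, n_max=50):
    """Crude grid-search ML estimate of (N, phi) for illustration."""
    k = len(times)
    best = None
    for N in range(k, n_max + 1):
        for j in range(1, 1001):
            phi_c = j * 1e-3
            ll = jm_loglik(N, phi_c, times)
            if best is None or ll > best[0]:
                best = (ll, N, phi_c)
    return best[1], best[2]

# Hypothetical inter-failure times (hours), lengthening as faults are removed
times = [7.0, 11.0, 8.0, 10.0, 15.0, 22.0, 20.0, 25.0, 30.0, 40.0]
N_hat, phi_hat = jm_fit(times)
```

The quasi-Bayesian model of the paper replaces the point estimates (N_hat, phi_hat) with interval-valued priors on the parameters, which is what stabilizes predictions when 'times' is a small sample.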

  18. Integrated Reliability-Based Optimal Design of Structures

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle


    the reliability decreases with time it is often necessary to design an inspection and repair programme. For example the reliability of offshore steel structures decreases with time due to corrosion and development of fatigue cracks. Until now most inspection and repair strategies are based on experience rather......In conventional optimal design of structural systems the weight or the initial cost of the structure is usually used as objective function. Further, the constraints require that the stresses and/or strains at some critical points have to be less than some given values. Finally, all variables...... and parameters are assumed to be deterministic quantities. In this paper a probabilistic formulation is used. Some of the quantities specifying the load and the strength of the structure are modelled as random variables, and the constraints specify that the reliability of the structure has to exceed some given...

  19. Reliability analysis of two unit parallel repairable industrial system

    Mohit Kumar Kakkar


    The aim of this work is to present a reliability and profit analysis of a two-dissimilar-unit parallel system under the assumption that an operative unit cannot fail after post-repair inspection and replacement, and that there is only one repair facility. Failure and repair times of each unit are assumed to be uncorrelated. Using the regenerative point technique, various reliability characteristics are obtained which are useful to system designers and industrial managers. Graphical behaviors of the mean time to system failure (MTSF) and the profit function have also been studied. In this paper, some important measures of the reliability characteristics of a two-non-identical-unit standby system model with repair, inspection and post-repair are obtained using the regenerative point technique.
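
For the simpler case of two *identical* parallel units (the paper treats dissimilar units, whose expressions are longer), the MTSF follows from a three-state Markov chain and can be sketched directly; the rates below are hypothetical:

```python
def mtsf_two_unit_parallel(lam, mu):
    """MTSF of two identical parallel units with constant failure rate
    lam and one repair facility with repair rate mu. From the Markov
    chain {2 up -> 1 up -> 0 up (absorbing)}:
    MTSF = (3*lam + mu) / (2*lam**2)."""
    return (3 * lam + mu) / (2 * lam ** 2)

# lam = 0.01 failures/hour, mu = 0.5 repairs/hour
print(round(mtsf_two_unit_parallel(0.01, 0.5), 1))  # 2650.0
```

The regenerative point technique used in the paper generalizes this kind of first-passage computation to dissimilar units and to the extra inspection/post-repair states.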

  20. Joint Architecture Standard (JAS) Reliable Data Delivery Protocol (RDDP) specification.

    Enderle, Justin Wayne; Daniels, James W.; Gardner, Michael T.; Eldridge, John M.; Hunt, Richard D.; Gallegos, Daniel E.


    The Joint Architecture Standard (JAS) program at Sandia National Laboratories requires the use of a reliable data delivery protocol over SpaceWire. The National Aeronautics and Space Administration at the Goddard Spaceflight Center in Greenbelt, Maryland, developed and specified a reliable protocol for its Geostationary Operational Environment Satellite known as GOES-R Reliable Data Delivery Protocol (GRDDP). The JAS program implemented and tested GRDDP and then suggested a number of modifications to the original specification to meet its program specific requirements. This document details the full RDDP specification as modified for JAS. The JAS Reliable Data Delivery Protocol uses the lower-level SpaceWire data link layer to provide reliable packet delivery services to one or more higher-level host application processes. This document specifies the functional requirements for JRDDP but does not specify the interfaces to the lower- or higher-level processes, which may be implementation-dependent.

  1. Guidelines for Reporting Reliability and Agreement Studies (GRRAS) were proposed

    Kottner, Jan; Audige, Laurent; Brorson, Stig;


    Results of reliability and agreement studies are intended to provide information about the amount of error inherent in any diagnosis, score, or measurement. The level of reliability and agreement among users of scales, instruments, or classifications is widely unknown. Therefore, there is a need ...

  2. Reliability of Roof Truss with Punched Nail Plates

    Hansson, Martin; Ellegaard, Peter


    characteristic values as input to the model. The system effect is also determined on the basis of reliability analyses. The found system effect depends on the coefficient of variation, the distribution of the random load variable and the reliability level. Depending on the assumptions, the system effect...

  3. Reliability engineering theory and practice

    Birolini, Alessandro


    Presenting a solid overview of reliability engineering, this volume enables readers to build and evaluate the reliability of various components, equipment and systems. Current applications are presented, and the text itself is based on the author's 30 years of experience in the field.

  4. The Validity of Reliability Measures.

    Seddon, G. M.


    Demonstrates that some commonly used indices can be misleading in their quantification of reliability. The effects are most pronounced on gain or difference scores. Proposals are made to avoid sources of invalidity by using a procedure to assess reliability in terms of upper and lower limits for the true scores of each examinee. (Author/JDH)

  5. Software Reliability through Theorem Proving

    S.G.K. Murthy


    Improving the software reliability of mission-critical systems is widely recognised as one of the major challenges. Early detection of errors in software requirements, designs and implementations needs rigorous verification and validation techniques. Several techniques comprising static and dynamic testing approaches are used to improve the reliability of mission-critical software; however, it is hard to balance development time and budget with software reliability. Particularly with dynamic testing techniques, it is hard to ensure software reliability, as exhaustive testing is not possible. On the other hand, formal verification techniques utilise mathematical logic to prove the correctness of the software based on given specifications, which in turn improves reliability. Theorem proving is a powerful formal verification technique that enhances software reliability for mission-critical aerospace applications. This paper discusses the issues related to software reliability and the use of theorem proving to enhance software reliability through formal verification, based on experience with the STeP tool, using the conventional and internationally accepted methodologies, models and theorem-proving techniques available in the tool, without proposing a new model. Defence Science Journal, 2009, 59(3), pp. 314-317, DOI:

  6. Reliability engineering in RF CMOS


    In this thesis new developments are presented for reliability engineering in RF CMOS. Given the increase in use of CMOS technology in applications for mobile communication, also the reliability of CMOS for such applications becomes increasingly important. When applied in these applications, CMOS is typically referred to as RF CMOS, where RF stands for radio frequencies.

  7. Reliability in automotive ethernet networks

    Soares, Fabio L.; Campelo, Divanilson R.; Yan, Ying;


    This paper provides an overview of in-vehicle communication networks and addresses the challenges of providing reliability in automotive Ethernet in particular.

  8. Estimation of Bridge Reliability Distributions

    Thoft-Christensen, Palle

    In this paper it is shown how the so-called reliability distributions can be estimated using crude Monte Carlo simulation. The main purpose is to demonstrate the methodology, so very exact data concerning reliability and deterioration are not needed. However, it is intended in the paper to ...
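
Crude Monte Carlo estimation of a failure probability can be sketched in a few lines; the limit-state function and distributions below are hypothetical stand-ins, not the bridge deterioration model of the paper:

```python
import random

def crude_mc_pf(g, sampler, n, seed=7):
    """Crude Monte Carlo estimate of the failure probability
    P[g(X) <= 0], sampling X from 'sampler'."""
    rng = random.Random(seed)
    fails = sum(g(sampler(rng)) <= 0 for _ in range(n))
    return fails / n

# Hypothetical limit state: resistance R ~ N(5, 1) minus load S ~ N(2, 1)
def sampler(rng):
    return rng.gauss(5, 1), rng.gauss(2, 1)

pf = crude_mc_pf(lambda x: x[0] - x[1], sampler, 100000)
```

Repeating such an estimate at successive points in time (with deteriorating resistance) is what produces the reliability distribution over the service life that the paper discusses.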

  9. A particle swarm model for estimating reliability and scheduling system maintenance

    Puzis, Rami; Shirtz, Dov; Elovici, Yuval


    Modifying data and information system components may introduce new errors and deteriorate the reliability of the system. Reliability can be efficiently regained with reliability centred maintenance, which requires reliability estimation for maintenance scheduling. A variant of the particle swarm model is used to estimate reliability of systems implemented according to the model view controller paradigm. Simulations based on data collected from an online system of a large financial institute are used to compare three component-level maintenance policies. Results show that appropriately scheduled component-level maintenance greatly reduces the cost of upholding an acceptable level of reliability by reducing the need in system-wide maintenance.


    B.Anni Princy


    A software reliability growth model treats failures as the cumulative outcome of two processes: emerging faults and initial state values. The prevailing approach applies the logistic testing-effort function to a real-time dataset; its shortcomings are overcome here by using a Pareto distribution. The proposed framework provides a method for analyzing candidate distributions and selecting the best fit for a software reliability growth model, whose parameters are estimated in order to evaluate the reliability of a software system. The resulting process allows software reliability estimates that can be used both as a quality indicator and for planning and controlling resources and development times, based on reliable measurement of the software system.

  11. Reliability estimation using kriging metamodel

    Cho, Tae Min; Ju, Byeong Hyeon; Lee, Byung Chai [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Jung, Do Hyun [Korea Automotive Technology Institute, Chonan (Korea, Republic of)


    In this study, a new method for reliability estimation is proposed using a kriging metamodel. The kriging metamodel can be determined by an appropriate sampling range and number of sampling points because there are no random errors in the Design and Analysis of Computer Experiments (DACE) model. The first kriging metamodel is built from widely ranged sampling points, and the Advanced First Order Reliability Method (AFORM) is applied to it to estimate the reliability approximately. Then, a second kriging metamodel is constructed using additional sampling points with an updated sampling range, and Monte Carlo Simulation (MCS) is applied to it to evaluate the reliability. The proposed method is applied to numerical examples and the results are almost equal to the reference reliability.

  12. Photovoltaic performance and reliability workshop

    Mrig, L. [ed.


    This workshop was the sixth in a series sponsored by NREL/DOE on photovoltaic testing and reliability during the period 1986-1993. PV performance and reliability are at least as important as PV cost, if not more so. In the US, PV manufacturers, DOE laboratories, electric utilities, and others are engaged in photovoltaic reliability research and testing. This group of researchers and other interested parties was brought together to exchange technical knowledge and field experience in this evolving field of PV reliability. The papers presented here reflect work since the last workshop, held in September 1992. The topics covered include cell and module characterization, module and system testing, durability and reliability, system field experience, and standards and codes.

  13. Reliability impact of solar electric generation upon electric utility systems

    Day, J. T.; Hobbs, W. J.


    The introduction of solar electric systems into an electric utility grid brings new considerations to the assessment of the utility's power-supply reliability. This paper summarizes a methodology for estimating the reliability impact of solar electric technologies on electric utilities for value-assessment and planning purposes; both utility expansion and operating impacts are considered. Sample results from photovoltaic analysis show that solar electric plants can increase the reliable load-carrying capability of a utility system. However, the load-carrying capability of each increment of solar capacity tends to decrease, particularly at significant capacity penetration levels. Other factors influencing the reliability impact are identified.
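    The load-carrying-capability idea can be illustrated with a toy effective load-carrying capability (ELCC) calculation: find the constant load increase that, with the solar plant added, restores the system's original loss-of-load expectation (LOLE). Everything here is invented for illustration - the ten 50 MW units, the 5% forced outage rate, and the 24-hour load and solar profiles are not data from the paper.

    ```python
    import math

    # Hypothetical system: 10 conventional units of 50 MW, forced outage rate 5%
    N_UNITS, UNIT_MW, FOR_RATE = 10, 50.0, 0.05

    def p_capacity_below(need_mw):
        """P(available conventional capacity < need_mw), from the binomial outage table."""
        p = 0.0
        for k in range(N_UNITS + 1):  # k units on forced outage
            if (N_UNITS - k) * UNIT_MW < need_mw:
                p += math.comb(N_UNITS, k) * FOR_RATE**k * (1 - FOR_RATE) ** (N_UNITS - k)
        return p

    def lole(loads, solar=None):
        solar = solar or [0.0] * len(loads)
        return sum(p_capacity_below(l - s) for l, s in zip(loads, solar))

    # Toy hourly profiles (MW): load peaks midday/evening, solar peaks at noon
    loads = [300, 280, 270, 270, 280, 310, 350, 390, 420, 440, 450, 455,
             460, 458, 450, 445, 440, 445, 450, 430, 400, 370, 340, 320]
    solar = [0]*6 + [10, 30, 60, 80, 95, 100, 100, 95, 80, 60, 30, 10] + [0]*6

    base = lole(loads)

    # ELCC: bisect for the constant load increase that restores the no-solar LOLE
    lo, hi = 0.0, max(solar)
    for _ in range(60):
        mid = (lo + hi) / 2
        if lole([l + mid for l in loads], solar) <= base:
            lo = mid
        else:
            hi = mid
    elcc = lo
    ```

    With a solar peak that only partly coincides with the load peak, the ELCC comes out well below the 100 MW nameplate, which is the effect the abstract describes.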

  14. Reliable software for unreliable hardware a cross layer perspective

    Rehman, Semeen; Henkel, Jörg


    This book describes novel software concepts for increasing reliability under user-defined constraints. The authors' approach bridges, for the first time, the reliability gap between hardware and software. Readers will learn how to achieve increased soft-error resilience on unreliable hardware by exploiting the inherent error-masking characteristics of different software layers and their potential for mitigating errors stemming from soft errors, aging, and process variations. · Provides a comprehensive overview of reliability modeling and optimization techniques at different hardware and software levels; · Describes novel optimization techniques for software cross-layer reliability, targeting unreliable hardware.

  15. CMOS RF circuit design for reliability and variability

    Yuan, Jiann-Shiun


    The subject of this book is CMOS RF circuit design for reliability. The device reliability and process variation issues in RF transmitter and receiver circuits will be of particular interest to readers in the field of semiconductor devices and circuits. The book is unique in exploring typical reliability issues at the device and technology level and then examining their impact on RF wireless transceiver circuit performance. Analytical equations, experimental data, and device and circuit simulation results are given for clear explanation. The main benefit readers will derive from this book is a clear understanding of how device reliability issues affect RF circuit performance subject to operational aging and process variations.


    Waterman, Brian; Sutter, Robert; Burroughs, Thomas; Dunagan, W Claiborne


    When evaluating physician performance measures, physician leaders are faced with the quandary of determining whether departures from expected physician performance measurements represent a true signal or random error. This uncertainty impedes the physician leader's ability and confidence to take appropriate performance improvement actions based on physician performance measurements. Incorporating reliability adjustment into physician performance measurement is a valuable way of reducing the impact of random error in the measurements, such as those caused by small sample sizes. Consequently, the physician executive has more confidence that the results represent true performance and is positioned to make better physician performance improvement decisions. Applying reliability adjustment to physician-level performance data is relatively new. As others have noted previously, it's important to keep in mind that reliability adjustment adds significant complexity to the production, interpretation and utilization of results. Furthermore, the methods explored in this case study only scratch the surface of the range of available Bayesian methods that can be used for reliability adjustment; further study is needed to test and compare these methods in practice and to examine important extensions for handling specialty-specific concerns (e.g., average case volumes, which have been shown to be important in cardiac surgery outcomes). Moreover, it's important to note that the provider group average as a basis for shrinkage is one of several possible choices that could be employed in practice and deserves further exploration in future research. With these caveats, our results demonstrate that incorporating reliability adjustment into physician performance measurements is feasible and can notably reduce the incidence of "real" signals relative to what one would expect to see using more traditional approaches. 
A physician leader who is interested in catalyzing performance improvement

  17. Optimal Implementations for Reliable Circadian Clocks

    Hasegawa, Yoshihiko; Arita, Masanori


    Circadian rhythms are acquired through evolution to increase the chances for survival through synchronizing with the daylight cycle. Reliable synchronization is realized through two trade-off properties: regularity to keep time precisely, and entrainability to synchronize the internal time with daylight. We find by using a phase model with multiple inputs that achieving the maximal limit of regularity and entrainability entails many inherent features of the circadian mechanism. At the molecular level, we demonstrate the role sharing of two light inputs, phase advance and delay, as is well observed in mammals. At the behavioral level, the optimal phase-response curve inevitably contains a dead zone, a time during which light pulses neither advance nor delay the clock. We reproduce the results of phase-controlling experiments entrained by two types of periodic light pulses. Our results indicate that circadian clocks are designed optimally for reliable clockwork through evolution.

  18. Transformer real-time reliability model based on operating conditions

    HE Jian; CHENG Lin; SUN Yuan-zhang


    Operational reliability evaluation theory reflects the real-time reliability level of a power system, in which component failure rates vary with operating conditions. The impact of real-time operating conditions, such as ambient temperature and transformer MVA (megavolt-ampere) loading, on transformer insulation life is studied in this paper. A formula for the transformer failure rate based on the winding hottest-spot temperature (HST) is given, yielding a real-time reliability model of the transformer based on operating conditions. The work is illustrated using the 1979 IEEE Reliability Test System. Changes in operating conditions are simulated using hourly load and temperature curves, from which curves of real-time reliability indices are obtained through operational reliability evaluation.
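    The HST-based idea can be sketched with the IEEE C57.91-style insulation aging-acceleration factor (for 65 °C average-winding-rise insulation, referenced to a 110 °C hot-spot temperature). Scaling a baseline failure rate by this factor, as below, is an illustrative assumption, not the paper's exact formula.

    ```python
    import math

    def aging_acceleration(theta_hst_c):
        """IEEE C57.91-style aging acceleration factor vs. hot-spot temperature (deg C).

        Equals 1.0 at the 110 deg C reference; roughly doubles for every ~7 deg C above it.
        """
        return math.exp(15000.0 / 383.0 - 15000.0 / (theta_hst_c + 273.0))

    def real_time_failure_rate(lam_base, theta_hst_c):
        """Illustrative assumption: scale a baseline failure rate by the aging factor."""
        return lam_base * aging_acceleration(theta_hst_c)
    ```

    Driving `theta_hst_c` from hourly load and ambient-temperature curves gives a time-varying failure rate of the kind the operational reliability evaluation uses.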

  19. Reliable Signal Transduction

    Wollman, Roy

    Stochasticity inherent to biochemical reactions (intrinsic noise) and variability in cellular states (extrinsic noise) degrade information transmitted through signaling networks. We analyzed the ability of temporal signal modulation - that is, dynamics - to reduce noise-induced information loss. In the extracellular signal-regulated kinase (ERK), calcium (Ca2+), and nuclear factor kappa-B (NF-κB) pathways, response dynamics resulted in significantly greater information transmission capacities compared to nondynamic responses. Theoretical analysis demonstrated that signaling dynamics has a key role in overcoming extrinsic noise. Experimental measurements of information transmission in the ERK network under varying signal-to-noise levels confirmed our predictions and showed that signaling dynamics mitigate, and can potentially eliminate, extrinsic noise-induced information loss. By curbing the information-degrading effects of cell-to-cell variability, dynamic responses substantially increase the accuracy of biochemical signaling networks.
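    How noise degrades transmitted information can be illustrated with the textbook binary symmetric channel: with a uniform input, mutual information is 1 - H(p), so capacity falls as the noise level p rises. This toy calculation is ours, not the authors' experimental estimator.

    ```python
    import math

    def h2(p):
        """Binary entropy in bits."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def bsc_capacity(noise):
        """Mutual information (bits) across a binary symmetric channel, uniform input."""
        return 1.0 - h2(noise)
    ```

    At zero noise one full bit gets through; at a flip probability of 0.5 the output is independent of the input and the transmitted information drops to zero.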

  20. Failure database and tools for wind turbine availability and reliability analyses. The application of reliability data for selected wind turbines

    Kozine, Igor; Christensen, P.; Winther-Jensen, M.


    The objective of this project was to develop and establish a database for collecting reliability and reliability-related data, for assessing the reliability of wind turbine components, subsystems, and wind turbines as a whole, and for assessing wind turbine availability while ranking the contributions at both the component and system levels. The project resulted in a software package combining a failure database with programs for predicting WTB availability and the reliability of all the components and systems, especially the safety system. The report consists of a description of the theoretical foundation of the reliability and availability analyses and of sections devoted to the development of the WTB reliability models, as well as a description of the features of the database and software developed. The project comprises analysis of WTBs NM 600/44, 600/48, 750/44 and 750/48, all of which have...

  1. Maritime shipping as a high reliability industry: A qualitative analysis

    Mannarelli, T.; Roberts, K.; Bea, R.


    The maritime oil shipping industry faces great public demands for safe and reliable organizational performance. Researchers have identified a set of organizations and industries that operate at extremely high levels of reliability and have labelled them High Reliability Organizations (HROs). Following the Exxon Valdez oil spill disaster of 1989, public demands for HRO-level operations were placed on the oil industry. It will be demonstrated that, despite enormous improvements in safety and reliability, maritime shipping is not operating as an HRO industry. An analysis of the organizational, environmental, and cultural history of the oil industry helps to provide justification and explanation. The oil industry will be contrasted with other HRO industries, and the differences illuminate the shortfalls maritime shipping must address to maximize reliability. Finally, possible solutions for the achievement of HRO status will be offered.

  2. An indirect technique for estimating reliability of analog and mixed-signal systems during operational life

    Khan, M.A.; Kerkhoff, Hans G.


    Reliability of electronic systems has been thoroughly investigated in literature and a number of analytical approaches at the design stage are already available via examination of the circuit-level reliability effects based on device-level models. Reliability estimation during operational life of an

  3. An indirect technique for estimating reliability of analog and mixed-signal systems during operational life

    Khan, Muhammad Aamir; Kerkhoff, Hans G.


    Reliability of electronic systems has been thoroughly investigated in literature and a number of analytical approaches at the design stage are already available via examination of the circuit-level reliability effects based on device-level models. Reliability estimation during operational life of an

  4. Reliability Modeling of Wind Turbines

    Kostandyan, Erik

    Models of reliability should be developed and applied in order to quantify the residual life of wind turbine components. Damage models based on physics of failure, combined with stochastic models describing the uncertain parameters, are imperative for the development of cost-optimal decision tools for Operation & Maintenance planning. Concentrating efforts on the development of such models, this research is focused on reliability modeling of wind turbine critical subsystems (especially the power converter system). For reliability assessment of these components, structural reliability methods are applied and uncertainties are quantified. Further, estimation of the annual failure probability for structural components, taking into account possible faults in electrical or mechanical systems, is considered. For a representative structural failure mode, a probabilistic model is developed that incorporates grid loss failures...

  5. On Bayesian System Reliability Analysis

    Soerensen Ringi, M.


    The view taken in this thesis is that reliability, the probability that a system will perform a required function for a stated period of time, depends on a person's state of knowledge. Reliability changes as this state of knowledge changes, i.e. when new relevant information becomes available. Most existing models for system reliability prediction are developed in a classical framework of probability theory, and they overlook some information that is always present. Probability is just an analytical tool for handling uncertainty, based on judgement and subjective opinions. It is argued that the Bayesian approach gives a much more comprehensive understanding of the foundations of probability than the so-called frequentist school. A new model for system reliability prediction is given in two papers. The model incorporates the fact that component failures are dependent because of a shared operational environment. The suggested model also naturally permits learning from failure data of similar components in non-identical environments. 85 refs.
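    The idea that reliability is a state of knowledge updated by data can be illustrated with a minimal Beta-Binomial update - a toy example of ours, far simpler than the thesis's dependent-environment model:

    ```python
    def posterior_reliability(successes, failures, a=1.0, b=1.0):
        """Conjugate Bayesian update for a success probability (reliability).

        Prior Beta(a, b); observing the data gives posterior Beta(a+s, b+f).
        Returns the posterior parameters and the posterior mean reliability.
        """
        a_post, b_post = a + successes, b + failures
        return a_post, b_post, a_post / (a_post + b_post)
    ```

    With a uniform Beta(1, 1) prior and 9 successes out of 10 trials, the posterior mean reliability is 10/12; each new observation shifts the distribution, which is exactly the "reliability changes as knowledge changes" view.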

  6. VCSEL reliability: a user's perspective

    McElfresh, David K.; Lopez, Leoncio D.; Melanson, Robert; Vacar, Dan


    VCSEL arrays are being considered for use in interconnect applications that require high speed, high bandwidth, high density, and high reliability. In order to better understand the reliability of VCSEL arrays, we initiated an internal project at Sun Microsystems, Inc. In this paper, we present preliminary results of an ongoing accelerated temperature-humidity-bias stress test on VCSEL arrays from several manufacturers. This test revealed no significant reliability differences between AlGaAs oxide-confined VCSEL arrays constructed with a trench oxide and those using a mesa for isolation. This test did find that the reliability of arrays needs to be measured on arrays themselves and not estimated from data on singulated VCSELs, as is common practice.

  7. Innovations in power systems reliability

    Santora, Albert H; Vaccaro, Alfredo


    Electrical grids are among the world's most reliable systems, yet they still face a host of issues, from aging infrastructure to questions of resource distribution. Here is a comprehensive and systematic approach to tackling these contemporary challenges.

  8. Accelerator Availability and Reliability Issues

    Steve Suhring


    Maintaining reliable machine operations for existing machines as well as planning for future machines' operability present significant challenges to those responsible for system performance and improvement. Changes to machine requirements and beam specifications often reduce overall machine availability in an effort to meet user needs. Accelerator reliability issues from around the world will be presented, followed by a discussion of the major factors influencing machine availability.

  9. Software Reliability Experimentation and Control

    Kai-Yuan Cai


    This paper classifies software research into theoretical, experimental, and engineering research, and is mainly concerned with experimental research, with a focus on software reliability experimentation and control. The state of the art of experimental and empirical studies is reviewed. A new experimentation methodology is proposed, which is largely oriented toward theory discovery. Several unexpected results of experimental studies are presented to justify the importance of software reliability experimentation and control. Finally, a few topics that deserve future investigation are identified.

  10. Human Reliability and the Cost of Doing Business

    DeMott, Diana


    Most businesses recognize that people will make mistakes and assume errors are just part of the cost of doing business, but do they need to be? Companies facing high risks or major consequences should consider the effect of human error. In a variety of industries, human errors have caused costly failures and workplace injuries: airline mishaps, medical malpractice, medication administration errors, and major oil spills have all been blamed on human error. A technique to mitigate or even eliminate some of these costly human errors is Human Reliability Analysis (HRA). Various methodologies are available for performing human reliability assessments, ranging from identifying the most likely areas of concern to detailed assessments in which human error failure probabilities are calculated. Which methodology to use depends on a variety of factors, including: 1) how people react and act in different industries, and differing expectations based on industry standards; 2) factors that influence how human errors could occur, such as tasks, tools, environment, workplace, support, training, and procedures; 3) the type and availability of data; and 4) how the industry views risk and reliability influences (types of emergencies, contingencies, and routine tasks versus cost-based concerns). A human reliability assessment should be the first step toward reducing, mitigating, or eliminating costly mistakes or catastrophic failures. Using HRA techniques to identify and classify human error risks gives a company more opportunities to mitigate or eliminate these risks and prevent costly failures.
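    At the quantitative end of that methodology spectrum, a SPAR-H-style adjustment combines a nominal human error probability (NHEP) with a composite performance shaping factor (PSF) while keeping the result a valid probability; the numbers below are illustrative, not from any specific assessment.

    ```python
    def adjusted_hep(nominal_hep, psf_composite):
        """SPAR-H-style adjusted human error probability.

        For a composite PSF > 1 this inflates the nominal HEP, but the
        denominator keeps the adjusted value bounded above by 1.
        """
        return (nominal_hep * psf_composite) / (nominal_hep * (psf_composite - 1.0) + 1.0)
    ```

    For example, a nominal HEP of 0.01 under a composite PSF of 10 yields roughly 0.092 rather than a naive 0.1, and even extreme PSFs can never push the result past certainty.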

  11. Reliability of redundant ductile structures with uncertain system failure criteria

    Baidurya Bhattacharya; Qiang Lu; Jinquan Zhong


    Current reliability-based approaches to structural design are typically element-based: they commonly include uncertainties in the structural resistance, applied loads, and geometric parameters, and in some cases in the idealized structural model. Nevertheless, the true measure of safety is the structural system's reliability, which must consider multiple failure paths, load sharing, and load redistribution after member failures, and is beyond the domain of element reliability analysis. Identification of system failure is often subjective, and a crisp definition of system failure arises naturally only in a few idealized instances. We analyse the multi-girder steel highway bridge as an r-out-of-n active parallel system. System failure is defined as gross inelastic deformation of the bridge deck; the subjectivity in the failure criterion is accounted for by generalizing it as a random variable. Randomness in this criterion arises from the non-unique relation between the number of failed girders and the maximum deflection, and from randomness in the definition of the failure deflection. We show how uncertain failure criteria and structural systems analyses can be decoupled. Randomness in the transverse location of trucks is considered, and elastic perfectly-plastic material response is assumed. The role of the system factor modifying the element-reliability-based design equation to achieve a target system reliability is also demonstrated.
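    The decoupling of an uncertain failure criterion from the structural analysis can be sketched with a toy k-out-of-n parallel system whose failure threshold is itself random. The girder count, failure probability, and threshold set are invented, and girder failures are taken as independent, unlike the paper's load-redistribution analysis.

    ```python
    import math
    import numpy as np

    N_GIRDERS, P_FAIL = 8, 0.1   # hypothetical values
    THRESHOLDS = [3, 4, 5]       # uncertain system-failure criterion, equally likely

    def exact_system_pf():
        """E over the random threshold of P(number of failed girders >= threshold)."""
        pmf = [math.comb(N_GIRDERS, k) * P_FAIL**k * (1 - P_FAIL) ** (N_GIRDERS - k)
               for k in range(N_GIRDERS + 1)]
        return sum(sum(pmf[t:]) for t in THRESHOLDS) / len(THRESHOLDS)

    # Monte Carlo check: sample girder failures and the threshold independently
    rng = np.random.default_rng(1)
    failed = rng.random((200_000, N_GIRDERS)) < P_FAIL
    thresh = rng.choice(THRESHOLDS, size=200_000)
    mc_pf = np.mean(failed.sum(axis=1) >= thresh)
    ```

    Because the threshold distribution enters only through an outer expectation, the structural (girder-level) analysis and the uncertain failure criterion can be evaluated separately and then combined, which is the decoupling the abstract describes.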

  12. MEMS reliability: coming of age

    Douglass, Michael R.


    In today's high-volume semiconductor world, one could easily take reliability for granted. As the MOEMS/MEMS industry continues to establish itself as a viable alternative to conventional manufacturing in the macro world, reliability can be of high concern. Currently, there are several emerging market opportunities in which MOEMS/MEMS is gaining a foothold. Markets such as mobile media, consumer electronics, biomedical devices, and homeland security are all showing great interest in microfabricated products. At the same time, these markets are among the most demanding when it comes to reliability assurance. To be successful, each company developing a MOEMS/MEMS device must consider reliability on an equal footing with cost, performance and manufacturability. What can this maturing industry learn from the successful development of DLP technology, air bag accelerometers and inkjet printheads? This paper discusses some basic reliability principles which any MOEMS/MEMS device development must use. Examples from the commercially successful and highly reliable Digital Micromirror Device complement the discussion.

  13. Reliability assessment of an OVH HV power line truss transmission tower subjected to seismic loading

    Winkelmann, Karol; Jakubowska, Patrycja; Soltysik, Barbara


    The study focuses on the reliability of a transmission tower (OS24 ON150+10), an element of an OVH HV power line, under seismic loading. To describe the seismic force, a real-life recording of the horizontal component of the El Centro earthquake was adopted. The amplitude and period of this excitation are assumed random, and their variation is described by a Weibull distribution. The possible state space of the phenomenon is given in the form of a structural response surface (RSM methodology), approximated by an ANOVA table with directional sampling (DS) points. Four design limit states are considered: a stress limit criterion for a natural load combination, a criterion for an accidental combination (one-sided cable snap), and vertical and horizontal translation criteria. For these cases the HLRF reliability index β is used for structural safety assessment. The RSM approach is well suited to the analysis: it is numerically efficient, not excessively time-consuming, and indicates a high confidence level. Given the problem conditions, the seismic excitation is shown to be a sufficient trigger for loss of load-bearing capacity or stability of the tower.

  14. ECLSS Reliability for Long Duration Missions Beyond Lower Earth Orbit

    Sargusingh, Miriam J.; Nelson, Jason


    Reliability has been highlighted by NASA as critical to future human space exploration particularly in the area of environmental controls and life support systems. The Advanced Exploration Systems (AES) projects have been encouraged to pursue higher reliability components and systems as part of technology development plans. However, there is no consensus on what is meant by improving on reliability; nor on how to assess reliability within the AES projects. This became apparent when trying to assess reliability as one of several figures of merit for a regenerable water architecture trade study. In the Spring of 2013, the AES Water Recovery Project (WRP) hosted a series of events at the NASA Johnson Space Center (JSC) with the intended goal of establishing a common language and understanding of our reliability goals and equipping the projects with acceptable means of assessing our respective systems. This campaign included an educational series in which experts from across the agency and academia provided information on terminology, tools and techniques associated with evaluating and designing for system reliability. The campaign culminated in a workshop at JSC with members of the ECLSS and AES communities with the goal of developing a consensus on what reliability means to AES and identifying methods for assessing our low to mid-technology readiness level (TRL) technologies for reliability. This paper details the results of the workshop.

  15. Fault recovery in the reliable multicast protocol

    Callahan, John R.; Montgomery, Todd L.; Whetten, Brian


    The Reliable Multicast Protocol (RMP) provides a unique, group-based model for distributed programs that need to handle reconfiguration events at the application layer. This model, called membership views, provides an abstraction in which events such as site failures, network partitions, and normal join-leave events are viewed as group reformations. RMP provides access to this model through an application programming interface (API) that notifies an application when a group is reformed as the result of some event. RMP provides applications with reliable delivery of messages, using an underlying IP Multicast [12, 5] medium, to other group members in a distributed environment, even in the case of reformations. A distributed application can use the various Quality of Service (QoS) levels provided by RMP to tolerate group reformations. This paper explores the implementation details of the mechanisms in RMP that provide distributed applications with membership view information and fault recovery capabilities.

  16. By-product information can stabilize the reliability of communication.

    Schaefer, H Martin; Ruxton, G D


    Although communication underpins many biological processes, its function and basic definition remain contentious. In particular, researchers have debated whether information should be an integral part of a definition of communication and how it remains reliable. So far the handicap principle, assuming signal costs to stabilize reliable communication, has been the predominant paradigm in the study of animal communication. The role of by-product information produced by mechanisms other than the communicative interaction has been neglected in the debate on signal reliability. We argue that by-product information is common and that it provides the starting point for ritualization as the process of the evolution of communication. Second, by-product information remains unchanged during ritualization and enforces reliable communication by restricting the options for manipulation and cheating. Third, this perspective changes the focus of research on communication from studying signal costs to studying the costs of cheating. It can thus explain the reliability of signalling in many communication systems that do not rely on handicaps. We emphasize that communication can often be informative but that the evolution of communication does not cause the evolution of information because by-product information often predates and stimulates the evolution of communication. Communication is thus a consequence but not a cause of reliability. Communication is the interplay of inadvertent, informative traits and evolved traits that increase the stimulation and perception of perceivers. Viewing communication as a complex of inadvertent and derived traits facilitates understanding of the selective pressures shaping communication and those shaping information and its reliability. This viewpoint further contributes to resolving the current controversy on the role of information in communication.

  17. Common aspects and differences in the behaviour of classical configuration versus canard configuration aircraft in the presence of vertical gusts, assuming the hypothesis of an elastic fuselage

    Octavian PREOTU


    The paper analyzes, in parallel, common aspects of and differences between the behavior of classical-configuration and canard-configuration aircraft in the presence of vertical gusts, assuming the hypothesis of an elastic fuselage. The effects of the main constructional dimensions of the horizontal empennage on lift cancelling and horizontal empennage control are analyzed.

  18. Reliability of coprological diagnosis of Paramphistomum sp. infection in cows.

    Rieu, Emilie; Recca, Alain; Bénet, Jean Jacques; Saana, Moez; Dorchies, Philippe; Guillot, Jacques


    A modified McMaster method was tested to check its reliability for the diagnosis of bovine paramphistomosis in France. A total of 148 fecal samples from cows examined post-mortem were analysed. Coprological results were in accordance with the necropsy examinations. Bayesian techniques (Markov Chain Monte Carlo) were used to estimate the diagnostic parameters of each test. Two scenarios were envisaged: one assuming the sensitivity of the necropsy examination to be equal to 1, and one assuming the specificity of the coprology to be equal to 1. In both scenarios, each test presented good estimated parameters, always greater than 0.9. A significant relationship was established between epg (eggs per gram) counts and parasite burden: more than 100 epg indicated the presence of more than 100 adult paramphistomes in the rumen and/or reticulum.
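    Under the first scenario, with necropsy treated as a perfect gold standard, the sensitivity and specificity of the coprological test reduce to simple ratios from the 2x2 table; the counts below are invented for illustration, not the study's data.

    ```python
    def sensitivity_specificity(tp, fn, tn, fp):
        """Test performance against a gold standard (here: necropsy assumed perfect).

        tp/fn: infected animals the test does/doesn't detect;
        tn/fp: uninfected animals the test does/doesn't clear.
        """
        return tp / (tp + fn), tn / (tn + fp)
    ```

    With hypothetical counts of 90 true positives, 5 false negatives, 48 true negatives, and 5 false positives, both parameters come out above 0.9, the same order as the estimates reported in the abstract. The Bayesian (MCMC) machinery in the study exists precisely because, in the second scenario, neither test is assumed perfect and these ratios are no longer directly observable.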

  19. Reliability of plantar pressure platforms.

    Hafer, Jocelyn F; Lenhoff, Mark W; Song, Jinsup; Jordan, Joanne M; Hannan, Marian T; Hillstrom, Howard J


    Plantar pressure measurement is common practice in many research and clinical protocols. While the accuracy of some plantar pressure measuring devices, and methods for ensuring consistency in data collection on them, have been reported, the reliability of different devices when testing the same individuals is not known. This study calculated intra-mat, intra-manufacturer, and inter-manufacturer reliability of plantar pressure parameters, as well as the number of plantar pressure trials needed to reach a stable estimate of the mean for an individual. Twenty-two healthy adults completed ten walking trials across each of two Novel emed-x® and two Tekscan MatScan® plantar pressure measuring devices in a single visit. The intraclass correlation coefficient (ICC) was used to describe the agreement between values measured by different devices. All intra-platform reliability correlations were greater than 0.70. All inter-emed-x® reliability correlations were greater than 0.70. Inter-MatScan® reliability correlations were greater than 0.70 in 31 of 56 parameters for a 10-trial average and in 52 of 56 for a 5-trial average. Inter-manufacturer reliability including all four devices was greater than 0.70 for 52 of 56 parameters for a 10-trial average and 56 of 56 for a 5-trial average. All parameters reached a value within 90% of an unbiased estimate of the mean within five trials. Overall, these results are encouraging for investigators and clinicians whose plantar pressure data sets include data collected on different devices.
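    An agreement statistic of this kind, ICC(2,1) for two-way random effects with absolute agreement, can be computed directly from the ANOVA mean squares (the standard Shrout-Fleiss formula). Whether the study used exactly this ICC variant is our assumption, and the ratings below are invented.

    ```python
    import numpy as np

    def icc_2_1(ratings):
        """ICC(2,1): two-way random effects, absolute agreement, single measurement."""
        x = np.asarray(ratings, dtype=float)   # rows: subjects, columns: raters/devices
        n, k = x.shape
        grand = x.mean()
        row_means = x.mean(axis=1)             # per-subject means
        col_means = x.mean(axis=0)             # per-rater means
        bms = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-subjects MS
        jms = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between-raters MS
        resid = x - row_means[:, None] - col_means[None, :] + grand
        ems = (resid ** 2).sum() / ((n - 1) * (k - 1))         # residual MS
        return (bms - ems) / (bms + (k - 1) * ems + k * (jms - ems) / n)
    ```

    Because ICC(2,1) measures absolute agreement, a device with a constant offset is penalized even when it ranks subjects identically, which is why inter-manufacturer values can fall below intra-platform ones.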

  20. 78 FR 38311 - Reliability Technical Conference Agenda


    ..., Fix, Track, and Report program enhanced reliability? b. What is the status of the NERC Reliability... Energy Regulatory Commission Reliability Technical Conference Agenda Reliability Technical Docket No. AD13-6-000 Conference. North American Electric Docket No. RC11-6-004 Reliability Corporation....

  1. Individual Differences in Human Reliability Analysis

    Jeffrey C. Joe; Ronald L. Boring


    While human reliability analysis (HRA) methods include uncertainty in quantification, the nominal model of human error in HRA typically assumes that operator performance does not vary significantly when they are given the same initiating event, indicators, procedures, and training, and that any differences in operator performance are simply aleatory (i.e., random). While this assumption generally holds true when performing routine actions, variability in operator response has been observed in multiple studies, especially in complex situations that go beyond training and procedures. As such, complexity can lead to differences in operator performance (e.g., operator understanding and decision-making). Furthermore, psychological research has shown that there are a number of known antecedents (i.e., attributable causes) that consistently contribute to observable and systematically measurable (i.e., not random) differences in behavior. This paper reviews examples of individual differences taken from operational experience and the psychological literature. The impact of these differences in human behavior and their implications for HRA are then discussed. We propose that individual differences should not be treated as aleatory, but rather as epistemic. Ultimately, by understanding the sources of individual differences, it is possible to remove some epistemic uncertainty from analyses.

  2. Ultra-Reliable Communication in 5G Wireless Systems

    Popovski, Petar


    Wireless 5G systems will not only be "4G, but faster". One of the novel features discussed in relation to 5G is Ultra-Reliable Communication (URC), an operation mode not present in today's wireless systems. URC refers to provision of a certain level of communication service almost 100% of the time. Example URC applications include reliable cloud connectivity, critical connections for industrial automation and reliable wireless coordination among vehicles. This paper puts forward a systematic view on URC in 5G wireless systems. It starts by analyzing the fundamental mechanisms that constitute … -term URC (URC-S). The second dimension is represented by the type of reliability impairment that can affect the communication reliability in a given scenario. The main objective of this paper is to create the context for defining and solving the new engineering problems posed by URC in 5G.

  3. Reliable control using the primary and dual Youla parameterizations

    Niemann, Hans Henrik; Stoustrup, J.


    Different aspects of modeling faults in dynamic systems are considered in connection with reliable control (RC). The fault models include models with additive faults, multiplicative faults and structural changes in the models due to faults in the systems. These descriptions are considered in connection with reliable control and feedback control with fault rejection. The main emphasis is on fault modeling. A number of fault diagnosis problems, reliable control problems, and feedback control with fault rejection problems are formulated/considered, again, mainly from a fault modeling point of view. Reliability is introduced by means of the (primary) Youla parameterization of all stabilizing controllers, where an additional loop is closed around a diagnostic signal. In order to quantify the level of reliability, the dual Youla parameterization is introduced, which can be used to analyze how large faults …

  4. Reliability Impacts in Life Support Architecture and Technology Selection

    Lange, Kevin E.; Anderson, Molly S.


    Quantitative assessments of system reliability and equivalent system mass (ESM) were made for different life support architectures based primarily on International Space Station technologies. The analysis was applied to a one-year deep-space mission. System reliability was increased by adding redundancy and spares, which added to the ESM. Results were thus obtained allowing a comparison of the ESM for each architecture at equivalent levels of reliability. Although the analysis contains numerous simplifications and uncertainties, the results suggest that achieving necessary reliabilities for deep-space missions will add substantially to the life support ESM and could influence the optimal degree of life support closure. Approaches for reducing reliability impacts were investigated and are discussed.


    Gustov Yuriy Ivanovich


    Full Text Available The primary objective of the research is the synergetic reliability of perlite-reduced structural steel 09G2FB exposed to various thermal and mechanical treatments. In the aftermath of this exposure, the steel in question has proved to assume a set of strength-related and plastic mechanical properties (σ, δ and ψ).

  6. Scenario based approach to structural damage detection and its value in a risk and reliability perspective

    Hovgaard, Mads Knude; Hansen, Jannick Balleby; Brincker, Rune


    A scenario- and vibration-based structural damage detection method is demonstrated through simulation. The method is Finite Element (FE) based. The value of the monitoring is calculated using structural reliability theory. A high cycle fatigue crack propagation model is assumed as the damage mechanism …


    Sebastian Marian ZAHARIA


    Full Text Available This paper presents a case study that demonstrates how design of experiments (DOE) information can be used to design better accelerated reliability tests. In the case study, the main accelerated reliability test plans (3 Level Best Standard Plan, 3 Level Best Compromise Plan, 3 Level Best Equal Expected Number Failing Plan, 3 Level 4:2:1 Allocation Plan) are compared and optimized. Before starting an accelerated reliability test, it is advisable to have a plan that helps in accurately estimating reliability at operating conditions while minimizing test time and costs. A test plan should be used to decide on the appropriate stress levels that should be used (for each stress type) and the number of test units that need to be allocated to the different stress levels (for each combination of the different stress types' levels). For the case study, the ALTA 7 software was used, which provides a complete analysis of data from accelerated reliability tests.

  8. Reliability Modeling of Wind Turbines

    Kostandyan, Erik

    Cost reductions for offshore wind turbines are a substantial requirement in order to make offshore wind energy more competitive compared to other energy supply methods. During the 20-25 years of a wind turbine's useful life, Operation & Maintenance costs are typically estimated to be a quarter … the actions should be made and the type of actions requires knowledge on the accumulated damage or degradation state of the wind turbine components. For offshore wind turbines, the action times could be extended due to weather restrictions and result in damage or degradation increase of the remaining … for Operation & Maintenance planning. Concentrating efforts on the development of such models, this research is focused on reliability modeling of wind turbine critical subsystems (especially the power converter system). For reliability assessment of these components, structural reliability methods are applied …

  9. Reliability of Wave Energy Converters

    Ambühl, Simon

    There are many different working principles for wave energy converters (WECs) which are used to produce electricity from waves. In order for WECs to become successful and more competitive with other renewable electricity sources, the consideration of the structural reliability of WECs is essential. Structural reliability considerations and optimizations impact operation and maintenance (O&M) costs as well as the initial investment costs. Furthermore, there is a control system for WEC applications which defines the harvested energy but also the loads on the structure. Therefore, extreme loads but also … WEPTOS. Calibration of safety factors is performed for welded structures at the Wavestar device, including different control systems for harvesting energy from waves. In addition, a case study of different O&M strategies for WECs is discussed, and an example of reliability-based structural optimization …

  10. Reliability Characteristics of Power Plants

    Zbynek Martinek


    Full Text Available This paper describes the phenomenon of reliability of power plants. It gives an explanation of the terms connected with this topic as their proper understanding is important for understanding the relations and equations which model the possible real situations. The reliability phenomenon is analysed using both the exponential distribution and the Weibull distribution. The results of our analysis are specific equations giving information about the characteristics of the power plants, the mean time of operations and the probability of failure-free operation. Equations solved for the Weibull distribution respect the failures as well as the actual operating hours. Thanks to our results, we are able to create a model of dynamic reliability for prediction of future states. It can be useful for improving the current situation of the unit as well as for creating the optimal plan of maintenance and thus have an impact on the overall economics of the operation of these power plants.
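The exponential and Weibull characteristics discussed above have standard closed forms: the probability of failure-free operation up to time t is R(t) = exp(-(t/eta)**beta), and the mean time to failure is eta * Gamma(1 + 1/beta), with beta = 1 recovering the exponential case. A minimal sketch; the parameter values are illustrative, not taken from the paper:

```python
import math

def weibull_reliability(t, beta, eta):
    """Probability of failure-free operation up to time t (Weibull)."""
    return math.exp(-(t / eta) ** beta)

def weibull_mttf(beta, eta):
    """Mean time to failure: eta * Gamma(1 + 1/beta)."""
    return eta * math.gamma(1.0 + 1.0 / beta)

# With beta = 1 the Weibull reduces to the exponential distribution,
# so R(t) = exp(-t/eta) and MTTF = eta.
print(weibull_reliability(1000.0, 1.0, 5000.0))  # exp(-0.2) ≈ 0.8187
print(weibull_mttf(1.0, 5000.0))                 # 5000.0
```

A shape parameter beta > 1 models wear-out (increasing failure rate), beta < 1 models early-life failures, which is why the Weibull fit respects both the failures and the actual operating hours.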

  11. Component reliability for electronic systems

    Bajenescu, Titu-Marius I


    The main reason for the premature breakdown of today's electronic products (computers, cars, tools, appliances, etc.) is the failure of the components used to build these products. Today professionals are looking for effective ways to minimize the degradation of electronic components to help ensure longer-lasting, more technically sound products and systems. This practical book offers engineers specific guidance on how to design more reliable components and build more reliable electronic systems. Professionals learn how to optimize a virtual component prototype, accurately monitor product reliability during the entire production process, and add the burn-in and selection procedures that are the most appropriate for the intended applications. Moreover, the book helps system designers ensure that all components are correctly applied, margins are adequate, wear-out failure modes are prevented during the expected duration of life, and system interfaces cannot lead to failure.

  12. Design for Reliability of Wafer Level MEMS packaging

    Zaal, J.J.M.


    The world has seen an unrivaled spread of semiconductor technology into virtually any part of society. The main enablers of this semiconductor rush are the decreasing feature size and the constantly decreasing costs of semiconductors. The decreasing costs of semiconductors in general are caused by t

  13. Reliability of single sample experimental designs: comfortable effort level.

    Brown, W S; Morris, R J; DeGroot, T; Murry, T


    This study was designed to ascertain the intrasubject variability across multiple recording sessions, variability most often disregarded in reporting group mean data or unavailable because of single-sample experimental designs. Intrasubject variability was assessed within and across several experimental sessions from measures of speaking fundamental frequency, vocal intensity, and reading rate. Three age groups of men and women (young, middle-aged, and elderly) repeated the vowel /a/, read a standard passage, and spoke extemporaneously during each experimental session. Statistical analyses were performed to assess each speaker's variability from his or her own mean, and that which consistently varied for any one speaking sample type, both within and across days. Results indicated that intrasubject variability was minimal, with approximately 4% of the data exhibiting significant variation across experimental sessions.

  14. Design for Reliability of Wafer Level MEMS packaging

    Zaal, J.J.M.


    The world has seen an unrivaled spread of semiconductor technology into virtually any part of society. The main enablers of this semiconductor rush are the decreasing feature size and the constantly decreasing costs of semiconductors. The decreasing costs of semiconductors in general are caused by t

  15. Reliability-based optimization of engineering structures

    Sørensen, John Dalsgaard


    The theoretical basis for reliability-based structural optimization within the framework of Bayesian statistical decision theory is briefly described. Reliability-based cost-benefit problems are formulated and exemplified with structural optimization. The basic reliability-based optimization prob...

  16. Assessment of Network and Data Communication Reliability for Lungmen NPS

    Hsu, Teng Chieh; Chou, Hwai Pwu [National Tsing Hua University, Hsinchu (China); Chao, Chun Chang [The Institute of Nuclear Energy Research, Taoyuan (China)


    The Lungmen nuclear power station (NPS) is an advanced boiling water reactor (ABWR) with a fully digitized instrumentation and control (I&C) system. The present work uses the probabilistic risk assessment (PRA) technique to investigate concerns about network reliability and data communication errors for Lungmen NPS. The reactor protection system (RPS) was chosen as the target for investigating network and data communication reliability. A fault tree based on the RPS configuration was modeled to evaluate the weak points in the digital logic part. A Lungmen NPS event tree model was also built to calculate the core damage frequency (CDF). Sensitivity studies were performed by assuming various data communication delays and errors, to evaluate the network effects and their influence on the CDF.

  17. Bayesian reliability demonstration for failure-free periods

    Coolen, F.P.A.; Coolen-Schrijner, P.; Rahrouh, M. [Department of Mathematical Sciences, Science Laboratories, University of Durham, South Road, Durham, DH1 3LE (United Kingdom)]


    We study sample sizes for testing as required for Bayesian reliability demonstration in terms of failure-free periods after testing, under the assumption that tests lead to zero failures. For the process after testing, we consider both deterministic and random numbers of tasks, including tasks arriving as Poisson processes. It turns out that the deterministic case is worst in the sense that it requires most tasks to be tested. We consider such reliability demonstration for a single type of task, as well as for multiple types of tasks to be performed by one system. We also consider the situation, where tests of different types of tasks may have different costs, aiming at minimal expected total costs, assuming that failure in the process would be catastrophic, in the sense that the process would be discontinued. Generally, these inferences are very sensitive to the choice of prior distribution, so one must be very careful with interpretation of non-informativeness of priors.
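The zero-failure setting above has a convenient conjugate form: with a Beta(a, b) prior on the per-task failure probability and n tested tasks with zero failures, the posterior probability that the next m tasks are failure-free is B(a, b+n+m) / B(a, b+n). A minimal sketch illustrating the prior sensitivity the authors warn about; the priors and task counts are illustrative:

```python
import math

def log_beta(x, y):
    # log of the Beta function via log-gamma, for numerical stability
    return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)

def prob_failure_free(m, n, a, b):
    """Posterior probability that the next m tasks succeed, given n
    zero-failure tests and a Beta(a, b) prior on the failure probability:
    E[(1-p)^m] = B(a, b+n+m) / B(a, b+n)."""
    return math.exp(log_beta(a, b + n + m) - log_beta(a, b + n))

# Uniform prior Beta(1, 1) has the closed form (n + 1) / (n + m + 1).
print(prob_failure_free(10, 50, 1.0, 1.0))   # 51/61 ≈ 0.836
# The Jeffreys prior Beta(0.5, 0.5) gives a noticeably different answer,
# illustrating the sensitivity to the prior noted in the abstract.
print(prob_failure_free(10, 50, 0.5, 0.5))
```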

  18. Metrological Reliability of Medical Devices

    Costa Monteiro, E.; Leon, L. F.


    The prominent development of health technologies in the 20th century triggered demands for metrological reliability of physiological measurements comprising physical, chemical and biological quantities, essential to ensure accurate and comparable results of clinical measurements. In the present work, aspects concerning metrological reliability in premarket and postmarket assessments of medical devices are discussed, pointing out challenges to be overcome. In addition, considering the social relevance of biomeasurement results, Biometrological Principles to be pursued by research and innovation aimed at biomedical applications are proposed, along with an analysis of their contributions to guaranteeing innovative health technologies' compliance with the main ethical pillars of Bioethics.

  19. Reliability Assessment of Concrete Bridges

    Thoft-Christensen, Palle; Middleton, C. R.

    This paper is partly based on research performed for the Highways Agency, London, UK under the project DPU/9/44 "Revision of Bridge Assessment Rules Based on Whole Life Performance: concrete bridges". It contains the details of a methodology which can be used to generate Whole Life (WL) reliability profiles. These WL reliability profiles may be used to establish revised rules for concrete bridges. This paper is to some extent based on Thoft-Christensen et al. [1996] and Thoft-Christensen [1996].

  20. Reliability Management for Information System

    李睿; 俞涛; 刘明伦


    An integrated intelligent management approach is presented to help organizations manage the many heterogeneous resources in their information systems. A general architecture for information system reliability management is proposed, and the architecture is described from two aspects: the process model and the hierarchical model. Data mining techniques are used in data analysis. A data analysis system applicable to real-time data analysis is developed by improved data mining on the critical processes. The framework of the integrated management for information system reliability based on real-time data mining is illustrated, and the development of integrated and intelligent management of information systems is discussed.




    Full Text Available The veracity and secrecy of medical information transacted over the Internet are vulnerable to attack. But the transaction of such details is mandatory in order to avail the luxury of medical services anywhere, anytime. Especially in a web-service-enabled system for hospital management, it becomes necessary to address these security issues. It is mandatory that the services guarantee message delivery to software applications, with a chosen level of quality of service (QoS). This paper presents a VDM++ based specification for modelling a security framework for web services with non-repudiation, to ensure that a party in a dispute cannot repudiate or refute the validity of a statement or contract, and to ensure that the transaction happens in a reliable manner. This model presents the procedure and technical options for secure communication over the Internet with web services. Based on the model, the Medi-Helper is developed to use the technologies of WS-Security, WS-Reliability, WS-Policy and WSRN in order to create encrypted messages, so that the patient's medical records are not tampered with when relayed over the Internet and are sent in a reliable manner. In addition to authentication, integrity and confidentiality, the proposed security framework for healthcare-based web services is equipped with non-repudiation, which is not included in many existing frameworks.

  2. Measuring Service Reliability Using Automatic Vehicle Location Data

    Zhenliang Ma


    Full Text Available Bus service reliability has become a major concern for both operators and passengers. Buffer time measures are believed to be appropriate for approximating passengers' experienced reliability in the context of departure planning. Two issues with regard to buffer time estimation are addressed, namely performance disaggregation and capturing passengers' perspectives on reliability. A Gaussian mixture model based method is applied to disaggregate the performance data. Based on the mixture distribution, a reliability buffer time (RBT) measure is proposed from the passengers' perspective. A set of expected reliability buffer time measures is developed for operators by using different combinations of spatial-temporal levels of RBTs. The average and the latest trip duration measures are proposed for passengers, which can be used to choose a service mode and determine the departure time. Using empirical data from the automatic vehicle location system in Brisbane, Australia, the existence of mixture service states is verified and the advantage of the mixture distribution model in fitting travel time profiles is demonstrated. Numerical experiments validate that the proposed reliability measure is capable of quantifying service reliability consistently, while the conventional ones may provide inconsistent results. Potential applications for operators and passengers are also illustrated, including reliability improvement and trip planning.
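The reliability buffer time idea above can be sketched as a percentile difference: the extra minutes a passenger must budget beyond the median trip to arrive on time 95% of the time. A minimal sketch on synthetic two-state travel times; the distributions, sample sizes, and the 95th-percentile choice are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical travel times (minutes): a mixture of a "normal" and a
# "disrupted" service state, echoing the Gaussian-mixture disaggregation.
times = np.concatenate([rng.normal(30, 2, 900), rng.normal(45, 5, 100)])

# Reliability buffer time: extra time beyond the median trip needed
# to arrive on time 95% of the time.
rbt = np.percentile(times, 95) - np.percentile(times, 50)
print(round(rbt, 1))
```

Because the 95th percentile sits inside the disrupted component while the median sits in the normal one, the buffer time reflects the rare-but-severe state rather than average performance.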



  4. Recent advances in computational structural reliability analysis methods

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.


    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonate vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
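The failure-probability computations surveyed above reduce, for the simplest linear limit state g = R - S with independent normal resistance R and load S, to pf = Phi(-beta) with beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2), which makes a handy check for a Monte Carlo estimator. A minimal sketch; the statistics are invented for illustration:

```python
import math, random

random.seed(1)

# Limit state g = R - S with independent normal resistance R and load S.
mu_r, sd_r = 10.0, 1.0   # hypothetical resistance statistics
mu_s, sd_s = 6.0, 1.5    # hypothetical load statistics

n = 200_000
failures = sum(
    random.gauss(mu_r, sd_r) - random.gauss(mu_s, sd_s) < 0.0
    for _ in range(n)
)
pf_mc = failures / n

# For this linear limit state the exact result is Phi(-beta) with
# reliability index beta = (mu_r - mu_s) / sqrt(sd_r^2 + sd_s^2).
beta = (mu_r - mu_s) / math.hypot(sd_r, sd_s)
pf_exact = 0.5 * math.erfc(beta / math.sqrt(2.0))
print(beta, pf_exact, pf_mc)
```

For nonlinear or multi-mode limit states no such closed form exists, which is exactly where the advanced reliability methods discussed in the paper come in.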

  5. Reliability Modeling and Optimization Using Fuzzy Logic and Chaos Theory

    Alexander Rotshtein


    Full Text Available Fuzzy set membership functions integrated with the logistic map as the chaos generator were used to create reliability bifurcation diagrams of a system with redundancy of components. This paper shows that an increase in the number of redundant components results in a postponement of the moment of the first bifurcation, which is considered most responsible for the loss of reliability. The increase of redundancy also shrinks the oscillation orbit of the level of the system's membership in the reliable state. The paper includes the problem statement of redundancy optimization under conditions of chaotic behavior of influencing parameters and a genetic algorithm for solving this problem. The paper shows the possibility of designing chaos-tolerant systems with the required level of reliability.
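The logistic map used above as the chaos generator, x_{k+1} = r * x_k * (1 - x_k), is easy to probe directly: below the first bifurcation the orbit collapses to the fixed point 1 - 1/r, while at r = 4 it remains chaotic. A minimal sketch; the parameter choices are illustrative and unrelated to the paper's membership functions:

```python
def logistic_orbit(r, x0=0.2, n_transient=500, n_keep=50):
    """Iterate x_{k+1} = r * x_k * (1 - x_k), discard transients,
    and return the values the orbit settles on."""
    x = x0
    for _ in range(n_transient):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n_keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 6))
    return orbit

# Below the first bifurcation (r = 2.5) the orbit collapses to the
# single fixed point 1 - 1/r = 0.6; at r = 4.0 it stays chaotic.
print(set(logistic_orbit(2.5)))        # {0.6}
print(len(set(logistic_orbit(4.0))))   # many distinct values
```

Sweeping r and plotting the settled orbit values against r yields the bifurcation diagram the paper builds its reliability analysis on.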

  6. Optimization and Reliability Problems in Structural Design of Wind Turbines

    Sørensen, John Dalsgaard


    Reliability-based cost-benefit optimization formulations for wind turbines are presented. Some of the important aspects of stochastic modeling of loads, strengths and model uncertainties for wind turbines are described. Single wind turbines and wind turbines in wind farms with wake effects are discussed. Limit state equations are presented for fatigue limit states and for ultimate limit states with extreme wind load, and illustrated by bending failure. Illustrative examples are presented, and as a part of the results optimal reliability levels are obtained which correspond to an annual reliability index equal to 3. An example with fatigue failure indicates that the reliability level is almost the same for single wind turbines and for wind turbines in wind farms if the wake effects are modeled equivalently in the design equation and the limit state equation.

  7. Reliability of quantitative content analyses

    Enschot-van Dijk, R. van


    Reliable coding of stimuli is a daunting task that often yields unsatisfactory results. This paper discusses a case study in which tropes (e.g., metaphors, puns) in TV commercials were analyzed, as well as the extent and location of verbal and visual anchoring (i.e., explanation) of these tropes. After …

  8. Wind turbine reliability database update.

    Peters, Valerie A.; Hill, Roger Ray; Stinebaugh, Jennifer A.; Veers, Paul S.


    This report documents the status of the Sandia National Laboratories' Wind Plant Reliability Database. Included in this report are updates on the form and contents of the Database, which stems from a five-step process of data partnerships, data definition and transfer, data formatting and normalization, analysis, and reporting. Selected observations are also reported.

  9. Photovoltaic performance and reliability workshop

    Kroposki, B


    This proceedings is the compilation of papers presented at the ninth PV Performance and Reliability Workshop held at the Sheraton Denver West Hotel on September 4--6, 1996. This year's workshop included presentations from 25 speakers and had over 100 attendees. All of the presentations that were given are included in this proceedings. Topics of the papers included: defining service lifetime and developing models for PV module lifetime; examining and determining failure and degradation mechanisms in PV modules; combining IEEE/IEC/UL testing procedures; AC module performance and reliability testing; inverter reliability/qualification testing; standardization of utility interconnect requirements for PV systems; needed activities to separate variables by testing individual components of PV systems (e.g. cells, modules, batteries, inverters, charge controllers) for individual reliability and then testing them in actual system configurations; more results reported from field experience on modules, inverters, batteries, and charge controllers from field-deployed PV systems; and system certification and standardized testing for stand-alone and grid-tied systems.


    Dr Obe

    Evaluation of the reliability of a primary cell took place in three stages: 192 cells went through a ... CCV - Closed Circuit Voltage, the voltage at the terminals of a battery when it is under an electrical ... Cylindrical spirally wound cells have the.

  11. Finding Reliable Health Information Online

    As Internet users quickly discover, an enormous amount of health information ...

  12. Reliability of subjective wound assessment

    M.C.T. Bloemen; P.P.M. van Zuijlen; E. Middelkoop


    Introduction: Assessment of the take of split-skin graft and the rate of epithelialisation are important parameters in burn surgery. Such parameters are normally estimated by the clinician in a bedside procedure. This study investigates whether this subjective assessment is reliable for graft take a

  13. Fatigue Reliability under Random Loads

    Talreja, R.


    With the application of random loads, the initial homogeneous distribution of strength changes to a two-component distribution, reflecting the two-stage fatigue damage. In the crack initiation stage, the strength increases initially and then decreases, while an abrupt decrease of strength is seen in the crack propagation stage. The consequences of this behaviour on the fatigue reliability are discussed.

  14. Reliability Analysis of Money Habitudes

    Delgadillo, Lucy M.; Bushman, Brittani S.


    Use of the Money Habitudes exercise has gained popularity among various financial professionals. This article reports on the reliability of this resource. A survey administered to young adults at a western state university was conducted, and each Habitude or "domain" was analyzed using Cronbach's alpha procedures. Results showed all six…

  15. The Reliability of College Grades

    Beatty, Adam S.; Walmsley, Philip T.; Sackett, Paul R.; Kuncel, Nathan R.; Koch, Amanda J.


    Little is known about the reliability of college grades relative to how prominently they are used in educational research, and the results to date tend to be based on small sample studies or are decades old. This study uses two large databases (N > 800,000) from over 200 educational institutions spanning 13 years and finds that both first-year…

  16. Reliability Analysis of Money Habitudes

    Delgadillo, Lucy M.; Bushman, Brittani S.


    Use of the Money Habitudes exercise has gained popularity among various financial professionals. This article reports on the reliability of this resource. A survey administered to young adults at a western state university was conducted, and each Habitude or "domain" was analyzed using Cronbach's alpha procedures. Results showed all six…

  17. Advancing methods for reliably assessing motivational interviewing fidelity using the motivational interviewing skills code.

    Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W; Imel, Zac E; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C


    The current paper presents novel methods for collecting MISC data and accurately assessing reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance-level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates of session tallies. Session-level reliability was generally higher compared to reliability using utterance-based codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provided rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. MDP: Reliable File Transfer for Space Missions

    Rash, James; Criscuolo, Ed; Hogie, Keith; Parise, Ron; Hennessy, Joseph F. (Technical Monitor)


    This paper presents work being done at NASA/GSFC by the Operating Missions as Nodes on the Internet (OMNI) project to demonstrate the application of the Multicast Dissemination Protocol (MDP) to space missions to reliably transfer files. This work builds on previous work by the OMNI project to apply Internet communication technologies to space communication. The goal of this effort is to provide an inexpensive, reliable, standard, and interoperable mechanism for transferring files in the space communication environment. Limited bandwidth, noise, delay, intermittent connectivity, link asymmetry, and one-way links are all possible issues for space missions. Although these are link-layer issues, they can have a profound effect on the performance of transport and application level protocols. MDP, a UDP-based reliable file transfer protocol, was designed for multicast environments which have to address these same issues, and it has done so successfully. Developed by the Naval Research Lab in the mid-1990s, MDP is now in daily use by both the US Post Office and the DoD. This paper describes the use of MDP to provide automated end-to-end data flow for space missions. It examines the results of a parametric study of MDP in a simulated space link environment and discusses the results in terms of their implications for space missions. Lessons learned are addressed, which suggest minor enhancements to the MDP user interface to add specific features for space mission requirements, such as dynamic control of data rate, and a checkpoint/resume capability. These are features that are provided for in the protocol, but are not implemented in the sample MDP application that was provided. A brief look is also taken at the status of standardization. A version of MDP known as NORM (NACK-Oriented Reliable Multicast) is in the process of becoming an IETF standard.

  19. Fatigue Reliability of Gas Turbine Engine Structures

    Cruse, Thomas A.; Mahadevan, Sankaran; Tryon, Robert G.


    The results of an investigation are described for fatigue reliability in engine structures. The description consists of two parts. Part 1 is for method development. Part 2 is a specific case study. In Part 1, the essential concepts and practical approaches to damage tolerance design in the gas turbine industry are summarized. These have evolved over the years in response to flight safety certification requirements. The effect of Non-Destructive Evaluation (NDE) methods on these methods is also reviewed. Assessment methods based on probabilistic fracture mechanics, with regard to both crack initiation and crack growth, are outlined. Limit state modeling techniques from structural reliability theory are shown to be appropriate for application to this problem, for both individual failure mode and system-level assessment. In Part 2, the results of a case study for the high pressure turbine of a turboprop engine are described. The response surface approach is used to construct a fatigue performance function. This performance function is used with the First Order Reliability Method (FORM) to determine the probability of failure and the sensitivity of the fatigue life to the engine parameters for the first-stage disk rim of the two-stage turbine. A hybrid combination of regression and Monte Carlo simulation is used to incorporate time-dependent random variables. System reliability is used to determine the system probability of failure, and the sensitivity of the system fatigue life to the engine parameters of the high pressure turbine. The variation in the primary hot gas and secondary cooling air, the uncertainty of the complex mission loading, and the scatter in the material data are considered.

  20. Calibrating ensemble reliability whilst preserving spatial structure

    Jonathan Flowerdew


    ...ensemble, variables such as sea-level pressure which are very reliable to start with, and the best way to handle derived variables such as dewpoint depression.

  1. Designing reliable supply chain network with disruption risk

    Ali Bozorgi Amiri


    Although supply chain disruptions rarely occur, their negative effects are prolonged and severe. In this paper, we propose a reliable capacitated supply chain network design (RSCND) model that considers random disruptions in both distribution centers and suppliers. The proposed model determines the optimal location of distribution centers (DCs) with the highest reliability, the best plan to assign customers to opened DCs, and the assignment of opened DCs to suitable suppliers with the lowest transportation cost. In this study, random disruptions affect the location and capacity of the distribution centers (DCs) and suppliers. It is assumed that a disrupted DC and a disrupted supplier may lose a portion of their capacities, and that the rest of the disrupted DC's demand can be supplied by other DCs. In addition, we consider shortages in DCs, which can occur in either normal or disruption conditions, and DCs can support each other in such circumstances. Unlike other studies in the existing literature, we use a new approach to model the reliability of DCs: we consider a range of reliability instead of using binary variables. In order to solve the proposed model for real-world instances, a Non-dominated Sorting Genetic Algorithm-II (NSGA-II) is applied. Preliminary results of testing the proposed model on several problems of different sizes seem promising.

  2. Calculation and Updating of Reliability Parameters in Probabilistic Safety Assessment

    Zubair, Muhammad; Zhang, Zhijian; Khan, Salah Ud Din


    The internal events of a nuclear power plant are complex and include equipment maintenance, equipment damage, etc. These events affect the probability of the current risk level of the system as well as the reliability parameter values of the equipment, so they serve as an important basis for systematic analysis and calculation. This paper presents a method for calculating and updating reliability parameters. The method is based on the binomial likelihood function and its conjugate beta distribution; Bayes' theorem is used to update the parameters. To implement the proposed method, a computer-based program was designed that helps estimate the reliability parameters.
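The conjugate beta-binomial update described in the abstract reduces to two lines of arithmetic. A minimal sketch follows; the prior and the observed failure counts are illustrative stand-ins, not values from the paper:

```python
# Conjugate Bayesian update of a failure-on-demand probability.
# Prior: Beta(a, b); data: k failures in n demands (binomial likelihood).
# Posterior: Beta(a + k, b + n - k). All numbers below are illustrative.
def update_beta(a: float, b: float, failures: int, demands: int):
    """Return the posterior Beta parameters after observing the data."""
    return a + failures, b + demands - failures

def beta_mean(a: float, b: float) -> float:
    """Posterior point estimate of the failure probability."""
    return a / (a + b)

a0, b0 = 1.0, 99.0                # assumed prior, mean 0.01
a1, b1 = update_beta(a0, b0, failures=2, demands=300)
```

Because the beta prior is conjugate to the binomial likelihood, each new batch of plant data can be folded in by simple addition, which is why this scheme suits a periodically updated PSA database.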

  3. Reliability based design optimization using akaike information criterion for discrete information

    Lim, Woochul; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of)


    Reliability-based design optimization (RBDO) can be used to determine the reliability of a system by means of probabilistic design criteria, i.e., the possibility of failure considering stochastic features of design variables and input parameters. To assure that these criteria are met, various reliability analysis methods have been developed. Most of these methods assume that distribution functions are continuous. However, because real data is often discrete in form, it is important to estimate the distributions for discrete information during reliability analysis. In this study, we employ the Akaike information criterion (AIC) for reliability analysis to determine the best estimated distribution for discrete information, and we suggest an RBDO method using AIC. Mathematical and engineering examples are presented to verify the proposed method.
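Selecting the best-fitting distribution by AIC can be sketched generically; the candidate set below (normal vs. exponential, with closed-form maximum-likelihood fits) is an illustration and does not reproduce the paper's candidates or its RBDO coupling:

```python
import math
import random

# AIC = 2k - 2*ln(L); the candidate with the lowest AIC is preferred.
def aic_normal(xs):
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n      # MLE variance
    log_lik = -0.5 * n * (math.log(2 * math.pi * var) + 1)
    return 2 * 2 - 2 * log_lik                    # k = 2 parameters

def aic_exponential(xs):
    n = len(xs)
    rate = n / sum(xs)                            # MLE rate
    log_lik = n * math.log(rate) - rate * sum(xs)
    return 2 * 1 - 2 * log_lik                    # k = 1 parameter

def best_fit(xs):
    scores = {"normal": aic_normal(xs), "exponential": aic_exponential(xs)}
    return min(scores, key=scores.get)

# Synthetic samples to exercise the selector.
rng = random.Random(0)
exp_data = [rng.expovariate(1.0) for _ in range(500)]
norm_data = [rng.gauss(10.0, 1.0) for _ in range(500)]
```

Because AIC penalizes each extra parameter by 2, the one-parameter exponential is preferred whenever its likelihood is close to the normal's; with clearly skewed or clearly symmetric data the selection is unambiguous.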

  4. Reliability analysis of M/G/1 queues with general retrial times and server breakdowns

    WANG Jinting


    This paper concerns reliability issues as well as queueing analysis of M/G/1 retrial queues with general retrial times and a server subject to breakdowns and repairs. We assume that the server is unreliable and that customers who find the server busy or down join the retrial orbit in accordance with a first-come, first-served discipline. Only the customer at the head of the orbit queue is allowed access to the server. The necessary and sufficient condition for the system to be stable is given. Using a supplementary variable method, we obtain the Laplace-Stieltjes transform of the reliability function of the server and a steady-state solution for both queueing and reliability measures of interest. Some main reliability indexes, such as the availability, failure frequency, and reliability function of the server, are obtained.

  5. Power Quality and Reliability Project

    Attia, John O.


    One area where universities and industry can link is power systems reliability and quality - key concepts in the commercial, industrial, and public-sector engineering environments. Prairie View A&M University (PVAMU) has established a collaborative relationship with the University of Texas at Arlington (UTA), NASA/Johnson Space Center (JSC), and EP&C Engineering and Technology Group (EP&C), a small disadvantaged business that specializes in power quality and engineering services. The primary goal of this collaboration is to facilitate the development and implementation of a Strategic Integrated Power/Systems Reliability and Curriculum Enhancement Program. The objectives of the first phase of this work are: (a) to develop a course in power quality and reliability, (b) to use the campus of Prairie View A&M University as a laboratory for the study of systems reliability and quality issues, and (c) to provide students with NASA/EP&C shadowing and internship experience. In this work, a course titled "Reliability Analysis of Electrical Facilities" was developed and taught for two semesters. About thirty-seven students have benefited directly from this course. A laboratory accompanying the course was also developed. Four facilities at Prairie View A&M University were surveyed. Some tests that were performed are (i) earth-ground testing, (ii) voltage, amperage, and harmonics of various panels in the buildings, (iii) checking the wire sizes to see if they were the right size for the load they were carrying, (iv) vibration tests to assess the status of the engines or chillers and water pumps, and (v) infrared testing to detect arcing or misfiring of electrical or mechanical systems.

  6. Reliability of Arctic offshore installations

    Bercha, F.G. [Bercha Group, Calgary, AB (Canada); Gudmestad, O.T. [Stavanger Univ., Stavanger (Norway)]|[Statoil, Stavanger (Norway)]|[Norwegian Univ. of Technology, Stavanger (Norway); Foschi, R. [British Columbia Univ., Vancouver, BC (Canada). Dept. of Civil Engineering; Sliggers, F. [Shell International Exploration and Production, Rijswijk (Netherlands); Nikitina, N. [VNIIG, St. Petersburg (Russian Federation); Nevel, D.


    Life threatening and fatal failures of offshore structures can be attributed to a broad range of causes such as fires and explosions, buoyancy losses, and structural overloads. This paper addressed the different severities of failure types, categorized as catastrophic failure, local failure or serviceability failure. Offshore tragedies were also highlighted, namely the failures of P-36, the Ocean Ranger, the Piper Alpha, and the Alexander Kielland, which all resulted in losses of human life. P-36 and the Ocean Ranger both failed ultimately due to a loss of buoyancy. The Piper Alpha was destroyed by a natural gas fire, while the Alexander Kielland failed due to fatigue-induced structural failure. The mode of failure was described as being the specific way in which a failure occurs from a given cause. Current reliability measures in the context of offshore installations only consider a limited number of causes, such as environmental loads. However, it was emphasized that a realistic value of the catastrophic failure probability should consider all credible causes of failure. This paper presented a general method for evaluating all credible causes of failure of an installation. The approach to calculating integrated reliability involves the use of network methods such as fault trees to combine the probabilities of all factors that can cause a catastrophic failure, as well as those which can cause a local failure with the potential to escalate to a catastrophic failure. This paper also proposed a protocol for setting credible reliability targets such as the consideration of life safety targets and escape, evacuation, and rescue (EER) success probabilities. A set of realistic reliability targets for both catastrophic and local failures for representative safety and consequence categories associated with offshore installations was also presented. The reliability targets were expressed as maximum average annual failure probabilities. The method for converting these annual

  7. Software Technology for Adaptable, Reliable Systems (STARS) Technical Program Plan,


    This document is the top-level technical program plan for the Software Technology for Adaptable, Reliable Systems (STARS) program, dated 6 August 1986. It describes the objectives of the program and the technical approach to achieve them.

  8. Identifying Sources of Difference in Reliability in Content Analysis

    Elizabeth Murphy


    This paper reports on a case study which identifies and illustrates sources of difference in agreement in relation to reliability in a context of quantitative content analysis of a transcript of an online asynchronous discussion (OAD). Transcripts of 10 students in a month-long online asynchronous discussion were coded by two coders using an instrument with two categories, five processes, and 19 indicators of Problem Formulation and Resolution (PFR). Sources of difference were identified in relation to coders, tasks, and students. Reliability values were calculated at the levels of categories, processes, and indicators. At the most detailed level of coding, on the basis of the indicator, findings revealed that the overall level of reliability between coders was .591 when measured with Cohen's kappa. The difference between tasks at the same level ranged from .349 to .664, and the difference between participants ranged from .390 to .907. Implications for training and research are discussed.

  9. Exact reliability quantification of highly reliable systems with maintenance

    Bris, Radim, E-mail: radim.bris@vsb.c [VSB-Technical University Ostrava, Faculty of Electrical Engineering and Computer Science, Department of Applied Mathematics, 17. listopadu 15, 70833 Ostrava-Poruba (Czech Republic)


    When a system is composed of highly reliable elements, exact reliability quantification may be problematic because computer accuracy is limited. Inaccuracy can arise in different ways: for example, an error may be made when subtracting two numbers that are very close to each other, or in the process of summing many very different numbers. The basic objective of this paper is to find a procedure which eliminates errors made by a PC when calculations close to an error limit are executed. A highly reliable system is represented by a directed acyclic graph composed of terminal nodes (i.e. highly reliable input elements), internal nodes representing subsystems, and edges that bind all of these nodes. Three admissible unavailability models of terminal nodes are introduced, including both corrective and preventive maintenance. The algorithm for exact unavailability calculation of terminal nodes is implemented in MATLAB, a high-performance language for technical computing. The system unavailability quantification procedure applied to a graph structure, which considers both independent and dependent (i.e. repeatedly occurring) terminal nodes, is based on a combinatorial principle. This principle requires summation of many very different non-negative numbers, which may be a source of inaccuracy. That is why another algorithm, for exact summation of such numbers, is designed in the paper. The summation procedure uses a special number system with base 2^32. Computational efficiency of the new computing methodology is compared with advanced simulation software. Various calculations on systems from the references are performed to emphasize the merits of the methodology.
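The failure mode the abstract describes, small terms vanishing when accumulated next to much larger ones, is easy to reproduce. The sketch below uses Python's exact rationals as a stand-in for the paper's base-2^32 scheme; the values are illustrative:

```python
from fractions import Fraction

# Floating-point summation of numbers with very different magnitudes loses
# the small terms; exact rational arithmetic does not. Fraction here plays
# the role of the paper's base-2**32 exact number system.
values = [1e16] + [1.0] * 10

naive = 0.0
for v in values:
    naive += v                    # each +1.0 is absorbed by rounding at 1e16

exact = sum(Fraction(v) for v in values)   # error-free accumulation
```

At magnitude 1e16 the spacing between adjacent doubles is 2.0, so every +1.0 rounds away in the naive loop, while the exact sum retains all ten of them.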

  10. Robust Reliability or reliable robustness? - Integrated consideration of robustness and reliability aspects

    Kemmler, S.; Eifler, Tobias; Bertsche, B.


    products are and vice versa. For a comprehensive understanding and to use existing synergies between both domains, this paper discusses the basic principles of Reliability- and Robust Design theory. The development of a comprehensive model will enable an integrated consideration of both domains...

  11. Simulation methods for reliability and availability of complex systems

    Faulin Fajardo, Javier; Martorell Alsina, Sebastián Salvador; Ramirez-Marquez, Jose Emmanuel; Martorell, Sebastian


    This text discusses the use of computer simulation-based techniques and algorithms to determine reliability and/or availability levels in complex systems and to help improve these levels both at the design stage and during the system operating stage.

  12. 78 FR 30804 - Transmission Planning Reliability Standards


    ... proposed Reliability Standard TPL-001-2, submitted by the North American Electric Reliability Corporation... American Electric Reliability Corporation (NERC), the Commission-certified Electric Reliability...

  13. Statistics and Analysis on Reliability of HVDC Transmission Systems of SGCC



    The reliability level of HVDC power transmission systems has become an important factor impacting the entire power grid. The author analyzes the reliability of HVDC power transmission systems owned by SGCC since 2003 with respect to forced outage times, forced energy unavailability, scheduled energy unavailability, and energy utilization efficiency. The results show that the reliability level of HVDC power transmission systems owned by SGCC is improving. By analyzing the different reliability indices of HVDC power transmission systems, a scientific and reasonable reliability evaluation system can be built to achieve the maximum asset benefit of the power grid.

  14. Reliability and agreement in student ratings of the class environment.

    Nelson, Peter M; Christ, Theodore J


    The current study estimated the reliability and agreement of student ratings of the classroom environment obtained using the Responsive Environmental Assessment for Classroom Teaching (REACT; Christ, Nelson, & Demers, 2012; Nelson, Demers, & Christ, 2014). Coefficient alpha, class-level reliability, and class agreement indices were evaluated as each index provides important information for different interpretations and uses of student rating scale data. Data for 84 classes across 29 teachers in a suburban middle school were sampled to derive reliability and agreement indices for the REACT subscales across 4 class sizes: 25, 20, 15, and 10. All participating teachers were White and a larger number of 6th-grade classes were included (42%) relative to 7th- (33%) or 8th- (23%) grade classes. Teachers were responsible for a variety of content areas, including language arts (26%), science (26%), math (20%), social studies (19%), communications (6%), and Spanish (3%). Coefficient alpha estimates were generally high across all subscales and class sizes (α = .70-.95); class-mean estimates were greatly impacted by the number of students sampled from each class, with class-level reliability values generally falling below .70 when class size was reduced from 25 to 20. Further, within-class student agreement varied widely across the REACT subscales (mean agreement = .41-.80). Although coefficient alpha and test-retest reliability are commonly reported in research with student rating scales, class-level reliability and agreement are not. The observed differences across coefficient alpha, class-level reliability, and agreement indices provide evidence for evaluating students' ratings of the class environment according to their intended use (e.g., differentiating between classes, class-level instructional decisions).
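Coefficient alpha, one of the indices evaluated above, is straightforward to compute from a student-by-item ratings matrix. A minimal sketch with made-up data (not the REACT dataset):

```python
# Cronbach's coefficient alpha for a ratings matrix
# (rows = respondents, columns = items):
#   alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
def coefficient_alpha(rows):
    k = len(rows[0])                       # number of items
    def var(xs):                           # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([r[i] for r in rows]) for i in range(k)]
    total_var = var([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

When every item gives the same score per respondent, the items are perfectly consistent and alpha reaches 1; as items diverge, the summed item variances approach the total variance and alpha falls.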

  15. Defining Requirements for Improved Photovoltaic System Reliability

    Maish, A.B.


    Reliable systems are an essential ingredient of any technology progressing toward commercial maturity and large-scale deployment. This paper defines reliability as meeting system functional requirements, and then develops a framework to understand and quantify photovoltaic system reliability based on initial and ongoing costs and system value. The core elements necessary to achieve reliable PV systems are reviewed. These include appropriate system design, satisfactory component reliability, and proper installation and servicing. Reliability status, key issues, and present needs in system reliability are summarized for four application sectors.

  16. Predicting Reliability of Tactical Network Using RBFNN

    WANG Xiao-kai; HOU Chao-zhen


    A description of the reliability evaluation of tactical networks is given, which reflects not only the non-reliability factors of nodes and links but also the network topological structure. On this basis, a reliability prediction model and its algorithms are put forward based on the radial basis function neural network (RBFNN) for the tactical network. This model captures the non-linear mapping relationship between the network topological structure, the node reliabilities, the link reliabilities, and the reliability of the network. Simulation results demonstrate the effectiveness of this method for reliability and connectivity prediction of tactical networks.

  17. Reliability of Wave Energy Converters

    Ambühl, Simon

    for welded structures at the Wavestar device including different control systems for harvesting energy from waves. In addition, a case study of different O&M strategies for WECs is discussed, and an example of reliability-based structural optimization of the Wavestar foundation is presented. The work performed... There are many different working principles for wave energy converters (WECs) which are used to produce electricity from waves. In order for WECs to become successful and more competitive to other renewable electricity sources, the consideration of the structural reliability of WECs is essential... Structural reliability considerations and optimizations impact operation and maintenance (O&M) costs as well as the initial investment costs. Furthermore, there is a control system for WEC applications which defines the harvested energy but also the loads onto the structure. Therefore, extreme loads but also fatigue loads...

  18. Boolean networks with reliable dynamics

    Peixoto, Tiago P


    We investigated the properties of Boolean networks that follow a given reliable trajectory in state space. A reliable trajectory is defined as a sequence of states which is independent of the order in which the nodes are updated. We explored numerically the topology, the update functions, and the state space structure of these networks, which we constructed using a minimum number of links and the simplest update functions. We found that the clustering coefficient is larger than in random networks, and that the probability distribution of three-node motifs is similar to that found in gene regulation networks. Among the update functions, only a subset of all possible functions occur, and they can be classified according to their probability. More homogeneous functions occur more often, leading to a dominance of canalyzing functions. Finally, we studied the entire state space of the networks. We observed that with increasing systems size, fixed points become more dominant, moving the networks close to the frozen...

  19. Reliability of Wave Energy Converters

    Ambühl, Simon

    There are many different working principles for wave energy converters (WECs) which are used to produce electricity from waves. In order for WECs to become successful and more competitive to other renewable electricity sources, the consideration of the structural reliability of WECs is essential... Structural reliability considerations and optimizations impact operation and maintenance (O&M) costs as well as the initial investment costs. Furthermore, there is a control system for WEC applications which defines the harvested energy but also the loads onto the structure. Therefore, extreme loads but also... of the Wavestar foundation is presented. The work performed in this thesis focuses on the Wavestar and WEPTOS WEC devices, which are only two working principles out of a large diversity. Therefore, in order to gain general statements and give advice for standards for structural WEC designs, more working principles...

  20. Fatigue reliability for LNG carrier

    Xiao Taoyun; Zhang Qin; Jin Wulei; Xu Shuai


    The procedure for reliability-based fatigue analysis of a membrane-type liquefied natural gas (LNG) carrier under wave loads is presented. The stress responses of the hotspots in regular waves with different wave heading angles and wave lengths are evaluated by global ship finite element method (FEM). Based on the probabilistic distribution function of the hotspots' short-term stress range using spectral-based analysis, the Weibull distribution is adopted and discussed for fitting the long-term probabilistic distribution of stress range. Based on linear cumulative damage theory, fatigue damage is characterized by an S-N relationship, and a limit state function is established. The structural fatigue damage behavior of several typical hotspots of the LNG midship section is clarified, and reliability analysis is performed. The presented results and conclusions can be of use in calibration for practical design and initial fatigue safety evaluation for membrane-type LNG carriers.

  1. Next Generation Reliable Transport Networks

    Zhang, Jiang

    ...of criticality and security, there are certain physical or logical segregation requirements between the avionic systems. Such segregations can be implemented on the proposed avionic networks with different hierarchies. In order to fulfill the segregation requirements, a tailored heuristic approach for solving... This thesis focuses on ensuring the reliability of transport networks and takes advantages and experiences from transport networks into networks for particular purposes. Firstly, the challenges of providing reliable multicast services on Multiprotocol Label Switching-Transport Profile (MPLS-TP) ring networks are addressed. Through the proposed protection structure and protection switching schemes, the recovery mechanism is enhanced in terms of recovery label consumption, operation simplicity, and fine traffic engineering granularity. Furthermore, the extensions for existing...

  2. Gearbox Reliability Collaborative Bearing Calibration

    van Dam, J.


    NREL has initiated the Gearbox Reliability Collaborative (GRC) to investigate the root causes of low wind turbine gearbox reliability. The GRC follows a multi-pronged approach based on a collaboration of manufacturers, owners, researchers, and consultants. The project combines analysis, field testing, dynamometer testing, condition monitoring, and the development and population of a gearbox failure database. At the core of the project are two 750 kW gearboxes that have been redesigned and rebuilt so that they are representative of the multi-megawatt gearbox topology currently used in the industry. These gearboxes are heavily instrumented and are tested in the field and on the dynamometer. This report discusses the bearing calibrations of the gearboxes.

  3. Assessing volume of accelerometry data for reliability in preschool children.

    Hinkley, Trina; O'Connell, Eoin; Okely, Anthony D; Crawford, David; Hesketh, Kylie; Salmon, Jo


    This study examines what volume of accelerometry data (h·d) is required to reliably estimate preschool children's physical activity and whether it is necessary to include weekday and weekend data. Accelerometry data from 493 to 799 (depending on wear time) preschool children from the Melbourne-based Healthy Active Preschool Years study were used. The percentage of wear time each child spent in total (light-vigorous) physical activity was the main outcome. Hourly increments of daily data were analyzed. t-tests, controlling for age and clustering by center of recruitment, assessed the differences between weekday and weekend physical activity. Intraclass correlation coefficients estimated reliability for an individual day. The Spearman-Brown prophecy formula estimated the number of days required to reach reliability estimates of 0.7, 0.8, and 0.9. The children spent a significantly greater percentage of time being physically active on weekends compared with weekdays regardless of the minimum number of hours included (t = 12.49-16.76, P < .001). 8 d of data were required to reach a reliability estimate of 0.7 with 10 or more hours of data per day; 3.3-3.4 d were required to meet the same reliability estimate for days with 7 h of data. Future studies should ensure they include the minimum amount of data (hours per day and number of days) identified in this study to meet at least a 0.7 reliability level and should report the level of reliability for their study. In addition to weekdays, at least one weekend day should be included in analyses to reliably estimate physical activity levels for preschool children.
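The Spearman-Brown prophecy calculation used in the study can be sketched directly; the single-day reliability value below is illustrative, not the study's ICC:

```python
# Spearman-Brown prophecy formula: reliability of the mean of k parallel
# measurements given the reliability r1 of a single measurement, and its
# inverse (how many days are needed to reach a target reliability).
def spearman_brown(r1: float, k: float) -> float:
    """Reliability of the average of k measurement days."""
    return k * r1 / (1 + (k - 1) * r1)

def days_needed(r1: float, target: float) -> float:
    """Number of days k required to reach the target reliability."""
    return target * (1 - r1) / (r1 * (1 - target))
```

For example, if one monitoring day has a reliability of 0.5, three days of data lift the reliability of the mean to 0.75, and inverting the formula recovers the same three days.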

  4. Storage Reliability of Reserve Batteries


    • Reserve batteries face environmental concerns, lack of business, and non-availability of some critical materials. • Lithium oxyhalides are the systems of choice: good... exhibit good corrosion resistance to neutral electrolytes (LiAlCl4 in thionyl chloride and sulfuryl chloride). • Using AlCl3 creates a much more corrosive... Jeff Swank and Allan Goldberg, Army Research Laboratory, Adelphi, MD, 301-394-3116.


    Gabriel BURLACU,


    Axial lenticular compensators are designed to take up longitudinal thermal expansion, shock, vibration, and noise, providing elastic connections for piping systems. In order for installations to have a long life, it is necessary that all elements, including lenticular compensators, have good reliability. This can be achieved through the manufacturing and assembly technology of the compensators, the material of the lenses, and maintenance of the compensator.

  6. Improved reliability of power modules

    Baker, Nick; Liserre, Marco; Dupont, Laurent


    Power electronic systems play an increasingly important role in providing high-efficiency power conversion for adjustable-speed drives, power-quality correction, renewable-energy systems, energy-storage systems, and electric vehicles. However, they are often presented with demanding operating env... temperature cycling conditions on the system. On the other hand, safety requirements in the aerospace and automotive industries place rigorous demands on reliability.

  7. Reliability Analysis of Sensor Networks

    JIN Yan; YANG Xiao-zong; WANG Ling


    To integrate the capacity of sensing, communication, computing, and actuating, one of the compelling technological advances of recent years has been the appearance of distributed wireless sensor networks (DSNs) for information-gathering tasks. In order to save energy, multi-hop routing between the sensor nodes and the sink node is necessary because of limited resources. In addition, unpredictable environmental factors make the sensor nodes unreliable. In this paper, the reliability of routing designed for sensor networks and some dependability issues of DSNs, such as MTTF (mean time to failure) and the probability of connectivity between the sensor nodes and the sink node, are analyzed. Unfortunately, an accurate result cannot be obtained for arbitrary network topologies, as this is a #P-hard problem, so reliability analysis of restricted, clustering-based topologies is given. The method proposed in this paper shows a constructive idea about how to place energy-constrained sensor nodes in the network efficiently from the perspective of reliability.
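For very small networks, the connectivity probability mentioned above can still be computed exactly by brute-force enumeration of edge states, which also illustrates why the general problem is #P-hard (the state space doubles with every edge). The topology and edge reliabilities below are invented for illustration:

```python
from itertools import product

# Toy 4-node network: (node_a, node_b, probability edge is up).
EDGES = [(0, 1, 0.9), (1, 3, 0.9), (0, 2, 0.8), (2, 3, 0.8), (1, 2, 0.5)]

def connected(up_edges, src=0, dst=3):
    """Depth-first search over the surviving edges."""
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        for a, b in up_edges:
            for x, y in ((a, b), (b, a)):
                if x == u and y not in seen:
                    seen.add(y)
                    stack.append(y)
    return dst in seen

def two_terminal_reliability():
    """Sum P(state) over all 2**|E| edge states where src reaches dst."""
    total = 0.0
    for states in product([0, 1], repeat=len(EDGES)):
        p, up = 1.0, []
        for (a, b, pe), s in zip(EDGES, states):
            p *= pe if s else 1 - pe
            if s:
                up.append((a, b))
        if connected(up):
            total += p
    return total
```

This network is the classic bridge configuration, so the exact answer can be cross-checked by conditioning on the bridge edge (0.5 * 0.9604 + 0.5 * 0.9316 = 0.946).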




    Based on data on university debt and revenue, the paper measures and calculates the proportion of debt that universities can currently assume and the present risk conditions, and reveals the relationship between repayment deadline and repayment ratio.

  9. A Two-Unit Cold Standby Repairable System with One Replaceable Repair Facility and Delay Repair:Some Reliability Problems

    WEI Ying-yuan; TANG Ying-hui


    This paper considers a two-unit cold standby repairable system with two identical units, a replaceable repair facility, and delay repair. The failure time of each unit is assumed to follow an exponential distribution, and the repair time and delay time of a failed unit are assumed to follow arbitrary distributions, whereas the failure and replacement time distributions of the repair facility are exponential and arbitrary, respectively. By using Markov renewal process theory, some primary reliability quantities of the system are obtained.

  10. Quantitative metal magnetic memory reliability modeling for welded joints

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng


    Metal magnetic memory (MMM) testing has been widely used to detect welded joints. However, load levels, environmental magnetic fields, and measurement noise make the MMM data dispersive and bring difficulty to quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens are tested along the longitudinal and horizontal lines by a TSC-2M-8 instrument in tensile fatigue experiments. X-ray testing is carried out synchronously to verify the MMM results. It is found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of K_vs is investigated, which shows that K_vs obeys a Gaussian distribution. So K_vs is a suitable MMM parameter to establish a reliability model of welded joints. At last, an original quantitative MMM reliability model is presented based on improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases with decreasing residual life ratio T, and the maximal error between the predicted reliability degree R_1 and the verified reliability degree R_2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
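The stress-strength interference idea underlying the model can be illustrated for the common case of normally distributed stress and strength, where the reliability degree has a closed form, R = Phi((mu_S - mu_L) / sqrt(sigma_S^2 + sigma_L^2)). The parameter values below are invented for illustration, not the welded-joint data:

```python
import math

def phi(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def interference_reliability(mu_s, sig_s, mu_l, sig_l):
    """P(strength > stress) for independent normal strength and stress."""
    z = (mu_s - mu_l) / math.sqrt(sig_s ** 2 + sig_l ** 2)
    return phi(z)
```

When the strength and stress distributions have equal means, their interference gives R = 0.5; as the mean strength margin grows relative to the combined scatter, R approaches 1.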

  11. Optimization of reliability allocation strategies through use of genetic algorithms

    Campbell, J.E.; Painton, L.A.


    This paper examines a novel optimization technique called genetic algorithms and its application to the optimization of reliability allocation strategies. Reliability allocation should occur in the initial stages of design, when the objective is to determine an optimal breakdown or allocation of reliability to certain components or subassemblies in order to meet system specifications. The reliability allocation optimization is applied to the design of a cluster tool, a highly complex piece of equipment used in semiconductor manufacturing. The problem formulation is presented, including decision variables, performance measures and constraints, and genetic algorithm parameters. Piecewise "effort curves" specifying the amount of effort required to achieve a certain level of reliability for each component or subassembly are defined. The genetic algorithm evolves or picks those combinations of "effort" or reliability levels for each component which optimize the objective of maximizing Mean Time Between Failures while staying within a budget. The results show that the genetic algorithm is very efficient at finding a set of robust solutions. A time history of the optimization is presented, along with histograms of the solution-space fitness, MTBF, and cost for comparative purposes.
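The effort-level allocation described above can be caricatured in a few dozen lines. Everything below (component count, discrete effort levels, costs, budget) is an invented stand-in for the paper's cluster-tool model, and the fitness is simple series-system reliability rather than MTBF:

```python
import random

# Each component can be bought at one of three "effort" levels:
# (component reliability, cost). Maximize series-system reliability
# subject to a total budget; infeasible individuals score zero.
LEVELS = [(0.90, 1.0), (0.95, 2.0), (0.99, 4.0)]
N_COMP, BUDGET = 5, 12.0

def fitness(ind):
    cost = sum(LEVELS[g][1] for g in ind)
    if cost > BUDGET:
        return 0.0                         # budget constraint violated
    rel = 1.0
    for g in ind:
        rel *= LEVELS[g][0]                # series system
    return rel

def evolve(pop_size=40, gens=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(LEVELS)) for _ in range(N_COMP)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]       # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, N_COMP) # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:         # occasional mutation
                child[rng.randrange(N_COMP)] = rng.randrange(len(LEVELS))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

Because the elite half is carried over unchanged each generation, the best fitness never decreases, mirroring the steady convergence the paper reports in its optimization time history.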

  12. Operator adaptation to changes in system reliability under adaptable automation.

    Chavaillaz, Alain; Sauer, Juergen


    This experiment examined how operators coped with a change in system reliability between training and testing. Forty participants were trained for 3 h on a complex process control simulation modelling six levels of automation (LOA). In training, participants either experienced a high- (100%) or low-reliability system (50%). The impact of training experience on operator behaviour was examined during a 2.5 h testing session, in which participants either experienced a high- (100%) or low-reliability system (60%). The results showed that most operators did not often switch between LOA. Most chose an LOA that relieved them of most tasks but maintained their decision authority. Training experience did not have a strong impact on the outcome measures (e.g. performance, complacency). Low system reliability led to decreased performance and self-confidence. Furthermore, complacency was observed under high system reliability. Overall, the findings suggest benefits of adaptable automation because it accommodates different operator preferences for LOA. Practitioner Summary: The present research shows that operators can adapt to changes in system reliability between training and testing sessions. Furthermore, it provides evidence that each operator has his/her preferred automation level. Since this preference varies strongly between operators, adaptable automation seems to be suitable to accommodate these large differences.

  13. Crystalline-silicon reliability lessons for thin-film modules

    Ross, R. G., Jr.


    The reliability of crystalline silicon modules has been brought to a high level with lifetimes approaching 20 years, and excellent industry credibility and user satisfaction. The transition from crystalline modules to thin film modules is comparable to the transition from discrete transistors to integrated circuits. New cell materials and monolithic structures will require new device processing techniques, but the package function and design will evolve to a lesser extent. Although there will be new encapsulants optimized to take advantage of the mechanical flexibility and low temperature processing features of thin films, the reliability and life degradation stresses and mechanisms will remain mostly unchanged. Key reliability technologies in common between crystalline and thin film modules include hot spot heating, galvanic and electrochemical corrosion, hail impact stresses, glass breakage, mechanical fatigue, photothermal degradation of encapsulants, operating temperature, moisture sorption, circuit design strategies, product safety issues, and the process required to achieve a reliable product from a laboratory prototype.

  14. Structural Reliability Aspects in Design of Wind Turbines

    Sørensen, John Dalsgaard


    Reliability assessment, optimal design and optimal operation and maintenance of wind turbines are an area of significant interest for the fast growing wind turbine industry for sustainable production of energy. Offshore wind turbines in wind farms give special problems due to wake effects inside the farm. Reliability analysis and optimization of wind turbines require that the special conditions for wind turbine operation are taken into account. Control of the blades implies load reductions for large wind speeds and parking for high wind speeds. In this paper basic structural failure modes for wind turbines are described. Further, aspects are presented related to reliability-based optimization of wind turbines, assessment of optimal reliability level and operation and maintenance.

  15. Reliability Architecture for Collaborative Robot Control Systems in Complex Environments

    Liang Tang


    Many different kinds of robot systems have been successfully deployed in complex environments, and research into collaborative control systems between different robots, which can be seen as hybrid internetware safety-critical systems, has become essential. This paper discusses ways to construct a robust and secure reliability architecture for collaborative robot control systems in complex environments. First, an indication system for evaluating the real-time reliability of hybrid internetware systems is established. Next, a dynamic collaborative reliability model for components of hybrid internetware systems is proposed. Then, a reliable, adaptive and evolutionary computation method for hybrid internetware systems is proposed, and a timing-consistency verification solution for collaborative robot control internetware applications is studied. Finally, a multi-level security model supporting dynamic resource allocation is established.

  16. Failure database and tools for wind turbine availability and reliability analyses. The application of reliability data for selected wind turbines

    Kozine, I.; Christensen, P.; Winther-Jensen, M.


    The objective of this project was to develop and establish a database for collecting reliability and reliability-related data, for assessing the reliability of wind turbine (WTB) components, subsystems and whole turbines, and for assessing wind turbine availability while ranking the contributions at both the component and system levels. The project resulted in a software package combining a failure database with programs for predicting WTB availability and the reliability of all components and systems, especially the safety system. The report describes the theoretical foundation of the reliability and availability analyses, the development of the WTB reliability models, and the features of the database and software developed. The project comprises analysis of WTBs NM 600/44, 600/48, 750/44 and 750/48, all of which have similar safety systems. The database was built with the Microsoft Access Database Management System; the software for reliability and availability assessments was created with Visual Basic.
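    The kind of availability ranking such a database supports can be illustrated with the standard steady-state formula A = MTBF / (MTBF + MTTR), combined in series across components. The component names and failure data below are hypothetical, not taken from the project's database.

    ```python
    def availability(mtbf_hours, mttr_hours):
        """Steady-state availability of a repairable component."""
        return mtbf_hours / (mtbf_hours + mttr_hours)

    # Hypothetical per-component failure data for a turbine: (MTBF, MTTR) in hours.
    components = {
        "gearbox":   (4000.0, 120.0),
        "generator": (6000.0, 48.0),
        "safety":    (20000.0, 8.0),
    }

    system_A = 1.0
    for name, (mtbf, mttr) in components.items():
        a = availability(mtbf, mttr)
        system_A *= a   # series logic: every component must be up
        print(f"{name}: {a:.4f}")
    print(f"system: {system_A:.4f}")
    ```

    Comparing each component's availability against the series product is the simplest way to rank contributions: the component with the lowest individual availability (here the one with the long repair time) dominates system downtime.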

  17. Structural reliability analysis and reliability-based design optimization: Recent advances

    Qiu, ZhiPing; Huang, Ren; Wang, XiaoJun; Qi, WuChao


    We review recent research activities on structural reliability analysis, reliability-based design optimization (RBDO) and applications in complex engineering structural design. Several novel uncertainty propagation methods and reliability models, which are the basis of the reliability assessment, are given. In addition, recent developments on reliability evaluation and sensitivity analysis are highlighted as well as implementation strategies for RBDO.
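    A basic building block of the reliability assessment methods reviewed here is estimating the failure probability of a limit state g = R - S. A crude Monte Carlo sketch follows; the distributions and parameters are purely illustrative, and practical RBDO codes typically use FORM/SORM or variance-reduced sampling instead.

    ```python
    import math
    import random
    from statistics import NormalDist

    random.seed(0)

    def failure_probability(n=200_000):
        """Crude Monte Carlo estimate of P(g < 0) for the limit state
        g = R - S, with lognormal resistance R and Gaussian load S."""
        failures = 0
        for _ in range(n):
            R = math.exp(random.gauss(3.0, 0.1))   # resistance ~ LogNormal
            S = random.gauss(12.0, 2.0)            # load ~ Normal
            if R < S:                              # g = R - S < 0 means failure
                failures += 1
        return failures / n

    pf = failure_probability()
    beta = -NormalDist().inv_cdf(pf)  # corresponding reliability index
    print(f"pf ~ {pf:.4f}, beta ~ {beta:.2f}")
    ```

    The reliability index beta is the usual design-code currency; sensitivity analysis, as mentioned above, then asks how pf (or beta) changes as the distribution parameters move.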

  18. Crystalline-silicon photovoltaics summary module design and reliability

    Ross, R. G., Jr.

    The evolution of the design and reliability of solar modules is described. Design requirements involved 14 different considerations, including residential building and material electrical codes, wind loading, hail impact, operating temperature levels, module flammability, and interfaces for both the array structure and the operation of the system. Reliability research involved diverse investigations, including glass-fracture strength, soiling levels, electrochemical corrosion, and bypass-diode qualification tests. Based on these internationally recognized studies, performance assessments, and failure analyses, the Flat-plate Solar Array Project, over its 11-year duration, served to nurture the development of 45 different solar module designs from 15 PV manufacturers.

  19. Technical information report: Plasma melter operation, reliability, and maintenance analysis

    Hendrickson, D.W. [ed.]


    This document provides a technical report on the operability, reliability, and maintenance of a plasma melter for low-level waste vitrification, in support of the Hanford Tank Waste Remediation System (TWRS) Low-Level Waste (LLW) Vitrification Program. A process description is provided for a design that minimizes maintenance and downtime; it includes material and energy balances, equipment sizes and arrangement, startup/operation/maintenance/shutdown cycle descriptions, and the basis for scale-up to a 200 metric ton/day production facility. Operational requirements are provided, including utilities, feeds, labor, and maintenance. Equipment reliability estimates and maintenance requirements are provided, including a list of failure modes, responses, and consequences.

  20. Decision theory in structural reliability

    Thomas, J. M.; Hanagud, S.; Hawk, J. D.


    Some fundamentals of reliability analysis as applicable to aerospace structures are reviewed, and the concept of a test option is introduced. A decision methodology, based on statistical decision theory, is developed for determining the most cost-effective design factor and method of testing for a given structural assembly. The method is applied to several Saturn V and Space Shuttle structural assemblies as examples. It is observed that the cost and weight features of the design have a significant effect on the optimum decision.
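    The core of such a decision methodology can be illustrated as a minimum-expected-cost choice over test options. The option names, test costs, post-test failure probabilities, and consequence cost below are all hypothetical, standing in for the paper's statistical decision analysis.

    ```python
    # Hypothetical decision table: each test option has an up-front cost and
    # the failure probability the assembly would carry after adopting it.
    options = {
        "no test":        (0.0,   0.020),
        "proof test":     (50.0,  0.005),
        "full qual test": (200.0, 0.001),
    }
    FAILURE_COST = 10_000.0  # consequence cost of a structural failure

    def expected_cost(test_cost, p_fail):
        """Expected total cost: testing outlay plus risk-weighted failure cost."""
        return test_cost + p_fail * FAILURE_COST

    best = min(options, key=lambda k: expected_cost(*options[k]))
    for name, (c, p) in options.items():
        print(f"{name}: {expected_cost(c, p):.0f}")
    print("best:", best)
    ```

    With these numbers the intermediate option wins: the cheap test buys most of the risk reduction, while the exhaustive test costs more than the residual risk it removes. This is exactly the cost-versus-weight-versus-risk trade-off the abstract observes for the Saturn V and Space Shuttle assemblies.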