WorldWideScience

Sample records for time-variant reliability applications

  1. A new approach for reliability analysis with time-variant performance characteristics

    International Nuclear Information System (INIS)

    Wang, Zequn; Wang, Pingfeng

    2013-01-01

    Reliability represents the safety level in industrial practice and may vary over time owing to time-variant operating conditions and component deterioration throughout a product's life-cycle. Thus, the capability to perform time-variant reliability analysis is of vital importance in practical engineering applications. This paper presents a new approach, referred to as the nested extreme response surface (NERS), that can efficiently tackle the time-dependency issue in time-variant reliability analysis and enables such problems to be solved by integrating easily with advanced time-independent tools. The key to the NERS approach is to build a nested response surface of the time corresponding to the extreme value of the limit state function by employing a Kriging model. To obtain the data for the Kriging model, the efficient global optimization technique is integrated with NERS to extract the extreme time responses of the limit state function for any given system input. An adaptive response prediction and model maturation mechanism is developed based on the mean square error (MSE) to concurrently improve the accuracy and computational efficiency of the proposed approach. With the nested response surface of time, the time-variant reliability analysis can be converted into a time-independent reliability analysis, and existing advanced reliability analysis methods can be used. Three case studies are used to demonstrate the efficiency and accuracy of the NERS approach.
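
    As a concrete (and purely illustrative) reading of the nested idea, the sketch below replaces the paper's Kriging/EGO machinery with a direct inner optimization over time; the limit state g(x, t) and its parameters are invented for the example. The inner call extracts the extreme response over [0, T] for each sampled input, after which the outer failure-probability estimate is an ordinary time-independent Monte Carlo computation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

T = 5.0  # time interval of interest, [0, T]

def g(x, t):
    """Hypothetical time-variant limit state; failure when g < 0."""
    return 2.0 + 0.2 * x[0] - 0.1 * x[1] * t - 0.05 * t ** 2

def extreme_response(x):
    """Inner loop: extract the worst (minimum) response over [0, T]."""
    res = minimize_scalar(lambda t: g(x, t), bounds=(0.0, T), method="bounded")
    return res.fun

# Outer loop: with G(x) = min_t g(x, t) in hand, the problem is
# time-independent, so plain Monte Carlo (or FORM/SORM) applies.
rng = np.random.default_rng(0)
samples = rng.normal(size=(5000, 2))       # two standard-normal inputs
G = np.array([extreme_response(x) for x in samples])
print("P(failure over [0, T]) ~", (G < 0.0).mean())
```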

  2. Time-variant reliability assessment through equivalent stochastic process transformation

    International Nuclear Information System (INIS)

    Wang, Zequn; Chen, Wei

    2016-01-01

    Time-variant reliability measures the probability that an engineering system successfully performs intended functions over a certain period of time under various sources of uncertainty. In practice, it is computationally prohibitive to propagate uncertainty in time-variant reliability assessment based on expensive or complex numerical models. This paper presents an equivalent stochastic process transformation approach for cost-effective prediction of reliability deterioration over the life cycle of an engineering system. To reduce the high dimensionality, a time-independent reliability model is developed by translating random processes and time parameters into random parameters, in order to equivalently cover all potential failures that may occur during the time interval of interest. With the time-independent reliability model, an instantaneous failure surface is attained by using a Kriging-based surrogate model to identify all potential failure events. To enhance the efficacy of failure surface identification, a maximum confidence enhancement method is utilized to update the Kriging model sequentially. Then, the time-variant reliability is approximated using Monte Carlo simulations of the Kriging model, where system failures over a time interval are predicted by the instantaneous failure surface. The results of two case studies demonstrate that the proposed approach is able to accurately predict the time evolution of system reliability while requiring much less computational effort compared with the existing analytical approach. - Highlights: • Developed a new approach for time-variant reliability analysis. • Proposed a novel stochastic process transformation procedure to reduce the dimensionality. • Employed Kriging models with a confidence-based adaptive sampling scheme to enhance computational efficiency. • The approach is effective for handling random processes in time-variant reliability analysis. • Two case studies are used to demonstrate the efficacy.
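
    A minimal sketch of the transformation idea, under invented models: time is treated as one more random input, a one-shot Kriging (Gaussian process) surrogate is fitted, and Monte Carlo runs on the surrogate. The sequential maximum-confidence-enhancement updating from the paper is deliberately omitted here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
T = 10.0

def g(x1, x2, t):
    # Hypothetical limit state with a degrading resistance term; failure if g < 0.
    return 4.0 + x1 - 0.5 * x2 * np.sqrt(t) - 0.15 * t

# Treat t ~ U(0, T) like any other random input -> time-independent model.
X_train = np.column_stack([
    rng.normal(size=200), rng.normal(size=200), rng.uniform(0, T, size=200)])
y_train = g(*X_train.T)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 1.0, 2.0]),
                              normalize_y=True).fit(X_train, y_train)

# Monte Carlo on the surrogate: a trajectory fails on [0, T] if g < 0 for
# some t, approximated here by the surrogate minimum over a time grid.
n, ts = 2000, np.linspace(0.0, T, 50)
x = rng.normal(size=(n, 2))
worst = np.full(n, np.inf)
for t in ts:
    pred = gp.predict(np.column_stack([x, np.full(n, t)]))
    worst = np.minimum(worst, pred)
print("estimated time-variant P_f ~", np.mean(worst < 0.0))
```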

  3. Eco-reliable path finding in time-variant and stochastic networks

    International Nuclear Information System (INIS)

    Li, Wenjie; Yang, Lixing; Wang, Li; Zhou, Xuesong; Liu, Ronghui; Gao, Ziyou

    2017-01-01

    This paper addresses a route guidance problem for finding the most eco-reliable path in time-variant and stochastic networks such that travelers can arrive at the destination with the maximum on-time probability while meeting vehicle emission standards imposed by government regulators. To characterize the dynamics and randomness of transportation networks, the link travel times and emissions are assumed to be time-variant random variables correlated over the entire network. A 0–1 integer mathematical programming model is formulated to minimize the probability of late arrival by simultaneously considering the least expected emission constraint. Using the Lagrangian relaxation approach, the primal model is relaxed into a dualized model which is further decomposed into two simple sub-problems. A sub-gradient method is developed to reduce gaps between upper and lower bounds. Three sets of numerical experiments are tested to demonstrate the efficiency and performance of our proposed model and algorithm. - Highlights: • The most eco-reliable path is defined in time-variant and stochastic networks. • The model is developed with on-time arrival probability and emission constraints. • The sub-gradient and label correcting algorithm are integrated to solve the model. • Numerical experiments demonstrate the effectiveness of developed approaches.
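
    The relaxation loop can be miniaturized on a hand-made network. In the sketch below, all link data are hypothetical, exhaustive path enumeration stands in for the label-correcting algorithm, and the emission constraint is dualized with a diminishing-step sub-gradient update of the multiplier.

```python
import itertools

# Each link carries (late-arrival "cost" ~ -log on-time probability, emission).
links = {("s", "a"): (0.20, 3.0), ("a", "d"): (0.18, 2.5),
         ("s", "b"): (0.11, 6.0), ("b", "d"): (0.15, 1.5),
         ("a", "b"): (0.05, 1.0)}
EMISSION_CAP = 6.0

# Enumerate s-d paths (a label-correcting algorithm would do this at scale).
mids = [m for n in range(3) for m in itertools.permutations(["a", "b"], n)]
paths = [("s",) + m + ("d",) for m in mids
         if all(e in links for e in zip(("s",) + m, m + ("d",)))]

def score(p, lam):
    edges = list(zip(p, p[1:]))
    c = sum(links[ed][0] for ed in edges)       # late-arrival cost of path p
    em = sum(links[ed][1] for ed in edges)      # expected emission of path p
    return (c + lam * (em - EMISSION_CAP), c, em, p)

lam, best = 0.0, None
for k in range(1, 101):                         # sub-gradient iterations
    dual, c, em, p = min(score(q, lam) for q in paths)
    if em <= EMISSION_CAP and (best is None or c < best[0]):
        best = (c, p)                           # best feasible path so far
    lam = max(0.0, lam + (0.05 / k) * (em - EMISSION_CAP))
print("eco-reliable path:", best[1], "multiplier:", round(lam, 3))
```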

  4. Hybrid time-variant reliability estimation for active control structures under aleatory and epistemic uncertainties

    Science.gov (United States)

    Wang, Lei; Xiong, Chuang; Wang, Xiaojun; Li, Yunlong; Xu, Menghui

    2018-04-01

    Considering that multi-source uncertainties arising from the inherent nature of a system as well as from the external environment are unavoidable and severely affect controller performance, dynamic safety assessment with high confidence is of great significance for scientists and engineers. In view of this, uncertainty quantification analysis and time-variant reliability estimation for closed-loop control problems are conducted in this study under a mixture of random, interval, and convex uncertainties. By combining the state-space transformation and the natural set expansion, the boundary laws of controlled response histories are first confirmed with specific implementation of the random items. For nonlinear cases, the collocation set methodology and the fourth-order Runge-Kutta algorithm are introduced as well. Inspired by the first-passage model in random process theory as well as by static probabilistic reliability ideas, a new definition of the hybrid time-variant reliability measurement is provided for vibration control systems, and the related solution details are further expounded. Two engineering examples are eventually presented to demonstrate the validity and applicability of the methodology developed.

  5. A Time-Variant Reliability Model for Copper Bending Pipe under Seawater-Active Corrosion Based on the Stochastic Degradation Process

    Directory of Open Access Journals (Sweden)

    Bo Sun

    2018-03-01

    In the degradation process, the randomness and multiplicity of variables are difficult to describe by mathematical models. However, they are common in engineering and cannot be neglected, so it is necessary to study this issue in depth. In this paper, the copper bending pipe in seawater piping systems is taken as the analysis object, and the time-variant reliability is calculated by solving the interference of limit strength and maximum stress. We performed degradation and tensile experiments on the copper material and obtained the limit strength at each time point. In addition, degradation experiments were performed on the copper bending pipe, the thickness at each time point was obtained, and the response of maximum stress was then calculated by simulation. Further, with the help of a Monte Carlo method we propose, the time-variant reliability of the copper bending pipe was calculated based on the stochastic degradation process and interference theory. Compared with traditional methods and verified against maintenance records, the results show that the time-variant reliability model based on the stochastic degradation process proposed in this paper has better applicability in reliability analysis, and it can predict the replacement cycle of copper bending pipe under seawater-active corrosion more conveniently and accurately.
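
    The interference calculation admits a compact Monte Carlo sketch. The gamma-process wall-loss and lognormal stress parameters below are invented stand-ins for the measured degradation data used in the paper; because strength only decreases here, surviving to year t is equivalent to satisfying the strength-stress inequality at year t.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1, 21)                 # evaluation times (years)
n = 100_000                              # Monte Carlo sample paths

# Limit strength degrades via cumulative gamma-distributed increments
# (a common model for monotone corrosion loss); stress is lognormal.
strength0 = rng.normal(320.0, 15.0, size=n)            # MPa, initial strength
loss = rng.gamma(shape=2.0, scale=2.5, size=(n, len(years))).cumsum(axis=1)
stress = rng.lognormal(mean=np.log(200.0), sigma=0.08, size=n)

# Interference: a path has failed by year t once strength(t) <= stress.
reliability = [(strength0 - loss[:, i] > stress).mean()
               for i in range(len(years))]
for t, r in zip(years[::5], reliability[::5]):
    print(f"R({t:2d} yr) = {r:.4f}")
```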

  6. Relevance of control theory to design and maintenance problems in time-variant reliability: The case of stochastic viability

    International Nuclear Information System (INIS)

    Rougé, Charles; Mathias, Jean-Denis; Deffuant, Guillaume

    2014-01-01

    The goal of this paper is twofold: (1) to show that time-variant reliability and a branch of control theory called stochastic viability address similar problems with different points of view, and (2) to demonstrate the relevance of concepts and methods from stochastic viability in reliability problems. On the one hand, reliability aims at evaluating the probability of failure of a system subjected to uncertainty and stochasticity. On the other hand, viability aims at maintaining a controlled dynamical system within a survival set. When the dynamical system is stochastic, this work shows that a viability problem belongs to a specific class of design and maintenance problems in time-variant reliability. Dynamic programming, which is used for solving Markovian stochastic viability problems, then yields the set of design states for which there exists a maintenance strategy which guarantees reliability with a confidence level β for a given period of time T. In addition, it leads to a straightforward computation of the date of the first outcrossing, indicating when the system is most likely to fail. We illustrate this approach with a simple example of population dynamics, including a case where load increases with time. - Highlights: • Time-variant reliability tools cannot devise complex maintenance strategies. • Stochastic viability is a control theory that computes a probability of failure. • Some design and maintenance problems are stochastic viability problems. • Used in viability, dynamic programming can find reliable maintenance actions. • Confronting reliability and control theories such as viability is promising
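
    A minimal dynamic-programming sketch of the viability computation, using an invented logistic population model with harvest controls: backward induction yields, for each design state, the best achievable probability of remaining in the survival set over the horizon, and the β-reliable design states are read off at the end.

```python
import numpy as np

xs = np.linspace(0.0, 10.0, 101)          # discretized population states
us = np.array([0.0, 0.5, 1.0])            # control (harvest) options
ws = np.array([-0.6, 0.0, 0.6])           # equally likely noise outcomes
K = (xs >= 2.0) & (xs <= 9.0)             # survival set
T = 25                                    # horizon

def step(x, u, w):
    """Toy logistic population dynamics with harvest u and noise w."""
    return x + 0.4 * x * (1.0 - x / 8.0) - u + w

V = K.astype(float)                       # V_T(x) = 1 inside the survival set
for _ in range(T):
    Vnew = np.zeros_like(V)
    for i, x in enumerate(xs):
        if not K[i]:
            continue                      # leaving K at any time is failure
        nxt = np.clip(step(x, us[:, None], ws[None, :]), xs[0], xs[-1])
        idx = np.abs(nxt[..., None] - xs).argmin(axis=-1)  # nearest grid node
        Vnew[i] = V[idx].mean(axis=1).max()    # max over u of E_w[V_next]
    V = Vnew

beta = 0.9                                # required confidence level
print("design states with a maintenance strategy guaranteeing beta:",
      xs[V >= beta])
```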

  7. Time-variant flexural reliability of RC beams with externally bonded CFRP under combined fatigue-corrosion actions

    International Nuclear Information System (INIS)

    Bigaud, David; Ali, Osama

    2014-01-01

    The time-variant reliability of RC highway bridges strengthened with carbon fibre reinforced polymer (CFRP) laminates, under four possible competing damage modes (concrete crushing, steel rupture after yielding, CFRP rupture and FRP plate debonding) and three degradation factors, is analyzed in terms of the reliability index β using FORM. The first degradation factor is chloride-attack corrosion, which induces reduction in steel area and concrete cover cracking at characteristic key times (corrosion initiation, severe surface cover cracking). The second degradation factor considered is fatigue, which leads to damage in concrete and steel rebar. Interaction between corrosion and fatigue crack growth in steel reinforcing bars is implemented. The third degradation phenomenon is the deterioration of CFRP properties due to aging. Considering these three degradation factors, the time-dependent flexural reliability profile of a typical simply supported 15 m-span intermediate girder of an RC highway bridge is constructed under various traffic volumes and under different corrosion environments. The bridge design options follow AASHTO-LRFD specifications. Results of the study have shown that the reliability is very sensitive to factors governing the corrosion. Concrete damage due to fatigue slightly affects the reliability profile of the non-strengthened section, while service life after strengthening is strongly related to fatigue damage in concrete. - Highlights: • We propose a method to follow the time-variant reliability of strengthened RC beams. • We consider multiple competing failure modes of CFRP-strengthened RC beams. • We consider combined degradation mechanisms (corrosion, fatigue, ageing of CFRP)
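
    For readers unfamiliar with the β computation itself, the fragment below is a generic FORM sketch (the Hasofer-Lind/Rackwitz-Fiessler iteration) applied to a hypothetical degrading resistance; none of the bridge-specific corrosion, fatigue, or CFRP models above are reproduced.

```python
import numpy as np

def hl_rf(grad_g, g, tol=1e-8, it=100):
    """FORM via the Hasofer-Lind/Rackwitz-Fiessler iteration in standard
    normal space; returns the reliability index beta = ||u*||."""
    u = np.zeros(2)
    for _ in range(it):
        gv, gr = g(u), grad_g(u)
        u_new = (gr @ u - gv) * gr / (gr @ gr)
        if np.linalg.norm(u_new - u) < tol:
            break
        u = u_new
    return np.linalg.norm(u)

# Limit state g = R(t) - S with normal resistance and load, mapped to
# standard normals u = (u_R, u_S); all parameters are invented.
for t in (0, 10, 20, 30, 40, 50):
    mR = 5000.0 * np.exp(-0.004 * t)     # mean resistance decays with time
    sR, mS, sS = 400.0, 3000.0, 300.0
    g = lambda u: (mR + sR * u[0]) - (mS + sS * u[1])
    grad = lambda u: np.array([sR, -sS])
    print(f"t = {t:2d} yr  beta = {hl_rf(grad, g):.3f}")
```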

  8. New approaches for the reliability-oriented structural optimization considering time-variant aspects; Neue Ansaetze fuer die zuverlaessigkeitsorientierte Strukturoptimierung unter Beachtung zeitvarianter Aspekte

    Energy Technology Data Exchange (ETDEWEB)

    Kuschel, N.

    2000-07-01

    The optimization of structures with respect to cost, weight or performance is a well-known application of nonlinear optimization. However, reliability-based structural optimization has been the subject of only very few studies. The approaches suggested up to now have been unsatisfactory regarding their general applicability or their ease of use. The objective of this thesis is the development of general approaches to solve both optimization problems: the minimization of cost subject to reliability constraints, and the maximization of reliability under cost constraints. The extended approach of a one-level method is introduced in detail for time-invariant problems. Here, the reliability of the structure is analysed in the framework of the First-Order Reliability Method (FORM). The use of time-variant reliability analysis is necessary for a realistic modelling of many practical problems. Therefore, several generalizations of the new approaches are derived for time-variant reliability-based structural optimization. Some important properties of the optimization problems are proved. In addition, some interesting extensions of the one-level method, for example the cost optimization of structural series systems and cost optimization in the framework of the Second-Order Reliability Method (SORM), are presented in the thesis. (orig.)

  9. Time variant layer control in atmospheric pressure chemical vapor deposition based growth of graphene

    KAUST Repository

    Qaisi, Ramy M.; Smith, Casey; Hussain, Muhammad Mustafa

    2013-01-01

    Graphene is a semi-metallic, transparent, atomic-crystal-structure material which is promising for its high mobility, strength and transparency - potentially applicable for radio frequency (RF) circuitry and energy harvesting and storage applications. Uniform (same number of layers), continuous (not torn or discontinuous), large-area (100 mm to 200 mm wafer scale), low-cost, reliable growth is the foremost challenge for its commercialization prospects. We show time-variant uniform (layer-controlled) growth of bi- to multi-layer graphene using an atmospheric pressure chemical vapor deposition system. We use Raman spectroscopy for physical characterization, supported by electrical property analysis. © 2013 IEEE.

  10. Reliable gain-scheduled control of discrete-time systems and its application to CSTR model

    Science.gov (United States)

    Sakthivel, R.; Selvi, S.; Mathiyalagan, K.; Shi, Y.

    2016-10-01

    This paper is focused on reliable gain-scheduled controller design for a class of discrete-time systems with randomly occurring nonlinearities and actuator faults. Further, the nonlinearity in the system model is assumed to occur randomly according to a Bernoulli distribution with a time-varying probability that is measurable in real time. The main purpose of this paper is to design a gain-scheduled controller, by implementing a probability-dependent Lyapunov function and a linear matrix inequality (LMI) approach, such that the closed-loop discrete-time system is stochastically stable for all admissible randomly occurring nonlinearities. The existence conditions for the reliable controller are formulated in terms of LMI constraints. Finally, the proposed reliable gain-scheduled control scheme is applied to a continuously stirred tank reactor model to demonstrate the effectiveness and applicability of the proposed design technique.

  11. VIPER: a web application for rapid expert review of variant calls.

    Science.gov (United States)

    Wöste, Marius; Dugas, Martin

    2018-01-15

    With the rapid development of next-generation sequencing, cost and time requirements for genomic sequencing are decreasing, enabling applications in many areas such as cancer research. Many tools have been developed to analyze genomic variation, ranging from single nucleotide variants to whole chromosomal aberrations. As sequencing throughput increases, the number of variants called by such tools also grows. The manual inspection often employed for such calls is thus becoming a time-consuming procedure. We developed the Variant InsPector and Expert Rating tool (VIPER) to speed up this process by integrating the Integrative Genomics Viewer into a web application. Analysts can then quickly iterate through variants, apply filters and make decisions based on the generated images and variant metadata. VIPER was successfully employed in analyses with manual inspection of more than 10,000 calls. VIPER is implemented in Java and Javascript and is freely available at https://github.com/MarWoes/viper. Supplementary data are available at Bioinformatics online.

  12. Reliable and rapid characterization of functional FCN2 gene variants reveals diverse geographical patterns

    Directory of Open Access Journals (Sweden)

    Ojurongbe Olusola

    2012-05-01

    Background: Ficolin-2, coded by the FCN2 gene, is a soluble serum protein and an innate immune recognition element of the complement system. FCN2 gene polymorphisms reveal distinct geographical patterns and are documented to alter serum ficolin levels and modulate disease susceptibility. Methods: We employed a real-time PCR method based on Fluorescence Resonance Energy Transfer (FRET) to genotype four functional SNPs, -986 G > A (rs3124952), -602 G > A (rs3124953), -4 A > G (rs17514136) and +6424 G > T (rs7851696), in the ficolin-2 (FCN2) gene. We characterized the FCN2 variants in individuals representing Brazilian (n = 176), Nigerian (n = 180), Vietnamese (n = 172) and European Caucasian (n = 165) ethnicity. Results: We observed that the genotype distribution of three functional SNP variants (-986 G > A, -602 G > A and -4 A > G) differs significantly between the populations investigated. Conclusions: The observed distribution of the FCN2 functional SNP variants may likely contribute to altered serum ficolin levels, and this may depend on the different disease settings in world populations. To conclude, the use of FRET-based real-time PCR, especially for the FCN2 gene, will benefit the larger scientific community that depends extensively on a rapid, reliable method for FCN2 genotyping.

  13. Adaptive lattice decision-feedback equalizers - Their performance and application to time-variant multipath channels

    Science.gov (United States)

    Ling, F.; Proakis, J. G.

    1985-04-01

    This paper presents two types of adaptive lattice decision-feedback equalizers (DFE): the least squares (LS) lattice DFE and the gradient lattice DFE. Their performance has been investigated on both time-invariant and time-variant channels through computer simulations and compared to that of other kinds of equalizers. An analysis of the self-noise and tracking characteristics of the LS DFE and of the DFE employing the Widrow-Hoff least mean square adaptive algorithm (LMS DFE) is also given. The analysis and simulation results show that the LS lattice DFE has the fastest initial convergence rate, while the gradient lattice DFE is computationally more efficient. The main advantages of the lattice DFEs are their numerical stability, their computational efficiency, the flexibility to change their length, and their excellent capability for tracking rapidly time-variant channels.

  14. Systems reliability/structural reliability

    International Nuclear Information System (INIS)

    Green, A.E.

    1980-01-01

    The question of reliability technology using quantified techniques is considered for systems and structures. Systems reliability analysis has progressed to a viable and proven methodology, whereas this has yet to be fully achieved for large-scale structures. Structural loading variations over the lifetime of the plant are considered to be more difficult to analyse than those for systems, even though a relatively crude model may be a necessary starting point. Various reliability characteristics and environmental conditions which enter this problem are considered. The rare event situation is briefly mentioned, together with aspects of proof testing and normal and upset loading conditions. (orig.)

  15. APPLICATION OF TRAVEL TIME RELIABILITY FOR PERFORMANCE ORIENTED OPERATIONAL PLANNING OF EXPRESSWAYS

    Science.gov (United States)

    Mehran, Babak; Nakamura, Hideki

    Evaluation of the impacts of congestion improvement schemes on travel time reliability is very significant for road authorities, since travel time reliability represents the operational performance of expressway segments. In this paper, a methodology is presented to estimate travel time reliability prior to implementation of congestion relief schemes, based on travel time variation modeling as a function of demand, capacity, weather conditions and road accidents. For the subject expressway segments, traffic conditions are modeled over a whole year considering demand and capacity as random variables. Patterns of demand and capacity are generated for each five-minute interval by applying a Monte-Carlo simulation technique, and accidents are randomly generated based on a model that links accident rate to traffic conditions. A whole-year analysis is performed by comparing demand and available capacity for each scenario, and queue length is estimated through shockwave analysis for each time interval. Travel times are estimated from refined speed-flow relationships developed for intercity expressways, and the buffer time index is estimated consequently as a measure of travel time reliability. For validation, estimated reliability indices are compared with measured values from empirical data, and it is shown that the proposed method is suitable for operational evaluation and planning purposes.
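
    The buffer time index at the end of that pipeline is simple to compute. The sketch below uses a toy BPR-style speed-flow relation and invented demand/capacity distributions in place of the refined relationships and simulated accident/weather patterns described above.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 105_120                              # five-minute intervals in one year

demand = rng.normal(1500.0, 300.0, n).clip(min=200.0)     # veh/h
capacity = rng.normal(1800.0, 150.0, n).clip(min=800.0)   # veh/h
vc = demand / capacity                   # volume-to-capacity ratio

length_km, free_speed = 10.0, 100.0
# Toy BPR-style speed-flow relationship; oversaturation slows traffic sharply.
travel_time = (length_km / free_speed) * 60.0 * (1.0 + 0.15 * vc ** 4)

mean_tt = travel_time.mean()
tt95 = np.percentile(travel_time, 95)    # near-worst-case travel time
buffer_time_index = (tt95 - mean_tt) / mean_tt
print(f"mean TT = {mean_tt:.2f} min, 95th = {tt95:.2f} min, "
      f"BTI = {buffer_time_index:.3f}")
```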

  16. Local binary patterns new variants and applications

    CERN Document Server

    Jain, Lakhmi; Nanni, Loris; Lumini, Alessandra

    2014-01-01

    This book introduces Local Binary Patterns (LBP), arguably one of the most powerful texture descriptors, and LBP variants. This volume provides the latest reviews of the literature and a presentation of some of the best LBP variants by researchers at the forefront of texture analysis research and research on LBP descriptors and variants. The value of LBP variants is illustrated with reported experiments using many databases representing a diversity of computer vision applications in medicine, biometrics, and other areas. There is also a chapter that provides an excellent theoretical foundation for texture analysis and LBP in particular. A special section focuses on LBP and LBP variants in the area of face recognition, including thermal face recognition. This book will be of value to anyone already in the field as well as to those interested in learning more about this powerful family of texture descriptors.

  17. Time-dependent reliability analysis of flood defences

    International Nuclear Information System (INIS)

    Buijs, F.A.; Hall, J.W.; Sayers, P.B.; Gelder, P.H.A.J.M. van

    2009-01-01

    This paper describes the underlying theory and a practical process for establishing time-dependent reliability models for components in a realistic and complex flood defence system. Though time-dependent reliability models have been applied frequently in, for example, the offshore, structural safety and nuclear industries, application in the safety-critical field of flood defence has to date been limited. The modelling methodology involves identifying relevant variables and processes, characterisation of those processes in appropriate mathematical terms, numerical implementation, parameter estimation and prediction. A combination of stochastic, hierarchical and parametric processes is employed. The approach is demonstrated for selected deterioration mechanisms in the context of a flood defence system. The paper demonstrates that this structured methodology enables the definition of credible statistical models for the time-dependence of flood defences in data-scarce situations. In the application of those models, one of the main findings is that the time variability in the deterioration process tends to be governed by the time-dependence of one or a small number of critical attributes. It is demonstrated how the need for further data collection depends upon the relevance of the time-dependence in the performance of the flood defence system.

  18. Mechanical reliability of structures subjected to time-variant physical phenomena

    International Nuclear Information System (INIS)

    Lemaire, Celine

    1999-01-01

    This work deals with two-phase critical flows in order to improve the dimensioning of safety systems. It makes a numerical, physical and experimental contribution. We emphasize the importance of validating the numerical method and the physical model separately. Reference numerical solutions, assimilated to quasi-analytical solutions, were elaborated for a stationary one-dimensional restriction. They allowed non-stationary numerical schemes converged in time to be validated in space, and they constitute a space-convergence indicator (two schemes validated). With this reliable numerical solution, we studied the physical model. The potential of a particular existing dispersed-flow model has been validated against experimental data. The validity domain of such a model is inevitably reduced. During this study, particular behaviours were exhibited, such as the pseudo-critical nature of the flow with a relaxation process, the non-characteristic nature of critical parameters where disequilibrium is largely reduced, and the predominance of pressure due to interfacial transfers. The multidimensional aspect has been studied. A data base including local parameters corresponding to a simplified geometry has been constituted. The flow impact on the disk has been characterized and multidimensional effects identified. These effects form an additional step towards the validation of multidimensional physical models. (author) [fr]

  19. Analysis of time-dependent reliability of degenerated reinforced concrete structure

    Directory of Open Access Journals (Sweden)

    Zhang Hongping

    2016-07-01

    Durability deterioration of a structure is a highly random process. The maintenance of a degenerated structure involves the calculation of time-dependent structural reliability. This study introduces a resistance decrease model for reinforced concrete structures and the related statistical parameters of uncertainty, analyzes the resistance decrease rules of corroded bending elements of reinforced concrete structures, and finally calculates the time-dependent reliability of the corroded bending element of a reinforced concrete structure, aiming to provide a specific theoretical basis for the application of time-dependent reliability theory.

  1. Stochastic Petri nets for the reliability analysis of communication network applications with alternate-routing

    International Nuclear Information System (INIS)

    Balakrishnan, Meera; Trivedi, Kishor S.

    1996-01-01

    In this paper, we present a comparative reliability analysis of an application on a corporate B-ISDN network under various alternate-routing protocols. For simple cases, the reliability problem can be cast into fault-tree models and solved rapidly by means of known methods. For more complex scenarios, state space (Markov) models are required. However, generation of large state space models can get very labor intensive and error prone. We advocate the use of stochastic reward nets (a variant of stochastic Petri nets) for the concise specification, automated generation and solution of alternate-routing protocols in networks. This paper is written in a tutorial style so as to make it accessible to a large audience

  2. Time-Varying, Multi-Scale Adaptive System Reliability Analysis of Lifeline Infrastructure Networks

    Energy Technology Data Exchange (ETDEWEB)

    Gearhart, Jared Lee [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kurtz, Nolan Scot [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-09-01

    The majority of current societal and economic needs world-wide are met by the existing networked, civil infrastructure. Because the cost of managing such infrastructure is high and increases with time, risk-informed decision making is essential for those with management responsibilities for these systems. To address such concerns, a methodology that accounts for new information, deterioration, component models, component importance, group importance, network reliability, hierarchical structure organization, and efficiency concerns has been developed. This methodology analyzes the use of new information through the lens of adaptive Importance Sampling for structural reliability problems. Deterioration, multi-scale bridge models, and time-variant component importance are investigated for a specific network. Furthermore, both bridge and pipeline networks are studied for group and component importance, as well as for hierarchical structures in the context of specific networks. Efficiency is the primary driver throughout this study. With this risk-informed approach, those responsible for management can address deteriorating infrastructure networks in an organized manner.

  3. Quantifying evolutionary dynamics from variant-frequency time series

    Science.gov (United States)

    Khatri, Bhavin S.

    2016-09-01

    From Kimura's neutral theory of protein evolution to Hubbell's neutral theory of biodiversity, quantifying the relative importance of neutrality versus selection has long been a basic question in evolutionary biology and ecology. With deep sequencing technologies, this question is taking on a new form: given a time series of the frequency of different variants in a population, what is the likelihood that the observation has arisen due to selection or neutrality? To tackle the 2-variant case, we exploit Fisher's angular transformation, which, despite being discovered by Ronald Fisher a century ago, has remained an intellectual curiosity. We show that, together with a heuristic approach, it provides a simple solution for the transition probability density at short times, including drift, selection and mutation. Our results show that under strong selection and sufficiently frequent sampling these evolutionary parameters can be accurately determined from simulation data, and so they provide a theoretical basis for techniques to detect selection from variant or polymorphism frequency time series.
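
    The variance-stabilizing property that makes the transformation useful can be checked numerically. In the sketch below, a toy Wright-Fisher simulation with selection (all parameters invented) shows that the increments of y = arcsin(sqrt(f)) have per-generation variance close to 1/(8N) when 2N alleles are sampled.

```python
import numpy as np

rng = np.random.default_rng(4)
N, generations, s = 500, 100, 0.01       # toy population size and selection

f, freqs = 0.3, [0.3]
for _ in range(generations):
    f_sel = f * (1 + s) / (f * (1 + s) + (1 - f))   # deterministic selection
    f = rng.binomial(2 * N, f_sel) / (2 * N)        # binomial drift, 2N alleles
    freqs.append(f)

y = np.arcsin(np.sqrt(np.array(freqs)))  # Fisher's angular transformation
dy = np.diff(y)
print("variance of transformed increments:", dy.var())
print("theory ~ 1/(8N) for 2N alleles    :", 1.0 / (8 * N))
```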

  4. Time domain series system definition and gear set reliability modeling

    International Nuclear Information System (INIS)

    Xie, Liyang; Wu, Ningxiang; Qian, Wenxue

    2016-01-01

    Time-dependent multi-configuration is a typical feature of mechanical systems such as gear trains and chain drives. As a series system, a gear train is distinct from a traditional series system, such as a chain, in load transmission path, system-component relationship, system functioning manner, as well as time-dependent system configuration. Firstly, the present paper defines the time-domain series system, for which the traditional series system reliability model is not adequate. Then, a system-specific reliability modeling technique is proposed for gear sets, including component (tooth) and subsystem (tooth-pair) load history description, material prior/posterior strength expression, time-dependent and system-specific load-strength interference analysis, as well as treatment of statistically dependent failure events. Consequently, several system reliability models are developed for gear sets with different tooth numbers in the scenario of tooth-root material ultimate tensile strength failure. The application of the models is discussed in the last part, and the differences between the system-specific reliability model and the traditional series system reliability model are illustrated by virtue of several numerical examples. - Highlights: • A new type of series system, i.e. the time-domain multi-configuration series system, is defined, which is of great significance to reliability modeling. • A multi-level statistical analysis based reliability modeling method is presented for gear transmission systems. • Several system-specific reliability models are established for gear set reliability estimation. • The differences between the traditional series system reliability model and the new model are illustrated.

  5. Analysis of operating reliability of WWER-1000 unit

    International Nuclear Information System (INIS)

    Bortlik, J.

    1985-01-01

    The nuclear power unit was divided into 33 technological units. Input data for the reliability analysis were surveys of operating results obtained from the IAEA information system and certain reliability indexes of technological equipment determined using the Bayes formula. The missing reliability data for technological equipment were taken from the basic variant. The fault tree of the WWER-1000 unit was determined for the top event defined as the impossibility of reaching 100%, 75% and 50% of rated power. Periods of nuclear power plant operation with reduced output owing to defects were observed, together with the respective time needed for repair of the equipment. The calculation of the availability of the WWER-1000 unit was made for different variant situations. Certain operating reliability indexes of the WWER-1000 unit, which are the result of a detailed reliability analysis, are tabulated for selected variants. (E.S.)

  6. Analysis and Application of Reliability

    International Nuclear Information System (INIS)

    Jeong, Hae Seong; Park, Dong Ho; Kim, Jae Ju

    1999-05-01

    This book covers the analysis and application of reliability, including the definition, importance and historical background of reliability; reliability functions and failure rate; life distributions and reliability assumptions; reliability of non-repairable systems; reliability of repairable systems; reliability sampling tests; failure analysis, such as analysis by FMEA and FTA, with cases; accelerated life testing, including basic concepts, acceleration and acceleration factors, and analysis of accelerated life testing data; and maintenance policies concerning replacement and inspection.

  7. Monitoring Travel Time Reliability on Freeways

    NARCIS (Netherlands)

    Tu, Huizhao

    2008-01-01

    Travel time and travel time reliability are important attributes of a trip. The current measures of reliability have in common that, in general, they all relate to the variability of travel times. However, travel time reliability does not only rely on variability but also on the stability of travel times.

  8. Highly-reliable laser diodes and modules for spaceborne applications

    Science.gov (United States)

    Deichsel, E.

    2017-11-01

    Laser applications become more and more interesting in contemporary missions such as earth observation or optical communication in space. One of these applications is light detection and ranging (LIDAR), which holds huge scientific potential for future missions. The Nd:YAG solid-state laser of such a LIDAR system is optically pumped using 808 nm emitting pump sources based on semiconductor laser diodes in quasi-continuous-wave (qcw) operation. Therefore, reliable and efficient laser diodes with increased output powers are an important requirement for a spaceborne LIDAR system. In the past, many tests were performed regarding the performance and lifetime of such laser diodes. There have also been studies for spaceborne applications, but a test with long operation times at high powers and statistical relevance is pending. Other applications, such as science packages (e.g. Raman spectroscopy) on planetary rovers, also require reliable high-power light sources. Typically, fiber-coupled laser diode modules are used for such applications. Besides high reliability and lifetime, designs compatible with the harsh environmental conditions must be taken into account. Mechanical loads, such as shock or strong vibration, are expected due to take-off or landing procedures. Many temperature cycles with high change rates and large differences must be taken into account due to sun-shadow effects in planetary orbits. Cosmic radiation has a strong impact on optical components and must also be taken into account. Lastly, hermetic sealing must be considered, since vacuum can have disadvantageous effects on optoelectronic components.

  9. Reliability Analysis of a Steel Frame

    Directory of Open Access Journals (Sweden)

    M. Sýkora

    2002-01-01

    A steel frame with haunches is designed according to Eurocodes. The frame is exposed to self-weight, snow, and wind actions. Lateral-torsional buckling appears to represent the most critical criterion, which is considered as a basis for the limit state function. In the reliability analysis, the probabilistic models proposed by the Joint Committee on Structural Safety (JCSS) are used for the basic variables. The uncertainty model coefficients take into account the inaccuracy of the resistance model for the haunched girder and the inaccuracy of the action effect model. The time-invariant reliability analysis is based on Turkstra's rule for combinations of snow and wind actions. The time-variant analysis describes snow and wind actions by jump processes with intermittencies. Assuming a 50-year lifetime, the obtained values of the reliability index β vary within the range from 3.95 up to 5.56. The cross-profile IPE 330 designed according to Eurocodes seems to be adequate. It appears that the time-invariant reliability analysis based on Turkstra's rule provides considerably lower values of β than those obtained by the time-variant analysis.
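
    As a numerical reading of Turkstra's rule as used above: combine the lifetime maximum of one action with a point-in-time value of the other, take the worse case, and convert the failure probability to β. The action and resistance models below are invented stand-ins for the JCSS models in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 1_000_000

R = rng.lognormal(np.log(8.0), 0.10, n)         # resistance
snow_max = rng.gumbel(2.8, 0.45, n)             # 50-yr maximum snow effect
wind_max = rng.gumbel(2.5, 0.50, n)             # 50-yr maximum wind effect
snow_pt = rng.gumbel(1.2, 0.45, n).clip(min=0)  # point-in-time snow effect
wind_pt = rng.gumbel(1.0, 0.50, n).clip(min=0)  # point-in-time wind effect

pf1 = np.mean(R - snow_max - wind_pt < 0)       # snow-dominated combination
pf2 = np.mean(R - snow_pt - wind_max < 0)       # wind-dominated combination
pf = max(pf1, pf2)                              # Turkstra's rule
beta = -stats.norm.ppf(pf)
print(f"pf = {pf:.2e}, beta = {beta:.2f}")
```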

  10. Laboratory testing of hemolytic properties of materials that come in contact with blood: Comparative application testing method’s two variants according to the standard ASTM F756 in accordance with ISO 10993-4

    Directory of Open Access Journals (Sweden)

    Pavlović Katarina B.

    2010-01-01

    The presence of hemolytic material in contact with blood may produce increased levels of blood cell lysis and increased levels of plasma hemoglobin. This may induce toxic effects or other effects which may stress the kidneys or other organs. In this paper, two variants of an in vitro method, together with a comparison of the results obtained, are presented for testing the hemolytic properties of six raw materials (polypropylene Moplen EP 540 P, polycarbonate colorless 164 R-112, polycarbonate brown 164 R-51918, polyethylene NG 3026 K, polyethylene NG - Purell GB 7250, polyethylene VG - Hiplex 5502) used for medical device manufacturing, and one raw material (polyethylene NG granulate) used for manufacturing plastic bottles for infusion solutions. One of the method's variants relies on direct contact of the raw material with swine blood, the other on contact of an extract of the material with swine blood. Both variants involve reading the absorbance of the supernatant after the tubes are incubated and centrifuged. From the values obtained and using the standard curve, the free hemoglobin concentration is determined, and from this the percentage hemolysis of the raw material. Positive and negative controls were used in both variants: water for injection (WFI) was used as the positive control, in which partial or complete hemolysis of erythrocytes occurs due to osmotic shock, and phosphate-buffered saline was used as the negative control, with no hemolytic property. A comparison of the results obtained by both variants of the method for testing the seven raw materials is presented, although these conclusions cannot be extended to all materials or all applications without preliminary testing using both variants and then choosing the more sensitive and more reliable one. It was also shown that incubation times of 3, 15 or 24 h had no impact on the sensitivity of the direct-contact variant. This comparative approach was used for drawing conclusions.

  11. Making real-time reactive systems reliable

    Science.gov (United States)

    Marzullo, Keith; Wood, Mark

    1990-01-01

    A reactive system is characterized by a control program that interacts with an environment (or controlled program). The control program monitors the environment and reacts to significant events by sending commands to the environment. This structure is quite general. Not only are most embedded real time systems reactive systems, but so are monitoring and debugging systems and distributed application management systems. Since reactive systems are usually long running and may control physical equipment, fault tolerance is vital. The research tries to understand the principal issues of fault tolerance in real time reactive systems and to build tools that allow a programmer to design reliable, real time reactive systems. In order to make real time reactive systems reliable, several issues must be addressed: (1) How can a control program be built to tolerate failures of sensors and actuators? To achieve this, a methodology was developed for transforming a control program that references physical values into one that tolerates sensors that can fail and can return inaccurate values; (2) How can the real time reactive system be built to tolerate failures of the control program? Towards this goal, it is investigated whether the techniques presented can be extended to real time reactive systems; and (3) How can the environment be specified in a way that is useful for writing a control program? Towards this goal, it is also investigated whether a system with real time constraints can be expressed as an equivalent system without such constraints.

  12. MAI-free performance of PMU-OFDM transceiver in time-variant environment

    Science.gov (United States)

    Tadjpour, Layla; Tsai, Shang-Ho; Kuo, C.-C. J.

    2005-06-01

    A multi-user OFDM transceiver that reduces the multi-access interference (MAI) due to the carrier frequency offset (CFO) to a negligible amount via precoding was introduced by Tsai, Lin and Kuo. In this work, we investigate the performance of this precoded multi-user (PMU) OFDM system in a time-variant channel environment. We analyze and compare the MAI effect caused by time-variant channels in the PMU-OFDM and OFDMA systems. Generally speaking, the MAI effect consists of two parts. The first part is due to the loss of orthogonality among subchannels for all users, while the second part is due to the CFO effect caused by the Doppler shift. Simulation results show that, although OFDMA outperforms the PMU-OFDM transceiver in a fast time-variant environment without CFO, PMU-OFDM outperforms OFDMA in a slowly time-variant channel via the use of M/2 symmetric or anti-symmetric codewords out of M Hadamard-Walsh codes.
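
    The codeword selection mentioned in the last sentence is easy to reproduce: among the M rows of a Sylvester-ordered Hadamard-Walsh matrix, exactly M/2 are symmetric and M/2 anti-symmetric under time reversal. The sketch below only performs this selection; the transceiver simulation itself is not reproduced.

```python
import numpy as np
from scipy.linalg import hadamard

M = 8
H = hadamard(M)                          # rows are Hadamard-Walsh codewords
sym = [r for r in H if np.array_equal(r, r[::-1])]
antisym = [r for r in H if np.array_equal(r, -r[::-1])]
print(f"{len(sym)} symmetric and {len(antisym)} anti-symmetric "
      f"of {M} codewords")
```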

  13. Travel Time Reliability in Indiana

    OpenAIRE

    Martchouk, Maria; Mannering, Fred L.; Singh, Lakhwinder

    2010-01-01

    Travel time and travel time reliability are important performance measures for assessing traffic condition and extent of congestion on a roadway. This study first uses a floating car technique to assess travel time and travel time reliability on a number of Indiana highways. Then the study goes on to describe the use of Bluetooth technology to collect real travel time data on a freeway and applies it to obtain two weeks of data on Interstate 69 in Indianapolis. An autoregressive model, estima...

  14. Reliability of Two Smartphone Applications for Radiographic Measurements of Hallux Valgus Angles.

    Science.gov (United States)

    Mattos E Dinato, Mauro Cesar; Freitas, Marcio de Faria; Milano, Cristiano; Valloto, Elcio; Ninomiya, André Felipe; Pagnano, Rodrigo Gonçalves

    The objective of the present study was to assess the reliability of 2 smartphone applications compared with the traditional goniometer technique for measurement of radiographic angles in hallux valgus and the time required for analysis with the different methods. The radiographs of 31 patients (52 feet) with a diagnosis of hallux valgus were analyzed. Four observers, 2 with >10 years' experience in foot and ankle surgery and 2 in-training surgeons, measured the hallux valgus angle and intermetatarsal angle using a manual goniometer technique and 2 smartphone applications (Hallux Angles and iPinPoint). The interobserver and intermethod reliability were estimated using intraclass correlation coefficients (ICCs), and the time required for measurement of the angles among the 3 methods was compared using the Friedman test. A very good or good interobserver reliability was found among the 4 observers measuring the hallux valgus angle and intermetatarsal angle using the goniometer (ICC 0.913 and 0.821, respectively) and iPinPoint (ICC 0.866 and 0.638, respectively). Using the Hallux Angles application, a very good interobserver reliability was found for measurements of the hallux valgus angle (ICC 0.962) and intermetatarsal angle (ICC 0.935) only among the more experienced observers. The time required for the measurements was significantly shorter for the measurements using both smartphone applications compared with the goniometer method. One smartphone application (iPinPoint) was reliable for measurements of the hallux valgus angles by either experienced or nonexperienced observers. The use of these tools might save time in the evaluation of radiographic angles in the hallux valgus.

  15. Application of reliability methods in Ontario Hydro

    International Nuclear Information System (INIS)

    Jeppesen, R.; Ravishankar, T.J.

    1985-01-01

    Ontario Hydro has established a reliability program in support of its substantial nuclear program. Application of the reliability program to achieve both production and safety goals is described. The value of such a reliability program is evident in the record of Ontario Hydro's operating nuclear stations. The factors which have contributed to the success of the reliability program are identified as line management's commitment to reliability; selective and judicious application of reliability methods; the establishment of performance goals and monitoring of in-service performance; and the collection, distribution, review and utilization of performance information to facilitate cost-effective achievement of goals and improvements. (orig.)

  16. Fundamentals and applications of systems reliability analysis

    International Nuclear Information System (INIS)

    Boesebeck, K.; Heuser, F.W.; Kotthoff, K.

    1976-01-01

    The lecture gives a survey of the application of quantified reliability analysis methods to assess the safety of nuclear power plants. Particular attention is paid to the statements reliability analysis can provide in connection with the specifications of the nuclear licensing procedure. Existing specifications of safety criteria are additionally discussed with the help of reliability analysis, using the example of a reliability analysis of a reactor protection system. Beyond the limited application to single safety systems, the significance of reliability analysis for a closed risk concept is explained in the last part of the lecture. (orig./LH) [de]

  17. On industrial application of structural reliability theory

    International Nuclear Information System (INIS)

    Thoft-Christensen, P.

    1998-01-01

    In this paper it is shown that modern structural reliability theory is being successfully applied in a number of different industries. This review of papers is in no way complete. In the literature there are a large number of similar applications, and also applications not touched on in this presentation. There has been some concern among scientists in this area that structural reliability theory is not being used by industry. It is probably correct that structural reliability theory is not being used by industry as much as it should be. However, the work by the ESReDA Working Group clearly shows the very wide application of structural reliability theory by many different industries. One must also bear in mind that industry is often reluctant to publish data related to safety and reliability. (au)

  18. Application of subset simulation in reliability estimation of underground pipelines

    International Nuclear Information System (INIS)

    Tee, Kong Fah; Khan, Lutfor Rahman; Li, Hongshuang

    2014-01-01

    This paper presents a computational framework for implementing an advanced Monte Carlo simulation method, called Subset Simulation (SS), for time-dependent reliability prediction of underground flexible pipelines. SS can provide better resolution at the low failure probability levels of rare failure events which are commonly encountered in pipeline engineering applications. Random samples of the statistical variables are generated efficiently and used for computing the probabilistic reliability model. The method gains its efficiency by expressing a small probability event as a product of a sequence of intermediate events with larger conditional probabilities. The efficiency of SS has been demonstrated by numerical studies, and attention in this work is devoted to scrutinising the robustness of the SS application in pipe reliability assessment compared with the direct Monte Carlo simulation (MCS) method. The reliability of a buried flexible steel pipe with time-dependent failure modes, namely corrosion-induced deflection, buckling, wall thrust and bending stress, has been assessed in this study. The analysis indicates that corrosion-induced excessive deflection is the most critical failure event, whereas buckling is the least susceptible during the whole service life of the pipe. The study also shows that SS is a robust method for estimating the reliability of buried pipelines and that it is more efficient than MCS, especially in small failure probability prediction.
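
    A bare-bones subset simulation on a toy limit state, in the spirit of the method described above: a fixed 10% conditional level, plus plain random-walk Metropolis resampling within each intermediate failure region (the paper's pipeline limit states and any tuning are omitted).

```python
import numpy as np

rng = np.random.default_rng(6)

def g(u):
    """Toy limit state in standard normal space; failure when g(u) < 0."""
    return 4.5 - (u[..., 0] + u[..., 1]) / np.sqrt(2.0)

N, p0 = 2000, 0.1                        # samples per level, level probability
u = rng.normal(size=(N, 2))
pf = 1.0
for level in range(12):                  # safety cap on the number of levels
    gv = g(u)
    thresh = np.sort(gv)[int(p0 * N)]    # adaptive intermediate threshold
    if thresh <= 0.0:
        pf *= np.mean(gv < 0.0)          # final level: count true failures
        break
    pf *= p0
    seeds = u[gv <= thresh]
    chains, steps = [], N // len(seeds) + 1
    for seed in seeds:                   # random-walk Metropolis per seed,
        x = seed.copy()                  # targeting N(0, I) restricted to
        for _ in range(steps):           # the current intermediate region
            cand = x + rng.normal(scale=1.0, size=2)
            ratio = np.exp(-0.5 * (cand @ cand - x @ x))
            if rng.random() < min(1.0, ratio) and g(cand) <= thresh:
                x = cand
            chains.append(x.copy())
    u = np.array(chains[:N])

print(f"subset simulation: pf ~ {pf:.2e} (exact value 3.40e-06)")
```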

  19. On industrial application of structural reliability theory

    Energy Technology Data Exchange (ETDEWEB)

    Thoft-Christensen, P

    1998-06-01

    In this paper it is shown that modern structural reliability theory is being successfully applied in a number of different industries. This review of papers is in no way complete. In the literature there are a large number of similar applications, and also applications not touched on in this presentation. There has been some concern among scientists in this area that structural reliability theory is not being used by industry. It is probably correct that structural reliability theory is not being used by industry as much as it should be. However, the work by the ESReDA Working Group clearly shows the very wide application of structural reliability theory by many different industries. One must also bear in mind that industry is often reluctant to publish data related to safety and reliability. (au) 32 refs.

  20. A reliable and real-time aggregation aware data dissemination in a chain-based wireless sensor network

    NARCIS (Netherlands)

    Taghikhaki, Zahra; Meratnia, Nirvana; Havinga, Paul J.M.

    2012-01-01

    Time-critical applications of Wireless Sensor Networks (WSNs) demand timely data delivery for fast identification of out-of-ordinary situations and fast and reliable delivery of notification and warning messages. Due to the low-reliability links in WSNs, achieving real-time guarantees and providing reliable data delivery at the same time is challenging.

  1. Is the Frontal Assessment Battery reliable in ALS patients?

    NARCIS (Netherlands)

    Raaphorst, J.; Beeldman, E.; Jaeger, B.; Schmand, B.A.; Berg, L.H. van den; Weikamp, J.G.; Schelhaas, H.J.; Visser, M. de; Haan, R.J. de

    2013-01-01

    The assessment of frontal functions in ALS patients is important because of the overlap with the behavioural variant of frontotemporal dementia (bvFTD). We investigated the applicability and reliability of the Frontal Assessment Battery (FAB) within a cohort of predominantly prevalent ALS patients.

  2. Design for ASIC reliability for low-temperature applications

    Science.gov (United States)

    Chen, Yuan; Mojaradi, Mohammad; Westergard, Lynett; Billman, Curtis; Cozy, Scott; Burke, Gary; Kolawa, Elizabeth

    2005-01-01

    In this paper, we present a methodology to design for reliability in low-temperature applications without requiring process improvement. The hot carrier aging lifetime projection model developed takes into account both the transistor substrate current profile and the temperature profile to determine the minimum transistor size needed to meet reliability requirements. The methodology is applicable to automotive, military, and space applications, where there can be varying temperature ranges. A case study utilizing this methodology to design reliability into a custom application-specific integrated circuit (ASIC) for a Mars exploration mission is given.

  3. Time-dependent reliability sensitivity analysis of motion mechanisms

    International Nuclear Information System (INIS)

    Wei, Pengfei; Song, Jingwen; Lu, Zhenzhou; Yue, Zhufeng

    2016-01-01

    Reliability sensitivity analysis aims at identifying the source of structure/mechanism failure and quantifying the effects of each random source or their distribution parameters on the failure probability or reliability. In this paper, time-dependent parametric reliability sensitivity (PRS) analysis as well as global reliability sensitivity (GRS) analysis is introduced for motion mechanisms. The PRS indices are defined as the partial derivatives of the time-dependent reliability w.r.t. the distribution parameters of each random input variable, and they quantify the effect of a small change in each distribution parameter on the time-dependent reliability. The GRS indices are defined for quantifying the individual, interaction and total contributions of the uncertainty in each random input variable to the time-dependent reliability. The envelope function method, combined with a first-order approximation of the motion error function, is introduced for efficiently estimating the time-dependent PRS and GRS indices. Both the time-dependent PRS and GRS analysis techniques can be especially useful for reliability-based design. The significance of the proposed methods as well as the effectiveness of the envelope function method for estimating the time-dependent PRS and GRS indices are demonstrated with a four-bar mechanism and a car rack-and-pinion steering linkage. - Highlights: • Time-dependent parametric reliability sensitivity analysis is presented. • Time-dependent global reliability sensitivity analysis is presented for mechanisms. • The proposed method is especially useful for enhancing the kinematic reliability. • An envelope method is introduced for efficiently implementing the proposed methods. • The proposed method is demonstrated by two real planar mechanisms.
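
    One standard way to obtain a PRS index numerically, sketched under a deliberately simple invented limit state: the score-function identity gives the derivative of the failure probability with respect to a distribution mean from the same Monte Carlo run, with no extra model evaluations. The paper's envelope-function treatment of the time dimension is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
mu, sigma, n = 10.0, 1.5, 2_000_000      # invented input distribution

x = rng.normal(mu, sigma, n)
fail = (14.0 - x) < 0.0                  # indicator 1{g(x) < 0}, g = 14 - x
pf = fail.mean()
# Score-function identity: dPf/dmu = E[1{g<0} * (x - mu) / sigma^2]
dpf_dmu = np.mean(fail * (x - mu) / sigma ** 2)

z = (14.0 - mu) / sigma                  # analytical check for this toy case
print(f"pf       MC {pf:.4e}   exact {1.0 - norm.cdf(z):.4e}")
print(f"dpf/dmu  MC {dpf_dmu:.4e}   exact {norm.pdf(z) / sigma:.4e}")
```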

  4. A study on the real-time reliability of on-board equipment of train control system

    Science.gov (United States)

    Zhang, Yong; Li, Shiwei

    2018-05-01

    Real-time reliability evaluation is conducive to establishing a condition-based maintenance system for the purpose of guaranteeing continuous train operation. According to the inherent characteristics of the on-board equipment, the connotation of reliability evaluation of on-board equipment is defined and an evaluation index of real-time reliability is provided in this paper. From the perspective of methodology and practical application, the real-time reliability of the on-board equipment is discussed in detail, and a method of evaluating the real-time reliability of on-board equipment at the component level based on a Hidden Markov Model (HMM) is proposed. In this method the performance degradation data are used directly to realize accurate perception of the hidden state transition process of the on-board equipment, which achieves a better description of the real-time reliability of the equipment.
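
    A minimal sketch of the state-filtering idea behind such a method: a discrete hidden Markov chain of degradation states is tracked with the standard forward algorithm, and the real-time reliability is read off as the probability of not being in the failed state. The three-state chain, its transition matrix and the Gaussian emission parameters below are invented for illustration, not taken from the paper (Python):

        import numpy as np

        # Hypothetical 3-state degradation chain: 0 = healthy, 1 = degraded, 2 = failed
        A = np.array([[0.95, 0.04, 0.01],    # state transition probabilities
                      [0.00, 0.90, 0.10],
                      [0.00, 0.00, 1.00]])
        means, sigma = np.array([1.0, 0.7, 0.3]), 0.1  # Gaussian emission of the signal

        def filtered_reliability(observations, belief=np.array([1.0, 0.0, 0.0])):
            """Track P(hidden state | data); real-time reliability = 1 - P(failed)."""
            out = []
            for y in observations:
                belief = belief @ A                               # predict one step
                like = np.exp(-0.5 * ((y - means) / sigma) ** 2)  # Gaussian likelihood
                belief = belief * like
                belief /= belief.sum()                            # forward-algorithm update
                out.append(1.0 - belief[2])
            return out

        print(filtered_reliability([0.98, 0.95, 0.72, 0.68, 0.35]))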

  5. Stochastic models and reliability parameter estimation applicable to nuclear power plant safety

    International Nuclear Information System (INIS)

    Mitra, S.P.

    1979-01-01

    A set of stochastic models and related estimation schemes for reliability parameters are developed. The models are applicable for evaluating the reliability of nuclear power plant systems. Reliability information is extracted from model parameters which are estimated from the type and nature of failure data that is generally available or could be compiled in nuclear power plants. Principally, two aspects of nuclear power plant reliability have been investigated: (1) the statistical treatment of in-plant component and system failure data; (2) the analysis and evaluation of common mode failures. The model inputs are failure data which have been classified as either the time type or the demand type of failure data. Failures of components and systems in nuclear power plants are, in general, rare events. This gives rise to sparse failure data. Estimation schemes for treating sparse data, whenever necessary, have been considered. The following five problems have been studied: 1) distribution of sparse failure rate component data; 2) failure rate inference and reliability prediction from time type of failure data; 3) analyses of demand type of failure data; 4) a common mode failure model applicable to time type of failure data; 5) estimation of common mode failures from 'near-miss' demand type of failure data

  6. Multivariate performance reliability prediction in real-time

    International Nuclear Information System (INIS)

    Lu, S.; Lu, H.; Kolarik, W.J.

    2001-01-01

    This paper presents a technique for predicting system performance reliability in real-time considering multiple failure modes. The technique includes on-line multivariate monitoring and forecasting of selected performance measures and conditional performance reliability estimates. The performance measures across time are treated as a multivariate time series. A state-space approach is used to model the multivariate time series. Recursive forecasting is performed by adopting Kalman filtering. The predicted mean vectors and covariance matrix of performance measures are used for the assessment of system survival/reliability with respect to the conditional performance reliability. The technique and modeling protocol discussed in this paper provide a means to forecast and evaluate the performance of an individual system in a dynamic environment in real-time. The paper also presents an example to demonstrate the technique
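
    The following sketch shows the core of the protocol for a single performance measure: a random-walk state-space model is filtered with a Kalman recursion, and the conditional reliability is the probability that the one-step-ahead forecast stays below a failure threshold. The noise variances, measurements and threshold are assumed values; the paper's multivariate case replaces the scalars with state vectors and covariance matrices (Python):

        import numpy as np
        from scipy.stats import norm

        # Random-walk state-space model for one performance measure
        q, r = 1e-4, 1e-2      # process and measurement noise variances (assumed)
        x, p = 0.0, 1.0        # filtered mean and variance
        limit = 0.5            # failure threshold on the performance measure (assumed)

        for y in [0.02, 0.05, 0.11, 0.16, 0.22]:   # drifting measurements (made up)
            x, p = x, p + q                        # Kalman predict
            k = p / (p + r)                        # Kalman gain
            x, p = x + k * (y - x), (1 - k) * p    # Kalman update

        # One-step-ahead forecast and conditional reliability P(X < limit | data)
        x_pred, p_pred = x, p + q
        print("forecast mean/var:", x_pred, p_pred)
        print("conditional reliability:",
              norm.cdf(limit, loc=x_pred, scale=np.sqrt(p_pred)))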

  7. Failure and reliability prediction by support vector machines regression of time series data

    International Nuclear Information System (INIS)

    Chagas Moura, Marcio das; Zio, Enrico; Lins, Isis Didier; Droguett, Enrique

    2011-01-01

    Support Vector Machines (SVMs) are kernel-based learning methods, which have been successfully adopted for regression problems. However, their use in reliability applications has not been widely explored. In this paper, a comparative analysis is presented in order to evaluate the SVM effectiveness in forecasting time-to-failure and reliability of engineered components based on time series data. The performance on literature case studies of SVM regression is measured against other advanced learning methods such as the Radial Basis Function, the traditional MultiLayer Perceptron model, Box-Jenkins autoregressive-integrated-moving average and the Infinite Impulse Response Locally Recurrent Neural Networks. The comparison shows that in the analyzed cases, SVM outperforms or is comparable to other techniques. - Highlights: → Realistic modeling of reliability demands complex mathematical formulations. → SVM is proper when the relation input/output is unknown or very costly to be obtained. → Results indicate the potential of SVM for reliability time series prediction. → Reliability estimates support the establishment of adequate maintenance strategies.
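
    A small sketch of SVM regression applied to reliability-oriented time series data: sliding windows of past observations of a synthetic degradation signal serve as features to forecast the next value with scikit-learn's SVR. The window size, kernel settings and data are arbitrary illustrations, not the settings of the paper's case studies (Python):

        import numpy as np
        from sklearn.svm import SVR

        # Synthetic degradation series; windows of w past values predict the next one
        t = np.arange(120)
        series = np.exp(-0.01 * t) + 0.01 * np.random.default_rng(1).normal(size=t.size)

        w = 5
        X = np.array([series[i:i + w] for i in range(len(series) - w)])
        y = series[w:]

        model = SVR(kernel="rbf", C=10.0, epsilon=0.001).fit(X[:100], y[:100])
        pred = model.predict(X[100:])
        print("RMSE on held-out points:", np.sqrt(np.mean((pred - y[100:]) ** 2)))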

  8. High reliable and Real-time Data Communication Network Technology for Nuclear Power Plant

    International Nuclear Information System (INIS)

    Jeong, K. I.; Lee, J. K.; Choi, Y. R.; Lee, J. C.; Choi, Y. S.; Cho, J. W.; Hong, S. B.; Jung, J. E.; Koo, I. S.

    2008-03-01

    As advanced digital Instrumentation and Control (I and C) systems are being introduced in NPPs (Nuclear Power Plants) to replace analog systems, the Data Communication Network (DCN) is becoming an important system for transmitting the data generated by I and C systems in NPPs. In order to apply DCNs to NPP I and C design, DCNs should conform to applicable acceptance criteria and meet the reliability and safety goals of the system. As response time is affected by the selected protocol, network topology, network performance, and the network configuration of the I and C system, DCNs should transmit data within the time constraints and response times required by I and C systems. To meet these requirements, the DCNs of NPP I and C should be highly reliable, real-time systems. With respect to such systems, several reports and techniques having an influence upon the reliability and real-time requirements of DCNs are surveyed and analyzed

  9. System reliability time-dependent models

    International Nuclear Information System (INIS)

    Debernardo, H.D.

    1991-06-01

    A probabilistic methodology for safety system technical specification evaluation was developed. The method for Surveillance Test Interval (S.T.I.) evaluation basically amounts to an optimization of the S.T.I. of the most important periodically tested components of the system. For Allowed Outage Time (A.O.T.) calculations, the method uses system reliability time-dependent models (a computer code called FRANTIC III). A new approximation to compute system unavailability, called Independent Minimal Cut Sets (A.C.I.), was also developed. This approximation is better than the Rare Event Approximation (A.E.R.) and the extra computing cost is negligible. A.C.I. was joined to FRANTIC III to replace A.E.R. in future applications. The case study evaluations verified that this methodology provides a useful probabilistic assessment of surveillance test intervals and allowed outage times for many plant components. The studied system is a typical configuration of nuclear power plant safety systems (two-out-of-three logic). Because of the good results, these procedures will be used by the Argentine nuclear regulatory authorities in the evaluation of the technical specifications of the Atucha I and Embalse nuclear power plant safety systems. (Author)

  10. NASA Applications and Lessons Learned in Reliability Engineering

    Science.gov (United States)

    Safie, Fayssal M.; Fuller, Raymond P.

    2011-01-01

    Since the Shuttle Challenger accident in 1986, communities across NASA have been developing and extensively using quantitative reliability and risk assessment methods in their decision making processes. This paper discusses several reliability engineering applications that NASA has used over the years to support the design, development, and operation of critical space flight hardware. Specifically, the paper discusses several reliability engineering applications used by NASA in areas such as risk management, inspection policies, component upgrades, reliability growth, integrated failure analysis, and physics-based probabilistic engineering analysis. In each of these areas, the paper provides a brief discussion of a case study to demonstrate the value added and the criticality of reliability engineering in supporting NASA project and program decisions to fly safely. Examples of the case studies discussed are reliability-based life-limit extension of Space Shuttle Main Engine (SSME) hardware, reliability-based inspection policies for the Auxiliary Power Unit (APU) turbine disc, probabilistic structural engineering analysis for reliability prediction of the SSME alternate turbo-pump development, the impact of ET foam reliability on the Space Shuttle system risk, and reliability-based Space Shuttle upgrades for safety. Special attention is given in this paper to the physics-based probabilistic engineering analysis applications and their critical role in evaluating the reliability of NASA development hardware, including their potential use in a research and technology development environment.

  11. Reliability analysis of an offshore structure

    DEFF Research Database (Denmark)

    Sorensen, J. D.; Faber, M. H.; Thoft-Christensen, P.

    1992-01-01

    A jacket type offshore structure from the North Sea is considered. The time-variant reliability is estimated for failure defined as brittle fracture and crack through the tubular member walls. The stochastic modelling is described. The hot spot stress spectral moments as a function of the stochasti...

  12. A Novel Analytic Technique for the Service Station Reliability in a Discrete-Time Repairable Queue

    Directory of Open Access Journals (Sweden)

    Renbin Liu

    2013-01-01

    This paper presents a decomposition technique for the service station reliability in a discrete-time repairable GeomX/G/1 queueing system, in which the server takes exhaustive service and a multiple adaptive delayed vacation discipline. Using such a novel analytic technique, some important reliability indices and reliability relation equations of the service station are derived. Furthermore, the structures of the service station indices are also found. Finally, special cases and numerical examples validate the derived results and show that our analytic technique is applicable to reliability analysis of some complex discrete-time repairable bulk arrival queueing systems.

  13. Software reliability growth models with normal failure time distributions

    International Nuclear Information System (INIS)

    Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji

    2013-01-01

    This paper proposes software reliability growth models (SRGMs) in which the software failure time follows a normal distribution. The proposed model is mathematically tractable and fits software failure data sufficiently well. In particular, we consider the parameter estimation algorithm for the SRGM with normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is quite simple to implement in a software application. Numerical experiments investigate the fitting ability of the SRGMs with normal distribution on 16 sets of failure time data collected from real software projects
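
    To make the model concrete, the sketch below fits an NHPP-type SRGM whose mean value function is m(t) = ω·Φ((t − μ)/σ), i.e., a normal failure time distribution scaled by the expected total number of faults ω. For brevity it maximizes the log-likelihood directly with scipy rather than using the paper's EM algorithm, and the failure times are invented (Python):

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import minimize

        times = np.array([2.5, 5.1, 8.0, 9.7, 12.3, 14.8, 16.0, 18.9, 21.4, 25.0])
        T = 30.0   # end of the observation period

        def neg_loglik(params):
            w, mu, sigma = params
            if w <= 0 or sigma <= 0:
                return np.inf
            lam = w * norm.pdf((times - mu) / sigma) / sigma   # NHPP intensity at failures
            expected = w * (norm.cdf((T - mu) / sigma) - norm.cdf(-mu / sigma))
            return -(np.log(lam).sum() - expected)

        fit = minimize(neg_loglik, x0=[len(times) / 0.8, 15.0, 10.0],
                       method="Nelder-Mead")
        print("omega, mu, sigma =", fit.x)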

  14. Reliable real-time applications - and how to use tests to model and understand

    DEFF Research Database (Denmark)

    Jensen, Peter Krogsgaard

    Test and analysis of real-time applications, where temporal properties are inspected, analyzed, and verified in a model developed from timed traces originating from measured test results on a running application.

  15. Evolution of simeprevir-resistant variants over time by ultra-deep sequencing in HCV genotype 1b.

    Science.gov (United States)

    Akuta, Norio; Suzuki, Fumitaka; Sezaki, Hitomi; Suzuki, Yoshiyuki; Hosaka, Tetsuya; Kobayashi, Masahiro; Kobayashi, Mariko; Saitoh, Satoshi; Ikeda, Kenji; Kumada, Hiromitsu

    2014-08-01

    Using ultra-deep sequencing technology, the present study was designed to investigate the evolution of simeprevir-resistant variants (amino acid substitutions at positions aa80, aa155, aa156, and aa168 in the HCV NS3 region) over time. In Toranomon Hospital, 18 Japanese patients infected with HCV genotype 1b received triple therapy of simeprevir/PEG-IFN/ribavirin (DRAGON or CONCERT study). The sustained virological response rate was 67%, and it was significantly higher in patients with IL28B rs8099917 TT than in those with non-TT. Six patients, who did not achieve sustained virological response, were tested for resistant variants by ultra-deep sequencing at baseline, at the time of re-elevation of viral loads, and at 96 weeks after the completion of treatment. Twelve of 18 resistant variants, detected at re-elevation of viral load, were de novo resistant variants. Ten of 12 de novo resistant variants became undetectable over time, while five of seven resistant variants detected at baseline persisted over time. In one patient, variants of Q80R at baseline (0.3%) increased at 96 weeks after the cessation of treatment (10.2%), and de novo resistant variants of D168E (0.3%) also increased at 96 weeks after the cessation of treatment (9.7%). In conclusion, the present study indicates that the emergence of simeprevir-resistant variants after the start of treatment could not be predicted at baseline, and the majority of de novo resistant variants become undetectable over time. Further large-scale prospective studies should be performed to investigate the clinical utility of detecting simeprevir-resistant variants. © 2014 Wiley Periodicals, Inc.

  16. Practical applications of age-dependent reliability models and analysis of operational data

    Energy Technology Data Exchange (ETDEWEB)

    Lannoy, A.; Nitoi, M.; Backstrom, O.; Burgazzi, L.; Couallier, V.; Nikulin, M.; Derode, A.; Rodionov, A.; Atwood, C.; Fradet, F.; Antonov, A.; Berezhnoy, A.; Choi, S.Y.; Starr, F.; Dawson, J.; Palmen, H.; Clerjaud, L

    2005-07-01

    The purpose of the workshop was to present the experience of practical application of time-dependent reliability models. The program of the workshop comprised the following sessions: -) aging management and aging PSA (Probabilistic Safety Assessment), -) modeling, -) operating experience, and -) accelerated aging tests. In order to introduce the time aging effect of a particular component into the PSA model, it has been proposed to use constant unavailability values over a short period of time (one year, for example) calculated on the basis of age-dependent reliability models. As for modeling, it appears that the problem with overly detailed statistical models is the lack of data for the required parameters. As for operating experience, several methods of operating experience analysis have been presented (algorithms for reliability data elaboration and statistical identification of aging trends). As for accelerated aging tests, it is demonstrated that a combination of operating experience analysis with the results of accelerated aging tests of naturally aged equipment could provide a good basis for continuous operation of instrumentation and control systems.

  17. Practical applications of age-dependent reliability models and analysis of operational data

    International Nuclear Information System (INIS)

    Lannoy, A.; Nitoi, M.; Backstrom, O.; Burgazzi, L.; Couallier, V.; Nikulin, M.; Derode, A.; Rodionov, A.; Atwood, C.; Fradet, F.; Antonov, A.; Berezhnoy, A.; Choi, S.Y.; Starr, F.; Dawson, J.; Palmen, H.; Clerjaud, L.

    2005-01-01

    The purpose of the workshop was to present the experience of practical application of time-dependent reliability models. The program of the workshop comprised the following sessions: -) aging management and aging PSA (Probabilistic Safety Assessment), -) modeling, -) operating experience, and -) accelerated aging tests. In order to introduce the time aging effect of a particular component into the PSA model, it has been proposed to use constant unavailability values over a short period of time (one year, for example) calculated on the basis of age-dependent reliability models. As for modeling, it appears that the problem with overly detailed statistical models is the lack of data for the required parameters. As for operating experience, several methods of operating experience analysis have been presented (algorithms for reliability data elaboration and statistical identification of aging trends). As for accelerated aging tests, it is demonstrated that a combination of operating experience analysis with the results of accelerated aging tests of naturally aged equipment could provide a good basis for continuous operation of instrumentation and control systems

  18. Foundations for a time reliability correlation system to quantify human reliability

    International Nuclear Information System (INIS)

    Dougherty, E.M. Jr.; Fragola, J.R.

    1988-01-01

    Time reliability correlations (TRCs) have been used in human reliability analysis (HRA) in conjunction with probabilistic risk assessment (PRA) to quantify post-initiator human failure events. The first TRCs were judgmental, but recent data taken from simulators have provided evidence for the development of a system of TRCs. This system has the equational form t = τ_R × τ_U, where the first factor is the lognormally distributed random variable of successful response time, derived from the simulator data, and the second factor is a unitary lognormal random variable that accounts for uncertainty in the model. The first random variable is further factored into a median response time, a factor that accounts for the dominant type of behavior assumed to be involved in the response, and a second factor that accounts for other influences on the reliability of the response
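
    Since the product of two independent lognormal variables is itself lognormal, a TRC of this form gives the non-response probability in closed form. The sketch below evaluates P(t > t_avail) under an assumed median response time and assumed logarithmic standard deviations; all numbers are illustrative, not values derived from the simulator data (Python):

        import numpy as np
        from scipy.stats import norm

        mu_R, sigma_R = np.log(120.0), 0.4   # ln-space parameters of tau_R (assumed)
        mu_U, sigma_U = 0.0, 0.3             # unitary uncertainty factor tau_U (median 1)

        def p_nonresponse(t_avail):
            """P(t > t_avail) for t = tau_R * tau_U, which is again lognormal."""
            mu, sigma = mu_R + mu_U, np.hypot(sigma_R, sigma_U)
            return norm.sf((np.log(t_avail) - mu) / sigma)

        print("non-response probability given 10 min available:", p_nonresponse(600.0))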

  19. Travel time reliability modeling.

    Science.gov (United States)

    2011-07-01

    This report includes three papers as follows: 1. Guo F., Rakha H., and Park S. (2010), "A Multi-state Travel Time Reliability Model," Transportation Research Record: Journal of the Transportation Research Board, n 2188, pp. 46-54. 2. Park S.,...

  20. Adaptive time-variant models for fuzzy-time-series forecasting.

    Science.gov (United States)

    Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching

    2010-12-01

    A fuzzy time series has been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partition of discourse, the content of forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of the fuzzy time series based on the prediction accuracy in the training phase and uses heuristic rules to generate forecasting values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experimental results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy compared to other fuzzy-time-series forecasting models.

  1. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable to handling the heterogeneity of the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses certain aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.

  2. A reliability evaluation method for NPP safety DCS application software

    International Nuclear Information System (INIS)

    Li Yunjian; Zhang Lei; Liu Yuan

    2014-01-01

    In the field of nuclear power plant (NPP) digital I and C applications, reliability evaluation of safety DCS application software is a key obstacle to be removed. In order to quantitatively evaluate the reliability of NPP safety DCS application software, this paper proposes a reliability evaluation method based on the V and V defect density characteristics of every stage of the software development life cycle, by which the operating reliability level of the software can be predicted before its delivery. This helps to improve the reliability of NPP safety-important software. (authors)

  3. A new measurement of workload in Web application reliability assessment

    Directory of Open Access Journals (Sweden)

    CUI Xia

    2015-02-01

    Web applications have become popular in various fields of social life, and it is becoming more and more important to study the reliability of Web applications. In this paper the definition of Web application failure is first brought out, followed by the definition of Web application reliability. By analyzing data in IIS server logs and selecting the corresponding usage and information delivery failure data, the paper studies the feasibility of Web application reliability assessment, from the perspective of the Web software system, based on IIS server logs. Because the usage of a Web site often has a certain regularity, a new measurement of workload in Web application reliability assessment is proposed. In this method, the unit is removed by a weighted average technique, and the weights are assessed by setting an objective function and optimizing it. Finally an experiment was conducted for validation. The experimental results show that the assessment of Web application reliability based on the new workload is better.

  4. Variants of Evolutionary Algorithms for Real-World Applications

    CERN Document Server

    Weise, Thomas; Michalewicz, Zbigniew

    2012-01-01

    Evolutionary Algorithms (EAs) are population-based, stochastic search algorithms that mimic natural evolution. Due to their ability to find excellent solutions for conventionally hard and dynamic problems within acceptable time, EAs have attracted interest from many researchers and practitioners in recent years. This book “Variants of Evolutionary Algorithms for Real-World Applications” aims to promote the practitioner’s view on EAs by providing a comprehensive discussion of how EAs can be adapted to the requirements of various applications in the real-world domains. It comprises 14 chapters, including an introductory chapter re-visiting the fundamental question of what an EA is and other chapters addressing a range of real-world problems such as production process planning, inventory system and supply chain network optimisation, task-based jobs assignment, planning for CNC-based work piece construction, mechanical/ship design tasks that involve runtime-intense simulations, data mining for the predictio...

  5. Macroscopic travel time reliability diagrams for freeway networks

    NARCIS (Netherlands)

    Tu, H.; Li, H.; Van Lint, J.W.C.; Knoop, V.L.; Sun, L.

    2013-01-01

    Travel time reliability is considered to be one of the key indicators of transport system performance. Knowledge of the mechanisms of travel time unreliability enables the derivation of explanatory models with which travel time reliability can be predicted and utilized in traffic management.

  6. Application of nonhomogeneous Poisson process to reliability analysis of repairable systems of a nuclear power plant with rates of occurrence of failures time-dependent

    International Nuclear Information System (INIS)

    Saldanha, Pedro L.C.; Simone, Elaine A. de; Melo, Paulo Fernando F.F. e

    1996-01-01

    Aging is used to mean the continuous process by which the physical characteristics of a system, a structure or an equipment item change with time or use. Its effects are increases in the failure probabilities of the system, structure or equipment, and they are calculated using time-dependent failure rate models. The purpose of this paper is to present an application of the nonhomogeneous Poisson process as a model for studying rates of occurrence of failures when they are time-dependent. For this application, a reliability analysis of the service water pumps of a typical nuclear power plant is made, as the pumps are effectively repaired components. (author)
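
    One standard concrete instance of such a model is the power-law (Crow-AMSAA) process with ROCOF λ(t) = (β/θ)(t/θ)^(β−1), for which time-truncated maximum likelihood estimates exist in closed form. The sketch below applies them to an invented failure history of a repairable pump; β > 1 indicates an increasing rate of occurrence of failures, i.e., aging (Python):

        import numpy as np

        # Cumulative failure times (years in service) of a repairable pump (made up)
        t = np.array([0.8, 1.9, 3.1, 3.9, 4.6, 5.1, 5.9, 6.2, 6.8, 7.1])
        T = 7.5                 # observation truncated at T years
        n = len(t)

        beta = n / np.sum(np.log(T / t))    # time-truncated MLE of the shape parameter
        theta = T / n ** (1.0 / beta)       # scale parameter
        rocof_T = (beta / theta) * (T / theta) ** (beta - 1.0)

        print(f"beta = {beta:.2f}  (>1 suggests aging / increasing ROCOF)")
        print(f"ROCOF at T: {rocof_T:.2f} failures per year")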

  7. A Mechanism for Reliable Mobility Management for Internet of Things Using CoAP

    Directory of Open Access Journals (Sweden)

    Seung-Man Chun

    2017-01-01

    Under unreliable constrained wireless networks for Internet of Things (IoT) environments, the loss of signaling messages may frequently occur. Mobile Internet Protocol version 6 (MIPv6) and its variants do not consider this situation. Consequently, as a constrained device moves around different wireless networks, its Internet Protocol (IP) connectivity may be frequently disrupted and its power can be drained rapidly. This can result in the loss of important sensing data or a large delay for time-critical IoT services such as healthcare monitoring and disaster management. This paper presents a reliable mobility management mechanism for Internet of Things environments with lossy, low-power constrained device and network characteristics. The idea is to use the Internet Engineering Task Force (IETF) Constrained Application Protocol (CoAP) retransmission mechanism to achieve both reliability and simplicity for reliable IoT mobility management. Detailed architecture, algorithms, and message extensions for reliable mobility management are presented. Finally, performance is evaluated using both mathematical analysis and simulation.

  8. A Mechanism for Reliable Mobility Management for Internet of Things Using CoAP.

    Science.gov (United States)

    Chun, Seung-Man; Park, Jong-Tae

    2017-01-12

    Under unreliable constrained wireless networks for Internet of Things (IoT) environments, the loss of the signaling message may frequently occur. Mobile Internet Protocol version 6 (MIPv6) and its variants do not consider this situation. Consequently, as a constrained device moves around different wireless networks, its Internet Protocol (IP) connectivity may be frequently disrupted and power can be drained rapidly. This can result in the loss of important sensing data or a large delay for time-critical IoT services such as healthcare monitoring and disaster management. This paper presents a reliable mobility management mechanism in Internet of Things environments with lossy low-power constrained device and network characteristics. The idea is to use the Internet Engineering Task Force (IETF) Constrained Application Protocol (CoAP) retransmission mechanism to achieve both reliability and simplicity for reliable IoT mobility management. Detailed architecture, algorithms, and message extensions for reliable mobility management are presented. Finally, performance is evaluated using both mathematical analysis and simulation.

  9. Urban travel time reliability at different traffic conditions

    NARCIS (Netherlands)

    Zheng, Fangfang; Li, Jie; van Zuylen, H.J.; Liu, Xiaobo; Yang, Hongtai

    2017-01-01

    The decision making of travelers for route choice and departure time choice depends on the expected travel time and its reliability. A common understanding of reliability is that it is related to several statistical properties of the travel time distribution, especially to the standard deviation
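
    Several commonly used indicators tie travel time reliability to the statistics of the travel time distribution. The snippet below computes a few of them for a made-up sample of route travel times; the index definitions are the usual ones from the travel-time reliability literature, not necessarily those adopted in this paper (Python):

        import numpy as np

        tt = np.array([21, 22, 22, 23, 24, 24, 25, 27, 30, 38, 45])  # minutes (made up)
        free_flow = 20.0

        mean, p95 = tt.mean(), np.percentile(tt, 95)
        print("buffer index        :", (p95 - mean) / mean)    # extra margin to add
        print("planning time index :", p95 / free_flow)        # 95th pct vs free-flow
        print("coeff. of variation :", tt.std(ddof=1) / mean)  # spread-based proxy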

  10. A Fast Optimization Method for Reliability and Performance of Cloud Services Composition Application

    Directory of Open Access Journals (Sweden)

    Zhao Wu

    2013-01-01

    At present cloud computing is one of the newest trends in distributed computation, and it is propelling another important revolution in the software industry. Cloud services composition is one of the key techniques in software development. The optimization of the reliability and performance of a cloud services composition application, which is a typical stochastic optimization problem, is confronted with severe challenges due to its randomness and long transactions, as well as characteristics of cloud computing resources such as openness and dynamics. The traditional reliability and performance optimization techniques, for example Markov models and state space analysis, have some defects: they are too time-consuming, easily cause state space explosion, and do not satisfy the assumption of component execution independence. To overcome these defects, we propose a fast optimization method for the reliability and performance of cloud services composition applications based on the universal generating function and a genetic algorithm. At first, a reliability and performance model for cloud service composition applications based on multiple-state system theory is presented. Then a reliability and performance definition based on the universal generating function is proposed. Based on this, a fast reliability and performance optimization algorithm is presented. In the end, illustrative examples are given.
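
    The universal generating function itself is easy to sketch: each element is a discrete distribution over performance levels, and composition operators combine two u-functions through a structure function (sum of capacities for parallel elements, minimum for series). The element data and demand below are invented (Python):

        from itertools import product

        # u-function of an element: {performance level: probability}
        e1 = {0.0: 0.05, 50.0: 0.95}
        e2 = {0.0: 0.10, 30.0: 0.40, 60.0: 0.50}

        def compose(u, v, op):
            """UGF composition: combine two u-functions with a structure function."""
            out = {}
            for (g1, p1), (g2, p2) in product(u.items(), v.items()):
                g = op(g1, g2)
                out[g] = out.get(g, 0.0) + p1 * p2
            return out

        parallel = compose(e1, e2, lambda a, b: a + b)  # capacities add in parallel
        series = compose(e1, e2, min)                   # series limited by weakest link

        demand = 50.0
        print("P(parallel output >= demand):",
              sum(p for g, p in parallel.items() if g >= demand))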

  11. Distribution System Reliability Analysis for Smart Grid Applications

    Science.gov (United States)

    Aljohani, Tawfiq Masad

    Reliability of power systems is a key aspect in modern power system planning, design, and operation. The ascendance of the smart grid concept has provided high hopes of developing an intelligent network that is capable of being a self-healing grid, offering the ability to overcome the interruption problems that face the utility and cost it tens of millions in repairs and losses. To address its reliability concerns, the power utilities and interested parties have spent an extensive amount of time and effort to analyze and study the reliability of the generation and transmission sectors of the power grid. Only recently has attention shifted to improving the reliability of the distribution network, the connection joint between the power providers and the consumers, where most of the electricity problems occur. In this work, we examine the effect of smart grid applications on improving the reliability of power distribution networks. The test system used in conducting this thesis is the IEEE 34 node test feeder, released in 2003 by the Distribution System Analysis Subcommittee of the IEEE Power Engineering Society. The objective is to analyze the feeder for the optimal placement of automatic switching devices and quantify their proper installation based on the performance of the distribution system. The measures will be the changes in the system reliability indices, including SAIDI, SAIFI, and EUE. The goal is to design and simulate the effect of the installation of Distributed Generators (DGs) on the utility's distribution system and measure the potential improvement of its reliability. The software used in this work is DISREL, an intelligent power distribution package developed by General Reliability Co.
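
    For reference, the customer-weighted indices mentioned in the abstract follow directly from per-load-point outage statistics. The sketch below computes SAIFI, SAIDI and an expected-unserved-energy figure for three hypothetical load points; the data are illustrative, not taken from the IEEE 34 node feeder (Python):

        import numpy as np

        # Per load point: customers served, interruption frequency, outage duration
        customers = np.array([400, 250, 350])
        freq = np.array([1.2, 0.8, 2.0])        # interruptions / yr (assumed)
        hours = np.array([2.4, 1.0, 5.0])       # interrupted hours / yr (assumed)
        avg_load_kw = np.array([300.0, 200.0, 260.0])

        saifi = (freq * customers).sum() / customers.sum()   # interruptions/customer/yr
        saidi = (hours * customers).sum() / customers.sum()  # hours/customer/yr
        eue = (avg_load_kw * hours).sum()                    # unserved energy, kWh/yr

        print(f"SAIFI={saifi:.2f}, SAIDI={saidi:.2f} h, EUE={eue:.0f} kWh")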

  12. Application of Reliability in Breakwater Design

    DEFF Research Database (Denmark)

    Christiani, Erik

    ... response, but in one area information has been lacking; bearing capacity has not been treated in depth in a probabilistic manner for breakwaters. Reliability analysis of conventional rubble mound breakwaters and conventional vertical breakwaters is exemplified for the purpose of establishing new ways and methods to design certain types of breakwaters. Reliability analyses of the main armour and toe berm interaction are exemplified to show the effect of a multiple set of failure mechanisms. First the limit state equations of the main armour and toe interaction are derived from laboratory tests performed by Bologna University. Thereafter a multiple system of failure for the interaction is established. Relevant stochastic parameters are characterized prior to the reliability evaluation. Application of reliability in crown wall design is illustrated by deriving relevant single foundation failure modes...

  13. Review of Industrial Applications of Structural Reliability Theory

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    For the last two decades we have seen an increasing interest in applying structural reliability theory to many different industries. However, the number of real practical applications is much smaller than what one would expect.

  14. Reliability-Centric Analysis of Offloaded Computation in Cooperative Wearable Applications

    Directory of Open Access Journals (Sweden)

    Aleksandr Ometov

    2017-01-01

    Motivated by the unprecedented penetration of mobile communications technology, this work carefully brings into perspective the challenges related to heterogeneous communications and offloaded computation operating in cases of fault-tolerant computation, computing, and caching. We specifically focus on the emerging augmented reality applications that require reliable delegation of the computing and caching functionality to proximate resource-rich devices. The corresponding mathematical model proposed in this work becomes of value to assess system-level reliability in cases where one or more nearby collaborating nodes become temporarily unavailable. Our analytical and simulation results corroborate the asymptotic insensitivity of the stationary reliability of the system in question (under "fast" recovery of its elements) to the type of the "repair" time distribution, thus supporting fault-tolerant system operation.

  15. X-real-time executive (X-RTE) an ultra-high reliable real-time executive for safety critical systems

    International Nuclear Information System (INIS)

    Suresh Babu, R.M.

    1995-01-01

    With the growing number of applications of computers in safety-critical systems of nuclear plants, there has been a need to assure the high quality and reliability of the software used in these systems. One way to assure software quality is to use qualified software components. Since the safety systems and control systems are real-time systems, there is a need for real-time supervisory software to guarantee the temporal response of the system. This report describes one such software package, called X-Real-Time Executive (or X-RTE), which was developed in the Reactor Control Division, BARC. The report describes all the capabilities and unique features of X-RTE and compares it with a commercially available operating system. The features of X-RTE include pre-emptive scheduling, process synchronization, inter-process communication, multi-processor support, temporal support, a debug facility, high portability, high reliability, high quality, and extensive documentation. Examples have been used liberally to illustrate the underlying concepts. Besides, the report provides a brief description of the methods used during software development to assure the high quality and reliability of X-RTE. (author). refs., 11 figs., tabs

  16. Performance comparison of various time variant filters

    Energy Technology Data Exchange (ETDEWEB)

    Kuwata, M [JEOL Engineering Co. Ltd., Akishima, Tokyo (Japan); Husimi, K

    1996-07-01

    This paper describes the advantages of the trapezoidal filter used in semiconductor detector systems compared with other time-variant filters. The trapezoidal filter is composed of a rectangular pre-filter and a gated integrator. We show that the best performance is obtained by the differential-integral summing type of rectangular pre-filter. This filter is not only superior in performance, but also has the useful feature that the rising edge of the output waveform is linear. We introduce an example of this feature used in a high-energy experiment. (author)

  17. Time-variant random interval natural frequency analysis of structures

    Science.gov (United States)

    Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin

    2018-02-01

    This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the superiority of both methods in such a way that the computational cost is dramatically reduced. The presented method is thus capable of investigating the day-to-day time-variant natural frequency of structures accurately and efficiently under concrete intrinsic creep effects with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through the optimization strategy embedded within the analysis procedure. Three numerical examples with a progressive relationship in terms of both structure type and uncertainty variables are demonstrated to justify the computational applicability, accuracy and efficiency of the proposed method.

  18. Reliability concepts applied to cutting tool change time

    Energy Technology Data Exchange (ETDEWEB)

    Patino Rodriguez, Carmen Elena, E-mail: cpatino@udea.edu.c [Department of Industrial Engineering, University of Antioquia, Medellin (Colombia); Department of Mechatronics and Mechanical Systems, Polytechnic School, University of Sao Paulo, Sao Paulo (Brazil); Francisco Martha de Souza, Gilberto [Department of Mechatronics and Mechanical Systems, Polytechnic School, University of Sao Paulo, Sao Paulo (Brazil)

    2010-08-15

    This paper presents a reliability-based analysis for calculating critical tool life in machining processes. It is possible to determine the running time for each tool involved in the process by obtaining the operations sequence for the machining procedure. Usually, the reliability of an operation depends on three independent factors: operator, machine-tool and cutting tool. The reliability of a part manufacturing process is mainly determined by the cutting time for each job and by the sequence of operations, defined by the series configuration. An algorithm is presented to define when the cutting tool must be changed. The proposed algorithm is used to evaluate the reliability of a manufacturing process composed of turning and drilling operations. The reliability of the turning operation is modeled based on data presented in the literature, and from experimental results, a statistical distribution of drilling tool wear was defined, and the reliability of the drilling process was modeled.

  19. Reliability concepts applied to cutting tool change time

    International Nuclear Information System (INIS)

    Patino Rodriguez, Carmen Elena; Francisco Martha de Souza, Gilberto

    2010-01-01

    This paper presents a reliability-based analysis for calculating critical tool life in machining processes. It is possible to determine the running time for each tool involved in the process by obtaining the operations sequence for the machining procedure. Usually, the reliability of an operation depends on three independent factors: operator, machine-tool and cutting tool. The reliability of a part manufacturing process is mainly determined by the cutting time for each job and by the sequence of operations, defined by the series configuration. An algorithm is presented to define when the cutting tool must be changed. The proposed algorithm is used to evaluate the reliability of a manufacturing process composed of turning and drilling operations. The reliability of the turning operation is modeled based on data presented in the literature, and from experimental results, a statistical distribution of drilling tool wear was defined, and the reliability of the drilling process was modeled.

  20. Dynamic Allan Variance Analysis Method with Time-Variant Window Length Based on Fuzzy Control

    Directory of Open Access Journals (Sweden)

    Shanshan Gu

    2015-01-01

    To solve the problem that the dynamic Allan variance (DAVAR) with a fixed window length cannot meet the identification accuracy requirements of a fiber optic gyro (FOG) signal over all time domains, a dynamic Allan variance analysis method with time-variant window length based on fuzzy control is proposed. According to the characteristics of the FOG signal, a fuzzy controller with the first and second derivatives of the FOG signal as inputs is designed to estimate the window length of the DAVAR. Then the Allan variances of the signals within the time-variant window are computed to obtain the DAVAR of the FOG signal and describe the dynamic characteristics of the time-varying FOG signal. Additionally, a performance evaluation index for the algorithm based on a radar chart is proposed. Experimental results show that, compared with DAVAR methods using different fixed window lengths, the change of the FOG signal with time can be identified effectively and the performance evaluation index can be enhanced by at least 30% by the DAVAR method with time-variant window length based on fuzzy control.
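
    The sketch below illustrates the DAVAR idea: the ordinary Allan variance is evaluated over a window that slides along the record, with the window length varied according to how non-stationary the signal looks locally. A crude trend measure stands in for the paper's fuzzy controller, and the simulated FOG record is invented (Python):

        import numpy as np

        def allan_var(y, m):
            """Allan variance of signal y for cluster size m (non-overlapping means)."""
            n = len(y) // m
            means = y[:n * m].reshape(n, m).mean(axis=1)
            return 0.5 * np.mean(np.diff(means) ** 2)

        rng = np.random.default_rng(7)
        fog = rng.normal(0.0, 1.0, 6000)
        fog[3000:] += np.linspace(0.0, 3.0, 3000)   # simulated dynamic drift

        # Slide a window along the record; widen it where the signal looks stationary,
        # shrink it where a local trend appears (stand-in for the fuzzy rule).
        for center in range(1000, 5001, 1000):
            seg = fog[center - 250:center + 250]
            trend = abs(seg[250:].mean() - seg[:250].mean())  # local nonstationarity
            half = int(np.clip(800.0 / (1.0 + 10.0 * trend), 200, 800))
            window = fog[center - half:center + half]
            print(f"t={center:4d}  window={2 * half:4d}  avar={allan_var(window, 10):.4f}")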

  1. Quality and reliability management and its applications

    CERN Document Server

    2016-01-01

    Integrating development processes, policies, and reliability predictions from the beginning of the product development lifecycle to ensure high levels of product performance and safety, this book helps companies overcome the challenges posed by increasingly complex systems in today's competitive marketplace. Examining both research on and practical aspects of product quality and reliability management with an emphasis on applications, the book features contributions written by active researchers and/or experienced practitioners in the field, so as to effectively bridge the gap between theory and practice and address new research challenges in reliability and quality management in practice. Postgraduates, researchers and practitioners in the areas of reliability engineering and management, amongst others, will find the book to offer a state-of-the-art survey of quality and reliability management and practices.

  2. Application of reliability centered maintenance to Embalse NPP

    International Nuclear Information System (INIS)

    Torres, Antonio; Perdomo, Manuel; Fornero, Damian; Corchera, Roberto

    2010-01-01

    One of the most recent applications of Probabilistic Safety Analysis to the Embalse NPP is the safety-oriented maintenance program developed through the Reliability Centered Maintenance (RCM) methodology. This application was carried out through a cooperative effort between the staff of the nuclear safety department of the NPP and experts from the Instituto Superior de Tecnologias y Ciencias Aplicadas of Cuba. So far 6 technological systems have been analyzed, with important results regarding the optimization of the preventive and predictive maintenance programs of those systems. Many of the RCM tasks were automated via the MOSEG code. The results of this study were focused on the elaboration and modification of the preventive program, prioritization of stocks, reorientation of predictive techniques and modification of the time parameters of maintenance. (author)

  3. Reliability analysis of prestressed concrete containment structures

    International Nuclear Information System (INIS)

    Jiang, J.; Zhao, Y.; Sun, J.

    1993-01-01

    The reliability analysis of prestressed concrete containment structures subjected to combinations of static and dynamic loads, with consideration of uncertainties in structural and load parameters, is presented. Limit state probabilities for given parameters are calculated using the procedure developed at BNL, while those with consideration of parameter uncertainties are calculated by a fast integration for time-variant structural reliability. The limit state surface of the prestressed concrete containment is constructed directly, incorporating the prestress. The sensitivities of the Cholesky decomposition matrix and the natural vibration characteristics are calculated by simplified procedures. (author)

  4. Reliability of application of inspection procedures

    Energy Technology Data Exchange (ETDEWEB)

    Murgatroyd, R A

    1988-12-31

    This document deals with the reliability of the application of inspection procedures. A method, based on fracture mechanics, to ensure that the inspection of defects is reliable is described. The Systematic Human Error Reduction and Prediction Analysis (SHERPA) methodology is applied to every task performed by the inspector to estimate the possibility of error. It appears essential that inspection procedures should be sufficiently rigorous to avoid substantial errors, and that the selection procedures and the training period for inspectors should be optimised. (TEC). 3 refs.

  5. Reliability of application of inspection procedures

    International Nuclear Information System (INIS)

    Murgatroyd, R.A.

    1988-01-01

    This document deals with the reliability of the application of inspection procedures. A method, based on fracture mechanics, to ensure that the inspection of defects is reliable is described. The Systematic Human Error Reduction and Prediction Analysis (SHERPA) methodology is applied to every task performed by the inspector to estimate the possibility of error. It appears essential that inspection procedures should be sufficiently rigorous to avoid substantial errors, and that the selection procedures and the training period for inspectors should be optimised. (TEC)

  6. On Industrial Application of Structural Reliability Theory

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    For the last two decades we have seen an increasing interest in applying structural reliability theory to many different industries. However, the number of real applications is much smaller than what one would expect. At the beginning most applications were in the design/analyses area especially...

  7. Sub-nanosecond jitter, repetitive impulse generators for high reliability applications

    International Nuclear Information System (INIS)

    Krausse, G.J.; Sarjeant, W.J.

    1981-01-01

    Low-jitter, high-reliability impulse generator development has recently become of ever-increasing importance for nuclear physics and weapons applications. The research and development of very low jitter (< 30 ps), multikilovolt generators for high-reliability, minimum-maintenance trigger applications, utilizing a new class of commercially available high-pressure tetrode thyratrons, is described. The overall system design philosophy is described, followed by a detailed analysis of the subsystem component elements. A multi-variable experimental analysis of this new tetrode thyratron was undertaken, in a low-inductance configuration, as a function of externally available parameters. For specific thyratron trigger conditions, rise times of 18 ns into 6.0-Ω loads were achieved with jitters as low as 24 ps. Using this database, an integrated trigger generator system with a solid-state front-end is described in some detail. The generator was developed to serve as the master trigger generator for a large neutrino detector installation at the Los Alamos Meson Physics Facility

  8. Incorporating travel time reliability into the Highway Capacity Manual.

    Science.gov (United States)

    2014-01-01

    This final report documents the activities performed during SHRP 2 Reliability Project L08: Incorporating Travel Time Reliability into the Highway Capacity Manual. It serves as a supplement to the proposed chapters for incorporating travel time relia...

  9. Systems reliability analysis: applications of the SPARCS System-Reliability Assessment Computer Program

    International Nuclear Information System (INIS)

    Locks, M.O.

    1978-01-01

    SPARCS-2 (Simulation Program for Assessing the Reliabilities of Complex Systems, Version 2) is a PL/1 computer program for assessing (establishing interval estimates for) the reliability and the MTBF of a large and complex s-coherent system of any modular configuration. The system can consist of a complex logical assembly of independently failing attribute (binomial-Bernoulli) and time-to-failure (Poisson-exponential) components, without regard to their placement. Alternatively, it can be a configuration of independently failing modules, where each module has either or both attribute and time-to-failure components. SPARCS-2 also has an improved super-modularity feature. Modules with minimal-cut unreliability calculations can be mixed with those having minimal-path reliability calculations. All output has been standardized to system reliability or probability of success, regardless of the form in which the input data are presented, and whatever the configuration of modules or elements within modules
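
    The interval-estimation idea can be sketched with plain Monte Carlo: sample the attribute components of an s-coherent system, evaluate the structure function, and attach a normal-approximation confidence interval to the estimated system reliability. The structure function and component probabilities below are invented, and this is not the SPARCS-2 algorithm itself (Python):

        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000

        # Component success probabilities (attribute data) for a coherent system
        p = {"a": 0.95, "b": 0.90, "c": 0.97, "d": 0.92, "e": 0.85}
        up = {k: rng.random(n) < v for k, v in p.items()}

        # Structure function: system works if (a and b) or (c and d) or (a, e and d)
        sys_up = (up["a"] & up["b"]) | (up["c"] & up["d"]) \
                 | (up["a"] & up["e"] & up["d"])

        r = sys_up.mean()
        se = np.sqrt(r * (1 - r) / n)                 # binomial standard error
        print(f"R = {r:.4f}, 95% CI = ({r - 1.96 * se:.4f}, {r + 1.96 * se:.4f})")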

  10. ERP application of real-time vdc-enabled last planner system for planning reliability improvement

    DEFF Research Database (Denmark)

    Cho, S.; Sørensen, Kristian Birch; Fischer, M.

    2009-01-01

    The Last Planner System (LPS) has since its introduction in 1994 become a widely used method among AEC practitioners for improvement of planning reliability and for tracking and monitoring of project progress. However, the observations presented in this paper indicate that the last planners and coordinators are in need of a new system that integrates the existing LPS with Virtual Design and Construction (VDC), Enterprise Resource Planning (ERP) systems, and automatic object identification by means of Radio Frequency Identification (RFID) technology. This is because current practice of LPS implementations is guesswork-driven, textual report-generated, hand-updated, and even interpersonal trust-oriented, resulting in less accurate and reliable plans. This research introduces a prototype development of the VREL (VDC + RFID + ERP + LPS) integration to generate a real-time updated cost + physical...

  11. Measuring time and risk preferences: Reliability, stability, domain specificity

    NARCIS (Netherlands)

    Wölbert, E.M.; Riedl, A.M.

    2013-01-01

    To accurately predict behavior economists need reliable measures of individual time preferences and attitudes toward risk and typically need to assume stability of these characteristics over time and across decision domains. We test the reliability of two choice tasks for eliciting discount rates,

  12. Reliability and Availability Evaluation of Wireless Sensor Networks for Industrial Applications

    Science.gov (United States)

    Silva, Ivanovitch; Guedes, Luiz Affonso; Portugal, Paulo; Vasques, Francisco

    2012-01-01

    Wireless Sensor Networks (WSNs) currently represent the best candidate to be adopted as the communication solution for the last-mile connection in process control and monitoring applications in industrial environments. Most of these applications have stringent dependability (reliability and availability) requirements, as a system failure may result in economic losses, put people in danger or lead to environmental damage. Among the different types of faults that can lead to a system failure, permanent faults on network devices have a major impact. They can hamper communications over long periods of time and consequently disturb, or even disable, control algorithms. The lack of a structured approach enabling the evaluation of permanent faults prevents system designers from optimizing decisions that minimize these occurrences. In this work we propose a methodology based on the automatic generation of a fault tree to evaluate the reliability and availability of Wireless Sensor Networks when permanent faults occur on network devices. The proposal supports any topology, different levels of redundancy, network reconfigurations, criticality of devices and arbitrary failure conditions. The proposed methodology is particularly suitable for the design and validation of Wireless Sensor Networks when trying to optimize their reliability and availability requirements. PMID:22368497
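
    At its simplest, the availability side of such an evaluation reduces a fault tree over device failures to series/parallel combinations of steady-state availabilities A = MTTF/(MTTF + MTTR). The topology and the MTTF/MTTR figures below are assumed for illustration (Python):

        # Steady-state availability A = MTTF / (MTTF + MTTR) per device (hours, assumed)
        def avail(mttf, mttr):
            return mttf / (mttf + mttr)

        sensor = avail(8760.0, 24.0)     # field node
        router1 = avail(17520.0, 12.0)   # two redundant routing devices
        router2 = avail(17520.0, 12.0)
        gateway = avail(26280.0, 8.0)

        routing = 1 - (1 - router1) * (1 - router2)   # parallel: either path suffices
        path = sensor * routing * gateway             # series along the sensing path

        print(f"end-to-end path availability: {path:.6f}")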

  13. effect of uncertainty on the fatigue reliability of reinforced concrete ...

    African Journals Online (AJOL)

    In this paper, a time-variant fatigue reliability analysis of a reinforced concrete bridge deck, and the effect of uncertainty on its serviceability, was carried out. A simply supported 15 m bridge deck was specifically used for the investigation. Mathematical models were developed and the uncertainties in structural resistance, applied ...

  14. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    Science.gov (United States)

    Kanjilal, Oindrila; Manohar, C. S.

    2017-07-01

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and, the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.

  15. Procedure for Application of Software Reliability Growth Models to NPP PSA

    International Nuclear Information System (INIS)

    Son, Han Seong; Kang, Hyun Gook; Chang, Seung Cheol

    2009-01-01

    As the use of software increases at nuclear power plants (NPPs), the necessity for including software reliability and/or safety into the NPP Probabilistic Safety Assessment (PSA) rises. This work proposes an application procedure of software reliability growth models (RGMs), which are most widely used to quantify software reliability, to NPP PSA. Through the proposed procedure, it can be determined if a software reliability growth model can be applied to the NPP PSA before its real application. The procedure proposed in this work is expected to be very helpful for incorporating software into NPP PSA

  16. Imaging a Time-variant Earthquake Focal Region along an Interplate Boundary

    Science.gov (United States)

    Tsuruga, K.; Kasahara, J.; Hasada, Y.; Fujii, N.

    2010-12-01

    We show a preliminary result of an attempt to detect a time-variant earthquake focal region along an interplate boundary by means of a new imaging method, through numerical simulation. Remarkable seismic reflections from the interplate boundaries of a subducting oceanic plate have been observed in the Japan Trench (Mochizuki et al., 2005) and in the Nankai Trough (Iidaka et al., 2003). The strong seismic reflections existing in currently aseismic zones suggest the existence of fluid along the subduction boundary, and they are considered closely related to a future huge earthquake. Seismic ACROSS has the potential to monitor changes of the transfer function along propagating ray paths by repeatedly using accurately controlled transmission and reception of steady continuous signals (Kumazawa et al., 2000). If the physical state in a focal region along the interplate boundary changed sufficiently in time and space, for instance by increasing or decreasing fluid flow, we could detect differences in the amplitude and/or travel time of particular reflection phases from the time-variant target region. In this study, we first investigated the seismic characteristics of seismograms and their differences before and after the change of a target region through a numerical simulation. Then, as one of the trials, we attempted to image such a time-variant target region by applying a finite-difference back-propagation technique in time and space to the differences of waveforms (after Kasahara et al., 2010). We used a 2-D seismic velocity model of central Japan (Tsuruga et al., 2005), assuming a time-variant target region with a 200-m thickness along the subducting Philippine Sea plate at 30 km depth. Seismograms were calculated at a 500-m interval along a 260-km-long profile by using FDM software (Larsen, 2000), for the case that P- and S-wave velocities (Vp and Vs) in the target region decreased by about 30% from before to after the change (e.g., Vp=3

  17. Conformal prediction for reliable machine learning theory, adaptations and applications

    CERN Document Server

    Balasubramanian, Vineeth; Vovk, Vladimir

    2014-01-01

    The conformal predictions framework is a recent development in machine learning that can associate a reliable measure of confidence with a prediction in any real-world pattern recognition application, including risk-sensitive applications such as medical diagnosis, face recognition, and financial risk prediction. Conformal Predictions for Reliable Machine Learning: Theory, Adaptations and Applications captures the basic theory of the framework, demonstrates how to apply it to real-world problems, and presents several adaptations, including active learning, change detection, and anomaly detection
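
    The book's specific algorithms are not reproduced in this record; as a minimal generic illustration of the framework, the following sketch implements split conformal regression with absolute-residual nonconformity scores on synthetic data (any point predictor would do; numpy >= 1.22 is assumed for the `method` argument of `np.quantile`).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression data (illustrative only)
X = rng.uniform(-3.0, 3.0, size=500)
y = np.sin(X) + 0.2 * rng.standard_normal(500)

# Split into a proper training set and a calibration set
X_tr, y_tr, X_cal, y_cal = X[:300], y[:300], X[300:], y[300:]

# Any point predictor works; here, a degree-5 polynomial fit
coef = np.polyfit(X_tr, y_tr, deg=5)
predict = lambda x: np.polyval(coef, x)

# Nonconformity scores on the calibration set: absolute residuals
scores = np.abs(y_cal - predict(X_cal))

# Conformal quantile for miscoverage level alpha
alpha = 0.1
n = len(scores)
q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n),
                method="higher")

# Interval with finite-sample marginal coverage >= 1 - alpha
x_new = 1.5
print(f"90% conformal interval at x={x_new}: "
      f"[{predict(x_new) - q:.3f}, {predict(x_new) + q:.3f}]")
```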

  18. Probabilistic risk assessment for a loss of coolant accident in McMaster Nuclear Reactor and application of reliability physics model for modeling human reliability

    Science.gov (United States)

    Ha, Taesung

    A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed, including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequences identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for the human error probability (HEP) estimation of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The human error probability (HEP) for the core relocation was estimated from these two competing quantities: phenomenological time and operators' performance time. The sensitivity of each probability distribution in human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and the parameters of that model. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability and its potential
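
    As a minimal sketch of the competing-random-variables idea, the snippet below draws hypothetical Weibull phenomenological times and lognormal operator performance times (stand-ins for the fitted and interview-based distributions in the study) and estimates the HEP as the probability that the operators act too late.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical distributions; the study fits the phenomenological time from
# simulation and the performance time from operator interviews.
t_phen = 40.0 * rng.weibull(2.0, n)                           # minutes available
t_perf = rng.lognormal(mean=np.log(15.0), sigma=0.5, size=n)  # minutes needed

# Human error probability: performance time exceeds phenomenological time
hep = np.mean(t_perf > t_phen)
print(f"estimated HEP ~ {hep:.4f}")
```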

  19. Mission reliability of semi-Markov systems under generalized operational time requirements

    International Nuclear Information System (INIS)

    Wu, Xiaoyue; Hillston, Jane

    2015-01-01

    Mission reliability of a system depends on specific criteria for mission success. To evaluate mission systems that need not operate normally for the whole mission time, two types of mission reliability are studied. The first type corresponds to the requirement that the system remain operational continuously for a minimum time within the given mission time interval, while the second corresponds to the requirement that the total operational time of the system within the mission time window be greater than a given value. Based on Markov renewal properties, matrix integral equations are derived for semi-Markov systems. Numerical algorithms and a simulation procedure are provided for both types of mission reliability. Two examples are used for illustration purposes. One is a one-unit repairable Markov system, and the other is a cold standby semi-Markov system consisting of two components. By the proposed approaches, the mission reliability of systems with time redundancy can be more precisely estimated, avoiding possible unnecessary redundancy of system resources. - Highlights: • Two types of mission reliability under generalized requirements are defined. • Equations for both types of reliability are derived for semi-Markov systems. • Numerical methods are given for solving both types of reliability. • Simulation procedure is given for estimating both types of reliability. • Verification of the numerical methods is given by the results of simulation
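
    For the one-unit repairable Markov example, both reliability types are straightforward to estimate by simulation; the sketch below uses illustrative exponential failure/repair rates and thresholds (all values assumed, not taken from the paper).

```python
import numpy as np

rng = np.random.default_rng(3)

lam, mu = 0.1, 1.0   # failure and repair rates (illustrative)
T = 10.0             # mission time window
tau_min = 4.0        # type 1: some continuous up period must reach tau_min
total_min = 8.0      # type 2: cumulative up time must reach total_min

def one_mission():
    t, up = 0.0, True
    longest_up = total_up = 0.0
    while t < T:
        dwell = rng.exponential(1.0 / (lam if up else mu))
        dwell = min(dwell, T - t)       # truncate the last sojourn at T
        if up:
            total_up += dwell
            longest_up = max(longest_up, dwell)
        t += dwell
        up = not up
    return longest_up >= tau_min, total_up >= total_min

res = np.array([one_mission() for _ in range(50_000)])
print("type 1 (continuous) mission reliability :", res[:, 0].mean())
print("type 2 (cumulative) mission reliability :", res[:, 1].mean())
```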

  20. Fundamentals of reliability engineering applications in multistage interconnection networks

    CERN Document Server

    Gunawan, Indra

    2014-01-01

    This book presents the fundamentals of reliability engineering with applications to evaluating the reliability of multistage interconnection networks. The first part of the book introduces the concept of reliability engineering, elements of probability theory, probability distributions, availability and data analysis. The second part of the book provides an overview of parallel/distributed computing, network design considerations, and more. The book comprehensively covers reliability engineering methods and their practical aspects in interconnection network systems. Students, engineers, researchers, and managers will find this book a valuable reference source.

  1. Application of Fault Tree Analysis for Estimating Temperature Alarm Circuit Reliability

    International Nuclear Information System (INIS)

    El-Shanshoury, A.I.; El-Shanshoury, G.I.

    2011-01-01

    Fault Tree Analysis (FTA) is one of the most widely-used methods in system reliability analysis. It is a graphical technique that provides a systematic description of the combinations of possible occurrences in a system which can result in an undesirable outcome. The presented paper deals with the application of the FTA method to analyzing a temperature alarm circuit. The critical failure of this circuit is failing to alarm when the temperature exceeds a certain limit. In order for the circuit to be safe, a detailed analysis of the faults causing circuit failure is performed by constructing a fault tree diagram (qualitative analysis). Calculations of quantitative circuit reliability parameters such as Failure Rate (FR) and Mean Time Between Failures (MTBF) are also performed using the Relex 2009 computer program. Benefits of FTA are assessing system reliability or safety during operation, improving understanding of the system, and identifying root causes of equipment failures
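
    The circuit's actual tree is given in the paper; as a generic illustration of the quantitative step, the sketch below combines hypothetical, independent basic-event probabilities through OR/AND gates and converts a constant failure rate into MTBF.

```python
# Minimal fault-tree quantification sketch (hypothetical gate structure and
# basic-event probabilities; events are assumed independent).

def p_or(*ps):
    """OR gate: P = 1 - prod(1 - p_i)."""
    out = 1.0
    for p in ps:
        out *= 1.0 - p
    return 1.0 - out

def p_and(*ps):
    """AND gate: P = prod(p_i)."""
    out = 1.0
    for p in ps:
        out *= p
    return out

# Hypothetical basic events: sensor, comparator, buzzer, redundant supplies
p_sensor, p_comp, p_buzzer, p_psu_a, p_psu_b = 1e-3, 5e-4, 2e-3, 1e-4, 1e-4

# Top event "no alarm on over-temperature": any series element fails,
# or both redundant power supplies fail together
p_top = p_or(p_sensor, p_comp, p_buzzer, p_and(p_psu_a, p_psu_b))
print(f"P(top event) = {p_top:.3e}")

# For a constant failure rate, MTBF is simply its reciprocal
failure_rate = 4.2e-6             # failures per hour (illustrative)
print(f"MTBF = {1.0 / failure_rate:.0f} hours")
```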

  2. On reliable discovery of molecular signatures

    Directory of Open Access Journals (Sweden)

    Björkegren Johan

    2009-01-01

    Background: Molecular signatures are sets of genes, proteins, genetic variants or other variables that can be used as markers for a particular phenotype. Reliable signature discovery methods could yield valuable insight into cell biology and mechanisms of human disease. However, it is currently not clear how to control error rates such as the false discovery rate (FDR) in signature discovery. Moreover, signatures for cancer gene expression have been shown to be unstable, that is, difficult to replicate in independent studies, casting doubts on their reliability. Results: We demonstrate that with modern prediction methods, signatures that yield accurate predictions may still have a high FDR. Further, we show that even signatures with low FDR may fail to replicate in independent studies due to limited statistical power. Thus, neither stability nor predictive accuracy are relevant when FDR control is the primary goal. We therefore develop a general statistical hypothesis testing framework that for the first time provides FDR control for signature discovery. Our method is demonstrated to be correct in simulation studies. When applied to five cancer data sets, the method was able to discover molecular signatures with 5% FDR in three cases, while two data sets yielded no significant findings. Conclusion: Our approach enables reliable discovery of molecular signatures from genome-wide data with current sample sizes. The statistical framework developed herein is potentially applicable to a wide range of prediction problems in bioinformatics.
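
    The paper's testing framework itself is not given in this record; as a generic illustration of FDR control in a discovery setting, here is the standard Benjamini-Hochberg procedure applied to toy per-feature p-values.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of discoveries under Benjamini-Hochberg FDR control."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    # Largest k with p_(k) <= alpha * k / m; reject the k smallest p-values
    passed = p[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask

rng = np.random.default_rng(4)
pvals = np.concatenate([rng.uniform(size=95),           # null features
                        rng.uniform(0, 1e-4, size=5)])  # strong signals
print("discoveries:", benjamini_hochberg(pvals, alpha=0.05).sum())
```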

  3. Impact of data source on travel time reliability assessment.

    Science.gov (United States)

    2014-08-01

    Travel time reliability measures are becoming an increasingly important input to mobility and congestion management studies. In the case of the Maryland State Highway Administration, reliability measures are key elements in the agency's Annual ...

  4. Joint interval reliability for Markov systems with an application in transmission line reliability

    International Nuclear Information System (INIS)

    Csenki, Attila

    2007-01-01

    We consider Markov reliability models whose finite state space is partitioned into the set of up states U and the set of down states D. Given a collection of k disjoint time intervals I_l = [t_l, t_l + x_l], l = 1,...,k, the joint interval reliability is defined as the probability of the system being in U for all time instances in I_1 ∪ ... ∪ I_k. A closed form expression is derived here for the joint interval reliability for this class of models. The result is applied to power transmission lines in a two-state fluctuating environment. We use the Linux versions of the free packages Maxima and Scilab in our implementation for symbolic and numerical work, respectively.
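
    The closed form itself is not reproduced in this record; one standard way to compute the quantity numerically is to alternate unrestricted evolution with the full generator Q between intervals and taboo evolution with the U-restricted block Q_UU inside them. The sketch below does this for a two-state model with illustrative rates.

```python
import numpy as np
from scipy.linalg import expm

# Two-state example: state 0 = up (U), state 1 = down (D); illustrative rates
lam, mu = 0.2, 1.0
Q = np.array([[-lam, lam],
              [mu, -mu]])
U = [0]
pi0 = np.array([1.0, 0.0])        # start in the up state

def joint_interval_reliability(Q, U, pi0, intervals):
    """P(system is in U at every instant of each [t, t+x]); the disjoint
    intervals must be sorted by start time."""
    n = Q.shape[0]
    mask = np.zeros(n); mask[U] = 1.0
    Quu = Q[np.ix_(U, U)]
    v, now = pi0.copy(), 0.0
    for t, x in intervals:
        v = v @ expm(Q * (t - now))   # free evolution up to the interval start
        v = v * mask                  # must be in U when the interval opens
        vU = v[U] @ expm(Quu * x)     # taboo evolution: stay in U throughout
        v = np.zeros(n); v[U] = vU
        now = t + x
    return v.sum()

print(joint_interval_reliability(Q, U, pi0, [(1.0, 0.5), (3.0, 0.5)]))
```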

  5. Benchmarking distributed data warehouse solutions for storing genomic variant information

    Science.gov (United States)

    Wiewiórka, Marek S.; Wysakowicz, Dawid P.; Okoniewski, Michał J.

    2017-01-01

    Genomic-based personalized medicine encompasses storing, analysing and interpreting genomic variants as its central issues. At a time when thousands of patients' sequenced exomes and genomes are becoming available, there is a growing need for efficient database storage and querying. The answer could be the application of modern distributed storage systems and query engines. However, the application of large genomic variant databases to this problem has not been sufficiently explored so far in the literature. To investigate the effectiveness of modern columnar storage [column-oriented Database Management System (DBMS)] and query engines, we have developed a prototypic genomic variant data warehouse, populated with large generated content of genomic variants and phenotypic data. Next, we have benchmarked the performance of a number of combinations of distributed storages and query engines on a set of SQL queries that address biological questions essential for both research and medical applications. In addition, a non-distributed, analytical database (MonetDB) has been used as a baseline. Comparison of query execution times confirms that distributed data warehousing solutions outperform classic relational DBMSs. Moreover, pre-aggregation and further denormalization of data, which reduce the number of distributed join operations, significantly improve query performance by several orders of magnitude. Most of the distributed back-ends offer good performance for complex analytical queries, while the Optimized Row Columnar (ORC) format paired with Presto and Parquet with Spark 2 query engines provide, on average, the lowest execution times. Apache Kudu, on the other hand, is the only solution that guarantees sub-second performance for simple genome range queries returning a small subset of data, where low-latency response is expected, while still offering decent performance for running analytical queries. In summary, research and clinical applications that require

  6. Real Time Grid Reliability Management 2005

    Energy Technology Data Exchange (ETDEWEB)

    Eto, Joe; Eto, Joe; Lesieutre, Bernard; Lewis, Nancy Jo; Parashar, Manu

    2008-07-07

    The increased need to manage California's electricity grid in real time is a result of the ongoing transition from a system operated by vertically-integrated utilities serving native loads to one operated by an independent system operator supporting competitive energy markets. During this transition period, the traditional approach to reliability management -- construction of new transmission lines -- has not been pursued due to unresolved issues related to the financing and recovery of transmission project costs. In the absence of investments in new transmission infrastructure, the best strategy for managing reliability is to equip system operators with better real-time information about actual operating margins so that they can better understand and manage the risk of operating closer to the edge. A companion strategy is to address known deficiencies in offline modeling tools that are needed to ground the use of improved real-time tools. This project: (1) developed and conducted first-ever demonstrations of two prototype real-time software tools for voltage security assessment and phasor monitoring; and (2) prepared a scoping study on improving load and generator response models. Additional funding through two separate subsequent work authorizations has already been provided to build upon the work initiated in this project.

  7. Reliability calculations

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1986-03-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose of improving the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used especially in the analysis of very complex systems. In order to increase the applicability of the programs, variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied and procedures for implementation of importance sampling are suggested. (author)

  8. Optimal reliability design for over-actuated systems based on the MIT rule: Application to an octocopter helicopter testbed

    International Nuclear Information System (INIS)

    Chamseddine, Abbas; Theilliol, Didier; Sadeghzadeh, Iman; Zhang, Youmin; Weber, Philippe

    2014-01-01

    This paper addresses the problem of optimal reliability in over-actuated systems. Overloading an actuator decreases its overall lifetime and reduces its average performance over a long time. Therefore, performance and reliability are two conflicting requirements. While appropriate reliability is related to average loads, good performance is related to fast response and sufficient loads generated by actuators. Actuator redundancy allows us to address both performance and reliability at the same time by properly allocating desired loads among redundant actuators. The main contribution of this paper is the on-line optimization of the overall plant reliability according to the performance objective using an MIT (Massachusetts Institute of Technology) rule-based method. The effectiveness of the proposed method is illustrated through an experimental application to an octocopter helicopter testbed
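
    The MIT rule referenced here is the classic gradient adaptation dθ/dt = -γ·e·(∂e/∂θ). The paper applies it to reliability-aware control allocation on the octocopter; as a much simpler stand-alone illustration, the sketch below adapts a feedforward gain so a first-order plant with unknown gain tracks a reference model (all values assumed).

```python
# Plant y' = -a*y + b*u with unknown gain b; model ym' = -a*ym + bm*r.
# Control u = theta*r; the matched gain is theta* = bm / b = 4.
a, b, bm = 2.0, 0.5, 2.0
gamma, dt, T = 1.0, 1e-3, 50.0

y = ym = theta = 0.0
for k in range(int(T / dt)):
    t = k * dt
    r = 1.0 if t % 4.0 < 2.0 else -1.0     # square-wave reference
    y += (-a * y + b * theta * r) * dt     # plant (Euler step)
    ym += (-a * ym + bm * r) * dt          # reference model
    e = y - ym
    theta += -gamma * e * ym * dt          # MIT rule: de/dtheta is prop. to ym

print(f"adapted gain theta = {theta:.2f} (ideal 4.0)")
```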

  9. Joint interval reliability for Markov systems with an application in transmission line reliability

    Energy Technology Data Exchange (ETDEWEB)

    Csenki, Attila [School of Computing and Mathematics, University of Bradford, Bradford, West Yorkshire, BD7 1DP (United Kingdom)]. E-mail: a.csenki@bradford.ac.uk

    2007-06-15

    We consider Markov reliability models whose finite state space is partitioned into the set of up states U and the set of down states D. Given a collection of k disjoint time intervals I_l = [t_l, t_l + x_l], l = 1,...,k, the joint interval reliability is defined as the probability of the system being in U for all time instances in I_1 ∪ ... ∪ I_k. A closed form expression is derived here for the joint interval reliability for this class of models. The result is applied to power transmission lines in a two-state fluctuating environment. We use the Linux versions of the free packages Maxima and Scilab in our implementation for symbolic and numerical work, respectively.

  10. Mixing Bayes and empirical Bayes inference to anticipate the realization of engineering concerns about variant system designs

    International Nuclear Information System (INIS)

    Quigley, John; Walls, Lesley

    2011-01-01

    Mixing Bayes and Empirical Bayes inference provides reliability estimates for variant system designs by using relevant failure data - observed and anticipated - about engineering changes arising due to modification and innovation. A coherent inference framework is proposed to predict the realization of engineering concerns during product development so that informed decisions can be made about the system design and the analysis conducted to prove reliability. The proposed method involves combining subjective prior distributions for the number of engineering concerns with empirical priors for the non-parametric distribution of time to realize these concerns in such a way that we can cross-tabulate classes of concerns to failure events within time partitions at an appropriate level of granularity. To support efficient implementation, a computationally convenient hypergeometric approximation is developed for the counting distributions appropriate to our underlying stochastic model. The accuracy of our approximation over first-order alternatives is examined, and demonstrated, through an evaluation experiment. An industrial application illustrates model implementation and shows how estimates can be updated using information arising during development test and analysis.

  11. Software reliability for safety-critical applications

    International Nuclear Information System (INIS)

    Everett, B.; Musa, J.

    1994-01-01

    In this talk, the authors address the question "Can Software Reliability Engineering measurement and modeling techniques be applied to safety-critical applications?" Quantitative techniques have long been applied in engineering hardware components of safety-critical applications. The authors have seen a growing acceptance and use of quantitative techniques in engineering software systems but a continuing reluctance to use such techniques in safety-critical applications. The general case posed against using quantitative techniques for software components runs along the following lines: safety-critical applications should be engineered such that catastrophic failures occur less frequently than one in a billion hours of operation; current software measurement/modeling techniques rely on failure history data collected during testing; one would have to accumulate over a billion operational hours to verify failure rate objectives of about one per billion hours

  12. Rater reliability and construct validity of a mobile application for posture analysis.

    Science.gov (United States)

    Szucs, Kimberly A; Brown, Elena V Donoso

    2018-01-01

    [Purpose] Measurement of posture is important for those with a clinical diagnosis as well as researchers aiming to understand the impact of faulty postures on the development of musculoskeletal disorders. A reliable, cost-effective and low-tech posture measure may be beneficial for research and clinical applications. The purpose of this study was to determine rater reliability and construct validity of a posture screening mobile application in healthy young adults. [Subjects and Methods] Pictures of subjects were taken in three standing positions. Two raters independently digitized the static standing posture image twice. The app calculated posture variables, including sagittal and coronal plane translations and angulations. Intra- and inter-rater reliability were calculated using the appropriate ICC models for complete agreement. Construct validity was determined through comparison of known groups using repeated measures ANOVA. [Results] Intra-rater reliability ranged from 0.71 to 0.99. Inter-rater reliability was good to excellent for all translations. ICCs were stronger for translations than for angulations. The construct validity analysis found that the app was able to detect the change in the four variables selected. [Conclusion] The posture mobile application has demonstrated strong rater reliability and preliminary evidence of construct validity. This application may have utility in clinical and research settings.

  13. Analysis of travel time reliability on Indiana interstates.

    Science.gov (United States)

    2009-09-15

    Travel-time reliability is a key performance measure in any transportation system. It is a measure of the quality of travel time experienced by transportation system users and reflects the efficiency of the transportation system to serve citizens, bu...

  14. Value of Travel Time Reliability: A review of current evidence

    OpenAIRE

    Carlos Carrion; David Levinson

    2010-01-01

    Travel time reliability is a fundamental factor in travel behavior. It represents the temporal uncertainty experienced by users in their movement between any two nodes in a network. The importance of travel time reliability depends on the penalties incurred by the users. In road networks, travelers consider the existence of trip travel time uncertainty in different choice situations (departure time, route, mode, and others). In this paper, a systematic review of the current state of research i...

  15. A Tutorial on Nonlinear Time-Series Data Mining in Engineering Asset Health and Reliability Prediction: Concepts, Models, and Algorithms

    Directory of Open Access Journals (Sweden)

    Ming Dong

    2010-01-01

    The primary objective of engineering asset management is to optimize assets' service delivery potential and to minimize the related risks and costs over their entire life through the development and application of asset health and usage management, in which health and reliability prediction plays an important role. In real-life situations where an engineering asset operates under dynamic operational and environmental conditions, the lifetime of an engineering asset is generally described as monitored nonlinear time-series data and subject to high levels of uncertainty and unpredictability. It has been proved that the application of data mining techniques is very useful for extracting relevant features which can be used as parameters for asset diagnosis and prognosis. In this paper, a tutorial on nonlinear time-series data mining in engineering asset health and reliability prediction is given. Besides providing an overview of health and reliability prediction techniques for engineering assets, this tutorial focuses on concepts, models, algorithms, and applications of hidden Markov models (HMMs) and hidden semi-Markov models (HSMMs) in engineering asset health prognosis, which are representatives of recent engineering asset health prediction techniques.

  16. Establishing monitoring programs for travel time reliability.

    Science.gov (United States)

    2014-01-01

    Within the second Strategic Highway Research Program (SHRP 2), Project L02 focused on creating a suite of methods by which transportation agencies could monitor and evaluate travel time reliability. Creation of the methods also produced an improved u...

  17. Reliability of Bluetooth Technology for Travel Time Estimation

    DEFF Research Database (Denmark)

    Araghi, Bahar Namaki; Olesen, Jonas Hammershøj; Krishnan, Rajesh

    2015-01-01

    ...However, their corresponding impacts on accuracy and reliability of estimated travel time have not been evaluated. In this study, a controlled field experiment is conducted to collect both Bluetooth and GPS data for 1000 trips to be used as the basis for evaluation. Data obtained by GPS logger is used to calculate actual travel time, referred to as ground truth, and to geo-code the Bluetooth detection events. In this setting, reliability is defined as the percentage of devices captured per trip during the experiment. It is found that, on average, Bluetooth-enabled devices will be detected 80% of the time... -range antennae detect Bluetooth-enabled devices in a closer location to the sensor, thus providing a more accurate travel time estimate. However, the smaller the size of the detection zone, the lower the penetration rate, which could itself influence the accuracy of estimates. Therefore, there has to be a trade...

  18. Development of RBDGG Solver and Its Application to System Reliability Analysis

    International Nuclear Information System (INIS)

    Kim, Man Cheol

    2010-01-01

    For the purpose of making system reliability analysis easier and more intuitive, the RBDGG (Reliability Block Diagram with General Gates) methodology was introduced as an extension of the conventional reliability block diagram. The advantage of the RBDGG methodology is that the structure of an RBDGG model is very similar to the actual structure of the analyzed system, and therefore the modeling of a system for system reliability and unavailability analysis becomes very intuitive and easy. The main idea behind the development of the RBDGG methodology is similar to that of the RGGG (Reliability Graph with General Gates) methodology, which is an extension of a conventional reliability graph. The newly proposed methodology is now implemented in a software tool, RBDGG Solver. RBDGG Solver was developed as a WIN32 console application. RBDGG Solver receives information on the failure modes and failure probabilities of each component in the system, along with the connection structure and connection logics among the components in the system. Based on the received information, RBDGG Solver automatically generates a system reliability analysis model for the system, and then provides the analysis results. In this paper, the application of RBDGG Solver to the reliability analysis of an example system and verification of the calculation results are presented to demonstrate how RBDGG Solver is used for system reliability analysis

  19. Applications of majorization and Schur functions in reliability and life testing

    International Nuclear Information System (INIS)

    Proschan, F.

    1975-01-01

    This is an expository paper presenting basic definitions and properties of majorization and Schur functions, and displaying a variety of applications of these concepts in reliability prediction and modelling, and in reliability inference and life testing

  20. Commercial Off-The-Shelf (COTS) Electronics Reliability for Space Applications

    Science.gov (United States)

    Pellish, Jonathan

    2018-01-01

    This presentation describes the accelerating use of Commercial Off-The-Shelf (COTS) parts in space applications. It discusses component reliability and threats in the context of the mission, environment, application, and lifetime; provides an overview of traditional approaches applied to COTS parts in flight applications; and shows challenges and potential paths forward for COTS systems in flight applications: it's all about data!

  1. Measuring older adults' sedentary time: reliability, validity, and responsiveness.

    Science.gov (United States)

    Gardiner, Paul A; Clark, Bronwyn K; Healy, Genevieve N; Eakin, Elizabeth G; Winkler, Elisabeth A H; Owen, Neville

    2011-11-01

    With evidence that prolonged sitting has deleterious health consequences, decreasing sedentary time is a potentially important preventive health target. High-quality measures, particularly for use with older adults, who are the most sedentary population group, are needed to evaluate the effect of sedentary behavior interventions. We examined the reliability, validity, and responsiveness to change of a self-report sedentary behavior questionnaire that assessed time spent in behaviors common among older adults: watching television, computer use, reading, socializing, transport and hobbies, and a summary measure (total sedentary time). In the context of a sedentary behavior intervention, nonworking older adults (n = 48, age = 73 ± 8 yr (mean ± SD)) completed the questionnaire on three occasions during a 2-wk period (7 d between administrations) and wore an accelerometer (ActiGraph model GT1M) for two periods of 6 d. Test-retest reliability (for the individual items and the summary measure) and validity (self-reported total sedentary time compared with accelerometer-derived sedentary time) were assessed during the 1-wk preintervention period, using Spearman (ρ) correlations and 95% confidence intervals (CI). Responsiveness to change after the intervention was assessed using the responsiveness statistic (RS). Test-retest reliability was excellent for television viewing time (ρ (95% CI) = 0.78 (0.63-0.89)), computer use (ρ (95% CI) = 0.90 (0.83-0.94)), and reading (ρ (95% CI) = 0.77 (0.62-0.86)); acceptable for hobbies (ρ (95% CI) = 0.61 (0.39-0.76)); and poor for socializing and transport (ρ < 0.45). Total sedentary time had acceptable test-retest reliability (ρ (95% CI) = 0.52 (0.27-0.70)) and validity (ρ (95% CI) = 0.30 (0.02-0.54)). Self-report total sedentary time was similarly responsive to change (RS = 0.47) as accelerometer-derived sedentary time (RS = 0.39). The summary measure of total sedentary time has good repeatability and modest validity and is

  2. Design optimization for security-and safety-critical distributed real-time applications

    DEFF Research Database (Denmark)

    Jiang, Wei; Pop, Paul; Jiang, Ke

    2016-01-01

    In this paper, we are interested in the design of real-time applications with security, safety, timing, and energy requirements. The applications are scheduled with cyclic scheduling, and are mapped on distributed heterogeneous architectures. Cryptographic services are deployed to satisfy security requirements on confidentiality of messages, task replication is used to enhance system reliability, and dynamic voltage and frequency scaling is used for energy efficiency of tasks. It is challenging to address these factors simultaneously, e.g., better security protections need more computing resources and consume more energy, while lower voltages and frequencies may impair schedulability and security, and also lead to reliability degradation. We introduce a vulnerability based method to quantify the security performance of communications on distributed systems. We then focus on determining the appropriate...

  3. Annotating pathogenic non-coding variants in genic regions.

    Science.gov (United States)

    Gelfman, Sahar; Wang, Quanli; McSweeney, K Melodi; Ren, Zhong; La Carpia, Francesca; Halvorsen, Matt; Schoch, Kelly; Ratzon, Fanni; Heinzen, Erin L; Boland, Michael J; Petrovski, Slavé; Goldstein, David B

    2017-08-09

    Identifying the underlying causes of disease requires accurate interpretation of genetic variants. Current methods ineffectively capture pathogenic non-coding variants in genic regions, with the result that synonymous and intronic variants are overlooked when searching for disease risk. Here we present the Transcript-inferred Pathogenicity (TraP) score, which uses sequence context alterations to reliably identify non-coding variation that causes disease. High TraP scores single out extremely rare variants with lower minor allele frequencies than missense variants. TraP accurately distinguishes known pathogenic and benign variants in synonymous (AUC = 0.88) and intronic (AUC = 0.83) public datasets, dismissing benign variants with exceptionally high specificity. TraP analysis of 843 exomes from epilepsy family trios identifies synonymous variants in known epilepsy genes, thus pinpointing risk factors of disease from non-coding sequence data. TraP outperforms leading methods in identifying non-coding variants that are pathogenic and is therefore a valuable tool for use in gene discovery and the interpretation of personal genomes. While non-coding synonymous and intronic variants are often not under strong selective constraint, they can be pathogenic through affecting splicing or transcription. Here, the authors develop a score that uses sequence context alterations to predict pathogenicity of synonymous and non-coding genetic variants, and provide a web server of pre-computed scores.

  4. Reliability Calculations

    DEFF Research Database (Denmark)

    Petersen, Kurt Erling

    1986-01-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose of improving the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used especially in the analysis of very complex systems.

  5. The reliable solution and computation time of variable parameters Logistic model

    OpenAIRE

    Pengfei, Wang; Xinnong, Pan

    2016-01-01

    The reliable computation time (RCT, marked as Tc) under double-precision computation of a variable-parameter logistic map (VPLM) is studied. First, using the proposed method, reliable solutions for the logistic map are obtained. Second, for a time-dependent, non-stationary-parameter VPLM, 10000 samples of reliable experiments are constructed, and the mean Tc is then computed. The results indicate that for each different initial value, the Tcs of the VPLM are generally different...
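
    As a rough fixed-parameter illustration of the "reliable computation time" notion (the paper's variable-parameter statistics over 10000 samples are richer), one can iterate the logistic map in float64 alongside a 50-digit reference and record the step at which the two trajectories separate:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50            # high-precision reference trajectory

r_f, x_f = 4.0, 0.1               # float64 (double precision)
r_d, x_d = Decimal(4), Decimal("0.1")

tol = 1e-6
for n in range(1, 200):
    x_f = r_f * x_f * (1.0 - x_f)         # logistic map x <- r*x*(1-x)
    x_d = r_d * x_d * (1 - x_d)
    if abs(x_f - float(x_d)) > tol:
        print(f"double precision ceases to be reliable at iteration {n}")
        break
```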

  6. Addressing the problem of the relevance of reliability data to varied applications

    International Nuclear Information System (INIS)

    McIntyre, P.J.; Gibson, I.K.

    1989-01-01

    Reliability data is collected for many reasons on a wide range of components and applications. Sometimes data is collected for a specific purpose whilst in other situations data may be collected simply to provide an available pool of historical data. Data can also be extracted from information that was gathered without recognition that it could be adapted for use as reliability data at a later stage. It is not surprising that there should be significant differences in the strengths and weaknesses of data obtained in such different circumstances. This paper describes work undertaken to investigate how to make best use of available data to provide specific and reliable predictions of valve reliability for nuclear power station applications. (orig.)

  7. Go-flow: a reliability analysis methodology applicable to piping system

    International Nuclear Information System (INIS)

    Matsuoka, T.; Kobayashi, M.

    1985-01-01

    Since the completion of the Reactor Safety Study, the use of probabilistic risk assessment techniques has become more widespread in the nuclear community. Several analytical methods are used for the reliability analysis of nuclear power plants. The GO methodology is one of these methods. Using the GO methodology, the authors performed a reliability analysis of the emergency decay heat removal system of the nuclear ship Mutsu, in order to examine its applicability to piping systems. Through this analysis, the authors identified some disadvantages of the GO methodology. In the GO methodology, the signal is an on-to-off or off-to-on signal; the GO method therefore finds the time point at which the state of a system changes, and cannot treat a system whose state changes as off-on-off. Several computer runs are required to obtain the time-dependent failure probability of a system. In order to overcome these disadvantages, the authors propose a new analytical methodology: GO-FLOW. In GO-FLOW, the modeling method (chart) and the calculation procedure are similar to those in the GO methodology, but the meaning of signal and time point, and the definitions of operators, are essentially different. In the paper, the GO-FLOW methodology is explained and two examples of analysis by GO-FLOW are given

  8. The effects of spatial variability of the aggressiveness of soil on system reliability of corroding underground pipelines

    International Nuclear Information System (INIS)

    Sahraoui, Yacine; Chateauneuf, Alaa

    2016-01-01

    In this paper, a probabilistic methodology is presented for assessing the time-variant reliability of corroded underground pipelines subjected to space-variant soil aggressiveness. The Karhunen-Loève expansion is used to model the spatial variability of soil as a correlated stochastic field. The pipeline is considered as a series system for which the component and system failure probabilities are computed by Monte Carlo simulations. The probabilistic model provides a realistic time and space modelling of stochastic variations, leading to appropriate estimation of the lifetime distribution. The numerical analyses allow us to investigate the impact of various parameters on the reliability of underground pipelines, such as the soil aggressiveness, the pipe design variables, the soil correlation length and the pipeline length. The results show that neglecting the effect of spatial variability leads to pessimistic estimation of the residual lifetime and can lead to prematurely condemning the structure. - Highlights: • The role of soil heterogeneity in pipeline reliability assessment has been shown. • The impact of pipe length and soil correlation length has been examined. • The effect of the uncertainties related to design variables has been observed. • Pipe thickness design for homogeneous reliability has been proposed.
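
    A discrete Karhunen-Loève sketch of the field model: sample a correlated soil-aggressiveness field along the pipeline from the dominant eigenpairs of an exponential covariance matrix (all parameter values illustrative, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(5)

L_pipe, n = 1000.0, 200           # pipeline length [m], grid points
corr_len, sigma = 150.0, 1.0      # correlation length [m], field std dev
s = np.linspace(0.0, L_pipe, n)

# Exponential covariance matrix and its spectral decomposition
C = sigma**2 * np.exp(-np.abs(s[:, None] - s[None, :]) / corr_len)
eigval, eigvec = np.linalg.eigh(C)
idx = eigval.argsort()[::-1][:20]         # keep the 20 dominant modes
lam, phi = eigval[idx], eigvec[:, idx]

# Truncated KL expansion: field = mean + sum_k sqrt(lam_k) * xi_k * phi_k(s)
xi = rng.standard_normal(len(lam))
field = 0.5 + phi @ (np.sqrt(lam) * xi)   # hypothetical mean aggressiveness
print(field[:5])
```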

  9. The establish and application of equipment reliability database in Nuclear Power Plant

    International Nuclear Information System (INIS)

    Zheng Wei; Li He

    2006-03-01

    Taking Daya Bay Nuclear Power Plant as a case study, the collection and handling of equipment reliability data, the calculation methods for reliability parameters, and the establishment and application of reliability databases are discussed. The data sources involve equipment design information, operation information, maintenance information, and periodic test records. The equipment reliability database was built on a base of operating experience. It provides a valid tool for thoroughly and objectively recording the operation history and present condition of the various equipment of the plant; it supervises the performance of the equipment, especially safety-related equipment, and provides very practical information for enhancing the safety and availability management of the equipment and ensuring the safe and economic operation of the plant; and it provides essential data for research and applications in safety management, reliability analysis, probabilistic safety assessment, reliability-centered maintenance and economic management in nuclear power plants. (authors)

  10. In-plant application of industry experience to enhance human reliability

    International Nuclear Information System (INIS)

    Hannaman, G.W.; Singh, A.

    1993-01-01

    This paper describes the way that modern data-base computer tools can enhance the ability to collect, organize, evaluate, and use industry experience. By combining the computer tools with knowledge from human reliability assessment tools, data, and frameworks, the data base can become a tool for collecting and assessing the lessons learned from past events. By integrating the data-base system with plant risk models, engineers can focus on those activities that can enhance overall system reliability. The evaluation helps identify technology and tools to reduce human errors during operations and maintenance. Learning from both in-plant and industry experience can help enhance safety and reduce the cost of plant operations. Utility engineers currently assess events that occur in nuclear plants throughout the world for in-plant applicability. Established computer information networks, documents, bulletins, and other information sources provide a large number of event descriptions to help individual plants benefit from this industry experience. The activities for coordinating reviews of event descriptions from other plants for in-plant applications require substantial engineering time to collect, organize, evaluate, and apply. Data-base tools can help engineers efficiently handle and sort the data so that they can concentrate on understanding the importance of the event, developing cost-effective interventions, and communicating implementation plans for plant improvement. An Electric Power Research Institute human reliability project has developed a classification system with modern data-base software to help engineers efficiently process, assess, and apply information contained in the events to enhance plant operation. Plant-specific classification of industry experience provides a practical method for efficiently taking industry experience into account when planning maintenance activities and reviewing plant safety

  11. Can simple mobile phone applications provide reliable counts of respiratory rates in sick infants and children? An initial evaluation of three new applications.

    Science.gov (United States)

    Black, James; Gerdtz, Marie; Nicholson, Pat; Crellin, Dianne; Browning, Laura; Simpson, Julie; Bell, Lauren; Santamaria, Nick

    2015-05-01

    Respiratory rate is an important sign that is commonly either not recorded or recorded incorrectly. Mobile phone ownership is increasing even in resource-poor settings. Phone applications may improve the accuracy and ease of counting of respiratory rates. The study assessed the reliability and initial users' impressions of four mobile phone respiratory timer approaches, compared to a 60-second count by the same participants. Three mobile applications (applying four different counting approaches plus a standard 60-second count) were created using the Java Mobile Edition and tested on Nokia C1-01 phones. Apart from the 60-second timer application, the others included a counter based on the time for ten breaths, and three based on the time interval between breaths ('Once-per-Breath', in which the user presses for each breath and the application calculates the rate after 10 or 20 breaths, or after 60s). Nursing and physiotherapy students used the applications to count respiratory rates in a set of brief video recordings of children with different respiratory illnesses. Limits of agreement (compared to the same participant's standard 60-second count), intra-class correlation coefficients and standard errors of measurement were calculated to compare the reliability of the four approaches, and a usability questionnaire was completed by the participants. There was considerable variation in the counts, with large components of the variation related to the participants and the videos, as well as the methods. None of the methods was entirely reliable, with no limits of agreement better than -10 to +9 breaths/min. Some of the methods were superior to the others, with ICCs from 0.24 to 0.92. By ICC the Once-per-Breath 60-second count and the Once-per-Breath 20-breath count were the most consistent, better even than the 60-second count by the participants. The 10-breath approaches performed least well. Users' initial impressions were positive, with little difference between the

  12. Application of a Pivot Profile Variant Using CATA Questions in the Development of a Whey-Based Fermented Beverage

    Directory of Open Access Journals (Sweden)

    Marcelo Miraballes

    2018-02-01

    During the development of a food product, the application of rapid descriptive sensory methodologies is very useful to determine the influence of different variables on the sensory characteristics of the product under development. The Pivot Profile (PP) and a variant of the technique that includes check-all-that-apply questions (PP + CATA) were used for the development of a milk drink fermented from demineralised sweet whey. Starting from a base formula of partially demineralised sweet whey and gelatin, nine samples were prepared, to which various concentrations of commercial sucrose, modified cassava starch, and whole milk powder were added. Differences in sucrose content affected the sample texture and flavour, and the modified starch was able to decrease the fluidity and increase the creaminess and firmness of the samples. The two applied sensory methodologies achieved good discrimination between the samples and very similar results, although the data analysis was clearly simplified, in terms of difficulty and time consumed, in the PP + CATA variant.

  13. Time-dependent reliability analysis of nuclear reactor operators using probabilistic network models

    International Nuclear Information System (INIS)

    Oka, Y.; Miyata, K.; Kodaira, H.; Murakami, S.; Kondo, S.; Togo, Y.

    1987-01-01

    Human factors are very important for the reliability of a nuclear power plant. Human behavior has an essentially time-dependent nature. The details of thinking and decision making processes are important for detailed analysis of human reliability. They have, however, not been well considered by conventional methods of human reliability analysis. The present paper describes models for time-dependent and detailed human reliability analysis. Recovery by an operator is taken into account, and two-operator models are also presented

  14. Coupling finite elements and reliability methods - application to safety evaluation of pressurized water reactor vessels

    International Nuclear Information System (INIS)

    Pitner, P.; Venturini, V.

    1995-02-01

    When reliability studies are extended from deterministic calculations in mechanics, it is necessary to take into account the variability of input parameters, which is linked to the different sources of uncertainty. Integrals must then be calculated to evaluate the failure risk. This can be performed either by simulation methods or by approximation methods (FORM/SORM). Models in mechanics often require running calculation codes. These must then be coupled with the reliability calculations. Such codes can involve long calculation times when they are invoked numerous times during simulation sequences or in complex iterative procedures. The response surface method gives an approximation of the real response from a reduced number of points at which the finite element code is run. Thus, when it is combined with FORM/SORM methods, a coupling can be carried out which gives results in a reasonable calculation time. An application of the response surface method to mechanics-reliability coupling for a mechanical model which calls a finite element code is presented. It corresponds to a probabilistic fracture mechanics study of a pressurized water reactor vessel. (authors). 5 refs., 3 figs.
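
    A minimal sketch of the coupling recipe under stated assumptions: fit a quadratic response surface to a handful of runs of the "expensive" limit state (a stand-in for the finite element code), then apply the Hasofer-Lind/Rackwitz-Fiessler FORM iteration to the cheap surrogate in standard normal space. The limit state, design points, and values are all hypothetical.

```python
import numpy as np
from scipy.stats import norm

def expensive_g(u):
    """Hypothetical limit state; pretend each call costs a finite element run."""
    u1, u2 = u
    return 3.0 - u1 - 0.3 * u1 * u2

# 1) Small experimental design and a quadratic response surface by least squares
pts = np.array([(i, j) for i in (-2.0, 0.0, 2.0) for j in (-2.0, 0.0, 2.0)])
yv = np.array([expensive_g(p) for p in pts])

def basis(u):
    u1, u2 = u
    return np.array([1.0, u1, u2, u1 * u1, u2 * u2, u1 * u2])

A = np.array([basis(p) for p in pts])
c, *_ = np.linalg.lstsq(A, yv, rcond=None)

g_hat = lambda u: basis(u) @ c
def grad_g_hat(u):
    u1, u2 = u
    return np.array([c[1] + 2.0 * c[3] * u1 + c[5] * u2,
                     c[2] + 2.0 * c[4] * u2 + c[5] * u1])

# 2) HL-RF iteration on the surrogate to find the design point
u = np.zeros(2)
for _ in range(50):
    g, dg = g_hat(u), grad_g_hat(u)
    u = (dg @ u - g) / (dg @ dg) * dg

beta = np.linalg.norm(u)          # reliability index
print(f"beta = {beta:.3f}, Pf ~ {norm.cdf(-beta):.3e}")
```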

  15. Reliability analysis and utilization of PEMs in space application

    Science.gov (United States)

    Jiang, Xiujie; Wang, Zhihua; Sun, Huixian; Chen, Xiaomin; Zhao, Tianlin; Yu, Guanghua; Zhou, Changyi

    2009-11-01

    More and more plastic encapsulated microcircuits (PEMs) are used in space missions to achieve high performance. Since PEMs are designed for use in terrestrial operating conditions, the successful usage of PEMs in the harsh space environment is closely tied to reliability issues, which must be considered first. However, there is no ready-made methodology for PEMs in space applications. This paper discusses the reliability of PEMs for use in space. The reliability analysis can be divided into five categories: radiation test, radiation hardness, screening test, reliability calculation and reliability assessment. One case study is also presented to illuminate the details of the process, in which a PEM part is used in Double-Star Project, a joint space program between the European Space Agency (ESA) and China. The influence of environmental constraints, including radiation, humidity, temperature and mechanical stress, on the PEM part has been considered. Both Double-Star Project satellites are still operating well in space.

  16. Reliability of capacitors for DC-link applications - An overview

    DEFF Research Database (Denmark)

    Wang, Huai; Blaabjerg, Frede

    2013-01-01

    DC-link capacitors are an important part of the majority of power electronic converters and contribute to cost, size and failure rate on a considerable scale. From the capacitor users' viewpoint, this paper presents a review on improving the reliability of DC links in power electronic converters from two aspects: 1) reliability-oriented DC-link design solutions; 2) condition monitoring of DC-link capacitors during operation. Failure mechanisms, failure modes and lifetime models of capacitors suitable for these applications are also discussed as a basis for understanding the physics-of-failure. This review serves to provide a clear picture of the state-of-the-art research in this area and to identify the corresponding challenges and future research directions for capacitors and their DC-link applications.

  17. Time Capture Tool (TimeCaT): development of a comprehensive application to support data capture for Time Motion Studies.

    Science.gov (United States)

    Lopetegui, Marcelo; Yen, Po-Yin; Lai, Albert M; Embi, Peter J; Payne, Philip R O

    2012-01-01

    Time Motion Studies (TMS) have proved to be the gold standard method to measure and quantify clinical workflow, and have been widely used to assess the impact of health information systems implementation. Although there are tools available to conduct TMS, they provide different approaches for multitasking, interruptions, inter-observer reliability assessment and task taxonomy, making results across studies not comparable. We postulate that a significant contributing factor towards the standardization and spread of TMS would be the availability and spread of an accessible, scalable and dynamic tool. We present the development of a comprehensive Time Capture Tool (TimeCaT): a web application developed to support data capture for TMS. Ongoing and continuous development of TimeCaT includes the development and validation of a realistic inter-observer reliability scoring algorithm, the creation of an online clinical tasks ontology, and a novel quantitative workflow comparison method.

  18. Development of web-based reliability data analysis algorithm model and its application

    International Nuclear Information System (INIS)

    Hwang, Seok-Won; Oh, Ji-Yong; Moosung-Jae

    2010-01-01

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for the various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussel-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systemic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.

  19. Development of web-based reliability data analysis algorithm model and its application

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Seok-Won, E-mail: swhwang@khnp.co.k [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Oh, Ji-Yong [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Moosung-Jae [Department of Nuclear Engineering Hanyang University 17 Haengdang, Sungdong, Seoul (Korea, Republic of)

    2010-02-15

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for the various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussel-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systemic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.

  20. A critique of reliability prediction techniques for avionics applications

    Directory of Open Access Journals (Sweden)

    Guru Prasad PANDIAN

    2018-01-01

    Avionics (aeronautics and aerospace) industries must rely on components and systems of demonstrated high reliability. For this, handbook-based methods have traditionally been used to design for reliability, develop test plans, and define maintenance requirements and sustainment logistics. However, these methods have been criticized as flawed and leading to inaccurate and misleading results. In its recent report on enhancing defense system reliability, the U.S. National Academy of Sciences discredited these methods, judging the Military Handbook (MIL-HDBK-217) and its progeny as invalid and inaccurate. This paper discusses the issues that arise with the use of handbook-based methods in commercial and military avionics applications. Alternative approaches to reliability design (and its demonstration) are also discussed, including similarity analysis, testing, physics-of-failure, and data analytics for prognostics and systems health management.

  1. The art of progressive censoring applications to reliability and quality

    CERN Document Server

    Balakrishnan, N

    2014-01-01

    This monograph offers a thorough and updated guide to the theory and methods of progressive censoring, an area that has experienced tremendous growth in recent years. Progressive censoring, originally proposed in the 1950s, is an efficient method of handling samples from industrial experiments involving lifetimes of units that have either failed or been censored in a progressive fashion during the life test, with many practical applications to reliability and quality. Key topics and features: data sets from the literature as well as newly simulated data sets are used to illustrate concepts throughout the text; emphasis on real-life applications to life testing, reliability, and quality control; discussion of parametric and nonparametric inference; coverage of experimental design with optimal progressive censoring. The Art of Progressive Censoring is a valuable reference for graduate students, researchers, and practitioners in applied statistics, quality control, life testing, and reliability. With its accessible style...

  2. Effectiveness of different approaches to disseminating traveler information on travel time reliability.

    Science.gov (United States)

    2014-01-01

    The second Strategic Highway Research Program (SHRP 2) Reliability program aims to improve trip time reliability by reducing the frequency and effects of events that cause travel times to fluctuate unpredictably. Congestion caused by unreliable, or n...

  3. Detection of GSTM1, GSTT1 and the Ile105Val GSTP1 gene variants

    DEFF Research Database (Denmark)

    Buchard, Anders; Sanchez, Juan J.; Dalhoff, Kim

    2008-01-01

    We have developed a PCR multiplex method that in a fast, inexpensive and reliable manner can detect if a person has two, one or no GSTM1 and GSTT1 genes and which at the same time can detect the allelic status of the GSTP1 Ile105Val genetic variant. A total of 200 Danes, 100 Somalis and 100...

  4. Travel Time Reliability for Urban Networks : Modelling and Empirics

    NARCIS (Netherlands)

    Zheng, F.; Liu, Xiaobo; van Zuylen, H.J.; Li, Jie; Lu, Chao

    2017-01-01

    The importance of travel time reliability in traffic management, control, and network design has received a lot of attention in the past decade. In this paper, a network travel time distribution model based on the Johnson curve system is proposed. The model is applied to field travel time data

  5. Advantages and Drawbacks of Applying Periodic Time-Variant Modal Analysis to Spur Gear Dynamics

    DEFF Research Database (Denmark)

    Pedersen, Rune; Santos, Ilmar; Hede, Ivan Arthur

    2010-01-01

    to ensure sufficient accuracy of the results. The method of time-variant modal analysis is applied, and the changes in the fundamental and the parametric resonance frequencies as a function of the rotational speed of the gears are found. By obtaining the stationary and parametric parts of the time...... of applying the methodology to wind turbine gearboxes are addressed and elucidated....

  6. Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models

    Science.gov (United States)

    Al Hassan, Mohammad; Novack, Steven

    2015-01-01

    Bayesian reliability requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable where all possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions mainly from reliability predictions.
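
    As a concrete illustration of the kind of prior development discussed here, a failure-rate prior is often parameterized as a lognormal from a generic median and an error factor, with the error factor widened to reflect applicability uncertainty. This is a hedged sketch with invented numbers, not the presentation's actual procedure:

    ```python
    # Sketch: lognormal prior for a failure rate built from a generic median
    # and an error factor (EF); widening the EF reflects lower applicability.
    # Numbers are illustrative, not from the presentation.
    import math

    def lognormal_prior(median, error_factor):
        """Return (mu, sigma) of ln(failure rate); EF = 95th/50th percentile."""
        z95 = 1.645  # standard normal 95th percentile
        mu = math.log(median)
        sigma = math.log(error_factor) / z95
        return mu, sigma

    # Generic database point estimate, assumed applicable with low confidence,
    # so the error factor is widened from 3 to 10.
    mu, sigma = lognormal_prior(median=1e-5, error_factor=10)
    mean = math.exp(mu + sigma**2 / 2)  # lognormal mean
    print(f"prior mean failure rate: {mean:.2e} per hour")
    ```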

  7. Reliable Rescue Routing Optimization for Urban Emergency Logistics under Travel Time Uncertainty

    Directory of Open Access Journals (Sweden)

    Qiuping Li

    2018-02-01

    Full Text Available The reliability of rescue routes is critical for urban emergency logistics during disasters. However, studies on reliable rescue routing under stochastic networks are still rare. This paper proposes a multiobjective rescue routing model for urban emergency logistics under travel time reliability. A hybrid metaheuristic integrating ant colony optimization (ACO) and tabu search (TS) was designed to solve the model. An experiment optimizing rescue routing plans under a real urban storm event was carried out to validate the proposed model. The experimental results showed how our approach can improve rescue efficiency with high travel time reliability.

  8. Post-event human decision errors: operator action tree/time reliability correlation

    International Nuclear Information System (INIS)

    Hall, R.E.; Fragola, J.; Wreathall, J.

    1982-11-01

    This report documents an interim framework for the quantification of the probability of errors of decision on the part of nuclear power plant operators after the initiation of an accident. The framework can easily be incorporated into an event tree/fault tree analysis. The method presented consists of a structure called the operator action tree and a time reliability correlation which assumes the time available for making a decision to be the dominating factor in situations requiring cognitive human response. This limited approach decreases the magnitude and complexity of the decision modeling task. Specifically, in the past, some human performance models have attempted prediction by trying to emulate sequences of human actions, or by identifying and modeling the information processing approach applicable to the task. The model developed here is directed at describing the statistical performance of a representative group of hypothetical individuals responding to generalized situations
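
    The core of a time reliability correlation is that the probability of non-response falls as the time available for a decision grows. A minimal sketch under a commonly assumed lognormal response-time model follows; the parameters are invented for illustration and are not the report's curves:

    ```python
    # Sketch: time-reliability correlation with a lognormal crew response time.
    # P(failure) = P(time needed to decide > time available).
    # Median and spread are illustrative, not taken from the report.
    import math

    def non_response_probability(t_available, median, sigma):
        """Survival function of a lognormal response-time distribution."""
        z = (math.log(t_available) - math.log(median)) / sigma
        # P(T > t) = 1 - Phi(z)
        return 0.5 * math.erfc(z / math.sqrt(2))

    for t in (5, 10, 30, 60):  # minutes available after the initiating event
        p = non_response_probability(t, median=8.0, sigma=0.8)
        print(f"{t:>3} min available -> P(no correct decision) = {p:.3f}")
    ```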

  9. Post-event human decision errors: operator action tree/time reliability correlation

    Energy Technology Data Exchange (ETDEWEB)

    Hall, R E; Fragola, J; Wreathall, J

    1982-11-01

    This report documents an interim framework for the quantification of the probability of errors of decision on the part of nuclear power plant operators after the initiation of an accident. The framework can easily be incorporated into an event tree/fault tree analysis. The method presented consists of a structure called the operator action tree and a time reliability correlation which assumes the time available for making a decision to be the dominating factor in situations requiring cognitive human response. This limited approach decreases the magnitude and complexity of the decision modeling task. Specifically, in the past, some human performance models have attempted prediction by trying to emulate sequences of human actions, or by identifying and modeling the information processing approach applicable to the task. The model developed here is directed at describing the statistical performance of a representative group of hypothetical individuals responding to generalized situations.

  10. Porting Your Applications and Saving Data In Cloud As Reliable Entity.

    Directory of Open Access Journals (Sweden)

    Cosmin Cătălin Olteanu

    2013-12-01

    Full Text Available The main purpose of the paper is to illustrate the importance of a reliable service in the context of cloud computing. The dynamics of an organization show that porting custom applications to the cloud can make the difference in being a successful company and delivering what the client needs just in time. Every employee should be able to access and enter data from everywhere; the office moves along with the employee nowadays. But this concept comes with drawbacks concerning how safe your data is on machines that your own staff cannot fully control.

  11. Exploring Continuity of Care in Patients with Alcohol Use Disorders Using Time-Variant Measures

    NARCIS (Netherlands)

    S.C. de Vries (Sjoerd); A.I. Wierdsma (André)

    2008-01-01

    textabstractBackground/Aims: We used time-variant measures of continuity of care to study fluctuations in long-term treatment use by patients with alcohol-related disorders. Methods: Data on service use were extracted from the Psychiatric Case Register for the Rotterdam Region, The Netherlands.

  12. Application of fuzzy-MOORA method: Ranking of components for reliability estimation of component-based software systems

    Directory of Open Access Journals (Sweden)

    Zeeshan Ali Siddiqui

    2016-01-01

    Full Text Available Component-based software system (CBSS) development is an emerging discipline that promises to take software development into a new era. As hardware systems are presently constructed from kits of parts, software systems may also be assembled from components. It is more reliable to reuse software than to create it, and it is the reliability of the glue code and of the individual components that contributes to the reliability of the overall system. Every component contributes to overall system reliability according to the number of times it is used; this usage frequency of a component decides its weight, and some components are of critical usage. According to their weights, the components contribute to the overall reliability of the system, so a ranking of components may be obtained by analyzing their reliability impacts on the overall application. In this paper, we propose the application of fuzzy multi-objective optimization on the basis of ratio analysis, Fuzzy-MOORA. The method helps us find the most suitable alternative (software component) from a set of available feasible alternatives. It is an accurate and easy-to-understand tool for solving multi-criteria decision-making problems that have imprecise and vague evaluation data. By the use of ratio analysis, the proposed method determines the most suitable alternative among all possible alternatives, and dimensionless measurement realizes the ranking of components for estimating CBSS reliability in a non-subjective way. Finally, three case studies are shown to illustrate the use of the proposed technique.
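
    The ratio-analysis step at the heart of MOORA can be sketched with crisp numbers (the paper uses fuzzy triangular numbers); the components, criteria, and scores below are invented:

    ```python
    # Sketch of the MOORA ratio-analysis step with crisp numbers (the paper
    # works with fuzzy triangular numbers); all data here are invented.
    import math

    # Rows: software components; columns: criteria (e.g., usage frequency,
    # fault coverage, coupling). First two are benefit criteria, last is cost.
    scores = {
        "comp_A": [0.9, 0.7, 0.4],
        "comp_B": [0.6, 0.9, 0.2],
        "comp_C": [0.8, 0.5, 0.6],
    }
    benefit = [True, True, False]

    # Vector normalization: divide each column by its Euclidean norm.
    norms = [math.sqrt(sum(v[j] ** 2 for v in scores.values()))
             for j in range(len(benefit))]

    def moora_score(values):
        """Sum of normalized benefit ratios minus sum of cost ratios."""
        total = 0.0
        for j, x in enumerate(values):
            r = x / norms[j]
            total += r if benefit[j] else -r
        return total

    ranking = sorted(scores, key=lambda c: moora_score(scores[c]), reverse=True)
    print("component ranking:", ranking)
    ```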

  13. Limitations in simulator time-based human reliability analysis methods

    International Nuclear Information System (INIS)

    Wreathall, J.

    1989-01-01

    Developments in human reliability analysis (HRA) methods have evolved slowly. Current methods are little changed from those of almost a decade ago, particularly in the use of time-reliability relationships. While these methods were suitable as an interim step, the time (and the need) has come to specify the next evolution of HRA methods. As with any performance-oriented data source, power plant simulator data have no direct connection to HRA models. Errors reported in data are normal deficiencies observed in human performance; failures are events modeled in probabilistic risk assessments (PRAs). Not all errors cause failures; not all failures are caused by errors. Second, the times at which actions are taken provide no measure of the likelihood of failures to act correctly within an accident scenario. Inferences can be made about human reliability, but they must be made with great care. Specific limitations are discussed. Simulator performance data are useful in providing qualitative evidence of the variety of error types and their potential influences on operating systems. More work is required to combine recent developments in the psychology of error with the qualitative data collected at simulators. Until data become openly available, however, such an advance will not be practical.

  14. Multinomial-exponential reliability function: a software reliability model

    International Nuclear Information System (INIS)

    Saiz de Bustamante, Amalio; Saiz de Bustamante, Barbara

    2003-01-01

    The multinomial-exponential reliability function (MERF) was developed during a detailed study of the software failure/correction processes. Later on, MERF was approximated by a much simpler exponential reliability function (EARF), which keeps most of MERF's mathematical properties, so the two functions together make up a single reliability model. The reliability model MERF/EARF considers the software failure process as a non-homogeneous Poisson process (NHPP) and the repair (correction) process as a multinomial distribution. The model supposes that both processes are statistically independent. The paper discusses the model's theoretical basis, its mathematical properties, and its application to software reliability. Applications of the model to inspection and maintenance of physical systems are also foreseen. The paper includes a complete numerical example of the model's application to a software reliability analysis.
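
    The NHPP view of software failures that MERF builds on can be illustrated with the classic exponential mean-value function m(t) = a(1 - e^(-bt)); note this is a generic NHPP sketch, not MERF itself, and the parameters are invented:

    ```python
    # Sketch: the NHPP view of software failures, illustrated with the
    # classic exponential mean-value function m(t) = a*(1 - exp(-b*t))
    # (Goel-Okumoto), not MERF itself. Parameters a (expected total
    # faults) and b (detection rate) are invented.
    import math

    a, b = 120.0, 0.02  # illustrative values

    def expected_failures(t):
        return a * (1.0 - math.exp(-b * t))

    def reliability(x, t):
        """P(no failure in (t, t+x]) for an NHPP:
        R(x|t) = exp(-(m(t+x) - m(t)))."""
        return math.exp(-(expected_failures(t + x) - expected_failures(t)))

    print("expected failures by t=100:", round(expected_failures(100), 1))
    print("R(10 | t=100):", round(reliability(10, 100), 3))
    ```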

  15. Reliability and validity of the Youth Leisure-time Sedentary Behavior Questionnaire (YLSBQ).

    Science.gov (United States)

    Cabanas-Sánchez, Verónica; Martínez-Gómez, David; Esteban-Cornejo, Irene; Castro-Piñero, José; Conde-Caveda, Julio; Veiga, Óscar L

    2018-01-01

    To develop a questionnaire able to assess the time spent by youth in a wide range of leisure-time sedentary behaviors (SB) and to evaluate its test-retest reliability and criterion validity. Cross-sectional observational. The reliability sample included 194 youth, aged 10-18 years, who completed the questionnaire twice, separated by a one-week interval. The validity study comprised 1207 participants aged 8-18 years, who wore an accelerometer for 7 consecutive days. The questionnaire was designed to assess the amount of time spent in twelve different SB during weekdays and weekends, separately. In order to avoid the usual phenomenon of over-reporting time, values were adjusted to the real available leisure time (LT) of each participant. Reliability was assessed using Intraclass Correlation Coefficients (ICC) and weighted (quadratic) kappa (k), and validity was assessed using Pearson correlations and Bland-Altman plots. The questionnaire showed moderate-to-substantial reliability for most (91%) of the items (k=0.43-0.74; ICC=0.41-0.79), with three items (4%) reaching almost perfect agreement (ICC=0.82-0.83). Only 'sitting and talking' evidenced fair-to-moderate reliability (k=0.27-0.39; ICC=0.34-0.46). The relationship between average sedentary time assessed by the questionnaire and by accelerometry was moderate (r=0.36; p<0.001), the correlation between questionnaire and accelerometer sedentary time for the average day was not significant (r=0.05; p=0.11), and Bland-Altman plots suggest moderate discrepancies between both methods of SB measurement (mean=19.86; limits of agreement=-280.04 to 319.76). The questionnaire showed moderate-to-good test-retest reliability and a moderate level of validity for assessing SB in youth, similar to or slightly better than previously published instruments for this population. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
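
    The two agreement statistics reported above can be computed as follows; this is an illustrative sketch on invented ordinal responses, using a simple one-way ICC rather than the paper's exact ICC model:

    ```python
    # Sketch: test-retest agreement statistics on invented ordinal
    # responses (0-4 scale) from two administrations of a questionnaire.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    test = np.array([0, 1, 2, 2, 3, 4, 1, 0, 2, 3])
    retest = np.array([0, 1, 2, 3, 3, 4, 1, 1, 2, 2])

    # Weighted (quadratic) kappa, as reported for the questionnaire items.
    kappa = cohen_kappa_score(test, retest, weights="quadratic")

    # A simple one-way ICC for illustration (the paper's exact ICC model
    # is not specified here): between-subject vs. within-subject variance.
    pairs = np.stack([test, retest], axis=1).astype(float)
    subject_means = pairs.mean(axis=1)
    ms_between = 2 * np.var(subject_means, ddof=1)
    ms_within = 2 * np.mean((pairs - subject_means[:, None]) ** 2)
    icc = (ms_between - ms_within) / (ms_between + ms_within)

    print(f"weighted kappa = {kappa:.2f}, ICC(1,1) ~= {icc:.2f}")
    ```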

  16. Trial application of reliability technology to emergency diesel generators at the Trojan Nuclear Power Plant

    International Nuclear Information System (INIS)

    Wong, S.M.; Boccio, J.L.; Karimian, S.; Azarm, M.A.; Carbonaro, J.; DeMoss, G.

    1986-01-01

    In this paper, a trial application of reliability technology to the emergency diesel generator system at the Trojan Nuclear Power Plant is presented. An approach for formulating a reliability program plan for this system is being developed. The trial application has shown that a reliability program process, using risk- and reliability-based techniques, can be interwoven into current plant operational activities to help in controlling, analyzing, and predicting faults that can challenge safety systems. With the cooperation of the utility, Portland General Electric Co., this reliability program can eventually be implemented at Trojan to track its effectiveness

  17. Polynomial-time computability of the edge-reliability of graphs using Gilbert's formula

    Directory of Open Access Journals (Sweden)

    Marlowe Thomas J.

    1998-01-01

    Full Text Available Reliability is an important consideration in analyzing computer and other communication networks, but current techniques are extremely limited in the classes of graphs which can be analyzed efficiently. While Gilbert's formula establishes a theoretically elegant recursive relationship between the edge reliability of a graph and the reliability of its subgraphs, naive evaluation requires consideration of all sequences of deletions of individual vertices, and for many graphs has time complexity essentially Θ(N!). We discuss a general approach which significantly reduces complexity, encoding subgraph isomorphism in a finer partition by invariants, and recursing through the set of invariants. We illustrate this approach using threshold graphs, and show that any computation of reliability using Gilbert's formula will be polynomial-time if and only if the number of invariants considered is polynomial; we then show families of graphs with polynomial-time and non-polynomial reliability computation, and show that these encompass most previously known results. We then codify our approach to indicate how it can be used for other classes of graphs, and suggest several classes to which the technique can be applied.
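
    For scale, the brute-force baseline that such polynomial-time results improve on enumerates every edge subset. A self-contained sketch for a tiny graph follows (Gilbert's recursion itself is not reproduced here):

    ```python
    # Sketch: brute-force all-terminal edge reliability for a tiny graph,
    # the exponential baseline that polynomial-time methods improve on.
    # (Gilbert's recursion itself is not reproduced here.)
    from itertools import combinations

    def connected(nodes, edges):
        """Check connectivity of (nodes, edges) by graph search."""
        if not nodes:
            return True
        start = next(iter(nodes))
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for a, b in edges:
                if a == u and b not in seen:
                    seen.add(b); stack.append(b)
                elif b == u and a not in seen:
                    seen.add(a); stack.append(a)
        return seen == set(nodes)

    def edge_reliability(nodes, edges, p):
        """P(surviving edges keep the graph connected); edges fail i.i.d."""
        total = 0.0
        for k in range(len(edges) + 1):
            for surviving in combinations(edges, k):
                if connected(nodes, list(surviving)):
                    total += p ** k * (1 - p) ** (len(edges) - k)
        return total

    nodes = {1, 2, 3, 4}
    edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]  # 2^5 subsets
    print(round(edge_reliability(nodes, edges, p=0.9), 4))
    ```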

  18. Reliability Oriented Circuit Design For Power Electronics Applications

    DEFF Research Database (Denmark)

    Sintamarean, Nicolae Cristian

    is presented. Chapter 3 presents the electro-thermal model validation and the reliability studies performed by the proposed tool. The chapter ends with a detailed lifetime analysis, which emphasizes the impact of mission-profile variation and gate-driver parameter variation on the PV-inverter devices' lifetime......Highly reliable components are required in order to minimize the downtime during the lifetime of the converter and implicitly the maintenance costs. Therefore, the design of highly reliable converters under constrained reliability and cost is a great challenge to be overcome in the future....... Moreover, the impact of the mission-profile sampling time on the lifetime estimation accuracy is also determined. The second part of the thesis, introduced in Chapter 4, presents a novel gate-driver concept which reduces the dependency of the device power losses variations on the device loading variations...

  19. The Use of Non-Variant Sites to Improve the Clinical Assessment of Whole-Genome Sequence Data.

    Directory of Open Access Journals (Sweden)

    Alberto Ferrarini

    Full Text Available Genetic testing, which is now a routine part of clinical practice and disease management protocols, is often based on the assessment of small panels of variants or genes. On the other hand, continuous improvements in the speed and per-base costs of sequencing have now made whole exome sequencing (WES) and whole genome sequencing (WGS) viable strategies for targeted or complete genetic analysis, respectively. Standard WGS/WES data analytical workflows generally rely on calling sequence variants with respect to the reference genome sequence. However, the reference genome sequence contains a large number of sites represented by rare alleles, by known pathogenic alleles, and by alleles strongly associated with disease by GWAS. It is thus critical, for clinical applications of WGS and WES, to interpret whether non-variant sites are homozygous for the reference allele or whether the corresponding genotype cannot be reliably called. Here we show that an alternative analytical approach based on the analysis of both variant and non-variant sites from WGS data allows genotyping of more than 92% of sites corresponding to known SNPs, compared to 6% genotyped by standard variant analysis. These include homozygous reference sites of clinical interest, thus leading to a broad and comprehensive characterization of variation necessary for an accurate evaluation of disease risk. Altogether, our findings indicate that characterization of both variant and non-variant clinically informative sites in the genome is necessary to allow an accurate clinical assessment of a personal genome. Finally, we propose a highly efficient extended VCF (eVCF) file format which allows genotype calls for sites of clinical interest to be stored while remaining compatible with current variant interpretation software.

  20. Somatic cancer variant curation and harmonization through consensus minimum variant level data

    Directory of Open Access Journals (Sweden)

    Deborah I. Ritter

    2016-11-01

    Full Text Available Abstract Background To truly achieve personalized medicine in oncology, it is critical to catalog and curate cancer sequence variants for their clinical relevance. The Somatic Working Group (WG) of the Clinical Genome Resource (ClinGen), in cooperation with ClinVar and multiple cancer variant curation stakeholders, has developed a consensus set of minimal variant level data (MVLD). MVLD is a framework of standardized data elements to curate cancer variants for clinical utility. With implementation of MVLD standards, and in a working partnership with ClinVar, we aim to streamline the somatic variant curation efforts in the community and reduce redundancy and time burden for the interpretation of cancer variants in clinical practice. Methods We developed MVLD through a consensus approach by (i) reviewing clinical actionability interpretations from institutions participating in the WG, (ii) conducting an extensive literature search of clinical somatic interpretation schemas, and (iii) surveying cancer variant web portals. A forthcoming guideline on cancer variant interpretation, from the Association for Molecular Pathology (AMP), can be incorporated into MVLD. Results Along with harmonizing standardized terminology for allele interpretive and descriptive fields that are collected by many databases, the MVLD includes unique fields for cancer variants such as Biomarker Class, Therapeutic Context and Effect. In addition, MVLD includes recommendations for controlled semantics and ontologies. The Somatic WG is collaborating with ClinVar to evaluate MVLD use for somatic variant submissions. ClinVar is an open and centralized repository where sequencing laboratories can report summary-level variant data with clinical significance, and ClinVar accepts cancer variant data. Conclusions We expect the use of the MVLD to streamline clinical interpretation of cancer variants, enhance interoperability among multiple redundant curation efforts, and increase submission of

  1. Making the most of RNA-seq: Pre-processing sequencing data with Opossum for reliable SNP variant detection [version 2; referees: 2 approved, 1 approved with reservations

    Directory of Open Access Journals (Sweden)

    Laura Oikkonen

    2017-03-01

    Full Text Available Identifying variants from RNA-seq (transcriptome sequencing) data is a cost-effective and versatile complement to whole-exome (WES) and whole-genome sequencing (WGS) analysis. RNA-seq is primarily considered a method of gene expression analysis, but it can also be used to detect DNA variants in expressed regions of the genome. However, current variant callers do not generally behave well with RNA-seq data due to reads encompassing intronic regions. We have developed a software programme called Opossum to address this problem. Opossum pre-processes RNA-seq reads prior to variant calling, and although it has been designed to work specifically with Platypus, it can be used equally well with other variant callers such as GATK HaplotypeCaller. In this work, we show that using Opossum in conjunction with either Platypus or GATK HaplotypeCaller maintains precision and improves the sensitivity for SNP detection compared to the GATK Best Practices pipeline. In addition, using it in combination with Platypus offers a substantial reduction in run times compared to the GATK pipeline, so it is ideal when only limited time or computational resources are available.
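
    The essential pre-processing step for spliced RNA-seq reads is splitting alignments at intron gaps (CIGAR 'N' operations) so that callers only see exonic segments. Below is a self-contained sketch of that splitting logic on a toy CIGAR representation, not Opossum's actual code:

    ```python
    # Sketch: split a spliced RNA-seq alignment at intron gaps ('N' CIGAR
    # ops) into exon-only sub-alignments, the core idea of this kind of
    # pre-processing. Toy representation only; not Opossum's implementation.
    import re

    def split_at_introns(ref_start, cigar):
        """Yield (ref_start, cigar) pieces of an alignment split at 'N' ops."""
        ops = [(int(n), op) for n, op in re.findall(r"(\d+)([MIDNS])", cigar)]
        pos, piece = ref_start, []
        for length, op in ops:
            if op == "N":                     # intron: close current piece
                if piece:
                    yield pos, "".join(f"{n}{o}" for n, o in piece)
                # advance by reference-consuming ops (M, D) plus the gap
                pos += sum(n for n, o in piece if o in "MD") + length
                piece = []
            else:
                piece.append((length, op))
        if piece:
            yield pos, "".join(f"{n}{o}" for n, o in piece)

    # A read spanning two exons with a 200 bp intron between them:
    for start, cig in split_at_introns(1000, "50M200N30M"):
        print(start, cig)   # -> 1000 50M, then 1250 30M
    ```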

  2. Making the most of RNA-seq: Pre-processing sequencing data with Opossum for reliable SNP variant detection [version 1; referees: 2 approved, 1 approved with reservations

    Directory of Open Access Journals (Sweden)

    Laura Oikkonen

    2017-01-01

    Full Text Available Identifying variants from RNA-seq (transcriptome sequencing) data is a cost-effective and versatile alternative to whole-genome sequencing. However, current variant callers do not generally behave well with RNA-seq data due to reads encompassing intronic regions. We have developed a software programme called Opossum to address this problem. Opossum pre-processes RNA-seq reads prior to variant calling, and although it has been designed to work specifically with Platypus, it can be used equally well with other variant callers such as GATK HaplotypeCaller. In this work, we show that using Opossum in conjunction with either Platypus or GATK HaplotypeCaller maintains precision and improves the sensitivity for SNP detection compared to the GATK Best Practices pipeline. In addition, using it in combination with Platypus offers a substantial reduction in run times compared to the GATK pipeline, so it is ideal when only limited time or computational resources are available.

  3. PSA applications and piping reliability analysis: where do we stand?

    International Nuclear Information System (INIS)

    Lydell, B.O.Y.

    1997-01-01

    This paper reviews a recently proposed framework for piping reliability analysis. The framework was developed to promote critical interpretation of operational data on pipe failures and to support application-specific parameter estimation.

  4. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in

    2017-07-15

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and, the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.
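
    In outline, the importance sampling estimator behind such Girsanov-based schemes takes the following generic form; the notation here is illustrative rather than the paper's:

    ```latex
    % Generic Girsanov importance sampling estimator (notation illustrative,
    % not the paper's): the excitation W is shifted by a control u, and each
    % sample is reweighted by the Radon-Nikodym derivative R.
    \[
      \hat{P}_F \;=\; \frac{1}{N} \sum_{i=1}^{N}
          \mathbf{1}\!\left[\, g\!\left(X^{(i)}_u\right) \le 0 \,\right]
          R^{(i)},
      \qquad
      R \;=\; \exp\!\left( -\int_0^T u(t)\, \mathrm{d}W(t)
          \;-\; \tfrac{1}{2} \int_0^T u(t)^2 \, \mathrm{d}t \right),
    \]
    where $g(\cdot) \le 0$ denotes failure, $X_u$ is the response under the
    shifted excitation, and $u$ is chosen (e.g., by FORM-like distance
    minimization) to drive samples toward the failure domain.
    ```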

  5. Scheduling for energy and reliability management on multiprocessor real-time systems

    Science.gov (United States)

    Qi, Xuan

    Scheduling algorithms for multiprocessor real-time systems have been studied for years, with many well-recognized algorithms proposed. However, it is still an evolving research area and many problems remain open due to their intrinsic complexities. With the emergence of multicore processors, it is necessary to re-investigate these scheduling problems and to design/develop efficient algorithms for better system utilization, low scheduling overhead, high energy efficiency, and better system reliability. Focusing on cluster scheduling with optimal global schedulers, we study the utilization bound and scheduling overhead for a class of cluster-optimal schedulers. Then, taking energy/power consumption into consideration, we develop energy-efficient scheduling algorithms for real-time systems, especially for the proliferating embedded systems with limited energy budgets. As commonly deployed energy-saving techniques (e.g., dynamic voltage and frequency scaling (DVFS)) significantly affect system reliability, we study schedulers that have intelligent mechanisms to recuperate system reliability and satisfy quality assurance requirements. Extensive simulation is conducted to evaluate the performance of the proposed algorithms in terms of reduction of scheduling overhead, energy saving, and reliability improvement. The simulation results show that the proposed reliability-aware power management schemes can preserve system reliability while still achieving substantial energy savings.
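
    A fault-rate model commonly assumed in this line of work (the abstract gives no equations) makes transient faults exponentially more likely as voltage and frequency are scaled down, so energy savings trade off against reliability. A sketch with invented values:

    ```python
    # Sketch: the commonly assumed DVFS fault-rate model
    # lambda(f) = lambda0 * 10**(d*(1-f)/(1-f_min)) and task reliability
    # R = exp(-lambda(f) * W / f), where W is the task's work at full
    # speed. All values are illustrative.
    import math

    lambda0 = 1e-6   # fault rate at maximum frequency (per unit time)
    d = 3.0          # sensitivity of fault rate to voltage/frequency scaling
    f_min = 0.4      # minimum normalized frequency

    def fault_rate(f):
        return lambda0 * 10 ** (d * (1.0 - f) / (1.0 - f_min))

    def task_reliability(f, work=1.0):
        """P(no transient fault while a task runs for work/f time units)."""
        return math.exp(-fault_rate(f) * work / f)

    def energy(f, work=1.0):
        """Dynamic energy ~ f^2 per cycle (illustrative convention)."""
        return work * f ** 2

    for f in (1.0, 0.8, 0.6, 0.4):
        print(f"f={f:.1f}  energy={energy(f):.2f}  "
              f"reliability={task_reliability(f):.8f}")
    ```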

  6. Data Applicability of Heritage and New Hardware for Launch Vehicle System Reliability Models

    Science.gov (United States)

    Al Hassan Mohammad; Novack, Steven

    2015-01-01

    Many launch vehicle systems are designed and developed using heritage and new hardware. In most cases, the heritage hardware undergoes modifications to fit new functional system requirements, impacting the failure rates and, ultimately, the reliability data. New hardware, which lacks historical data, is often compared to similar systems when estimating failure rates. Some qualification of the applicability of the data source to the current system should be made. Accurately characterizing the reliability data applicability and quality under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This presentation will demonstrate a data-source classification method that ranks reliability data according to applicability and quality criteria for a new launch vehicle. This method accounts for similarities/dissimilarities in source and applicability, as well as operating environments like vibrations, acoustic regime, and shock. This classification approach will be followed by uncertainty-importance routines to assess the need for additional data to reduce uncertainty.

  7. Assessing segment- and corridor-based travel-time reliability on urban freeways : final report.

    Science.gov (United States)

    2016-09-01

    Travel time and its reliability are intuitive performance measures for freeway traffic operations. The objective of this project was to quantify segment-based and corridor-based travel time reliability measures on urban freeways. To achieve this obje...

  8. Assessment of microelectronics packaging for high temperature, high reliability applications

    Energy Technology Data Exchange (ETDEWEB)

    Uribe, F.

    1997-04-01

    This report details characterization and development activities in electronic packaging for high temperature applications. This project was conducted through a Department of Energy sponsored Cooperative Research and Development Agreement between Sandia National Laboratories and General Motors. Even though the target application of this collaborative effort is an automotive electronic throttle control system which would be located in the engine compartment, results of this work are directly applicable to Sandia's national security mission. The component count associated with the throttle control dictates the use of high density packaging not offered by conventional surface mount. An enabling packaging technology was selected and thermal models defined which characterized the thermal and mechanical response of the throttle control module. These models were used to optimize thick film multichip module design, characterize the thermal signatures of the electronic components inside the module, and determine the temperature field and resulting thermal stresses under conditions that may be encountered during the operational life of the throttle control module. Because the need to use unpackaged devices limits the level of testing that can be performed either at the wafer level or as individual dice, an approach to assure a high level of reliability of the unpackaged components was formulated. Component assembly and interconnect technologies were also evaluated and characterized for high temperature applications. Electrical, mechanical and chemical characterizations of enabling die and component attach technologies were performed. Additionally, studies were conducted to assess the performance and reliability of gold and aluminum wire bonding to thick film conductor inks. Kinetic models were developed and validated to estimate wire bond reliability.

  9. Solutions to time variant problems of real-time expert systems

    Science.gov (United States)

    Yeh, Show-Way; Wu, Chuan-Lin; Hung, Chaw-Kwei

    1988-01-01

    Real-time expert systems for monitoring and control are driven by input data which changes with time. One of the subtle problems of this field is the propagation of time-variant problems from rule to rule. This propagation problem is even more complicated in a multiprogramming environment, where the expert system may issue test commands to the system to get data and access time-consuming devices to retrieve data for concurrent reasoning. Two approaches are used to handle the flood of input data. Snapshots can be taken to freeze the system from time to time; the expert system treats the system as a stationary one and traces changes by comparing consecutive snapshots. In the other approach, when an input is available, the rules associated with it are evaluated. For both approaches, if the premise condition of a fired rule changes to being false, the downstream rules should be deactivated. If the status change is due to the disappearance of a transient problem, actions taken by the fired downstream rules which are no longer true may need to be undone, and if a downstream rule is being evaluated, it should not be fired. Three mechanisms for solving this problem are discussed: tracing, backward checking, and censor setting. In the forward tracing mechanism, when the premise conditions of a fired rule become false, the premise conditions of downstream rules which have been fired or are being evaluated due to the firing of that rule are reevaluated; a tree with its root at the rule being deactivated is traversed. In the backward checking mechanism, when a rule is being fired, the expert system checks back on the premise conditions of the upstream rules that resulted in evaluation of the rule to see whether it should be fired; the root of the tree being traversed is the rule being fired. In the censor setting mechanism, when a rule is to be evaluated, a censor is constructed based on the premise conditions of the upstream rules and the censor is evaluated just before the rule is
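
    The forward tracing mechanism can be sketched as a recursive deactivation over the tree of rules a fired rule has triggered; the structures below are hypothetical, not the paper's system:

    ```python
    # Sketch of the forward tracing mechanism: when a fired rule's premise
    # becomes false, deactivate the tree of downstream rules it triggered.
    # Toy structures; hypothetical, not the paper's system.

    class Rule:
        def __init__(self, name, premise):
            self.name = name
            self.premise = premise      # callable: facts -> bool
            self.fired = False
            self.triggered = []         # downstream rules fired because of us

    def forward_trace_deactivate(rule, facts, undo_log):
        """Re-evaluate a fired rule; if its premise no longer holds,
        unfire it and recursively deactivate everything it triggered."""
        if rule.fired and not rule.premise(facts):
            rule.fired = False
            undo_log.append(rule.name)   # actions that may need undoing
            for child in rule.triggered:
                forward_trace_deactivate(child, facts, undo_log)
            rule.triggered.clear()

    # Toy example: a transient high-pressure reading disappears.
    r1 = Rule("high_pressure", lambda f: f["pressure"] > 100)
    r2 = Rule("open_relief_valve", lambda f: f["pressure"] > 100)
    r1.fired = r2.fired = True
    r1.triggered.append(r2)

    facts = {"pressure": 80}             # transient problem has cleared
    undo = []
    forward_trace_deactivate(r1, facts, undo)
    print("actions to undo:", undo)      # both rules are unfired
    ```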

  10. Hairy cell leukemia-variant

    International Nuclear Information System (INIS)

    Quadri, Mohammad I.; Al-Sheikh, Iman H.

    2001-01-01

    Hairy cell leukaemia variant is a very rare chronic lymphoproliferative disorder and is closely related to hairy cell leukemia. We hereby describe a case of hairy cell leukaemia variant for the first time in Saudi Arabia. An elderly Saudi man presented with pallor, massive splenomegaly, and moderate hepatomegaly. Hemoglobin was 7.7 g/dl, platelets were 134 x 10^9/l, and white blood count was 140 x 10^9/l, with 97% being abnormal lymphoid cells with cytoplasmic projections. The morphology, cytochemistry, and immunophenotype of the lymphoid cells were classical of hairy cell leukaemia variant. The bone marrow was easily aspirated and findings were consistent with hairy cell leukaemia variant. (author)

  11. Instrument reliability for high-level nuclear-waste-repository applications

    International Nuclear Information System (INIS)

    Rogue, F.; Binnall, E.P.; Armantrout, G.A.

    1983-01-01

    Reliable instrumentation will be needed to evaluate the characteristics of proposed high-level nuclear-waste-repository sites and to monitor the performance of selected sites during the operational period and into repository closure. A study has been done to assess the reliability of instruments used in Department of Energy (DOE) waste repository related experiments and in other similar geological applications. The study included experiences with geotechnical, hydrological, geochemical, environmental, and radiological instrumentation and associated data acquisition equipment. Though this paper includes some findings on the reliability of instruments in each of these categories, the emphasis is on experiences with geotechnical instrumentation in hostile repository-type environments. We review the failure modes, rates, and mechanisms, along with manufacturers' modifications and design changes to enhance and improve instrument performance, and include recommendations on areas where further improvements are needed.

  12. Reliable reconstruction of HIV-1 whole genome haplotypes reveals clonal interference and genetic hitchhiking among immune escape variants

    Science.gov (United States)

    2014-01-01

    Background Following transmission, HIV-1 evolves into a diverse population, and next generation sequencing enables us to detect variants occurring at low frequencies. Studying viral evolution at the level of whole genomes was hitherto not possible because next generation sequencing delivers relatively short reads. Results We here provide a proof of principle that whole HIV-1 genomes can be reliably reconstructed from short reads, and use this to study the selection of immune escape mutations at the level of whole genome haplotypes. Using realistically simulated HIV-1 populations, we demonstrate that reconstruction of complete genome haplotypes is feasible with high fidelity. We do not reconstruct all genetically distinct genomes, but each reconstructed haplotype represents one or more of the quasispecies in the HIV-1 population. We then reconstruct 30 whole genome haplotypes from published short sequence reads sampled longitudinally from a single HIV-1 infected patient. We confirm the reliability of the reconstruction by validating our predicted haplotype genes with single genome amplification sequences, and by comparing haplotype frequencies with observed epitope escape frequencies. Conclusions Phylogenetic analysis shows that the HIV-1 population undergoes selection driven evolution, with successive replacement of the viral population by novel dominant strains. We demonstrate that immune escape mutants evolve in a dependent manner with various mutations hitchhiking along with others. As a consequence of this clonal interference, selection coefficients have to be estimated for complete haplotypes and not for individual immune escapes. PMID:24996694

  13. Semantic prioritization of novel causative genomic variants

    KAUST Repository

    Boudellioua, Imene

    2017-04-17

    Discriminating the causative disease variant(s) for individuals with inherited or de novo mutations presents one of the main challenges faced by the clinical genetics community today. Computational approaches for variant prioritization include machine learning methods utilizing a large number of features, including molecular information, interaction networks, or phenotypes. Here, we demonstrate the PhenomeNET Variant Predictor (PVP) system that exploits semantic technologies and automated reasoning over genotype-phenotype relations to filter and prioritize variants in whole exome and whole genome sequencing datasets. We demonstrate the performance of PVP in identifying causative variants on a large number of synthetic whole exome and whole genome sequences, covering a wide range of diseases and syndromes. In a retrospective study, we further illustrate the application of PVP for the interpretation of whole exome sequencing data in patients suffering from congenital hypothyroidism. We find that PVP accurately identifies causative variants in whole exome and whole genome sequencing datasets and provides a powerful resource for the discovery of causal variants.

  14. Semantic prioritization of novel causative genomic variants

    KAUST Repository

    Boudellioua, Imene; Mohamad Razali, Rozaimi; Kulmanov, Maxat; Hashish, Yasmeen; Bajic, Vladimir B.; Goncalves-Serra, Eva; Schoenmakers, Nadia; Gkoutos, Georgios V.; Schofield, Paul N.; Hoehndorf, Robert

    2017-01-01

    Discriminating the causative disease variant(s) for individuals with inherited or de novo mutations presents one of the main challenges faced by the clinical genetics community today. Computational approaches for variant prioritization include machine learning methods utilizing a large number of features, including molecular information, interaction networks, or phenotypes. Here, we demonstrate the PhenomeNET Variant Predictor (PVP) system that exploits semantic technologies and automated reasoning over genotype-phenotype relations to filter and prioritize variants in whole exome and whole genome sequencing datasets. We demonstrate the performance of PVP in identifying causative variants on a large number of synthetic whole exome and whole genome sequences, covering a wide range of diseases and syndromes. In a retrospective study, we further illustrate the application of PVP for the interpretation of whole exome sequencing data in patients suffering from congenital hypothyroidism. We find that PVP accurately identifies causative variants in whole exome and whole genome sequencing datasets and provides a powerful resource for the discovery of causal variants.

  15. Contributions of Function-Altering Variants in Genes Implicated in Pubertal Timing and Body Mass for Self-Limited Delayed Puberty.

    Science.gov (United States)

    Howard, Sasha R; Guasti, Leonardo; Poliandri, Ariel; David, Alessia; Cabrera, Claudia P; Barnes, Michael R; Wehkalampi, Karoliina; O'Rahilly, Stephen; Aiken, Catherine E; Coll, Anthony P; Ma, Marcella; Rimmington, Debra; Yeo, Giles S H; Dunkel, Leo

    2018-02-01

    Self-limited delayed puberty (DP) is often associated with a delay in physical maturation, but although highly heritable the causal genetic factors remain elusive. Genome-wide association studies of the timing of puberty have identified multiple loci for age at menarche in females and voice break in males, particularly in pathways controlling energy balance. We sought to assess the contribution of rare variants in such genes to the phenotype of familial DP. We performed whole-exome sequencing in 67 pedigrees (125 individuals with DP and 35 unaffected controls) from our unique cohort of familial self-limited DP. Using a whole-exome sequencing filtering pipeline, one candidate gene [fat mass and obesity-associated gene (FTO)] was identified. In silico, in vitro, and mouse model studies were performed to investigate the pathogenicity of FTO variants and the timing of puberty in FTO+/- mice. We identified potentially pathogenic rare variants in 283 genes in linkage disequilibrium with genome-wide association study loci for age at menarche. Of these, five genes were implicated in the control of body mass. After filtering for segregation with trait, one candidate, FTO, was retained. Two FTO variants, found in 14 affected individuals from three families, were also associated with leanness in these patients with DP. One variant (p.Leu44Val) demonstrated altered demethylation activity of the mutant protein in vitro. Fto+/- mice displayed a significantly delayed timing of pubertal onset (P < 0.05). Thus, variants in genes implicated in the timing of puberty in the general population may contribute to the pathogenesis of self-limited DP. Copyright © 2017 Endocrine Society

  16. REMOTE SENSING APPLICATIONS WITH HIGH RELIABILITY IN CHANGJIANG WATER RESOURCE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    L. Ma

    2018-04-01

    Full Text Available Remote sensing technology has been widely used in many fields. However, most applications cannot obtain information with high reliability and high accuracy at large scale, especially applications using automatic interpretation methods. We have designed an application-oriented technology system (PIR) composed of a series of accurate interpretation techniques, which achieves over 85% correctness in Water Resource Management from the point of view of photogrammetry and expert knowledge. The techniques comprise spatial positioning techniques from the point of view of photogrammetry, feature interpretation techniques from the point of view of expert knowledge, and rationality analysis techniques from the point of view of data mining. Each interpreted polygon is accurate enough to be applied to accuracy-sensitive projects, such as the Three Gorges Project and the South-to-North Water Diversion Project. In this paper, we present several remote sensing applications with high reliability in Changjiang Water Resource Management, including water pollution investigation, illegal construction inspection, and water conservation monitoring, etc.

  17. Remote Sensing Applications with High Reliability in Changjiang Water Resource Management

    Science.gov (United States)

    Ma, L.; Gao, S.; Yang, A.

    2018-04-01

    Remote sensing technology has been widely used in many fields. However, most applications cannot obtain information with high reliability and high accuracy at large scale, especially applications using automatic interpretation methods. We have designed an application-oriented technology system (PIR) composed of a series of accurate interpretation techniques, which achieves over 85% correctness in Water Resource Management from the point of view of photogrammetry and expert knowledge. The techniques comprise spatial positioning techniques from the point of view of photogrammetry, feature interpretation techniques from the point of view of expert knowledge, and rationality analysis techniques from the point of view of data mining. Each interpreted polygon is accurate enough to be applied to accuracy-sensitive projects, such as the Three Gorges Project and the South-to-North Water Diversion Project. In this paper, we present several remote sensing applications with high reliability in Changjiang Water Resource Management, including water pollution investigation, illegal construction inspection, and water conservation monitoring, etc.

  18. Association of Adiposity Genetic Variants With Menarche Timing in 92,105 Women of European Descent

    NARCIS (Netherlands)

    Fernández-Rhodes, L.; Demerath, E.W.; Cousminer, D.L.; Tao, R.; Dreyfus, J.G.; Esko, T.; Smith, A.V.; Gudnason, V.; Harris, T.B.; Launer, L.; McArdle, P.F.; Yerges-Armstrong, L.M.; Elks, C.E.; Strachan, D.P.; Kutalik, Z.; Vollenweider, P.; Feenstra, B.; Boyd, H.A.; Metspalu, A.; Mihailov, E.; Broer, L.; Zillikens, M.C.; Oostra, B.A.; van Duijn, C.M.; Lunetta, K.L.; Perry, J.R.; Murray, A.; Koller, D.L.; Lai, D.; Corre, T.; Toniolo, D.; Albrecht, E.; Stöckl, D.; Grallert, H.; Gieger, C.; Hayward, C.; Polasek, O.; Rudan, I.; Wilson, J.F.; He, C.; Kraft, P.; Hu, F.B.; Hunter, D.J.; Hottenga, J.J.; Willemsen, G.; Boomsma, D.I.; Byrne, E.M.; Martin, N.G.; Montgomery, G.W.; Warrington, N.M.; Pennell, C.E.; Stolk, L.; Visser, J.A.; Hofman, A.; Uitterlinden, A.G.; Rivadeneira, F.; Lin, P.; Fisher, S.L.; Bierut, L.J.; Crisponi, L.; Porcu, E.; Mangino, M.; Zhai, G.; Spector, T.D.; Buring, J.E.; Rose, L.M.; Ridker, P.M.; Poole, C.; Hirschhorn, J.N.; Murabito, J.M.; Chasman, D.I.; Widén, E.; North, K.E.; Ong, K.K.; Franceschini, N.

    2013-01-01

    Obesity is of global health concern. There are well-described inverse relationships between female pubertal timing and obesity. Recent genome-wide association studies of age at menarche identified several obesity-related variants. Using data from the ReproGen Consortium, we employed meta-analytical

  19. A Survey on the Reliability of Power Electronics in Electro-Mobility Applications

    DEFF Research Database (Denmark)

    Gadalla, Brwene Salah Abdelkarim; Schaltz, Erik; Blaabjerg, Frede

    2015-01-01

    Reliability is an important issue in the field of power electronics, since most electrical energy is today processed by power electronics. In most electro-mobility applications, e.g. electric and hybrid-electric vehicles, power electronics are commonly used in very harsh environments...... and extending the service lifetime as well. Research within power electronics is of high interest as it has an important impact on the industry of electro-mobility applications. According to the aforementioned explanations, this paper provides an overview of the common factors (thermal cycles, power...... cycles, vibrations, voltage stress and current ripple stress) affecting the reliability of power electronics in electro-mobility applications. Also, the researchers' perspective is summarized from 2001 to 2015....

  20. Modeling, implementation, and validation of arterial travel time reliability : [summary].

    Science.gov (United States)

    2013-11-01

    Travel time reliability (TTR) has been proposed as a better measure of a facility's performance than a statistical measure like peak hour demand. TTR is based on more information about average traffic flows and longer time periods, thus inc...

  1. Gamma processes and peaks-over-threshold distributions for time-dependent reliability

    International Nuclear Information System (INIS)

    Noortwijk, J.M. van; Weide, J.A.M. van der; Kallen, M.J.; Pandey, M.D.

    2007-01-01

    In the evaluation of structural reliability, a failure is defined as the event in which stress exceeds a resistance that is liable to deterioration. This paper presents a method to combine the two stochastic processes of deteriorating resistance and fluctuating load for computing the time-dependent reliability of a structural component. The deterioration process is modelled as a gamma process, which is a stochastic process with independent non-negative increments having a gamma distribution with identical scale parameter. The stochastic process of loads is generated by a Poisson process. The variability of the random loads is modelled by a peaks-over-threshold distribution (such as the generalised Pareto distribution). These stochastic processes of deterioration and load are combined to evaluate the time-dependent reliability
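
    In outline, the combination described here can be written as follows; the notation is illustrative and simplified to a fixed initial resistance:

    ```latex
    % Sketch of the time-dependent reliability for gamma deterioration plus
    % Poisson-arriving loads (notation illustrative, simplified from the paper).
    \[
      X(t) \sim \mathrm{Ga}\bigl(x \mid v(t), u\bigr), \qquad
      \mathrm{Ga}(x \mid v, u) = \frac{u^{v} x^{v-1} e^{-u x}}{\Gamma(v)},
    \]
    so the resistance at time $t$ is $R(t) = r_0 - X(t)$. With loads arriving
    as a Poisson process of rate $\lambda$ and i.i.d. peak sizes with
    distribution $F_S$ (e.g., generalised Pareto above a threshold), the
    survival probability over $[0,t]$ follows by averaging over the
    deterioration paths:
    \[
      \Pr\{\text{no failure in } [0,t]\}
      = \mathbb{E}\!\left[ \exp\!\left( -\lambda \int_0^t
          \bigl(1 - F_S(r_0 - X(s))\bigr)\, \mathrm{d}s \right) \right].
    \]
    ```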

  2. System-level Reliability Assessment of Power Stage in Fuel Cell Application

    DEFF Research Database (Denmark)

    Zhou, Dao; Wang, Huai; Blaabjerg, Frede

    2016-01-01

    reliability. In a case study of a 5 kW fuel cell power stage, the parameter variations of the lifetime model prove that the exponential factor of the junction temperature fluctuation is the most sensitive parameter. Besides, if a 5-out-of-6 redundancy is used, it is concluded that both the B10 and the B1 system......Highly efficient and less polluting fuel cell stacks are emerging as strong candidates for the power solution used for mobile base stations. In the application of backup power, availability and reliability hold the highest priority. This paper considers the reliability metrics from...... the component level to the system level for the power stage used in a fuel cell application. It starts with an estimation of the annual accumulated damage for the key power electronic components according to the real mission profile of the fuel cell system. Then, considering the parameter variations in both...

  3. ESTIMATING RELIABILITY OF DISTURBANCES IN SATELLITE TIME SERIES DATA BASED ON STATISTICAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    Z.-G. Zhou

    2016-06-01

    Full Text Available Normally, the status of land cover is inherently dynamic and changes continuously on a temporal scale. However, disturbances or abnormal changes of land cover, caused by events such as forest fire, flood, deforestation, and plant diseases, occur worldwide at unknown times and locations. Timely detection and characterization of these disturbances is of importance for land cover monitoring. Recently, many time-series-analysis methods have been developed for near real-time or online disturbance detection using satellite image time series. However, most of the present methods only label the detection results with "Change/No change", while few focus on estimating the reliability (or confidence level) of the detected disturbances in image time series. To this end, this paper proposes a statistical analysis method for estimating the reliability of disturbances in newly available remote sensing image time series, through analysis of the full temporal information contained in the time series data. The method consists of three main steps: (1) segmenting and modelling historical time series data based on Breaks for Additive Seasonal and Trend (BFAST); (2) forecasting and detecting disturbances in new time series data; (3) estimating the reliability of each detected disturbance using statistical analysis based on Confidence Intervals (CI) and Confidence Levels (CL). The method was validated by estimating the reliability of disturbance regions caused by a recent severe flood that occurred around the border of Russia and China. Results demonstrated that the method can estimate the reliability of disturbances detected in satellite images with estimation error less than 5% and overall accuracy up to 90%.

  4. Modeling, implementation, and validation of arterial travel time reliability.

    Science.gov (United States)

    2013-11-01

    Previous research funded by Florida Department of Transportation (FDOT) developed a method for estimating travel time reliability for arterials. This method was not initially implemented or validated using field data. This project evaluated and r...

  5. A double-loop adaptive sampling approach for sensitivity-free dynamic reliability analysis

    International Nuclear Information System (INIS)

    Wang, Zequn; Wang, Pingfeng

    2015-01-01

    Dynamic reliability measures the reliability of an engineered system considering time-variant operating conditions and component deterioration. Due to high computational costs, conducting dynamic reliability analysis at an early system design stage remains challenging. This paper presents a confidence-based meta-modeling approach, referred to as double-loop adaptive sampling (DLAS), for efficient sensitivity-free dynamic reliability analysis. The DLAS builds a Gaussian process (GP) model sequentially to approximate extreme system responses over time, so that Monte Carlo simulation (MCS) can be employed directly to estimate dynamic reliability. A generic confidence measure is developed to evaluate the accuracy of dynamic reliability estimation while using the MCS approach based on the developed GP models. A double-loop adaptive sampling scheme is developed to efficiently update the GP model in a sequential manner, by considering system input variables and time concurrently in two sampling loops. The model updating process using the developed sampling scheme can be terminated once the user-defined confidence target is satisfied. The developed DLAS approach eliminates the computationally expensive sensitivity analysis process, thus substantially improving the efficiency of dynamic reliability analysis. Three case studies are used to demonstrate the efficacy of DLAS for dynamic reliability analysis. - Highlights: • Developed a novel adaptive sampling approach for dynamic reliability analysis. • Developed a new metric to quantify the accuracy of dynamic reliability estimation. • Developed a new sequential sampling scheme to efficiently update surrogate models. • Three case studies were used to demonstrate the efficacy of the new approach. • Case study results showed substantially enhanced efficiency with high accuracy
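
    The surrogate-plus-simulation idea can be sketched as follows; this is not the DLAS code, and the limit state, sampling criterion, and all values are invented for illustration:

    ```python
    # Sketch of the surrogate-plus-MCS idea (not the DLAS code): fit a GP to
    # the extreme response over time, estimate P(failure) by Monte Carlo, and
    # refine the GP where the sign of the prediction is least trustworthy
    # (small |mean|/std). Toy limit state; all names and values invented.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def extreme_response(x):
        """Worst (minimum) margin over a time grid for scalar input x."""
        t = np.linspace(0.0, 5.0, 50)
        g = 2.0 + 0.3 * x - 0.5 * np.sin(t) * x   # toy time-variant margin
        return g.min()

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(6, 1))
    y_train = np.array([extreme_response(x[0]) for x in X_train])

    X_mc = rng.normal(size=(5000, 1))             # MCS population
    for _ in range(10):                           # adaptive refinement loop
        gp = GaussianProcessRegressor().fit(X_train, y_train)
        mu, sd = gp.predict(X_mc, return_std=True)
        pf = np.mean(mu <= 0.0)                   # failure when margin <= 0
        worst = np.argmin(np.abs(mu) / np.maximum(sd, 1e-12))
        x_new = X_mc[worst]
        X_train = np.vstack([X_train, x_new])
        y_train = np.append(y_train, extreme_response(x_new[0]))

    print(f"estimated time-interval failure probability: {pf:.4f}")
    ```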

  6. The reliable solution and computation time of variable parameters logistic model

    Science.gov (United States)

    Wang, Pengfei; Pan, Xinnong

    2018-05-01

    The study investigates the reliable computation time (RCT, termed T_c) obtained by applying double-precision computation to a variable parameters logistic map (VPLM). Firstly, by using the proposed method, we obtain the reliable solutions for the logistic map. Secondly, we construct 10,000 samples of reliable experiments from a time-dependent non-stationary parameters VPLM and then calculate the mean T_c. The results indicate that, for each different initial value, the T_c values of the VPLM are generally different. However, the mean T_c tends to a constant value when the sample number is large enough. The maximum, minimum, and probable distribution functions of T_c are also obtained, which can help us to identify the robustness of applying nonlinear time series theory to forecasting by using the VPLM output. In addition, the T_c of the fixed-parameter experiments of the logistic map is obtained, and the results suggest that this T_c matches the theoretically predicted value.
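
    The notion of a reliable computation time can be illustrated by iterating the map in a working precision and in a higher-precision reference and recording the first step at which they diverge; the sketch below uses float32 against float64 as a stand-in for double against higher precision:

    ```python
    # Sketch of the reliable-computation-time idea: iterate the logistic map
    # in two precisions and record the first step where they diverge beyond
    # a tolerance. float32 vs. float64 stands in for double vs. higher
    # precision; r, tol, and the sample range are illustrative.
    import numpy as np

    def reliable_time(x0, r=4.0, tol=1e-3, n_max=10_000):
        x_lo = np.float32(x0)
        x_hi = np.float64(x0)
        for n in range(1, n_max + 1):
            x_lo = np.float32(r) * x_lo * (np.float32(1.0) - x_lo)
            x_hi = r * x_hi * (1.0 - x_hi)
            if abs(float(x_lo) - float(x_hi)) > tol:
                return n          # first unreliable step, T_c ~= n
        return n_max

    samples = np.random.default_rng(1).uniform(0.1, 0.9, size=1000)
    tcs = [reliable_time(x0) for x0 in samples]
    print("mean T_c over samples:", sum(tcs) / len(tcs))
    ```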

  7. Realization of Timed Reliable Communication over Off-The-Shelf Wireless Technologies

    DEFF Research Database (Denmark)

    Malinowsky, B.; Groenbaek, Jesper; Schwefel, Hans-Peter

    2013-01-01

    Industrial and safety-critical applications pose strict requirements for timeliness and reliability for the communication solution. Thereby the use of off-the-shelf (OTS) wireless communication technologies can be attractive to achieve low cost and easy deployment. This paper presents and analyse...

  8. Application-Driven Reliability Measures and Evaluation Tool for Fault-Tolerant Real-Time Systems

    National Research Council Canada - National Science Library

    Krishna, C

    2001-01-01

    .... The measure combines graphic-theoretic concepts in evaluating the underlying reliability of the network and other means to evaluate the ability of the network to support interprocessor traffic...

  9. Application of a truncated normal failure distribution in reliability testing

    Science.gov (United States)

    Groves, C., Jr.

    1968-01-01

    Statistical truncated normal distribution function is applied as a time-to-failure distribution function in equipment reliability estimations. Age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.

  10. Accounting for Model Uncertainties Using Reliability Methods - Application to Carbon Dioxide Geologic Sequestration System. Final Report

    International Nuclear Information System (INIS)

    Mok, Chin Man; Doughty, Christine; Zhang, Keni; Pruess, Karsten; Kiureghian, Armen; Zhang, Miao; Kaback, Dawn

    2010-01-01

    A new computer code, CALRELTOUGH, which uses reliability methods to incorporate parameter sensitivity and uncertainty analysis into subsurface flow and transport models, was developed by Geomatrix Consultants, Inc. in collaboration with Lawrence Berkeley National Laboratory and University of California at Berkeley. The CALREL reliability code was developed at the University of California at Berkely for geotechnical applications and the TOUGH family of codes was developed at Lawrence Berkeley National Laboratory for subsurface flow and tranport applications. The integration of the two codes provides provides a new approach to deal with uncertainties in flow and transport modeling of the subsurface, such as those uncertainties associated with hydrogeology parameters, boundary conditions, and initial conditions of subsurface flow and transport using data from site characterization and monitoring for conditioning. The new code enables computation of the reliability of a system and the components that make up the system, instead of calculating the complete probability distributions of model predictions at all locations at all times. The new CALRELTOUGH code has tremendous potential to advance subsurface understanding for a variety of applications including subsurface energy storage, nuclear waste disposal, carbon sequestration, extraction of natural resources, and environmental remediation. The new code was tested on a carbon sequestration problem as part of the Phase I project. Phase iI was not awarded.

  11. Critical spare parts ordering decisions using conditional reliability and stochastic lead time

    International Nuclear Information System (INIS)

    Godoy, David R.; Pascual, Rodrigo; Knights, Peter

    2013-01-01

    Asset-intensive companies face great pressure to reduce operation costs and increase utilization. This scenario often leads to over-stress on critical equipment and its spare parts associated, affecting availability, reliability, and system performance. As these resources impact considerably on financial and operational structures, the opportunity is given by demand for decision-making methods for the management of spare parts processes. We proposed an ordering decision-aid technique which uses a measurement of spare performance, based on the stress–strength interference theory; which we have called Condition-Based Service Level (CBSL). We focus on Condition Managed Critical Spares (CMS), namely, spares which are expensive, highly reliable, with higher lead times, and are not available in store. As a mitigation measure, CMS are under condition monitoring. The aim of the paper is orienting the decision time for CMS ordering or just continuing the operation. The paper presents a graphic technique which considers a rule for decision based on both condition-based reliability function and a stochastic/fixed lead time. For the stochastic lead time case, results show that technique is effective to determine the time when the system operation is reliable and can withstand the lead time variability, satisfying a desired service level. Additionally, for the constant lead time case, the technique helps to define insurance spares. In conclusion, presented ordering decision rule is useful to asset managers for enhancing the operational continuity affected by spare parts

  12. Reliability and validity of a smartphone pulse rate application for the assessment of resting and elevated pulse rate.

    Science.gov (United States)

    Mitchell, Katy; Graff, Megan; Hedt, Corbin; Simmons, James

    2016-08-01

    Purpose/hypothesis: This study was designed to investigate the test-retest reliability, concurrent validity, and the standard error of measurement (SEm) of a pulse rate assessment application (Azumio®'s Instant Heart Rate) on both Android® and iOS® (iphone operating system) smartphones as compared to a FT7 Polar® Heart Rate monitor. Number of subjects: 111. Resting (sitting) pulse rate was assessed twice and then the participants were asked to complete a 1-min standing step test and then immediately re-assessed. The smartphone assessors were blinded to their measurements. Test-retest reliability (intraclass correlation coefficient [ICC 2,1] and 95% confidence interval) for the three tools at rest (time 1/time 2): iOS® (0.76 [0.67-0.83]); Polar® (0.84 [0.78-0.89]); and Android® (0.82 [0.75-0.88]). Concurrent validity at rest time 2 (ICC 2,1) with the Polar® device: IOS® (0.92 [0.88-0.94]) and Android® (0.95 [0.92-0.96]). Concurrent validity post-exercise (time 3) (ICC) with the Polar® device: iOS® (0.90 [0.86-0.93]) and Android® (0.94 [0.91-0.96]). The SEm values for the three devices at rest: iOS® (5.77 beats per minute [BPM]), Polar® (4.56 BPM) and Android® (4.96 BPM). The Android®, iOS®, and Polar® devices showed acceptable test-retest reliability at rest and post-exercise. Both the smartphone platforms demonstrated concurrent validity with the Polar® at rest and post-exercise. The Azumio® Instant Heart Rate application when used by either platform appears to be a reliable and valid tool to assess pulse rate in healthy individuals.

  13. Reliability of surface electromyography timing parameters in gait in cervical spondylotic myelopathy.

    LENUS (Irish Health Repository)

    Malone, Ailish

    2012-02-01

    The aims of this study were to validate a computerised method to detect muscle activity from surface electromyography (SEMG) signals in gait in patients with cervical spondylotic myelopathy (CSM), and to evaluate the test-retest reliability of the activation times designated by this method. SEMG signals were recorded from rectus femoris (RF), biceps femoris (BF), tibialis anterior (TA), and medial gastrocnemius (MG), during gait in 12 participants with CSM on two separate test days. Four computerised activity detection methods, based on the Teager-Kaiser Energy Operator (TKEO), were applied to a subset of signals and compared to visual interpretation of muscle activation. The most accurate method was then applied to all signals for evaluation of test-retest reliability. A detection method based on a combined slope and amplitude threshold showed the highest agreement (87.5%) with visual interpretation. With respect to reliability, the standard error of measurement (SEM) of the timing of RF, TA and MG between test days was 5.5% stride duration or less, while the SEM of BF was 9.4%. The timing parameters of RF, TA and MG designated by this method were considered sufficiently reliable for use in clinical practice, however the reliability of BF was questionable.

  14. Data-variant kernel analysis

    CERN Document Server

    Motai, Yuichi

    2015-01-01

    Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to its applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include

  15. Establishing monitoring programs for travel time reliability. [supporting datasets

    Science.gov (United States)

    2014-01-01

    The objective of this project was to develop system designs for programs to monitor travel time reliability and to prepare a guidebook that practitioners and others can use to design, build, operate, and maintain such systems. Generally, such travel ...

  16. Reliability in automotive and mechanical engineering determination of component and system reliability

    CERN Document Server

    Bertsche, Bernd

    2008-01-01

    In the present contemporary climate of global competition in every branch of engineering and manufacture it has been shown from extensive customer surveys that above every other attribute, reliability stands as the most desired feature in a finished product. To survive this relentless fight for survival any organisation, which neglect the plea of attaining to excellence in reliability, will do so at a serious cost Reliability in Automotive and Mechanical Engineering draws together a wide spectrum of diverse and relevant applications and analyses on reliability engineering. This is distilled into this attractive and well documented volume and practising engineers are challenged with the formidable task of simultaneously improving reliability and reducing the costs and down-time due to maintenance. The volume brings together eleven chapters to highlight the importance of the interrelated reliability and maintenance disciplines. They represent the development trends and progress resulting in making this book ess...

  17. New SP-values of time and reliability for freight transport in the Netherlands

    NARCIS (Netherlands)

    Jong, G. de; Kouwenhoven, M.; Bates, J.; Koster, P.; Verhoef, E.; Tavasszy, L.; Warffemius, P.

    2014-01-01

    This paper discusses the methods used in a study on the values of time and reliability in freight transport in the Netherlands. SP surveys were carried out among more than 800 shippers and carriers. A novel feature is that both for the value of time and reliability two additive components are

  18. Plastic packaged microcircuits: Quality, reliability, and cost issues

    Science.gov (United States)

    Pecht, Michael G.; Agarwal, Rakesh; Quearry, Dan

    1993-12-01

    Plastic encapsulated microcircuits (PEMs) find their main application in commercial and telecommunication electronics. The advantages of PEMs in cost, size, weight, performance, and market lead-time, have attracted 97% of the market share of worldwide microcircuit sales. However, PEMs have always been resisted in US Government and military applications due to the perception that PEM reliability is low. This paper surveys plastic packaging with respect to the issues of reliability, market lead-time, performance, cost, and weight as a means to guide part-selection and system-design.

  19. Reliability of Capacitors for DC-Link Applications in Power Electronic Converters

    DEFF Research Database (Denmark)

    Wang, Huai; Blaabjerg, Frede

    2014-01-01

    DC-link capacitors are an important part in the majority of power electronic converters which contribute to cost, size and failure rate on a considerable scale. From capacitor users' viewpoint, this paper presents a review on the improvement of reliability of dc link in power electronic converters...... from two aspects: 1) reliability-oriented dc-link design solutions; 2) conditioning monitoring of dc-link capacitors during operation. Failure mechanisms, failure modes and lifetime models of capacitors suitable for the applications are also discussed as a basis to understand the physics......-of-failure. This review serves to provide a clear picture of the state-of-the-art research in this area and to identify the corresponding challenges and future research directions for capacitors and their dc-link applications....

  20. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second place. 407 refs., 4 figs., 2 tabs.

  1. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second place. 407 refs., 4 figs., 2 tabs.

  2. The value of reliability with endogenous meeting time

    DEFF Research Database (Denmark)

    Abegaz, Dereje Fentie; Fosgerau, Mogens

    for transport policy. Some consensus has been reached regarding the theoretical basis for measuring the cost of travel time variability (Small & Verhoef, 2007). Usually, the value of travel time variability is modeled using one of two broad theoretical approaches. The approaches differ in their interpretation...... times are correlated. Moreover, trip costs are found to increase with increasing variance of the difference of individual travel times. In this paper, we extend the Fosgerau et al. (2012) model by adding the concept of an agreed meeting start time as well as penalties for being late relative...... to this time. We extend the model to incorporate a framework where individuals bargain to choose the meeting start time. In this model, we are able to derive the value to both individuals of an improvement in the reliability of travel times for either person. A marginal improvement in travel time variability...

  3. A discrete-time Bayesian network reliability modeling and analysis framework

    International Nuclear Information System (INIS)

    Boudali, H.; Dugan, J.B.

    2005-01-01

    Dependability tools are becoming an indispensable tool for modeling and analyzing (critical) systems. However the growing complexity of such systems calls for increasing sophistication of these tools. Dependability tools need to not only capture the complex dynamic behavior of the system components, but they must be also easy to use, intuitive, and computationally efficient. In general, current tools have a number of shortcomings including lack of modeling power, incapacity to efficiently handle general component failure distributions, and ineffectiveness in solving large models that exhibit complex dependencies between their components. We propose a novel reliability modeling and analysis framework based on the Bayesian network (BN) formalism. The overall approach is to investigate timed Bayesian networks and to find a suitable reliability framework for dynamic systems. We have applied our methodology to two example systems and preliminary results are promising. We have defined a discrete-time BN reliability formalism and demonstrated its capabilities from a modeling and analysis point of view. This research shows that a BN based reliability formalism is a powerful potential solution to modeling and analyzing various kinds of system components behaviors and interactions. Moreover, being based on the BN formalism, the framework is easy to use and intuitive for non-experts, and provides a basis for more advanced and useful analyses such as system diagnosis

  4. Dependent systems reliability estimation by structural reliability approach

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2014-01-01

    Estimation of system reliability by classical system reliability methods generally assumes that the components are statistically independent, thus limiting its applicability in many practical situations. A method is proposed for estimation of the system reliability with dependent components, where...... the leading failure mechanism(s) is described by physics of failure model(s). The proposed method is based on structural reliability techniques and accounts for both statistical and failure effect correlations. It is assumed that failure of any component is due to increasing damage (fatigue phenomena...... identification. Application of the proposed method can be found in many real world systems....

  5. Reliability theory with applications to preventive maintenance

    CERN Document Server

    Gertsbakh, Ilya

    2000-01-01

    The material in this book was first presented as a one-semester course in Relia­ bility Theory and Preventive Maintenance for M.Sc. students of the Industrial Engineering Department of Ben Gurion University in the 1997/98 and 1998/99 academic years. Engineering students are mainly interested in the applied part of this theory. The value of preventive maintenance theory lies in the possibility of its imple­ mentation, which crucially depends on how we handle statistical reliability data. The very nature of the object of reliability theory - system lifetime - makes it extremely difficult to collect large amounts of data. The data available are usu­ ally incomplete, e.g. heavily censored. Thus, the desire to make the course material more applicable led me to include in the course topics such as mod­ eling system lifetime distributions (Chaps. 1,2) and the maximum likelihood techniques for lifetime data processing (Chap. 3). A course in the theory of statistics is aprerequisite for these lectures. Stan­ dard...

  6. RareVar: A Framework for Detecting Low-Frequency Single-Nucleotide Variants.

    Science.gov (United States)

    Hao, Yangyang; Xuei, Xiaoling; Li, Lang; Nakshatri, Harikrishna; Edenberg, Howard J; Liu, Yunlong

    2017-07-01

    Accurate identification of low-frequency somatic point mutations in tumor samples has important clinical utilities. Although high-throughput sequencing technology enables capturing such variants while sequencing primary tumor samples, our ability for accurate detection is compromised when the variant frequency is close to the sequencer error rate. Most current experimental and bioinformatic strategies target mutations with ≥5% allele frequency, which limits our ability to understand the cancer etiology and tumor evolution. We present an experimental and computational modeling framework, RareVar, to reliably identify low-frequency single-nucleotide variants from high-throughput sequencing data under standard experimental protocols. RareVar protocol includes a benchmark design by pooling DNAs from already sequenced individuals at various concentrations to target variants at desired frequencies, 0.5%-3% in our case. By applying a generalized, linear model-based, position-specific error model, followed by machine-learning-based variant calibration, our approach outperforms existing methods. Our method can be applied on most capture and sequencing platforms without modifying the experimental protocol.

  7. Identifying structural variants using linked-read sequencing data.

    Science.gov (United States)

    Elyanow, Rebecca; Wu, Hsin-Ta; Raphael, Benjamin J

    2017-11-03

    Structural variation, including large deletions, duplications, inversions, translocations, and other rearrangements, is common in human and cancer genomes. A number of methods have been developed to identify structural variants from Illumina short-read sequencing data. However, reliable identification of structural variants remains challenging because many variants have breakpoints in repetitive regions of the genome and thus are difficult to identify with short reads. The recently developed linked-read sequencing technology from 10X Genomics combines a novel barcoding strategy with Illumina sequencing. This technology labels all reads that originate from a small number (~5-10) DNA molecules ~50Kbp in length with the same molecular barcode. These barcoded reads contain long-range sequence information that is advantageous for identification of structural variants. We present Novel Adjacency Identification with Barcoded Reads (NAIBR), an algorithm to identify structural variants in linked-read sequencing data. NAIBR predicts novel adjacencies in a individual genome resulting from structural variants using a probabilistic model that combines multiple signals in barcoded reads. We show that NAIBR outperforms several existing methods for structural variant identification - including two recent methods that also analyze linked-reads - on simulated sequencing data and 10X whole-genome sequencing data from the NA12878 human genome and the HCC1954 breast cancer cell line. Several of the novel somatic structural variants identified in HCC1954 overlap known cancer genes. Software is available at compbio.cs.brown.edu/software. braphael@princeton.edu. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  8. Characterization of reliability of spike timing in spinal interneurons during oscillating inputs

    DEFF Research Database (Denmark)

    Beierholm, Ulrik; Nielsen, Carsten D.; Ryge, Jesper

    2001-01-01

    that interneurons can respond with a high reliability of spike timing, but only by combining fast and slow oscillations is it possible to obtain a high reliability of firing during rhythmic locomotor movements. Theoretical analysis of the rotation number provided new insights into the mechanism for obtaining......The spike timing in rhythmically active interneurons in the mammalian spinal locomotor network varies from cycle to cycle. We tested the contribution from passive membrane properties to this variable firing pattern, by measuring the reliability of spike timing, P, in interneurons in the isolated...... the analysis we used a leaky integrate and fire (LIF) model with a noise term added. The LIF model was able to reproduce the experimentally observed properties of P as well as the low-pass properties of the membrane. The LIF model enabled us to use the mathematical theory of nonlinear oscillators to analyze...

  9. Towards more accurate and reliable predictions for nuclear applications

    International Nuclear Information System (INIS)

    Goriely, S.

    2015-01-01

    The need for nuclear data far from the valley of stability, for applications such as nuclear astrophysics or future nuclear facilities, challenges the robustness as well as the predictive power of present nuclear models. Most of the nuclear data evaluation and prediction are still performed on the basis of phenomenological nuclear models. For the last decades, important progress has been achieved in fundamental nuclear physics, making it now feasible to use more reliable, but also more complex microscopic or semi-microscopic models in the evaluation and prediction of nuclear data for practical applications. In the present contribution, the reliability and accuracy of recent nuclear theories are discussed for most of the relevant quantities needed to estimate reaction cross sections and beta-decay rates, namely nuclear masses, nuclear level densities, gamma-ray strength, fission properties and beta-strength functions. It is shown that nowadays, mean-field models can be tuned at the same level of accuracy as the phenomenological models, renormalized on experimental data if needed, and therefore can replace the phenomenogical inputs in the prediction of nuclear data. While fundamental nuclear physicists keep on improving state-of-the-art models, e.g. within the shell model or ab initio models, nuclear applications could make use of their most recent results as quantitative constraints or guides to improve the predictions in energy or mass domain that will remain inaccessible experimentally. (orig.)

  10. Research on Control Method Based on Real-Time Operational Reliability Evaluation for Space Manipulator

    Directory of Open Access Journals (Sweden)

    Yifan Wang

    2014-05-01

    Full Text Available A control method based on real-time operational reliability evaluation for space manipulator is presented for improving the success rate of a manipulator during the execution of a task. In this paper, a method for quantitative analysis of operational reliability is given when manipulator is executing a specified task; then a control model which could control the quantitative operational reliability is built. First, the control process is described by using a state space equation. Second, process parameters are estimated in real time using Bayesian method. Third, the expression of the system's real-time operational reliability is deduced based on the state space equation and process parameters which are estimated using Bayesian method. Finally, a control variable regulation strategy which considers the cost of control is given based on the Theory of Statistical Process Control. It is shown via simulations that this method effectively improves the operational reliability of space manipulator control system.

  11. Bayesian methodology for reliability model acceptance

    International Nuclear Information System (INIS)

    Zhang Ruoxue; Mahadevan, Sankaran

    2003-01-01

    This paper develops a methodology to assess the reliability computation model validity using the concept of Bayesian hypothesis testing, by comparing the model prediction and experimental observation, when there is only one computational model available to evaluate system behavior. Time-independent and time-dependent problems are investigated, with consideration of both cases: with and without statistical uncertainty in the model. The case of time-independent failure probability prediction with no statistical uncertainty is a straightforward application of Bayesian hypothesis testing. However, for the life prediction (time-dependent reliability) problem, a new methodology is developed in this paper to make the same Bayesian hypothesis testing concept applicable. With the existence of statistical uncertainty in the model, in addition to the application of a predictor estimator of the Bayes factor, the uncertainty in the Bayes factor is explicitly quantified through treating it as a random variable and calculating the probability that it exceeds a specified value. The developed method provides a rational criterion to decision-makers for the acceptance or rejection of the computational model

  12. Demand Response Application forReliability Enhancement in Electricity Market

    OpenAIRE

    Romera Pérez, Javier

    2015-01-01

    The term reliability is related with the adequacy and security during operation of theelectric power system, supplying the electricity demand over time and saving thepossible contingencies because every inhabitant needs to be supplied with electricity intheir day to day. Operating the system in this way entails spending money. The first partof the project is going to be an analysis of the reliability and the economic impact of it.During the last decade, electric utilities and companies had be...

  13. Quantitative Single-letter Sequencing: a method for simultaneously monitoring numerous known allelic variants in single DNA samples

    Directory of Open Access Journals (Sweden)

    Duborjal Hervé

    2008-02-01

    Full Text Available Abstract Background Pathogens such as fungi, bacteria and especially viruses, are highly variable even within an individual host, intensifying the difficulty of distinguishing and accurately quantifying numerous allelic variants co-existing in a single nucleic acid sample. The majority of currently available techniques are based on real-time PCR or primer extension and often require multiplexing adjustments that impose a practical limitation of the number of alleles that can be monitored simultaneously at a single locus. Results Here, we describe a novel method that allows the simultaneous quantification of numerous allelic variants in a single reaction tube and without multiplexing. Quantitative Single-letter Sequencing (QSS begins with a single PCR amplification step using a pair of primers flanking the polymorphic region of interest. Next, PCR products are submitted to single-letter sequencing with a fluorescently-labelled primer located upstream of the polymorphic region. The resulting monochromatic electropherogram shows numerous specific diagnostic peaks, attributable to specific variants, signifying their presence/absence in the DNA sample. Moreover, peak fluorescence can be quantified and used to estimate the frequency of the corresponding variant in the DNA population. Using engineered allelic markers in the genome of Cauliflower mosaic virus, we reliably monitored six different viral genotypes in DNA extracted from infected plants. Evaluation of the intrinsic variance of this method, as applied to both artificial plasmid DNA mixes and viral genome populations, demonstrates that QSS is a robust and reliable method of detection and quantification for variants with a relative frequency of between 0.05 and 1. Conclusion This simple method is easily transferable to many other biological systems and questions, including those involving high throughput analysis, and can be performed in any laboratory since it does not require specialized

  14. A time-variant analysis of the 1/f^(2) phase noise in CMOS parallel LC-Tank quadrature oscillators

    DEFF Research Database (Denmark)

    Andreani, Pietro

    2006-01-01

    This paper presents a study of 1/f2 phase noise in quadrature oscillators built by connecting two differential LC-tank oscillators in a parallel fashion. The analysis clearly demonstrates the necessity of adopting a time-variant theory of phase noise, where a more simplistic, time...

  15. Scaled CMOS Technology Reliability Users Guide

    Science.gov (United States)

    White, Mark

    2010-01-01

    The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models is the precursor for further research and experimentation in this relevant field. The effect of semiconductor scaling on microelectronics product reliability is an important aspect to the high reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data to assess the product for a highly-reliable application, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology on how to accomplish this and techniques for deriving the expected product-level reliability on commercial memory products are provided.Competing mechanism theory and the multiple failure mechanism model are applied to the experimental results of scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level of several scaled memory products to assess the performance degradation and product reliability. Acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with Weibull slope (beta)=1, and a main population breakdown with an increasing failure rate. Retention time soft error rates are calculated and a multiple failure mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits for each successive product generation. A normalized soft error failure rate of the memory data retention time in FIT/Gb and FIT/cm2 for several scaled SDRAM generations is

  16. Reliability physics and engineering time-to-failure modeling

    CERN Document Server

    McPherson, J W

    2013-01-01

    Reliability Physics and Engineering provides critically important information that is needed for designing and building reliable cost-effective products. Key features include:  ·       Materials/Device Degradation ·       Degradation Kinetics ·       Time-To-Failure Modeling ·       Statistical Tools ·       Failure-Rate Modeling ·       Accelerated Testing ·       Ramp-To-Failure Testing ·       Important Failure Mechanisms for Integrated Circuits ·       Important Failure Mechanisms for  Mechanical Components ·       Conversion of Dynamic  Stresses into Static Equivalents ·       Small Design Changes Producing Major Reliability Improvements ·       Screening Methods ·       Heat Generation and Dissipation ·       Sampling Plans and Confidence Intervals This textbook includes numerous example problems with solutions. Also, exercise problems along with the answers are included at the end of each chapter. Relia...

  17. A Simplified Network Model for Travel Time Reliability Analysis in a Road Network

    Directory of Open Access Journals (Sweden)

    Kenetsu Uchida

    2017-01-01

    Full Text Available This paper proposes a simplified network model which analyzes travel time reliability in a road network. A risk-averse driver is assumed in the simplified model. The risk-averse driver chooses a path by taking into account both a path travel time variance and a mean path travel time. The uncertainty addressed in this model is that of traffic flows (i.e., stochastic demand flows. In the simplified network model, the path travel time variance is not calculated by considering all travel time covariance between two links in the network. The path travel time variance is calculated by considering all travel time covariance between two adjacent links in the network. Numerical experiments are carried out to illustrate the applicability and validity of the proposed model. The experiments introduce the path choice behavior of a risk-neutral driver and several types of risk-averse drivers. It is shown that the mean link flows calculated by introducing the risk-neutral driver differ as a whole from those calculated by introducing several types of risk-averse drivers. It is also shown that the mean link flows calculated by the simplified network model are almost the same as the flows calculated by using the exact path travel time variance.

  18. Application of nonparametric statistics to material strength/reliability assessment

    International Nuclear Information System (INIS)

    Arai, Taketoshi

    1992-01-01

    An advanced material technology requires data base on a wide variety of material behavior which need to be established experimentally. It may often happen that experiments are practically limited in terms of reproducibility or a range of test parameters. Statistical methods can be applied to understanding uncertainties in such a quantitative manner as required from the reliability point of view. Statistical assessment involves determinations of a most probable value and the maximum and/or minimum value as one-sided or two-sided confidence limit. A scatter of test data can be approximated by a theoretical distribution only if the goodness of fit satisfies a test criterion. Alternatively, nonparametric statistics (NPS) or distribution-free statistics can be applied. Mathematical procedures by NPS are well established for dealing with most reliability problems. They handle only order statistics of a sample. Mathematical formulas and some applications to engineering assessments are described. They include confidence limits of median, population coverage of sample, required minimum number of a sample, and confidence limits of fracture probability. These applications demonstrate that a nonparametric statistical estimation is useful in logical decision making in the case a large uncertainty exists. (author)

  19. Standard semiconductor packaging for high-reliability low-cost MEMS applications

    Science.gov (United States)

    Harney, Kieran P.

    2005-01-01

    Microelectronic packaging technology has evolved over the years in response to the needs of IC technology. The fundamental purpose of the package is to provide protection for the silicon chip and to provide electrical connection to the circuit board. Major change has been witnessed in packaging and today wafer level packaging technology has further revolutionized the industry. MEMS (Micro Electro Mechanical Systems) technology has created new challenges for packaging that do not exist in standard ICs. However, the fundamental objective of MEMS packaging is the same as traditional ICs, the low cost and reliable presentation of the MEMS chip to the next level interconnect. Inertial MEMS is one of the best examples of the successful commercialization of MEMS technology. The adoption of MEMS accelerometers for automotive airbag applications has created a high volume market that demands the highest reliability at low cost. The suppliers to these markets have responded by exploiting standard semiconductor packaging infrastructures. However, there are special packaging needs for MEMS that cannot be ignored. New applications for inertial MEMS devices are emerging in the consumer space that adds the imperative of small size to the need for reliability and low cost. These trends are not unique to MEMS accelerometers. For any MEMS technology to be successful the packaging must provide the basic reliability and interconnection functions, adding the least possible cost to the product. This paper will discuss the evolution of MEMS packaging in the accelerometer industry and identify the main issues that needed to be addressed to enable the successful commercialization of the technology in the automotive and consumer markets.

  20. A Simple, Reliable Precision Time Analyser

    Energy Technology Data Exchange (ETDEWEB)

    Joshi, B. V.; Nargundkar, V. R.; Subbarao, K.; Kamath, M. S.; Eligar, S. K. [Atomic Energy Establishment Trombay, Bombay (India)

    1966-06-15

    A 30-channel time analyser is described. The time analyser was designed and built for pulsed neutron research but can be applied to other uses. Most of the logic is performed by means of ferrite memory core and transistor switching circuits. This leads to great versatility, low power consumption, extreme reliability and low cost. The analyser described provides channel Widths from 10 {mu}s to 10 ms; arbitrarily wider channels are easily obtainable. It can handle counting rates up to 2000 counts/min in each channel with less than 1% dead time loss. There is a provision for an initial delay equal to 100 channel widths. An input pulse de-randomizer unit using tunnel diodes ensures exactly equal channel widths. A brief description of the principles involved in core switching circuitry is given. The core-transistor transfer loop is compared with the usual core-diode loops and is shown to be more versatile and better adapted to the making of a time analyser. The circuits derived from the basic loop are described. These include the scale of ten, the frequency dividers and the delay generator. The current drivers developed for driving the cores are described. The crystal-controlled clock which controls the width of the time channels and synchronizes the operation of the various circuits is described. The detector pulse derandomizer unit using tunnel diodes is described. The scheme of the time analyser is then described showing how the various circuits can be integrated together to form a versatile time analyser. (author)

  1. Capillary Isoelectric Focusing-Mass Spectrometry Method for the Separation and Online Characterization of Intact Monoclonal Antibody Charge Variants.

    Science.gov (United States)

    Dai, Jun; Lamp, Jared; Xia, Qiangwei; Zhang, Yingru

    2018-02-06

    We report a new online capillary isoelectric focusing-mass spectrometry (CIEF-MS) method for monoclonal antibody (mAb) charge variant analysis using an electrokinetically pumped sheath-flow nanospray ion source and a time-of-flight MS with pressure-assisted chemical mobilization. To develop a successful, reliable CIEF-MS method for mAb, we have selected and optimized many critical, interrelating reagents and parameters that include (1) MS-friendly anolyte and catholyte; (2) a glycerol enhanced sample mixture that reduced non-CIEF electrophoretic mobility and band broadening; (3) ampholyte selected for balancing resolution and MS sensitivity; (4) sheath liquid composition optimized for efficient focusing, mobilization, and electrospray ionization; (5) judiciously selected CIEF running parameters including injection amount, field strength, and applied pressure. The fundamental premise of CIEF was well maintained as verified by the linear correlation (R 2 = 0.99) between pI values and migration time using a mixture of pI markers. In addition, the charge variant profiles of trastuzumab, bevacizumab, infliximab, and cetuximab, obtained using this CIEF-MS method, were corroborated by imaged CIEF-UV (iCIEF-UV) analyses. The relative standard deviations (RSD) of absolute migration time of pI markers were all less than 5% (n = 4). Triplicate analyses of bevacizumab showed RSD less than 1% for relative migration time to an internal standard and RSD of 7% for absolute MS peak area. Moreover, the antibody charge variants were characterized using the online intact MS data. To the best of our knowledge, this is the first time that direct online MS detection and characterization were achieved for mAb charge variants resolved by CIEF as indicated by a well-established linear pH gradient and correlated CIEF-UV charge variant profiles.

  2. Reliability Measure Model for Assistive Care Loop Framework Using Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Venki Balasubramanian

    2010-01-01

    Full Text Available Body area wireless sensor networks (BAWSNs are time-critical systems that rely on the collective data of a group of sensor nodes. Reliable data received at the sink is based on the collective data provided by all the source sensor nodes and not on individual data. Unlike conventional reliability, the definition of retransmission is inapplicable in a BAWSN and would only lead to an elapsed data arrival that is not acceptable for time-critical application. Time-driven applications require high data reliability to maintain detection and responses. Hence, the transmission reliability for the BAWSN should be based on the critical time. In this paper, we develop a theoretical model to measure a BAWSN's transmission reliability, based on the critical time. The proposed model is evaluated through simulation and then compared with the experimental results conducted in our existing Active Care Loop Framework (ACLF. We further show the effect of the sink buffer in transmission reliability after a detailed study of various other co-existing parameters.

  3. Towards Reliable Integrated Services for Dependable Systems

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Ravn, Anders Peter; Izadi-Zamanabadi, Roozbeh

    Reliability issues for various technical systems are discussed and focus is directed towards distributed systems, where communication facilities are vital to maintain system functionality. Reliability in communication subsystems is considered as a resource to be shared among a number of logical c...... applications residing on alternative routes. Details are provided for the operation of RRRSVP based on reliability slack calculus. Conclusions summarize the considerations and give directions for future research....... connections and a reliability management framework is suggested. We suggest a network layer level reliability management protocol RRSVP (Reliability Resource Reservation Protocol) as a counterpart of the RSVP for bandwidth and time resource management. Active and passive standby redundancy by background...

  4. Towards Reliable Integrated Services for Dependable Systems

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Ravn, Anders Peter; Izadi-Zamanabadi, Roozbeh

    2003-01-01

    Reliability issues for various technical systems are discussed and focus is directed towards distributed systems, where communication facilities are vital to maintain system functionality. Reliability in communication subsystems is considered as a resource to be shared among a number of logical c...... applications residing on alternative routes. Details are provided for the operation of RRRSVP based on reliability slack calculus. Conclusions summarize the considerations and give directions for future research....... connections and a reliability management framework is suggested. We suggest a network layer level reliability management protocol RRSVP (Reliability Resource Reservation Protocol) as a counterpart of the RSVP for bandwidth and time resource management. Active and passive standby redundancy by background...

  5. Method to ensure the reliability of power semiconductors depending on the application; Verfahren zur anwendungsspezifischen Sicherstellung der Zuverlaessigkeit von Leistungshalbleiter-Bauelementen

    Energy Technology Data Exchange (ETDEWEB)

    Grieger, Folkhart; Lindemann, Andreas [Magdeburg Univ. (Germany). Inst. fuer Elektrische Energiesysteme

    2011-07-01

    Load dependent conduction and switching losses during operation heat up power semiconductor devices. They this way age; lifetime can be limited e.g. by bond wire lift-off or solder fatigue. Components thus need to be dimensioned in a way that they can be expected to reach sufficient reliability during system lifetime. Electromobility or new applications in electric transmission and distribution are demanding in this respect because of high reliability requirements and long operation times. (orig.)

  6. Reliability and radiation tolerance of robots for nuclear applications

    Energy Technology Data Exchange (ETDEWEB)

    Lauridsen, K [Risoe National Lab. (Denmark); Decreton, M [SCK.CEN (Belgium); Seifert, C C [Siemens AG (Germany); Sharp, R [AEA Technology (United Kingdom)

    1996-10-01

    The reliability of a robot for nuclear applications will be affected by environmental factors such as dust, water, vibrations, heat, and, in particular, ionising radiation. The present report describes the work carried out in a project addressing the reliability and radiation tolerance of such robots. A widely representative range of components and materials has been radiation tested and the test results have been collated in a database along with data provided by the participants from earlier work and data acquired from other sources. A radiation effects guide has been written for the use by designers of electronic equipment for robots. A generic reliability model has been set up together with generic failure strategies, forming the basis for specific reliability modelling carried out in other projects. Modelling tools have been examined and developed for the prediction of the performance of electronic circuits subjected to radiation. Reports have been produced dealing with the prediction and detection of upcoming failures in electronic systems. Operational experience from the use of robots in radiation work in various contexts has been compiled in a report, and another report has been written on cost/benefit considerations about the use of robots. Also the possible impact of robots on the safety of the surrounding plant has been considered and reported. (au) 16 ills., 236 refs.

  7. Reliability and radiation tolerance of robots for nuclear applications

    International Nuclear Information System (INIS)

    Lauridsen, K.; Decreton, M.; Seifert, C.C.; Sharp, R.

    1996-10-01

    The reliability of a robot for nuclear applications will be affected by environmental factors such as dust, water, vibrations, heat, and, in particular, ionising radiation. The present report describes the work carried out in a project addressing the reliability and radiation tolerance of such robots. A widely representative range of components and materials has been radiation tested and the test results have been collated in a database along with data provided by the participants from earlier work and data acquired from other sources. A radiation effects guide has been written for the use by designers of electronic equipment for robots. A generic reliability model has been set up together with generic failure strategies, forming the basis for specific reliability modelling carried out in other projects. Modelling tools have been examined and developed for the prediction of the performance of electronic circuits subjected to radiation. Reports have been produced dealing with the prediction and detection of upcoming failures in electronic systems. Operational experience from the use of robots in radiation work in various contexts has been compiled in a report, and another report has been written on cost/benefit considerations about the use of robots. Also the possible impact of robots on the safety of the surrounding plant has been considered and reported. (au) 16 ills., 236 refs

  8. An assessment of the real-time application capabilities of the SIFT computer system

    Science.gov (United States)

    Butler, R. W.

    1982-01-01

    The real-time capabilities of the SIFT computer system, a highly reliable multicomputer architecture developed to support the flight controls of a relaxed static stability aircraft, are discussed. The SIFT computer system was designed to meet extremely high reliability requirements and to facilitate a formal proof of its correctness. Although SIFT represents a significant achievement in fault-tolerant system research it presents an unusual and restrictive interface to its users. The characteristics of the user interface and its impact on application system design are assessed.

  9. Valuing long-haul and metropolitan freight travel time and reliability

    Science.gov (United States)

    2000-12-01

    Most evaluations and economic assessments of transportation proposal and policies in Australia omit a valuation of time spent in transit for individual items or loads of freight. Knowledge of delays and the practical value of reliability can be usefu...

  10. Reliability of fitness tests using methods and time periods common in sport and occupational management.

    Science.gov (United States)

    Burnstein, Bryan D; Steele, Russell J; Shrier, Ian

    2011-01-01

    Fitness testing is used frequently in many areas of physical activity, but the reliability of these measurements under real-world, practical conditions is unknown. To evaluate the reliability of specific fitness tests using the methods and time periods used in the context of real-world sport and occupational management. Cohort study. Eighteen different Cirque du Soleil shows. Cirque du Soleil physical performers who completed 4 consecutive tests (6-month intervals) and were free of injury or illness at each session (n = 238 of 701 physical performers). Performers completed 6 fitness tests on each assessment date: dynamic balance, Harvard step test, handgrip, vertical jump, pull-ups, and 60-second jump test. We calculated the intraclass coefficient (ICC) and limits of agreement between baseline and each time point and the ICC over all 4 time points combined. Reliability was acceptable (ICC > 0.6) over an 18-month time period for all pairwise comparisons and all time points together for the handgrip, vertical jump, and pull-up assessments. The Harvard step test and 60-second jump test had poor reliability (ICC < 0.6) between baseline and other time points. When we excluded the baseline data and calculated the ICC for 6-month, 12-month, and 18-month time points, both the Harvard step test and 60-second jump test demonstrated acceptable reliability. Dynamic balance was unreliable in all contexts. Limit-of-agreement analysis demonstrated considerable intraindividual variability for some tests and a learning effect by administrators on others. Five of the 6 tests in this battery had acceptable reliability over an 18-month time frame, but the values for certain individuals may vary considerably from time to time for some tests. Specific tests may require a learning period for administrators.

  11. Nonparametric Estimation of Interval Reliability for Discrete-Time Semi-Markov Systems

    DEFF Research Database (Denmark)

    Georgiadis, Stylianos; Limnios, Nikolaos

    2016-01-01

    In this article, we consider a repairable discrete-time semi-Markov system with finite state space. The measure of the interval reliability is given as the probability of the system being operational over a given finite-length time interval. A nonparametric estimator is proposed for the interval...

  12. A Population Based Study of the Genetic Association between Catecholamine Gene Variants and Spontaneous Low-Frequency Fluctuations in Reaction Time.

    Directory of Open Access Journals (Sweden)

    Jojanneke A Bastiaansen

    Full Text Available The catecholamines dopamine and noradrenaline have been implicated in spontaneous low-frequency fluctuations in reaction time, which are associated with attention deficit hyperactivity disorder (ADHD and subclinical attentional problems. The molecular genetic substrates of these behavioral phenotypes, which reflect frequency ranges of intrinsic neuronal oscillations (Slow-4: 0.027-0.073 Hz; Slow-5: 0.010-0.027 Hz, have not yet been investigated. In this study, we performed regression analyses with an additive model to examine associations between low-frequency fluctuations in reaction time during a sustained attention task and genetic markers across 23 autosomal catecholamine genes in a large young adult population cohort (n = 964, which yielded greater than 80% power to detect a small effect size (f(2 = 0.02 and 100% power to detect a small/medium effect size (f(2 = 0.15. At significance levels corrected for multiple comparisons, none of the gene variants were associated with the magnitude of low-frequency fluctuations. Given the study's strong statistical power and dense coverage of the catecholamine genes, this either indicates that associations between low-frequency fluctuation measures and catecholamine gene variants are absent or that they are of very small effect size. Nominally significant associations were observed between variations in the alpha-2A adrenergic receptor gene (ADRA2A and the Slow-5 band. This is in line with previous reports of an association between ADRA2A gene variants and general reaction time variability during response selection tasks, but the specific association of these gene variants and low-frequency fluctuations requires further confirmation. Pharmacological challenge studies could in the future provide convergent evidence for the noradrenergic modulation of both general and time sensitive measures of intra-individual variability in reaction time.

  13. Adhesives technology for electronic applications materials, processing, reliability

    CERN Document Server

    Licari, James J

    2011-01-01

    Adhesives are widely used in the manufacture and assembly of electronic circuits and products. Generally, electronics design engineers and manufacturing engineers are not well versed in adhesives, while adhesion chemists have a limited knowledge of electronics. This book bridges these knowledge gaps and is useful to both groups. The book includes chapters covering types of adhesive, the chemistry on which they are based, and their properties, applications, processes, specifications, and reliability. Coverage of toxicity, environmental impacts and the regulatory framework make this book par

  14. F-15 inlet/engine test techniques and distortion methodologies studies. Volume 2: Time variant data quality analysis plots

    Science.gov (United States)

    Stevens, C. H.; Spong, E. D.; Hammock, M. S.

    1978-01-01

    Time variant data quality analysis plots were used to determine if peak distortion data taken from a subscale inlet model can be used to predict peak distortion levels for a full scale flight test vehicle.

  15. Reliability of the Timed Up and Go test and Ten-Metre Timed Walk Test in Pregnant Women with Pelvic Girdle Pain.

    Science.gov (United States)

    Evensen, Natalie M; Kvåle, Alice; Braekken, Ingeborg H

    2015-09-01

    There is a lack of functional objective tests available to measure functional status in women with pelvic girdle pain (PGP). The purpose of this study was to establish the test-retest and intertester reliability of the Timed Up and Go (TUG) test and Ten-metre Timed Walk Test (10mTWT) in pregnant women with PGP. A convenience sample of women was recruited over a 4-month period and tested on two occasions, 1 week apart, to determine test-retest reliability. Intertester reliability was established between two assessors at the first testing session. Subjects were instructed to undertake the TUG and 10mTWT at maximum speed. One practice trial and two timed trials for each walking test were undertaken on Day 1, and one practice trial and one timed trial on Day 2. Seventeen women with PGP, aged 31.1 years (SD [standard deviation] = 2.3) and 28.7 weeks pregnant (SD = 7.4), completed gait testing. Test-retest reliability using the intraclass correlation coefficient (ICC) was excellent for the TUG (0.88) and good for the 10mTWT (0.74). Intertester reliability was determined in the first 13 participants, with excellent ICC values found for both walking tests (TUG: 0.95; 10mTWT: 0.94). This study demonstrated that the TUG and 10mTWT undertaken at fast pace are reliable, objective functional tests in pregnant women with PGP. While both tests are suitable for use in clinical and research settings, we would recommend the TUG given its higher test-retest reliability and because it requires less space and time to set up and score. Future studies with a larger sample size are warranted to confirm these results. Copyright © 2015 John Wiley & Sons, Ltd.

  16. The post-ischemic ventricular dysfunction in PRINZMETAL's variant angina: Radionuclide evaluation

    International Nuclear Information System (INIS)

    Picozzi, R.; Palagi, B.; Baroffio, R.

    1987-01-01

    We studied by equilibrium radionuclide angiography 15 patients admitted to our coronary care unit because of PRINZMETAL's variant angina. Patients were examined mostly in the absence of symptoms. The incidence of ejection fraction abnormalities was low, while regional wall motion was always impaired at the site corresponding to ST-segment elevation at the time of the anginal attack. In 7 patients who underwent coronary angiography, we found an almost complete agreement between the site of atherosclerotic lesions and that of regional wall motion abnormalities. The patients were re-studied during intravenous perfusion of nitroglycerin: A detectable improvement of regional wall motion was found in 8 of them. We concluded that equilibrium radionuclide angiography appears to be a suitable tool for identifying reliably, in patients affected with PRINZMETAL's variant angina, the regional ventricular dysfunction remaining after the remission of symptoms in the presence of normalized ECG or signs of non-transmural ischemia. Equilibrium radionuclide angiography performed during nitroglycerin perfusion allowed us to evaluate in advance the importance of the vasospastic component and hence the efficacy of pharmacologic treatment. (orig.)

  17. Holographic representation of space-variant systems: system theory.

    Science.gov (United States)

    Marks II, R J; Krile, T F

    1976-09-01

    System theory for holographic representation of linear space-variant systems is derived. The utility of the resulting piecewise isoplanatic approximation (PIA) is illustrated by example application to the invariant system, ideal magnifier, and Fourier transformer. A method previously employed to holographically represent a space-variant system, the discrete approximation, is shown to be a special case of the PIA.

  18. Reliability of structures of industrial installations. Theory and applications of probabilistic mechanics

    International Nuclear Information System (INIS)

    Procaccia, H.; Morilhat, P.; Carle, R.; Menjon, G.

    1996-01-01

    The management of the service life of mechanical equipment implies an evaluation of its risk of failure during use. To evaluate this risk, the following methods are used: classical frequentist statistics applied to operating-experience feedback data concerning failures observed during the operation of active components (pumps, valves, exchangers, circuit breakers, etc.); the Bayesian approach, used when statistical data are scarce and experts are needed to compensate for the lack of information; and the structural reliability approach, used when no data are available and a theoretical model of degradation must be employed, in particular for passive structures (pressure vessels, pipes, tanks, etc.). The aim of this book is to describe the principles and applications of this third approach to industrial installations. Chapter 1 recalls the historical aspects of the probabilistic approach to structural reliability and the existing codes. Chapter 2 presents the level-1 deterministic method applied so far to the design of passive structures. The Cornell reliability index, already used in civil engineering codes, is defined in chapter 3. The Hasofer-Lind reliability index, a generalization of the Cornell index, is defined in chapter 4. Chapter 5 concerns the application of probabilistic approaches to optimization studies, introducing the economic variables linked to the risk and the possible actions to limit it (in-service inspection, maintenance, repair, etc.). Chapters 6 and 7 describe Monte Carlo simulation and approximation methods for failure probability calculations, and recall the fracture mechanics basis and the models of loading and degradation of industrial installations. Applications are given in chapter 9, with the quantification of the safety margins of a fissured pipe and the optimization of the in-service inspection policy of a steam generator. Chapter 10 raises the problem of the coupling between mechanical and reliability
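
    For a linear limit state with independent, normally distributed resistance and load, the Cornell index mentioned for chapter 3 reduces to a one-line computation; the sketch below uses invented numbers purely for illustration.

        import math

        def cornell_beta(mu_r, sigma_r, mu_s, sigma_s):
            """Cornell (first-order second-moment) reliability index for the
            linear limit state G = R - S with independent resistance R and load S."""
            return (mu_r - mu_s) / math.hypot(sigma_r, sigma_s)

        beta = cornell_beta(mu_r=100.0, sigma_r=10.0, mu_s=60.0, sigma_s=15.0)
        pf = 0.5 * math.erfc(beta / math.sqrt(2.0))   # P(G < 0) if G is normal
        print(beta, pf)   # beta ~ 2.22, pf ~ 1.3e-2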

  19. A Standardized DNA Variant Scoring System for Pathogenicity Assessments in Mendelian Disorders.

    Science.gov (United States)

    Karbassi, Izabela; Maston, Glenn A; Love, Angela; DiVincenzo, Christina; Braastad, Corey D; Elzinga, Christopher D; Bright, Alison R; Previte, Domenic; Zhang, Ke; Rowland, Charles M; McCarthy, Michele; Lapierre, Jennifer L; Dubois, Felicita; Medeiros, Katelyn A; Batish, Sat Dev; Jones, Jeffrey; Liaquat, Khalida; Hoffman, Carol A; Jaremko, Malgorzata; Wang, Zhenyuan; Sun, Weimin; Buller-Burckle, Arlene; Strom, Charles M; Keiles, Steven B; Higgins, Joseph J

    2016-01-01

    We developed a rules-based scoring system to classify DNA variants into five categories: pathogenic, likely pathogenic, variant of uncertain significance (VUS), likely benign, and benign. Over 16,500 pathogenicity assessments on 11,894 variants from 338 genes were analyzed based on prediction tools, population frequency, co-occurrence, segregation, and functional studies collected from internal and external sources. Scores were calculated by trained scientists using a quantitative framework that assigned differential weighting to these five types of data. We performed descriptive and comparative statistics on the dataset and tested interobserver concordance among the trained scientists. Private variants, defined as variants found within single families (n = 5,182), were either VUS (80.5%; n = 4,169) or likely pathogenic (19.5%; n = 1,013). The remaining variants (n = 6,712) were VUS (38.4%; n = 2,577), likely benign/benign (34.7%; n = 2,327), or likely pathogenic/pathogenic (26.9%; n = 1,808). Exact agreement between the trained scientists on the final variant score was 98.5% [95% confidence interval (CI) (98.0, 98.9)] with an interobserver consistency of 97% [95% CI (91.5, 99.4)]. Variant scores were stable and showed increasing odds of being in agreement with new data when re-evaluated periodically. This carefully curated, standardized variant pathogenicity scoring system provides reliable pathogenicity scores for DNA variants encountered in a clinical laboratory setting. © 2015 The Authors. Human Mutation published by Wiley Periodicals, Inc.
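
    The article's quantitative framework itself is not reproduced here, but the shape of a rules-based, weighted scoring classifier is easy to sketch; every weight, threshold, and evidence encoding below is invented for illustration.

        # All weights, thresholds, and the evidence encoding below are invented.
        EVIDENCE_WEIGHTS = {
            "prediction_tools": 0.5,
            "population_frequency": 2.0,
            "co_occurrence": 1.5,
            "segregation": 1.5,
            "functional_studies": 2.5,
        }

        CATEGORIES = [   # (minimum score, label), checked from the top down
            (6.0, "pathogenic"),
            (3.0, "likely pathogenic"),
            (-3.0, "variant of uncertain significance"),
            (-6.0, "likely benign"),
        ]

        def classify(evidence):
            """evidence: evidence type -> signed strength in [-1, 1], positive
            values supporting pathogenicity, negative values a benign effect."""
            score = sum(EVIDENCE_WEIGHTS[k] * v for k, v in evidence.items())
            for threshold, label in CATEGORIES:
                if score >= threshold:
                    return label
            return "benign"

        print(classify({"population_frequency": -0.9, "functional_studies": -0.5}))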

  20. Polynomial-time computability of the edge-reliability of graphs using Gilbert's formula

    Directory of Open Access Journals (Sweden)

    Thomas J. Marlowe

    1998-01-01

    Reliability is an important consideration in analyzing computer and other communication networks, but current techniques are extremely limited in the classes of graphs which can be analyzed efficiently. While Gilbert's formula establishes a theoretically elegant recursive relationship between the edge reliability of a graph and the reliability of its subgraphs, naive evaluation requires consideration of all sequences of deletions of individual vertices, and for many graphs has time complexity essentially Θ(N!). We discuss a general approach which significantly reduces complexity, encoding subgraph isomorphism in a finer partition by invariants, and recursing through the set of invariants.
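
    To make the quantity concrete, the brute-force computation below enumerates all edge subsets and checks connectivity. This is not Gilbert's formula: its exponential cost is exactly what the article's invariant-based recursion is designed to avoid. The graph and edge survival probability are illustrative.

        from itertools import combinations

        def all_terminal_reliability(nodes, edges, p):
            """Probability the graph stays connected when each edge survives
            independently with probability p. Brute force over all edge
            subsets, i.e. exponential in len(edges)."""
            nodes = set(nodes)

            def connected(alive):
                start = next(iter(nodes))
                seen, stack = {start}, [start]
                while stack:
                    u = stack.pop()
                    for a, b in alive:
                        v = b if a == u else a if b == u else None
                        if v is not None and v not in seen:
                            seen.add(v)
                            stack.append(v)
                return seen == nodes

            total = 0.0
            for k in range(len(edges) + 1):
                for alive in combinations(edges, k):
                    if connected(alive):
                        total += p ** k * (1 - p) ** (len(edges) - k)
            return total

        # Triangle: reliability = 3p^2(1-p) + p^3 = 0.972 for p = 0.9
        print(all_terminal_reliability({1, 2, 3}, [(1, 2), (2, 3), (1, 3)], 0.9))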

  1. Variants of sequence family B Thermococcus kodakaraensis DNA polymerase with increased mismatch extension selectivity.

    Directory of Open Access Journals (Sweden)

    Claudia Huber

    Fidelity and selectivity of DNA polymerases are critical determinants for the biology of life, as well as important tools for biotechnological applications. DNA polymerases catalyze the formation of DNA strands by adding deoxynucleotides to a primer, which is complementarily bound to a template. To ensure the integrity of the genome, DNA polymerases select the correct nucleotide and further extend the nascent DNA strand. Thus, DNA polymerase fidelity is pivotal for ensuring that cells can replicate their genome with minimal error. DNA polymerases are, however, further optimized for more specific biotechnological or diagnostic applications. Here we report on the semi-rational design of mutant libraries derived by saturation mutagenesis at single sites of a 3'-5'-exonuclease-deficient variant of Thermococcus kodakaraensis DNA polymerase (KOD pol) and the discovery, by screening, of variants with enhanced mismatch extension selectivity. Sites of potential interest for saturation mutagenesis were selected by their proximity to the primer or template strands. The resulting libraries were screened via quantitative real-time PCR. We identified three variants with single amino acid exchanges (R501C, R606Q, and R606W) which exhibited increased mismatch extension selectivity. These variants were further characterized regarding their potential for mismatch discrimination. Additionally, the identified enzymes were able to differentiate between cytosine and 5-methylcytosine. Our results demonstrate the potential of characterizing and developing DNA polymerases for specific PCR-based applications in DNA biotechnology and diagnostics.

  2. Golden Rule of Morphology and Variants of Word forms

    Directory of Open Access Journals (Sweden)

    Hlaváčová Jaroslava

    2017-12-01

    In many languages, some words can be written in several ways. We call them variants. The values of all their morphological categories are identical, which leads to an identical morphological tag. Together with the identical lemma, we thus have two or more wordforms with the same morphological description. This ambiguity may cause problems in various NLP applications. There are two types of variants: those affecting the whole paradigm (global variants) and those affecting only wordforms sharing some combination of morphological values (inflectional variants). In the paper, we propose a way to tag all wordforms, including their variants, unambiguously. We call this requirement the "Golden rule of morphology". The paper deals mainly with Czech, but the ideas can be applied to other languages as well.
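
    One minimal way to realize the golden rule (the requirement that a lemma plus tag identify a unique wordform) is to fold a variant index into the tag, as sketched below; the tagset and the example pair are illustrative only, not the paper's actual scheme.

        def disambiguate(entries):
            """Append a variant index to the tag whenever two wordforms would
            otherwise share the same (lemma, tag) key."""
            counts, lexicon = {}, {}
            for lemma, tag, form in entries:
                n = counts.get((lemma, tag), 0) + 1
                counts[(lemma, tag)] = n
                lexicon[(lemma, f"{tag}.Var{n}")] = form
            return lexicon

        # hypothetical global spelling variants sharing lemma and morphology
        entries = [("filosofie", "NOUN.Fem.Sg.Nom", "filosofie"),
                   ("filosofie", "NOUN.Fem.Sg.Nom", "filozofie")]
        print(disambiguate(entries))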

  3. The reliability of linear position transducer, force plate and combined measurement of explosive power-time variables during a loaded jump squat in elite athletes.

    Science.gov (United States)

    Hansen, Keir T; Cronin, John B; Newton, Michael J

    2011-03-01

    The purpose of this study was to determine the between-day reliability of power-time measures calculated from data collected using the linear position transducer or the force plate independently, or a combination of the two technologies. Twenty-five male rugby union players performed three jump squats on two occasions one week apart. Ground reaction forces were measured via a force plate and position data were collected using a linear position transducer. From these data, a number of power-time variables were calculated for each method. The force plate, linear position transducer and a combined method were all found to be reliable means of measuring peak power (ICC = 0.87-0.95, CV = 3.4%-8.0%). The absolute consistency of power-time measures varied between methods (CV = 8.0%-53.4%). Relative consistency of power-time measures was generally comparable between methods and measures, and for many variables was at an acceptable level (ICC = 0.77-0.94). Although a number of time-dependent power variables can be reliably calculated from data acquired by the three methods investigated, the reliability of several of these measures is below that which is acceptable for use in research and practical applications.

  4. Radio controlled detonators and sequential real time blast applications

    Energy Technology Data Exchange (ETDEWEB)

    Bernard, T.; Laboz, J.M. [Delta Caps International, Nice (France)

    1995-12-31

    Among the many technical developments in the blasting field, the authors describe the concept of the electronic detonator sequenced by radio waves and its numerous applications. Three major technologies are used for initiation: fuse-initiated detonators, electric detonators, and non-electric detonators. The last two technologies are available in multiple variants. Two major innovations are substantially changing the way traditional detonators operate: pyrotechnic delays are replaced by electronic delays (greater accuracy), and triggering orders formerly passed through a cable are now transmitted by radio waves (making real-time delay patterns possible). The new product provides all the features offered by current detonators, but also allows mastering specific cases that were difficult to control with existing technology, such as vibration control, underground blasting, and building demolition.

  5. Tariff design for communication-capable metering systems in conjunction with time-variant electricity consumption rates. An empirical analysis of the preferences of private electricity customers in Germany

    Energy Technology Data Exchange (ETDEWEB)

    Gerpott, Torsten J.; Paukert, Mathias [Duisburg-Essen Univ., Duisburg (Germany). Lehrstuhl Unternehmens- und Technologieplanung, Schwerpunkt Telekommunikationswirtschaft

    2013-06-15

    In Germany too, communication-capable electricity metering systems (CMS), together with time-based differentiation of kWh-rates for energy consumption, are increasingly being adopted by household customers. Nevertheless, empirical evidence on the preferences of this customer group regarding the design of CMS tariff elements and of time-variant electricity consumption rates is still scarce. The present study captures such preferences by means of conjoint analysis of data obtained in an online survey of 754 German-speaking adults. The CMS tariff elements examined are a one-off installation fee and monthly recurring use charges. The studied characteristics of time-based rates are the number of time/tariff blocks, the maximum spread between kWh-rates for different time windows, and the adaptability/predictability of kWh-rates. Most respondents judged multidimensional CMS and electricity consumption tariff offerings mainly in light of the CMS tariff characteristics. The vast majority of participants perceived kWh-rates that may change with a minimum lead time of one day as reducing the benefit of CMS and consumption-tariff bundles. Tariff preferences were only rarely significantly related to customers' socio-demographic and electricity-procurement characteristics or to their CMS-related expectations and assessments. The willingness to accept CMS-related one-off installation and recurring service charges, as well as the propensity to opt for time-dependent electricity consumption tariff variants that differ clearly from non-differentiated electricity price schemes, appears to be positively affected by customers' practical experience with CMS and time-variant electricity consumption rates. Conclusions are drawn for energy suppliers seeking to propagate CMS-based time-variant tariffs among household customers in Germany and for future scholarly research. (orig.)

  6. Techniques and applications of the human reliability analysis in nuclear facilities

    International Nuclear Information System (INIS)

    Pinto, Fausto C.

    1995-01-01

    The analysis and prediction of man-machine interaction are the objectives of human reliability analysis. In this work, human reliability analysis is presented in a manner that can be used by experts in the field of probabilistic safety assessment, considering primarily the aspects of human error. The Technique for Human Error Rate Prediction (THERP) is used on a large scale to obtain data on human error. Applications of this technique are presented, as well as aspects of the state of the art and of research and development in this particular field, where the construction of a reliable data bank is considered essential. An application of THERP to the TRIGA Mark 1 IPR R-1 reactor of the Centro de Desenvolvimento de Tecnologia Nuclear, a Brazilian nuclear technology research institute, is also developed. The results indicate that some changes must be made in the emergency procedures of the reactor in order to achieve a higher level of safety.
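
    At its core, a THERP quantification scales nominal human error probabilities by performance-shaping factors and combines subtasks through an event tree. The simplified independent-subtask sketch below uses invented numbers, not the THERP handbook's tabulated values.

        # Simplified THERP-style quantification: a task fails if any subtask fails.
        def task_failure_probability(subtasks):
            """subtasks: (nominal_hep, psf_multiplier) pairs, assumed independent;
            each nominal human error probability (HEP) is scaled by its
            performance-shaping factor (PSF)."""
            ok = 1.0
            for hep, psf in subtasks:
                ok *= 1.0 - min(1.0, hep * psf)
            return 1.0 - ok

        # e.g. three steps of an emergency procedure under moderate stress
        print(task_failure_probability([(0.003, 2), (0.001, 5), (0.01, 2)]))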

  7. Implementing a Reliability Centered Maintenance Program at NASA's Kennedy Space Center

    National Research Council Canada - National Science Library

    Tuttle, Raymond

    1998-01-01

    ... A reliability-centered maintenance (RCM) program seeks to offer equal or greater reliability at decreased cost by ensuring that only applicable, effective maintenance is performed and, in large part, by replacing time-based maintenance...

  8. Application of modern reliability database techniques to military system data

    International Nuclear Information System (INIS)

    Bunea, Cornel; Mazzuchi, Thomas A.; Sarkani, Shahram; Chang, H.-C.

    2008-01-01

    This paper focuses on analysis techniques for modern reliability databases, with an application to military system data. The analysis of a military system database consists of the following steps: cleaning the data and operating on it in order to obtain good estimators; presenting simple plots of the data; and analyzing the data with statistical and probabilistic methods. Each step is dealt with separately and the main results are presented. Competing-risks theory is advocated as the mathematical support for the analysis. The general framework of competing-risks theory is presented, together with simple independent and dependent competing-risks models available in the literature. These models are used to identify the reliability and maintenance indicators required by the operating personnel. Model selection is based on graphical interpretation of the plotted data.
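
    As a toy illustration of the competing-risks viewpoint, the sketch below computes a naive cumulative incidence for one failure mode. Censoring, which any real analysis of the kind described here must handle, is omitted to keep it short, and the data values are invented.

        import numpy as np

        def cumulative_incidence(times, causes, cause, grid):
            """Fraction of units failed from `cause` by each time in `grid`
            (no censoring, to keep the sketch minimal)."""
            times = np.asarray(times, dtype=float)
            causes = np.asarray(causes)
            return [float(np.mean((times <= t) & (causes == cause))) for t in grid]

        # failure times (hours) with competing modes: wear-out vs preventive removal
        t = [120, 340, 560, 80, 400]
        c = ["wear", "maint", "wear", "maint", "wear"]
        print(cumulative_incidence(t, c, "wear", [100, 300, 500, 600]))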

  9. Houston Methodist variant viewer: An application to support clinical laboratory interpretation of next-generation sequencing data for cancer

    Directory of Open Access Journals (Sweden)

    Paul A Christensen

    2017-01-01

    Introduction: Next-generation sequencing (NGS) is increasingly used in clinical and research protocols for patients with cancer. NGS assays are routinely used in clinical laboratories to detect mutations bearing on cancer diagnosis, prognosis and personalized therapy. A typical assay may interrogate 50 or more gene targets that encompass many thousands of possible gene variants. Analysis of NGS data in cancer is a labor-intensive process that can become overwhelming to the molecular pathologist or research scientist. Although commercial tools for NGS data analysis and interpretation are available, they are often costly, lack key functionality or cannot be customized by the end user. Methods: To facilitate NGS data analysis in our clinical molecular diagnostics laboratory, we created a custom bioinformatics tool termed Houston Methodist Variant Viewer (HMVV). HMVV is a Java-based solution that integrates sequencing instrument output, bioinformatics analysis, storage resources and the end-user interface. Results: Compared to the predicate method used in our clinical laboratory, HMVV markedly simplifies the bioinformatics workflow for the molecular technologist and facilitates variant review by the molecular pathologist. Importantly, HMVV reduces the time spent researching the biological significance of the variants detected, standardizes the online resources used to perform the variant investigation, and assists generation of the annotated report for the electronic medical record. HMVV also maintains a searchable variant database, including the variant annotations generated by the pathologist, which is useful for downstream quality improvement and research projects. Conclusions: HMVV is a clinical-grade, low-cost, feature-rich, highly customizable platform that we have made available for continued development by the pathology informatics community.

  10. Study of selected problems of reliability of the supply chain in the trading company

    Directory of Open Access Journals (Sweden)

    2010-06-01

    The paper presents the problem of the reliability of the supply chain as a whole as a function of the reliability of its elements. Different variants of channel redundancy (primary and backup channels) and issues connected with switching between them are discussed.
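
    A minimal model of one such reservation variant, a primary channel backed by a reserve channel behind an imperfect switch, can be written down directly; the model form and the numbers below are illustrative, not taken from the paper.

        def standby_pair_reliability(r_primary, r_backup, r_switch):
            """Primary channel with one standby backup and an imperfect switch:
            the pair works if the primary works, or if the primary fails but
            the switch and the backup both work."""
            return r_primary + (1 - r_primary) * r_switch * r_backup

        print(standby_pair_reliability(0.95, 0.90, 0.98))   # 0.9941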

  11. Real-Time Reliability Verification for UAV Flight Control System Supporting Airworthiness Certification.

    Science.gov (United States)

    Xu, Haiyang; Wang, Ping

    2016-01-01

    In order to verify the real-time reliability of an unmanned aerial vehicle (UAV) flight control system and comply with the airworthiness certification standard, we proposed a model-based integration framework for the modeling and verification of time properties. Combining the advantages of MARTE, this framework uses class diagrams to create the static model of the software system and state charts to create the dynamic model. According to the defined transformation rules, the MARTE model can be transformed into a formal integrated model, and the different parts of the model can be verified using existing formal tools. For the real-time specifications of the software system, we also proposed a generation algorithm for temporal logic formulas, which automatically extracts real-time properties from time-sensitive live sequence charts (TLSC). Finally, we modeled the simplified flight control system of a UAV to check its real-time properties. The results showed that the framework can be used to create the system model, as well as to precisely analyze and verify the real-time reliability of a UAV flight control system.

  12. DaMold: A data-mining platform for variant annotation and visualization in molecular diagnostics research.

    Science.gov (United States)

    Pandey, Ram Vinay; Pabinger, Stephan; Kriegner, Albert; Weinhäusel, Andreas

    2017-07-01

    Next-generation sequencing (NGS) has become a powerful and efficient tool for routine mutation screening in clinical research. As each NGS test yields hundreds of variants, the current challenge is to meaningfully interpret the data and select potential candidates. Analyzing each variant while manually investigating several relevant databases to collect specific information is a cumbersome and time-consuming process, and it requires expertise and familiarity with these databases. Thus, a tool that can seamlessly annotate variants with clinically relevant databases under one common interface would be of great help for variant annotation, cross-referencing, and visualization. This tool would allow variants to be processed in an automated and high-throughput manner and facilitate the investigation of variants in several genome browsers. Several analysis tools are available for raw sequencing-read processing and variant identification, but an automated variant filtering, annotation, cross-referencing, and visualization tool is still lacking. To fulfill these requirements, we developed DaMold, a Web-based, user-friendly tool that can filter and annotate variants and can access and compile information from 37 resources. It is easy to use, provides flexible input options, and accepts variants from NGS and Sanger sequencing as well as hotspots in VCF and BED formats. DaMold is available as an online application at http://damold.platomics.com/index.html, and as a Docker container and virtual machine at https://sourceforge.net/projects/damold/. © 2017 Wiley Periodicals, Inc.

  13. Development of high-reliable real-time communication network protocol for SMART

    Energy Technology Data Exchange (ETDEWEB)

    Song, Ki Sang; Kim, Young Sik [Korea National University of Education, Chongwon (Korea); No, Hee Chon [Korea Advanced Institute of Science and Technology, Taejon (Korea)

    1999-04-01

    In this research, we first define protocol subsets for the SMART (System-integrated Modular Advanced Reactor) communication network based on the SMART MMIS transmission delay and traffic requirements and the OSI (Open Systems Interconnection) seven-layer network protocol functions. Current industrial LAN protocols are also analyzed and the applicability of commercialized protocols is checked. For the suitability test, we applied the approximate SMART data traffic and the maximum allowable transmission delay requirement. From the simulation results, we conclude that IEEE 802.5 and FDDI, an ANSI standard, are the most suitable for SMART. We further analyzed FDDI and token-ring protocols for the SMART and nuclear plant network environment, including IEEE 802.4, IEEE 802.5, and ARCnet. The most suitable protocol for SMART is FDDI, and the FDDI MAC and RMT protocol specifications have been verified with LOTOS; the verification results show that FDDI MAC and RMT satisfy reachability and liveness and exhibit neither deadlock nor livelock. Therefore, we conclude that FDDI MAC and RMT is a highly reliable protocol for the SMART MMIS network. We then consider the stacking fault of the IEEE 802.5 token-ring protocol and propose a fault-tolerant MAM (Modified Active Monitor) protocol. The simulation results show that the MAM protocol improves the lower-priority traffic service rate when a stacking fault occurs. Therefore, the proposed MAM protocol can be applied to the SMART communication network for high-reliability and hard real-time communication purposes in data acquisition and inter-channel networks. (author). 37 refs., 79 figs., 39 tabs.

  14. Simple and rapid preparation of [11C]DASB with high quality and reliability for routine applications

    International Nuclear Information System (INIS)

    Haeusler, D.; Mien, L.-K.; Nics, L.; Ungersboeck, J.; Philippe, C.; Lanzenberger, R.R.; Kletter, K.; Dudczak, R.; Mitterhauser, M.; Wadsak, W.

    2009-01-01

    [11C]DASB combines all major prerequisites for a successful SERT ligand, providing excellent biological properties and in-vivo behaviour. Thus, we aimed to establish a fully automated procedure for the synthesis and purification of [11C]DASB with a high degree of reliability, reducing the overall synthesis time while conserving high yields and purity. The optimized [11C]DASB synthesis was applied in more than 60 preparations with a very low failure rate (3.2%). We obtained yields up to 8.9 GBq (average 5.3±1.6 GBq). Radiochemical yields based on [11C]CH3I (corrected for decay) were 66.3±6.9%, with a specific radioactivity (As) of 86.8±24.3 GBq/μmol (both at the end of synthesis, EOS). Time consumption was kept to a minimum, resulting in 43 min from end of bombardment to release of the product after quality control. From our data, it is evident that the presented method can be implemented for routine preparations of [11C]DASB with high reliability.

  15. Reliability and validity of a simple and clinically applicable pain stimulus

    DEFF Research Database (Denmark)

    O'Neill, Søren; Graven-Nielsen, Thomas; Manniche, Claus

    2014-01-01

    ...and after conditioned pain modulation by cold-pressor test (CPT). Correlation to the pressure pain threshold (PPT) of the infraspinatus muscle and to cold-pressor test pain intensity, time to pain onset, and time to non-tolerance was examined. Test/re-test reliability of clamp pain was also assessed... and the stimulus-response relationship was examined with a set of 6 different clamps. Conclusions: Clamp pain was sensitive to changes in pain sensitivity provoked by conditioned pain modulation (CPM). Test/re-test reliability of the spring-clamp pain was better for healthy volunteers over a period of days than...

  16. Calculating system reliability with SRFYDO

    Energy Technology Data Exchange (ETDEWEB)

    Morzinski, Jerome [Los Alamos National Laboratory; Anderson - Cook, Christine M [Los Alamos National Laboratory; Klamann, Richard M [Los Alamos National Laboratory

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
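
    SRFYDO itself is not public code, but the flavor of the Bayesian calculation it describes (component posteriors combined into a series-system estimate with uncertainty) can be sketched as below; the component test data, the Beta(1,1) priors, and the three-component series structure are all assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical component test data: (successes, trials) for a
        # three-component series system.
        components = [(48, 50), (29, 30), (97, 100)]

        # Beta(1, 1) priors updated by the test data; posterior samples per component
        samples = [rng.beta(1 + s, 1 + n - s, size=10_000) for s, n in components]
        system = np.prod(samples, axis=0)   # series system: every component must work

        print(system.mean(), np.percentile(system, [5, 95]))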

  17. Identifying noncoding risk variants using disease-relevant gene regulatory networks.

    Science.gov (United States)

    Gao, Long; Uzun, Yasin; Gao, Peng; He, Bing; Ma, Xiaoke; Wang, Jiahui; Han, Shizhong; Tan, Kai

    2018-02-16

    Identifying noncoding risk variants remains a challenging task. Because noncoding variants exert their effects in the context of a gene regulatory network (GRN), we hypothesize that explicit use of disease-relevant GRNs can significantly improve the inference accuracy of noncoding risk variants. We describe Annotation of Regulatory Variants using Integrated Networks (ARVIN), a general computational framework for predicting causal noncoding variants. It employs a set of novel regulatory network-based features, combined with sequence-based features to infer noncoding risk variants. Using known causal variants in gene promoters and enhancers in a number of diseases, we show ARVIN outperforms state-of-the-art methods that use sequence-based features alone. Additional experimental validation using reporter assay further demonstrates the accuracy of ARVIN. Application of ARVIN to seven autoimmune diseases provides a holistic view of the gene subnetwork perturbed by the combinatorial action of the entire set of risk noncoding mutations.

  18. Incorporating travel-time reliability into the congestion management process : a primer.

    Science.gov (United States)

    2015-02-01

    This primer explains the value of incorporating travel-time reliability into the Congestion Management Process (CMP) and identifies the most current tools available to assist with this effort. It draws from applied research and best practices fro...

  19. A machine learning model to determine the accuracy of variant calls in capture-based next generation sequencing.

    Science.gov (United States)

    van den Akker, Jeroen; Mishne, Gilad; Zimmer, Anjali D; Zhou, Alicia Y

    2018-04-17

    Next generation sequencing (NGS) has become a common technology for clinical genetic tests. The quality of NGS calls varies widely and is influenced by features like reference sequence characteristics, read depth, and mapping accuracy. With recent advances in NGS technology and software tools, the majority of variants called using NGS alone are in fact accurate and reliable. However, a small subset of difficult-to-call variants that still do require orthogonal confirmation exist. For this reason, many clinical laboratories confirm NGS results using orthogonal technologies such as Sanger sequencing. Here, we report the development of a deterministic machine-learning-based model to differentiate between these two types of variant calls: those that do not require confirmation using an orthogonal technology (high confidence), and those that require additional quality testing (low confidence). This approach allows reliable NGS-based calling in a clinical setting by identifying the few important variant calls that require orthogonal confirmation. We developed and tested the model using a set of 7179 variants identified by a targeted NGS panel and re-tested by Sanger sequencing. The model incorporated several signals of sequence characteristics and call quality to determine if a variant was identified at high or low confidence. The model was tuned to eliminate false positives, defined as variants that were called by NGS but not confirmed by Sanger sequencing. The model achieved very high accuracy: 99.4% (95% confidence interval: +/- 0.03%). It categorized 92.2% (6622/7179) of the variants as high confidence, and 100% of these were confirmed to be present by Sanger sequencing. Among the variants that were categorized as low confidence, defined as NGS calls of low quality that are likely to be artifacts, 92.1% (513/557) were found to be not present by Sanger sequencing. This work shows that NGS data contains sufficient characteristics for a machine-learning-based model to
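
    The published model is a trained machine-learning classifier; the sketch below only conveys the two-bucket idea with a hand-written rule, and every feature name and threshold in it is invented rather than taken from the paper.

        def needs_confirmation(call):
            """Return True if an NGS variant call should go to orthogonal
            (e.g. Sanger) confirmation; call is a dict of quality features."""
            high_confidence = (
                call["read_depth"] >= 30
                and call["mapping_quality"] >= 40
                and call["allele_balance"] >= 0.3
                and not call["in_homopolymer"]
            )
            return not high_confidence

        call = {"read_depth": 55, "mapping_quality": 60,
                "allele_balance": 0.48, "in_homopolymer": False}
        print(needs_confirmation(call))   # False -> report without confirmation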

  20. Physical attraction to reliable, low variability nervous systems: Reaction time variability predicts attractiveness.

    Science.gov (United States)

    Butler, Emily E; Saville, Christopher W N; Ward, Robert; Ramsey, Richard

    2017-01-01

    The human face cues a range of important fitness information, which guides mate selection towards desirable others. Given humans' high investment in the central nervous system (CNS), cues to CNS function should be especially important in social selection. We tested if facial attractiveness preferences are sensitive to the reliability of human nervous system function. Several decades of research suggest an operational measure for CNS reliability is reaction time variability, which is measured by standard deviation of reaction times across trials. Across two experiments, we show that low reaction time variability is associated with facial attractiveness. Moreover, variability in performance made a unique contribution to attractiveness judgements above and beyond both physical health and sex-typicality judgements, which have previously been associated with perceptions of attractiveness. In a third experiment, we empirically estimated the distribution of attractiveness preferences expected by chance and show that the size and direction of our results in Experiments 1 and 2 are statistically unlikely without reference to reaction time variability. We conclude that an operating characteristic of the human nervous system, reliability of information processing, is signalled to others through facial appearance. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Interference Cancellation Using Replica Signal for HTRCI-MIMO/OFDM in Time-Variant Large Delay Spread Longer Than Guard Interval

    Directory of Open Access Journals (Sweden)

    Yuta Ida

    2012-01-01

    Orthogonal frequency division multiplexing (OFDM) and multiple-input multiple-output (MIMO) are generally known as effective techniques for high data rate services. In MIMO/OFDM systems, channel estimation (CE) is very important for obtaining accurate channel state information (CSI). However, since orthogonal pilot-based CE requires a large number of pilot symbols, the total transmission rate is degraded. To mitigate this problem, a high time resolution carrier interferometry (HTRCI) for MIMO/OFDM has been proposed. In wireless communication systems, if the maximum delay spread is longer than the guard interval (GI), the system performance is significantly degraded due to intersymbol interference (ISI) and intercarrier interference (ICI). However, the conventional HTRCI-MIMO/OFDM does not consider the case of a time-variant large delay spread longer than the GI. In this paper, we propose ISI and ICI compensation methods for HTRCI-MIMO/OFDM in the time-variant large delay spread longer than the GI.
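
    The core failure mode the paper addresses, a channel delay spread exceeding the guard interval breaking the circular-convolution assumption behind one-tap OFDM equalization, can be reproduced in a few lines. The parameters and channels below are illustrative, and the paper's HTRCI compensation itself is not implemented here.

        import numpy as np

        rng = np.random.default_rng(1)
        N, GI = 64, 16                        # subcarriers and guard-interval length

        sym = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), N)  # QPSK
        x = np.fft.ifft(sym)
        tx = np.concatenate([x[-GI:], x])     # cyclic prefix acts as the GI

        h_short = np.array([1.0, 0.4, 0.2])   # delay spread shorter than the GI
        h_long = np.zeros(GI + 8)             # delay spread longer than the GI
        h_long[0], h_long[-1] = 1.0, 0.5

        for h in (h_short, h_long):
            rx = np.convolve(tx, h)[GI:GI + N]        # drop prefix, keep one symbol
            eq = np.fft.fft(rx) / np.fft.fft(h, N)    # one-tap frequency equalizer
            print(np.max(np.abs(eq - sym)))           # tiny error, then a large one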

  2. CONSIDERING TRAVEL TIME RELIABILITY AND SAFETY FOR EVALUATION OF CONGESTION RELIEF SCHEMES ON EXPRESSWAY SEGMENTS

    Directory of Open Access Journals (Sweden)

    Babak MEHRAN

    2009-01-01

    Evaluation of the efficiency of congestion relief schemes on expressways has generally been based on average travel time analysis. However, road authorities are much more interested in knowing the possible impacts of improvement schemes on safety and travel time reliability prior to implementing them in real conditions. A methodology is presented to estimate travel time reliability based on modeling travel time variations as a function of demand, capacity, and weather conditions. For a subject expressway segment, patterns of demand and capacity were generated for each 5-minute interval over a year by using the Monte Carlo simulation technique, and accidents were generated randomly according to traffic conditions. A whole-year analysis was performed by comparing demand and available capacity for each scenario, and shockwave analysis was used to estimate the queue length at each time interval. Travel times were estimated from refined speed-flow relationships, and the buffer time index was estimated as a measure of travel time reliability. It was shown that the estimated reliability measures and predicted number of accidents are very close to observed values in empirical data. After validation, the methodology was applied to assess the impact of two alternative congestion relief schemes on a subject expressway segment. One alternative was to open the hard shoulder to traffic during the peak period, while the other was to reduce the peak-period demand by 15%. The extent of improvements in travel conditions and safety, as well as the reduction in road users' costs after implementing each improvement scheme, was estimated. It was shown that both strategies can result in up to a 23% reduction in the number of accidents and significant improvements in travel time reliability. Finally, the advantages and challenges of selecting each improvement scheme were discussed.
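
    The buffer time index used here as the reliability measure has a standard definition, sketched below with synthetic lognormal travel times standing in for the simulated year of 5-minute intervals (the distribution and its parameters are assumptions).

        import numpy as np

        def buffer_time_index(travel_times):
            """Extra buffer, as a fraction of the mean trip time, needed to
            arrive on time in 95% of trips."""
            t = np.asarray(travel_times, dtype=float)
            return (np.percentile(t, 95) - t.mean()) / t.mean()

        # synthetic travel times (minutes) for a year of 5-minute intervals
        rng = np.random.default_rng(2)
        tt = rng.lognormal(mean=np.log(20), sigma=0.25, size=105_120)
        print(buffer_time_index(tt))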

  3. Overcoming some limitations of imprecise reliability models

    DEFF Research Database (Denmark)

    Kozine, Igor; Krymsky, Victor

    2011-01-01

    The application of imprecise reliability models is often hindered by the rapid growth in imprecision that occurs when many components constitute a system and by the fact that time to failure is bounded from above. The latter results in the necessity to explicitly introduce an upper bound on time ...

  4. Interactive reliability assessment using an integrated reliability data bank

    International Nuclear Information System (INIS)

    Allan, R.N.; Whitehead, A.M.

    1986-01-01

    The logical structure, techniques and practical application of a computer-aided technique based on a microcomputer using floppy disc Random Access Files is described. This interactive computational technique is efficient if the reliability prediction program is coupled directly to a relevant source of data to create an integrated reliability assessment/reliability data bank system. (DG)

  5. Analysis of fault tolerance and reliability in distributed real-time system architectures

    International Nuclear Information System (INIS)

    Philippi, Stephan

    2003-01-01

    Safety critical real-time systems are becoming ubiquitous in many areas of our everyday life. Failures of such systems potentially have catastrophic consequences on different scales, in the worst case even the loss of human life. Therefore, safety critical systems have to meet maximum fault tolerance and reliability requirements. As the design of such systems is far from being trivial, this article focuses on concepts to specifically support the early architectural design. In detail, a simulation based approach for the analysis of fault tolerance and reliability in distributed real-time system architectures is presented. With this approach, safety related features can be evaluated in the early development stages and thus prevent costly redesigns in later ones

  6. Predicting enhancer activity and variant impact using gkm-SVM.

    Science.gov (United States)

    Beer, Michael A

    2017-09-01

    We participated in the Critical Assessment of Genome Interpretation eQTL challenge to further test computational models of regulatory variant impact and their association with human disease. Our prediction model is based on a discriminative gapped-kmer SVM (gkm-SVM) trained on genome-wide chromatin accessibility data in the cell type of interest. The comparisons with massively parallel reporter assays (MPRA) in lymphoblasts show that gkm-SVM is among the most accurate prediction models even though all other models used the MPRA data for model training, and gkm-SVM did not. In addition, we compare gkm-SVM with other MPRA datasets and show that gkm-SVM is a reliable predictor of expression and that deltaSVM is a reliable predictor of variant impact in K562 cells and mouse retina. We further show that DHS (DNase-I hypersensitive sites) and ATAC-seq (assay for transposase-accessible chromatin using sequencing) data are equally predictive substrates for training gkm-SVM, and that DHS regions flanked by H3K27Ac and H3K4me1 marks are more predictive than DHS regions alone. © 2017 Wiley Periodicals, Inc.

  7. Modified personal interviews: resurrecting reliable personal interviews for admissions?

    Science.gov (United States)

    Hanson, Mark D; Kulasegaram, Kulamakan Mahan; Woods, Nicole N; Fechtig, Lindsey; Anderson, Geoff

    2012-10-01

    Traditional admissions personal interviews provide flexible faculty-student interactions but are plagued by low inter-interview reliability. Axelson and Kreiter (2009) retrospectively showed that multiple independent sampling (MIS) may improve the reliability of personal interviews; thus, the authors incorporated MIS into the admissions process for medical students applying to the University of Toronto's Leadership Education and Development Program (LEAD). They examined the reliability and resource demands of this modified personal interview (MPI) format. In 2010-2011, LEAD candidates submitted written applications, which were used to screen for participation in the MPI process. Selected candidates completed four brief (10-12 minutes) independent MPIs, each with a different interviewer. The authors blueprinted MPI questions to (i.e., aligned them with) leadership attributes, and interviewers assessed candidates' eligibility on a five-point Likert-type scale. The authors analyzed inter-interview reliability using generalizability theory. Sixteen candidates submitted applications; 10 proceeded to the MPI stage. Reliability of the written application components was 0.75. The MPI process had an overall inter-interview reliability of 0.79. The correlation between written application and MPI scores was 0.49. A decision study showed acceptable reliability of 0.74 with only three MPIs scored using one global rating. Furthermore, a traditional admissions interview format would take 66% more time than the MPI format. The MPI format, used during the LEAD admissions process, achieved high reliability with minimal faculty resources. The MPI format's reliability and effective resource use were made possible through MIS and the employment of expert interviewers. MPIs may be useful for other admissions tasks.

  8. Social sensing building reliable systems on unreliable data

    CERN Document Server

    Wang, Dong; Kaplan, Lance

    2015-01-01

    Increasingly, human beings are sensors engaging directly with the mobile Internet. Individuals can now share real-time experiences at an unprecedented scale. Social Sensing: Building Reliable Systems on Unreliable Data looks at recent advances in the emerging field of social sensing, emphasizing the key problem faced by application designers: how to extract reliable information from data collected from largely unknown and possibly unreliable sources. The book explains how a myriad of societal applications can be derived from this massive amount of data collected and shared by average individu

  9. Feasibility, reliability, and validity of a smartphone based application for the assessment of cognitive function in the elderly.

    Directory of Open Access Journals (Sweden)

    Robert M Brouillette

    While considerable knowledge has been gained through the use of established cognitive and motor assessment tools, there is considerable interest in and need for the development of a battery of reliable and validated assessment tools that provide real-time and remote analysis of cognitive and motor function in the elderly. Smartphones appear to be an obvious choice for the development of these "next-generation" assessment tools for geriatric research, although to date no studies have reported on the use of smartphone-based applications for the study of cognition in the elderly. The primary focus of the current study was to assess the feasibility, reliability, and validity of a smartphone-based application for the assessment of cognitive function in the elderly. A total of 57 non-demented elderly individuals were administered a newly developed smartphone application-based Color-Shape Test (CST) in order to determine its utility in measuring cognitive processing speed in the elderly. Validity of this novel cognitive task was assessed by correlating performance on the CST with scores on widely accepted assessments of cognitive function. Scores on the CST were significantly correlated with global cognition (Mini-Mental State Exam: r = 0.515, p<0.0001) and multiple measures of processing speed and attention (Digit Span: r = 0.427, p<0.0001; Trail Making Test: r = -0.651, p<0.00001; Digit Symbol Test: r = 0.508, p<0.0001). The CST was not correlated with naming and verbal fluency tasks (Boston Naming Test, Vegetable/Animal Naming) or memory tasks (Logical Memory Test). Test-retest reliability was observed to be significant (r = 0.726; p = 0.02). Together, these data are the first to demonstrate the feasibility, reliability, and validity of using a smartphone-based application for the purpose of assessing cognitive function in the elderly. The importance of these findings for the establishment of smartphone-based assessment batteries

  10. Optimal Release Time and Sensitivity Analysis Using a New NHPP Software Reliability Model with Probability of Fault Removal Subject to Operating Environments

    Directory of Open Access Journals (Sweden)

    Kwang Yoon Song

    2018-05-01

    With the latest technological developments, the software industry is at the center of the fourth industrial revolution. In today's complex and rapidly changing environment, where software applications must be developed quickly and easily, software must be focused on rapidly changing information technology. The basic goal of software engineering is to produce high-quality software at low cost. However, because of the complexity of software systems, software development can be time consuming and expensive. Software reliability models (SRMs) are used to estimate and predict the reliability, number of remaining faults, failure intensity, total and development cost, etc., of software. Additionally, it is very important to decide when, how, and at what cost to release the software to users. In this study, we propose a new nonhomogeneous Poisson process (NHPP) SRM with a fault detection rate function affected by the probability of fault removal on failure, subject to operating environments, and discuss the optimal release time and software reliability with the new NHPP SRM. The example results show a good fit to the proposed model, and we propose an optimal release time for a given change in the proposed model.
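
    The proposed model's exact fault-detection-rate function is specific to the article, but the NHPP machinery it builds on can be sketched with the classic Goel-Okumoto form as a stand-in; the parameter values below are invented.

        import math

        # Goel-Okumoto NHPP as a stand-in for the article's model:
        # a = expected total number of faults, b = fault detection rate.
        def m(t, a=100.0, b=0.05):
            """Mean number of faults detected by time t."""
            return a * (1.0 - math.exp(-b * t))

        def reliability(x, t, a=100.0, b=0.05):
            """Probability of no failure in (t, t+x] after testing up to time t."""
            return math.exp(-(m(t + x, a, b) - m(t, a, b)))

        # reliability over a 10-hour mission after 60 hours of testing
        print(reliability(x=10.0, t=60.0))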

  11. Reliability Evaluation of Base-Metal-Electrode Multilayer Ceramic Capacitors for Potential Space Applications

    Science.gov (United States)

    Liu, David (Donhang); Sampson, Michael J.

    2011-01-01

    Base-metal-electrode (BME) ceramic capacitors are being investigated for possible use in high-reliability space-level applications. This paper focuses on how the construction and microstructure of BME capacitors affect their lifetime and reliability. Examination of the construction and microstructure of commercial off-the-shelf (COTS) BME capacitors reveals great variance in dielectric layer thickness, even among BME capacitors with the same rated voltage. Compared to PME (precious-metal-electrode) capacitors, BME capacitors exhibit a denser and more uniform microstructure, with an average grain size between 0.3 and 0.5 μm, which is much less than that of most PME capacitors. BME capacitors can be fabricated with more internal electrode layers and thinner dielectric layers than PME capacitors because they have a fine-grained microstructure and do not shrink much during ceramic sintering. This makes it possible for BME capacitors to achieve a very high capacitance volumetric efficiency. The reliability of BME and PME capacitors was investigated using highly accelerated life testing (HALT). Most BME capacitors were found to fail with an early avalanche breakdown, followed by a regular dielectric wearout failure during the HALT test. When most of the early failures, characterized by avalanche breakdown, were removed, BME capacitors exhibited a minimum mean time-to-failure (MTTF) of more than 10^5 years at room temperature and rated voltage. Dielectric thickness was found to be a critical parameter for the reliability of BME capacitors. The number of stacked grains in a dielectric layer appears to play a significant role in determining BME capacitor reliability. Although dielectric layer thickness varies for a given rated voltage in BME capacitors, the number of stacked grains is relatively consistent, typically around 12 for a number of BME capacitors with a rated voltage of 25V. This may suggest that the number of grains per dielectric layer is more critical than the

  12. New values of time and reliability in passenger transport in the Netherlands

    NARCIS (Netherlands)

    Kouwenhoven, M.; de Jong, G.; Koster, P.R.; van den Berg, V.A.C.; Verhoef, E.T.; Bates, J.; Warffemius, P.

    2014-01-01

    We have established new values of time (VOTs) and values of travel time reliability (VORs) for use in cost-benefit analysis (CBA) of transport projects in The Netherlands. This was the first national study in The Netherlands (and one of the first world-wide) to investigate these topics empirically

  13. Application of STOPP and START criteria: interrater reliability among pharmacists.

    LENUS (Irish Health Repository)

    Ryan, Cristin

    2009-07-01

    Inappropriate prescribing is a well-documented problem in older people. The new screening tools, STOPP (Screening Tool of Older People's Prescriptions) and START (Screening Tool to Alert doctors to Right Treatment), have been formulated to identify potentially inappropriate medications (PIMs) and potential errors of omission (PEOs) in older patients. Consistent, reliable application of STOPP and START is essential for the screening tools to be used effectively by pharmacists.

  14. Probabilistic simulation applications to reliability assessments

    International Nuclear Information System (INIS)

    Miller, Ian; Nutt, Mark W.; Hill, Ralph S. III

    2003-01-01

    Probabilistic risk/reliability (PRA) analyses for engineered systems are conventionally based on fault-tree methods. These methods are mature and efficient, and are well suited to systems consisting of interacting components with known, low probabilities of failure. Even complex systems, such as nuclear power plants or aircraft, are modeled by the careful application of these approaches. However, for systems that may evolve in complex and nonlinear ways, and where the performance of components may be a sensitive function of the history of their working environments, fault-tree methods can be very demanding. This paper proposes an alternative method of evaluating such systems, based on probabilistic simulation using intelligent software objects to represent the components of such systems. Using a Monte Carlo approach, simulation models can be constructed from relatively simple interacting objects that capture the essential behavior of the components that they represent. Such models are capable of reflecting the complex behaviors of the systems that they represent in a natural and realistic way. (author)
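
    A minimal version of the object-based Monte Carlo idea, components as objects whose failure probability depends on their own usage history, might look like the following; the series-system structure and all rates are invented for illustration.

        import random

        class Component:
            """Failure probability grows with the component's own usage history."""
            def __init__(self, base_fail_prob, wear_rate):
                self.p, self.wear, self.age = base_fail_prob, wear_rate, 0

            def step(self):
                self.age += 1
                return random.random() < self.p * (1 + self.wear * self.age)

        def run_once(horizon=100):
            comps = [Component(1e-4, 0.02), Component(2e-4, 0.01)]
            for _ in range(horizon):
                if any([c.step() for c in comps]):   # series system: any failure fails it
                    return True
            return False

        def system_unreliability(runs=10_000):
            return sum(run_once() for _ in range(runs)) / runs

        random.seed(0)
        print(system_unreliability())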

  15. Application of safety and reliability approaches in the power sector: Inside-sectoral overview

    DEFF Research Database (Denmark)

    Kozine, Igor

    2010-01-01

    This chapter summarizes the state-of-the-art and state-of-practice in the application of safety and reliability approaches in the power sector. The nature and composition of this industrial sector, including the characteristics of major hazards, are summarized. The present situation with regard to a number of key technical aspects involved in the use of safety and reliability approaches in the power sector is discussed. Based on this review, a Technology Maturity Matrix is synthesized. Barriers to the wider use of risk and reliability methods in the design and operation of power installations are identified and possible ways of overcoming these barriers are suggested. Key issues and priorities for research are identified.

  16. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - II: Application to IFMIF reliability assessment

    Energy Technology Data Exchange (ETDEWEB)

Cacuci, D. G. [Commiss Energy Atom, Direct Energy Nucl, Saclay, (France); Cacuci, D. G.; Balan, I. [Univ Karlsruhe, Inst Nucl Technol and Reactor Safety, Karlsruhe, (Germany); Ionescu-Bujor, M. [Forschungszentrum Karlsruhe, Fus Program, D-76021 Karlsruhe, (Germany)

    2008-07-01

    In Part II of this work, the adjoint sensitivity analysis procedure developed in Part I is applied to perform sensitivity analysis of several dynamic reliability models of systems of increasing complexity, culminating with the consideration of the International Fusion Materials Irradiation Facility (IFMIF) accelerator system. Section II presents the main steps of a procedure for the automated generation of Markov chains for reliability analysis, including the abstraction of the physical system, construction of the Markov chain, and the generation and solution of the ensuing set of differential equations; all of these steps have been implemented in a stand-alone computer code system called QUEFT/MARKOMAG-S/MCADJSEN. This code system has been applied to sensitivity analysis of dynamic reliability measures for a paradigm '2-out-of-3' system comprising five components and also to a comprehensive dynamic reliability analysis of the IFMIF accelerator system facilities for the average availability and, respectively, the system's availability at the final mission time. The QUEFT/MARKOMAG-S/MCADJSEN has been used to efficiently compute sensitivities to 186 failure and repair rates characterizing components and subsystems of the first-level fault tree of the IFMIF accelerator system. (authors)
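
    The forward half of such an analysis (building the Markov chain and solving its equations) can be sketched compactly; the 2-out-of-3 configuration below uses assumed rates and omits the paper's adjoint sensitivity machinery:

```python
import numpy as np
from scipy.linalg import expm

lam, mu = 1e-3, 1e-1          # per-component failure / repair rates (assumed)
Q = np.zeros((4, 4))          # state n = number of failed components, n = 0..3
for n in range(3):
    Q[n, n + 1] = (3 - n) * lam   # one more of the remaining components fails
for n in range(1, 4):
    Q[n, n - 1] = mu              # a single repair crew restores one component
np.fill_diagonal(Q, -Q.sum(axis=1))

p0 = np.array([1.0, 0.0, 0.0, 0.0])   # start with all components working
for t in (10.0, 100.0, 1000.0):
    p = p0 @ expm(Q * t)              # solve dp/dt = pQ by matrix exponential
    print(f"t={t:7.1f} h  availability (2-out-of-3 up) = {p[:2].sum():.6f}")
```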

  18. Reliability engineering

    International Nuclear Information System (INIS)

    Lee, Chi Woo; Kim, Sun Jin; Lee, Seung Woo; Jeong, Sang Yeong

    1993-08-01

This book begins by asking what reliability is, covering the origin of reliability problems, the definition of reliability, and its uses. It also deals with probability and the calculation of reliability; the reliability function and failure rate; probability distributions used in reliability; estimation of MTBF; stochastic processes; down time, maintainability and availability; breakdown maintenance and preventive maintenance; design for reliability; reliability prediction and statistics; reliability testing; and reliability data and the design and management of reliability.
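
    A minimal numerical companion to the quantities the book lists, assuming the simplest constant-failure-rate model with illustrative numbers:

```python
import math

lam = 2.0e-4                      # failure rate [1/h] (assumed)
mtbf = 1.0 / lam                  # mean time between failures
R = lambda t: math.exp(-lam * t)  # reliability function for constant hazard

mttr = 8.0                        # mean down time per failure [h] (assumed)
availability = mtbf / (mtbf + mttr)

print(f"MTBF         = {mtbf:.0f} h")
print(f"R(1000 h)    = {R(1000.0):.4f}")
print(f"availability = {availability:.5f}")
```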

  19. Noise and signal processing in a microstrip detector with a time variant readout system

    International Nuclear Information System (INIS)

    Cattaneo, P.W.

    1995-01-01

    This paper treats the noise and signal processing by a time variant filter in a microstrip detector. In particular, the noise sources in the detector-electronics chain and the signal losses that cause a substantial decrease of the original signal are thoroughly analyzed. This work has been motivated by the analysis of the data of the microstrip detectors designed for the ALEPH minivertex detector. Hence, even if the discussion will be kept as general as possible, concrete examples will be presented referring to the specific ALEPH design. (orig.)

  20. Highly reliable computer network for real time system

    International Nuclear Information System (INIS)

    Mohammed, F.A.; Omar, A.A.; Ayad, N.M.A.; Madkour, M.A.I.; Ibrahim, M.K.

    1988-01-01

Many computer networks have been studied, with differing trends in network architecture and in the protocols that govern data transfers and guarantee reliable communication among all participants. Here a hierarchical network structure is proposed to provide a simple and inexpensive way to realize a reliable real-time computer network. In this architecture, all computers at the same level are connected to a common serial channel through intelligent nodes that collectively control data transfers over that channel. This level of the network can be considered a local area computer network (LACN) and can be used in a nuclear power plant control system, since such a plant has geographically dispersed subsystems. Network expansion is straightforward: the common channel is simply extended to each added computer (HOST). All nodes are designed around a microprocessor chip to provide the required intelligence. Each node can be divided into two sections, namely a common section that interfaces with the serial data channel and a private section that interfaces with the host computer; the private section naturally tends to vary in hardware details to match the requirements of individual host computers. 7 figs

  1. The Reliability and Validity of Zimbardo Time Perspective Inventory Scores in Academically Talented Adolescents

    Science.gov (United States)

    Worrell, Frank C.; Mello, Zena R.

    2007-01-01

    In this study, the authors examined the reliability, structural validity, and concurrent validity of Zimbardo Time Perspective Inventory (ZTPI) scores in a group of 815 academically talented adolescents. Reliability estimates of the purported factors' scores were in the low to moderate range. Exploratory factor analysis supported a five-factor…

  2. Distributed Information and Control system reliability enhancement by fog-computing concept application

    Science.gov (United States)

    Melnik, E. V.; Klimenko, A. B.; Ivanov, D. Ya

    2018-03-01

The paper focuses on the reliability of information and control systems. The authors propose a new complex approach to enhancing information and control system reliability by applying elements of the fog-computing concept. The approach consists of a complex of optimization problems to be solved: estimating the computational complexity that can be shifted to the edge of the network and the fog layer, distributing computations among the data-processing elements, and distributing computations among the sensors. These problems, together with some simulation results and a discussion, are formulated and presented in this paper.

  3. Evaluating how variants of floristic quality assessment indicate wetland condition.

    Science.gov (United States)

    Kutcher, Thomas E; Forrester, Graham E

    2018-03-28

    Biological indicators are useful tools for the assessment of ecosystem condition. Multi-metric and multi-taxa indicators may respond to a broader range of disturbances than simpler indicators, but their complexity can make them difficult to interpret, which is critical to indicator utility for ecosystem management. Floristic Quality Assessment (FQA) is an example of a biological assessment approach that has been widely tested for indicating freshwater wetland condition, but less attention has been given to clarifying the factors controlling its response. FQA quantifies the aggregate of vascular plant species tolerance to habitat degradation (conservatism), and model variants have incorporated species richness, abundance, and indigenity (native or non-native). To assess bias, we tested FQA variants in open-canopy freshwater wetlands against three independent reference measures, using practical vegetation sampling methods. FQA variants incorporating species richness did not correlate with our reference measures and were influenced by wetland size and hydrogeomorphic class. In contrast, FQA variants lacking measures of species richness responded linearly to reference measures quantifying individual and aggregate stresses, suggesting a broad response to cumulative degradation. FQA variants incorporating non-native species, and a variant additionally incorporating relative species abundance, improved performance over using only native species. We relate our empirical findings to ecological theory to clarify the functional properties and implications of the FQA variants. Our analysis indicates that (1) aggregate conservatism reliably declines with increased disturbance; (2) species richness has varying relationships with disturbance and increases with site area, confounding FQA response; and (3) non-native species signal human disturbance. We propose that incorporating species abundance can improve FQA site-level relevance with little extra sampling effort. Using our
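
    For concreteness, a small sketch of how such FQA variants differ numerically; the species, conservatism scores and cover values below are invented, and the formulas follow common FQA conventions (mean C, and FQI = mean C times the square root of richness) rather than the exact variants tested here:

```python
import math

plot = [  # (species, C score 0-10, native?, relative abundance) -- toy data
    ("Carex stricta",      7, True,  0.40),
    ("Typha angustifolia", 1, False, 0.35),
    ("Onoclea sensibilis", 4, True,  0.25),
]

native = [(c, a) for _, c, nat, a in plot if nat]
mean_c_native = sum(c for c, _ in native) / len(native)
mean_c_all = sum(c for _, c, _, _ in plot) / len(plot)   # non-natives carry low C
fqi = mean_c_all * math.sqrt(len(plot))                  # richness-based variant
cover_weighted = sum(c * a for _, c, _, a in plot) / sum(a for *_, a in plot)

print(f"native-only mean C      = {mean_c_native:.2f}")
print(f"all-species mean C      = {mean_c_all:.2f}")
print(f"FQI (richness variant)  = {fqi:.2f}")
print(f"abundance-weighted C    = {cover_weighted:.2f}")
```

    The richness-based variant changes with plot size even when conservatism does not, which is one way to read the study's finding that richness confounds FQA response.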

  4. Effectiveness of different approaches to disseminating traveler information on travel time reliability. [supporting datasets

    Science.gov (United States)

    2013-11-30

    Travel time reliability information includes static data about traffic speeds or trip times that capture historic variations from day to day, and it can help individuals understand the level of variation in traffic. Unlike real-time travel time infor...

  5. Reliability Evaluation of Base-Metal-Electrode (BME) Multilayer Ceramic Capacitors for Space Applications

    Science.gov (United States)

    Liu, David (Donghang)

    2011-01-01

This paper reports a reliability evaluation of BME ceramic capacitors for possible high-reliability space-level applications. The study focuses on the construction and microstructure of BME capacitors and their impact on capacitor life reliability. First, examinations of the construction and microstructure of commercial-off-the-shelf (COTS) BME capacitors show great variance in dielectric layer thickness, even among BME capacitors with the same rated voltage. Compared to PME (precious-metal-electrode) capacitors, BME capacitors exhibit a denser and more uniform microstructure, with an average grain size between 0.3 and approximately 0.5 micrometers, much smaller than that of most PME capacitors. The primary reason that a BME capacitor can be fabricated with more internal electrode layers and thinner dielectric layers is that it has a fine-grained microstructure and does not shrink much during ceramic sintering; this gives BME capacitors a very high volumetric efficiency. The reliability of BME and PME capacitors was investigated using highly accelerated life testing (HALT) and regular life testing as per MIL-PRF-123. Most BME capacitors were found to fail with an early dielectric wearout, followed by a rapid wearout failure mode during the HALT test. When most of the early wearout failures were removed, BME capacitors exhibited a minimum mean time-to-failure of more than 10(exp 5) years. Dielectric thickness was found to be a critical parameter for the reliability of BME capacitors. The number of stacked grains in a dielectric layer appears to play a significant role in determining BME capacitor reliability. Although dielectric layer thickness varies for a given rated voltage in BME capacitors, the number of stacked grains is relatively consistent, typically between 10 and 20. This may suggest that the number of grains per dielectric layer is more critical than the thickness itself in determining the rated voltage and the life
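
    A hedged sketch of how life-test times like these are typically summarized, fitting a two-parameter Weibull to invented times-to-failure (not the authors' data):

```python
import numpy as np
from scipy import stats

ttf_hours = np.array([310., 420., 505., 640., 700., 810., 950., 1210.])  # toy HALT data
shape, loc, scale = stats.weibull_min.fit(ttf_hours, floc=0.0)

print(f"Weibull shape beta      = {shape:.2f}  (>1 suggests wearout)")
print(f"characteristic life eta = {scale:.0f} h")
# B10 life: the time by which 10% of units have failed, F(t) = 0.10
print(f"B10 life                = {scale * (-np.log(0.9)) ** (1 / shape):.0f} h")
```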

  6. Parametric and semiparametric models with applications to reliability, survival analysis, and quality of life

    CERN Document Server

    Nikulin, M; Mesbah, M; Limnios, N

    2004-01-01

    Parametric and semiparametric models are tools with a wide range of applications to reliability, survival analysis, and quality of life. This self-contained volume examines these tools in survey articles written by experts currently working on the development and evaluation of models and methods. While a number of chapters deal with general theory, several explore more specific connections and recent results in "real-world" reliability theory, survival analysis, and related fields.

  7. Increase of hydroelectric power plant operation reliability

    International Nuclear Information System (INIS)

    Koshumbaev, M.B.

    2006-01-01

The new design of the hydroelectric power plant (HPP) turbine is executed in the form of a pipe with plates. The proposed solution allows plant capacity to be increased at the existing head and water flow, while turbine reliability increases and operating performance improves. The design is most effective for small-scale and micro-HPPs, owing to reliable operation, simple technology, and ecologically harmless application. (author)

  8. Accurate genotyping across variant classes and lengths using variant graphs

    DEFF Research Database (Denmark)

    Sibbesen, Jonas Andreas; Maretty, Lasse; Jensen, Jacob Malte

    2018-01-01

Genotyping biases can be mitigated by first collecting a set of candidate variants across discovery methods, individuals and databases, and then realigning the reads to the variants and reference simultaneously. However, this realignment problem has proved computationally difficult. Here, we present a new method (BayesTyper) that uses exact alignment of read k-mers to a graph representation of the reference and variants to efficiently perform unbiased, probabilistic genotyping across the variation spectrum. We demonstrate that BayesTyper generally provides superior variant sensitivity and genotyping accuracy relative to existing methods when used to integrate variants across discovery approaches and individuals. Finally, we demonstrate that including a 'variation-prior' database containing already known variants significantly improves sensitivity.

  9. Feasibility, reliability, and validity of a smartphone based application for the assessment of cognitive function in the elderly.

    Science.gov (United States)

    Brouillette, Robert M; Foil, Heather; Fontenot, Stephanie; Correro, Anthony; Allen, Ray; Martin, Corby K; Bruce-Keller, Annadora J; Keller, Jeffrey N

    2013-01-01

While considerable knowledge has been gained through the use of established cognitive and motor assessment tools, there is considerable interest in, and need for, the development of a battery of reliable and validated assessment tools that provide real-time and remote analysis of cognitive and motor function in the elderly. Smartphones appear to be an obvious choice for the development of these "next-generation" assessment tools for geriatric research, although to date no studies have reported on the use of smartphone-based applications for the study of cognition in the elderly. The primary focus of the current study was to assess the feasibility, reliability, and validity of a smartphone-based application for the assessment of cognitive function in the elderly. A total of 57 non-demented elderly individuals were administered a newly developed smartphone application-based Color-Shape Test (CST) in order to determine its utility in measuring cognitive processing speed in the elderly. Validity of this novel cognitive task was assessed by correlating performance on the CST with scores on widely accepted assessments of cognitive function. Scores on the CST were significantly correlated with global cognition (Mini-Mental State Exam: r = 0.515, p < 0.05), supporting the feasibility, reliability, and validity of using a smartphone-based application for the purpose of assessing cognitive function in the elderly. The importance of these findings for the establishment of smartphone-based assessment batteries of cognitive and motor function in the elderly is discussed.

  10. Simple and rapid preparation of [{sup 11}C]DASB with high quality and reliability for routine applications

    Energy Technology Data Exchange (ETDEWEB)

    Haeusler, D.; Mien, L.-K. [Department of Nuclear Medicine, PET, Medical University of Vienna, Waehringer Guertel 18-20, A-1090 Vienna (Austria); Department of Pharmaceutical Technology and Biopharmaceutics, University of Vienna, A-1090 Vienna (Austria); Nics, L. [Department of Nuclear Medicine, PET, Medical University of Vienna, Waehringer Guertel 18-20, A-1090 Vienna (Austria); Department of Nutritional Sciences, University of Vienna, A-1090 Vienna (Austria); Ungersboeck, J. [Department of Nuclear Medicine, PET, Medical University of Vienna, Waehringer Guertel 18-20, A-1090 Vienna (Austria); Department of Inorganic Chemistry, University of Vienna, A-1090 Vienna (Austria); Philippe, C. [Department of Nuclear Medicine, PET, Medical University of Vienna, Waehringer Guertel 18-20, A-1090 Vienna (Austria); Department of Pharmaceutical Technology and Biopharmaceutics, University of Vienna, A-1090 Vienna (Austria); Lanzenberger, R.R. [Department of Psychiatry and Psychotherapy, Medical University of Vienna, A-1090 Vienna (Austria); Kletter, K.; Dudczak, R. [Department of Nuclear Medicine, PET, Medical University of Vienna, Waehringer Guertel 18-20, A-1090 Vienna (Austria); Mitterhauser, M. [Department of Nuclear Medicine, PET, Medical University of Vienna, Waehringer Guertel 18-20, A-1090 Vienna (Austria); Department of Pharmaceutical Technology and Biopharmaceutics, University of Vienna, A-1090 Vienna (Austria); Hospital Pharmacy of the General Hospital of Vienna, A-1090 Vienna (Austria); Wadsak, W. [Department of Nuclear Medicine, PET, Medical University of Vienna, Waehringer Guertel 18-20, A-1090 Vienna (Austria); Department of Inorganic Chemistry, University of Vienna, A-1090 Vienna (Austria)], E-mail: wolfgang.wadsak@meduniwien.ac.at

    2009-09-15

[{sup 11}C]DASB combines all major prerequisites for a successful SERT-ligand, providing excellent biological properties and in-vivo behaviour. Thus, we aimed to establish a fully automated procedure for the synthesis and purification of [{sup 11}C]DASB with a high degree of reliability, reducing the overall synthesis time while conserving high yields and purity. The optimized [{sup 11}C]DASB synthesis was applied in more than 60 applications with a very low failure rate (3.2%). We obtained yields up to 8.9 GBq (average 5.3{+-}1.6 GBq). Radiochemical yields based on [{sup 11}C]CH{sub 3}I (corrected for decay) were 66.3{+-}6.9%, with a specific radioactivity (A{sub s}) of 86.8{+-}24.3 GBq/{mu}mol (both at the end of synthesis, EOS). Time consumption was kept to a minimum, resulting in 43 min from end of bombardment to release of the product after quality control. From our data, it is evident that the presented method can be implemented for routine preparations of [{sup 11}C]DASB with high reliability.

  11. Transmission of single and multiple viral variants in primary HIV-1 subtype C infection.

    Directory of Open Access Journals (Sweden)

    Vladimir Novitsky

    2011-02-01

Full Text Available To address whether sequences of viral gag and env quasispecies collected during the early post-acute period can be utilized to determine the multiplicity of transmitted HIV variants, recently developed approaches for the analysis of viral evolution in acute HIV-1 infection [1,2] were applied. Specifically, phylogenetic reconstruction, inter- and intra-patient distribution of maximum and mean genetic distances, analysis of Poisson fitness, shape of highlighter plots, recombination analysis, and estimation of the time to the most recent common ancestor (tMRCA) were utilized for resolving the multiplicity of HIV-1 transmission in a set of viral quasispecies collected within 50 days post-seroconversion (p/s) in 25 HIV-infected individuals with estimated time of seroconversion. The decision on multiplicity of HIV infection was made based on the model's fit with, or failure to explain, the observed extent of viral sequence heterogeneity. The initial analysis was based on phylogeny, inter-patient distribution of maximum and mean distances, and Poisson fitness, and was able to resolve the multiplicity of HIV transmission in 20 of 25 (80%) cases. Additional analysis involved the distribution of individual viral distances, highlighter plots, recombination analysis, and estimation of tMRCA, and resolved 4 of the 5 remaining cases. Overall, transmission of a single viral variant was identified in 16 of 25 (64%) cases, and transmission of multiple variants was evident in 8 of 25 (32%) cases. In one case the multiplicity of HIV-1 transmission could not be determined. In primary HIV-1 subtype C infection, samples collected within 50 days p/s and analyzed by a single-genome amplification/sequencing technique can provide reliable identification of transmission multiplicity in 24 of 25 (96%) cases. The observed transmission frequencies of a single viral variant and of multiple viral variants were within the ranges of 64% to 68% and 32% to 36%, respectively.

  12. Validity and Reliability of Assessing Body Composition Using a Mobile Application.

    Science.gov (United States)

    Macdonald, Elizabeth Z; Vehrs, Pat R; Fellingham, Gilbert W; Eggett, Dennis; George, James D; Hager, Ronald

    2017-12-01

The purpose of this study was to determine the validity and reliability of the LeanScreen (LS) mobile application that estimates percent body fat (%BF) using estimates of circumferences from photographs. The %BF of 148 weight-stable adults was estimated once using dual-energy x-ray absorptiometry (DXA). Each of two administrators assessed the %BF of each subject twice using the LS app and manually measured circumferences. A mixed-model ANOVA and Bland-Altman analyses were used to compare the estimates of %BF obtained from each method. Interrater and intrarater reliability values were determined using multiple measurements taken by each of the two administrators. The LS app and manually measured circumferences significantly underestimated (P < 0.05) the %BF determined using DXA by an average of -3.26 and -4.82 %BF, respectively. The LS app (6.99 %BF) and manually measured circumferences (6.76 %BF) had large limits of agreement. All interrater and intrarater reliability coefficients of estimates of %BF using the LS app and manually measured circumferences exceeded 0.99. The estimates of %BF from manually measured circumferences and the LS app were highly reliable. However, these field measures are not currently recommended for the assessment of body composition because of significant bias and large limits of agreement.

  13. Poisson Approximation-Based Score Test for Detecting Association of Rare Variants.

    Science.gov (United States)

    Fang, Hongyan; Zhang, Hong; Yang, Yaning

    2016-07-01

Genome-wide association study (GWAS) has achieved great success in identifying genetic variants, but the nature of GWAS has determined its inherent limitations. Under the common disease rare variants (CDRV) hypothesis, the traditional association analysis methods commonly used in GWAS for common variants do not have enough power for detecting rare variants with a limited sample size. As a solution to this problem, pooling rare variants by their functions provides an efficient way of identifying susceptible genes. Rare variants typically have low frequencies of minor alleles, and the distribution of the total number of minor alleles of the rare variants can be approximated by a Poisson distribution. Based on this fact, we propose a new test method, the Poisson Approximation-based Score Test (PAST), for association analysis of rare variants. Two testing methods, namely, ePAST and mPAST, are proposed based on different strategies of pooling rare variants. Simulation results and application to the CRESCENDO cohort data show that our methods are more powerful than the existing methods. © 2016 John Wiley & Sons Ltd/University College London.
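
    A deliberately simplified illustration of the Poisson idea (not the published PAST statistic): pool minor-allele counts over the rare variants and score-test rate equality between groups, with toy counts:

```python
import math
from statistics import NormalDist

case_counts = [3, 1, 0, 2, 4, 1]   # minor alleles per rare variant, cases (toy data)
ctrl_counts = [1, 0, 1, 0, 1, 0]   # same variants, controls (toy data)
n_case, n_ctrl = 500, 500          # individuals per group (assumed)

x, y = sum(case_counts), sum(ctrl_counts)
n = x + y
p0 = n_case / (n_case + n_ctrl)    # expected case share of pooled alleles under H0
# Conditional on the Poisson total n, the case count is Binomial(n, p0) under H0
z = (x - n * p0) / math.sqrt(n * p0 * (1 - p0))
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"pooled counts: cases={x}, controls={y}, z={z:.2f}, p={p_value:.4f}")
```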

  14. OSS reliability measurement and assessment

    CERN Document Server

    Yamada, Shigeru

    2016-01-01

    This book analyses quantitative open source software (OSS) reliability assessment and its applications, focusing on three major topic areas: the Fundamentals of OSS Quality/Reliability Measurement and Assessment; the Practical Applications of OSS Reliability Modelling; and Recent Developments in OSS Reliability Modelling. Offering an ideal reference guide for graduate students and researchers in reliability for open source software (OSS) and modelling, the book introduces several methods of reliability assessment for OSS including component-oriented reliability analysis based on analytic hierarchy process (AHP), analytic network process (ANP), and non-homogeneous Poisson process (NHPP) models, the stochastic differential equation models and hazard rate models. These measurement and management technologies are essential to producing and maintaining quality/reliable systems using OSS.
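
    As a taste of the NHPP models referred to above, here is a sketch fitting the classic Goel-Okumoto mean value function m(t) = a(1 - exp(-b t)) to invented cumulative fault counts, by least squares rather than the full maximum-likelihood treatment:

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10.])            # test weeks (toy data)
faults = np.array([12, 21, 27, 33, 37, 40, 42, 44, 45, 46.])  # cumulative faults found

mvf = lambda t, a, b: a * (1 - np.exp(-b * t))             # Goel-Okumoto mean value fn
(a, b), _ = curve_fit(mvf, t, faults, p0=(50.0, 0.3))

print(f"estimated total faults a = {a:.1f}, detection rate b = {b:.2f}")
print(f"expected residual faults after week 10: {a - mvf(10.0, a, b):.1f}")
```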

  15. Reliability consideration of low-power-grid-tied inverter for photovoltaic application

    OpenAIRE

    Liu, J.; Henze, N.

    2009-01-01

In recent years PV modules have improved considerably. Excellent reliability has been validated, corresponding to a Mean Time Between Failures (MTBF) of between 500 and 6000 years in commercial utility power systems. Manufacturers can provide performance guarantees for PV modules of at least 20 years. If an average inverter lifetime of 5 years is assumed, it is evident that the overall reliability of PV systems [PVSs] with integrated inverters is determined chiefly by the inverter i...
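
    A back-of-envelope companion to the figures quoted above (taking the conservative ends of the ranges as assumptions): with constant failure rates, the module-plus-inverter chain is a series system, so the rates add and the inverter's much shorter MTBF dominates.

```python
import math

mtbf_module_y = 500.0     # conservative end of the 500-6000 y range
mtbf_inverter_y = 5.0     # assumed average inverter lifetime from the abstract

lam_sys = 1 / mtbf_module_y + 1 / mtbf_inverter_y   # series system: rates add
print(f"system MTBF ~ {1 / lam_sys:.2f} years")
print(f"P(no failure in 20 y) = {math.exp(-20 * lam_sys):.3f}")
```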

  16. Improving the reliability of nuclear reprocessing by application of computers and mathematical modelling

    International Nuclear Information System (INIS)

    Gabowitsch, E.; Trauboth, H.

    1982-01-01

After a brief survey of the present and expected future state of nuclear energy utilization, which should demonstrate the significance of nuclear reprocessing, safety and reliability aspects of nuclear reprocessing plants (NRPs) are considered. Then, the principal possibilities of modern computer technology, including computer systems architecture and application-oriented software, for improving reliability and availability are outlined. In this context, two information systems being developed at the Nuclear Research Center Karlsruhe (KfK) are briefly described. For design evaluation of certain areas of a large NRP, mathematical methods and computer-aided tools developed, used or being designed by KfK are discussed. In conclusion, future research to be pursued in information processing and applied mathematics in support of reliable operation of NRPs is proposed. (Auth.)

  17. Sensitivity Weaknesses in Application of some Statistical Distribution in First Order Reliability Methods

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Enevoldsen, I.

    1993-01-01

It has been observed and shown that, in some examples, a sensitivity analysis of the first-order reliability index results in an increasing reliability index when the standard deviation of a stochastic variable is increased while the expected value is fixed. This unfortunate behaviour can occur when a stochastic variable is modelled by an asymmetrical density function. For lognormally, Gumbel and Weibull distributed stochastic variables, it is shown for which combinations of the β-point, the expected value and the standard deviation the weakness can occur. In relation to practical applications the behaviour is probably rather infrequent. A simple example is shown as illustration, and to exemplify that for second-order reliability methods and for exact calculations of the probability of failure this behaviour is much more infrequent.
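
    A numerical check of the reported effect in the simplest setting one can assume (failure when a lognormal variable with fixed mean exceeds a fixed threshold; numbers invented): beyond a certain spread, increasing sigma actually increases the reliability index, because the fixed-mean lognormal shifts probability mass toward zero as its variance grows.

```python
import math

m, x0 = 1.0, 1.1    # fixed mean and failure threshold; failure event is X > x0
for sigma in (0.2, 0.4, 0.6, 0.8, 1.0, 1.2):
    zeta = math.sqrt(math.log(1 + (sigma / m) ** 2))   # lognormal scale parameter
    lam = math.log(m) - 0.5 * zeta ** 2                # lognormal location parameter
    beta = (math.log(x0) - lam) / zeta                 # P(X > x0) = Phi(-beta)
    print(f"sigma={sigma:.1f}  beta={beta:.3f}")
# beta dips near sigma ~ 0.45 and then *rises* with sigma: the anomaly above.
```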

  18. Longitudinal Reliability of Self-Reported Age at Menarche in Adolescent Girls: Variability across Time and Setting

    Science.gov (United States)

    Dorn, Lorah D.; Sontag-Padilla, Lisa M.; Pabst, Stephanie; Tissot, Abbigail; Susman, Elizabeth J.

    2013-01-01

    Age at menarche is critical in research and clinical settings, yet there is a dearth of studies examining its reliability in adolescents. We examined age at menarche during adolescence, specifically, (a) average method reliability across 3 years, (b) test-retest reliability between time points and methods, (c) intraindividual variability of…

  19. Reliability Analysis of Cooling Towers: Influence of Rebars Corrosion on Failure

    International Nuclear Information System (INIS)

    Sudret, Bruno; Pendola, Maurice

    2002-01-01

Natural-draught cooling towers are used in nuclear power plants as heat exchangers. These structures are submitted to environmental loads, such as wind and thermal gradients, that are stochastic in nature. A probabilistic framework has been developed by EDF (Electricite de France) for assessing the durability of such structures. In this paper, the corrosion of the rebars due to concrete carbonation and the corresponding weakening of the reinforced concrete sections is considered. Due to the presence of time in the definition of the limit state function associated with the loss of serviceability of the cooling tower, time-variant reliability analysis has to be used. A novel approach is proposed to take into account the random 'initiation time', which corresponds to the time necessary for the carbonation to reach the rebars. Results are given in terms of the probability of failure of the structure over its lifetime. (authors)
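
    A hedged Monte Carlo sketch of the 'initiation time' idea described above (failure can begin only after carbonation reaches the rebars), with all distributions and values invented for illustration; the paper itself uses analytical time-variant reliability methods rather than sampling:

```python
import random

random.seed(0)
LIFE, N = 40.0, 100_000                            # service life [y], samples
failures = 0
for _ in range(N):
    t_init = random.lognormvariate(3.0, 0.4)       # years for carbonation to reach rebars
    rate = random.lognormvariate(-5.0, 0.6)        # relative section loss per year
    capacity0 = random.gauss(1.0, 0.08)            # normalized initial resistance
    demand = 0.72                                  # normalized load effect
    exposure = max(0.0, LIFE - t_init)             # corrosion acts only after initiation
    if capacity0 * (1.0 - rate * exposure) < demand:
        failures += 1
print(f"P(failure within {LIFE:.0f} y) ~ {failures / N:.4f}")
```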

  20. Reliability models for a nonrepairable system with heterogeneous components having a phase-type time-to-failure distribution

    International Nuclear Information System (INIS)

    Kim, Heungseob; Kim, Pansoo

    2017-01-01

    This research paper presents practical stochastic models for designing and analyzing the time-dependent reliability of nonrepairable systems. The models are formulated for nonrepairable systems with heterogeneous components having phase-type time-to-failure distributions by a structured continuous time Markov chain (CTMC). The versatility of the phase-type distributions enhances the flexibility and practicality of the systems. By virtue of these benefits, studies in reliability engineering can be more advanced than the previous studies. This study attempts to solve a redundancy allocation problem (RAP) by using these new models. The implications of mixing components, redundancy levels, and redundancy strategies are simultaneously considered to maximize the reliability of a system. An imperfect switching case in a standby redundant system is also considered. Furthermore, the experimental results for a well-known RAP benchmark problem are presented to demonstrate the approximating error of the previous reliability function for a standby redundant system and the usefulness of the current research. - Highlights: • Phase-type time-to-failure distribution is used for components. • Reliability model for nonrepairable system is developed using Markov chain. • System is composed of heterogeneous components. • Model provides the real value of standby system reliability not an approximation. • Redundancy allocation problem is used to show usefulness of this model.
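
    A minimal sketch of the phase-type ingredient, assuming a toy two-phase component (parameters invented): with initial phase vector alpha and sub-generator T, reliability and MTTF follow directly from matrix operations.

```python
import numpy as np
from scipy.linalg import expm

alpha = np.array([1.0, 0.0])            # start in phase 1 ("healthy")
T = np.array([[-0.10, 0.08],            # leave phase 1: degrade (0.08) or fail (0.02)
              [0.00, -0.25]])           # phase 2 ("degraded") fails faster
ones = np.ones(2)

for t in (1.0, 5.0, 10.0, 20.0):
    R = alpha @ expm(T * t) @ ones      # reliability = P(still in some phase at t)
    print(f"R({t:4.1f}) = {R:.4f}")

mttf = alpha @ np.linalg.solve(T, -ones)   # MTTF = -alpha T^(-1) 1
print(f"MTTF = {mttf:.2f}")
```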

  1. Risk and reliability analysis theory and applications : in honor of Prof. Armen Der Kiureghian

    CERN Document Server

    2017-01-01

    This book presents a unique collection of contributions from some of the foremost scholars in the field of risk and reliability analysis. Combining the most advanced analysis techniques with practical applications, it is one of the most comprehensive and up-to-date books available on risk-based engineering. All the fundamental concepts needed to conduct risk and reliability assessments are covered in detail, providing readers with a sound understanding of the field and making the book a powerful tool for students and researchers alike. This book was prepared in honor of Professor Armen Der Kiureghian, one of the fathers of modern risk and reliability analysis.

  2. Real-time RT-PCR high-resolution melting curve analysis and multiplex RT-PCR to detect and differentiate grapevine leafroll-associated associated virus 3 variant groups I, II, III and VI

    Directory of Open Access Journals (Sweden)

    Bester Rachelle

    2012-09-01

Full Text Available Abstract Background Grapevine leafroll-associated virus 3 (GLRaV-3) is the main contributing agent of leafroll disease worldwide. Four of the six GLRaV-3 variant groups known have been found in South Africa, but their individual contribution to leafroll disease is unknown. In order to study the pathogenesis of leafroll disease, a sensitive and accurate diagnostic assay is required that can detect different variant groups of GLRaV-3. Methods In this study, a one-step real-time RT-PCR, followed by high-resolution melting (HRM) curve analysis for the simultaneous detection and identification of GLRaV-3 variants of groups I, II, III and VI, was developed. A melting point confidence interval for each variant group was calculated to include at least 90% of all melting points observed. A multiplex RT-PCR protocol was developed for these four variant groups in order to assess the efficacy of the real-time RT-PCR HRM assay. Results A universal primer set for GLRaV-3, targeting the heat shock protein 70 homologue (Hsp70h) gene of GLRaV-3, was designed that is able to detect GLRaV-3 variant groups I, II, III and VI and differentiate between them with high-resolution melting curve analysis. The real-time RT-PCR HRM and the multiplex RT-PCR were optimized using 121 GLRaV-3 positive samples. Due to considerable variation in the melting profile observed within each GLRaV-3 group, a confidence interval above 90% was calculated for each variant group, based on the range and distribution of melting points. The intervals of groups I and II could not be distinguished, and a 95% joint confidence interval was calculated for simultaneous detection of group I and II variants. An additional primer pair targeting GLRaV-3 ORF1a was developed that can be used in a subsequent real-time RT-PCR HRM to differentiate between variants of groups I and II. Additionally, the multiplex RT-PCR successfully validated 94.64% of the infections detected with the real-time RT-PCR HRM

  3. Cellulase variants

    Science.gov (United States)

    Blazej, Robert; Toriello, Nicholas; Emrich, Charles; Cohen, Richard N.; Koppel, Nitzan

    2015-07-14

This invention provides novel variant cellulolytic enzymes having improved activity and/or stability. In certain embodiments the variant cellulolytic enzymes comprise a glycoside hydrolase with or comprising a substitution at one or more positions corresponding to one or more of residues F64, A226, and/or E246 in Thermobifida fusca Cel9A enzyme. In certain embodiments the glycoside hydrolase is a variant of a family 9 glycoside hydrolase. In certain embodiments the glycoside hydrolase is a variant of a theme B family 9 glycoside hydrolase.

  4. Development of FVSOntario: A Forest Vegetation Simulator Variant and application software for Ontario

    Science.gov (United States)

    Murray E. Woods; Donald C. E. Robinson

    2008-01-01

The Ontario Ministry of Natural Resources is leading a government-industry partnership to develop an Ontario variant of the Forest Vegetation Simulator (FVS). Based on the Lake States variant and the PrognosisBC user-interface, the FVSOntario project is motivated by a need to model the impacts of intensive forest management...

  5. Reliability estimates for selected sensors in fusion applications

    International Nuclear Information System (INIS)

    Cadwallader, L.C.

    1996-09-01

This report presents the results of a study defining several types of sensors in use, together with the qualitative reliability (failure modes) and quantitative reliability (average failure rates) for these types of process sensors. Temperature, pressure, flow, and level sensors are discussed for water coolant and for cryogenic coolants. The failure rates that have been found are useful for risk assessment and safety analysis. Repair times and calibration intervals are also given when found in the literature. All of these values can also be useful to plant operators and maintenance personnel. Designers may be able to make use of these data when planning systems. The final chapter in this report discusses failure rates for several types of personnel safety sensors, including ionizing radiation monitors, toxic and combustible gas detectors, humidity sensors, and magnetic field sensors. These data could be useful to industrial hygienists and other safety professionals when designing or auditing for personnel safety.

  6. Capabilities needed for the next generation of thermo-hydraulic codes for use in real time applications

    Energy Technology Data Exchange (ETDEWEB)

    Arndt, S.A.

    1997-07-01

    The real-time reactor simulation field is currently at a crossroads in terms of the capability to perform real-time analysis using the most sophisticated computer codes. Current generation safety analysis codes are being modified to replace simplified codes that were specifically designed to meet the competing requirement for real-time applications. The next generation of thermo-hydraulic codes will need to have included in their specifications the specific requirement for use in a real-time environment. Use of the codes in real-time applications imposes much stricter requirements on robustness, reliability and repeatability than do design and analysis applications. In addition, the need for code use by a variety of users is a critical issue for real-time users, trainers and emergency planners who currently use real-time simulation, and PRA practitioners who will increasingly use real-time simulation for evaluating PRA success criteria in near real-time to validate PRA results for specific configurations and plant system unavailabilities.

  8. BUILDING MODEL ANALYSIS APPLICATIONS WITH THE JOINT UNIVERSAL PARAMETER IDENTIFICATION AND EVALUATION OF RELIABILITY (JUPITER) API

    Science.gov (United States)

    The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input ...

  9. A Low Density Microarray Method for the Identification of Human Papillomavirus Type 18 Variants

    Science.gov (United States)

    Meza-Menchaca, Thuluz; Williams, John; Rodríguez-Estrada, Rocío B.; García-Bravo, Aracely; Ramos-Ligonio, Ángel; López-Monteon, Aracely; Zepeda, Rossana C.

    2013-01-01

We describe a novel microarray-based method for the screening of oncogenic human papillomavirus 18 (HPV-18) molecular variants. Because sequencing methodology may underestimate samples containing more than one variant, we designed a specific and sensitive stacking DNA hybridization assay. This technology can be used to discriminate between three possible phylogenetic branches of HPV-18. Probes were attached covalently on glass slides and hybridized with single-stranded DNA targets. Prior to hybridization with the probes, the target strands were pre-annealed with the three auxiliary contiguous oligonucleotides flanking the target sequences. HPV-18-positive cell lines and cervical samples were used to evaluate the performance of this HPV DNA microarray. Our results demonstrate that the HPV-18 variants hybridized specifically to probes, with no detection of unspecific signals. Specific probes successfully reveal detectable point mutations in these variants. The present DNA oligoarray system can be used as a reliable, sensitive and specific method for HPV-18 variant screening. Furthermore, this simple assay allows the use of inexpensive equipment, making it accessible in resource-poor settings. PMID:24077317

  10. ABC Assay: Method Development and Application to Quantify the Role of Three DWV Master Variants in Overwinter Colony Losses of European Honey Bees

    Directory of Open Access Journals (Sweden)

    Jessica L. Kevill

    2017-10-01

Full Text Available Deformed wing virus (DWV) is one of the most prevalent honey bee viral pathogens in the world. Typical of many RNA viruses, DWV is a quasi-species, which is comprised of a large number of different variants, currently consisting of three master variants: Type A, B, and C. Little is known about the impact of each variant or combinations of variants upon the biology of individual hosts. Therefore, we have developed a new set of master variant-specific DWV primers and a set of standards that allow for the quantification of each of the master variants. Competitive reverse transcriptase polymerase chain reaction (RT-PCR) experimental design confirms that each new DWV primer set is specific to the respective master variant. The sensitivity of the ABC assay is dependent on whether DNA or RNA is used as the template and whether other master variants are present in the sample. Comparison of the overall proportions of each master variant within a sample of known diversity, as confirmed by next-generation sequence (NGS) data, validates the efficiency of the ABC assay. The ABC assay was used on archived material from a Devon overwintering colony loss (OCL) 2006–2007 study, further implicating DWV type A and, for the first time, possibly C in the untimely collapse of honey bee colonies. Moreover, in this study DWV type B was not associated with OCL. The use of the ABC assay will allow researchers to quickly and cost-effectively pre-screen for the presence of DWV master variants in honey bees.

  11. The impact of scheduling on service reliability : Trip-time determination and holding points in long-headway services

    NARCIS (Netherlands)

    Van Oort, N.; Boterman, J.W.; Van Nes, R.

    2012-01-01

This paper presents research on optimizing the service reliability of long-headway services in urban public transport. Setting the driving time, and thus the departure time at stops, is an important decision when optimizing reliability in urban public transport. The choice of the percentile out of

  12. G2S: A web-service for annotating genomic variants on 3D protein structures.

    Science.gov (United States)

    Wang, Juexin; Sheridan, Robert; Sumer, S Onur; Schultz, Nikolaus; Xu, Dong; Gao, Jianjiong

    2018-01-27

Accurately mapping and annotating genomic locations on 3D protein structures is a key step in structure-based analysis of genomic variants detected by recent large-scale sequencing efforts. There are several mapping resources currently available, but none of them provides a web API (Application Programming Interface) that supports programmatic access. We present G2S, a real-time web API that provides automated mapping of genomic variants on 3D protein structures. G2S can align genomic locations of variants, protein locations, or protein sequences to protein structures and retrieve the mapped residues from structures. The G2S API uses a REST-inspired design and can be used by various clients such as web browsers, command terminals, programming languages and other bioinformatics tools for bringing 3D structures into genomic variant analysis. The webserver and source codes are freely available at https://g2s.genomenexus.org. g2s@genomenexus.org. Supplementary data are available at Bioinformatics online. © The Author (2018). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  13. Provably Secure and Subliminal-Free Variant of Schnorr Signature

    OpenAIRE

Zhang, Yinghui; Li, Hui; Li, Xiaoqing; Zhu, Hui

    2013-01-01

Subliminal channels present a severe challenge to information security. Currently, subliminal channels still exist in Schnorr signature. In this paper, we propose a subliminal-free variant of Schnorr signature. In the proposed scheme, an honest-but-curious warden is introduced to help the signer to generate a signature on a given message, but it is disallowed to sign messages independently. ...
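
    As background for readers unfamiliar with the base scheme, here is a toy sketch of plain Schnorr signatures (deliberately insecure, illustrative parameters; the paper's warden protocol is not reproduced). The signer's free choice of the nonce k is the classic subliminal channel that warden-assisted schemes are designed to close.

```python
import hashlib

p, q, g = 1213, 101, 457          # q divides p-1; g has order q (toy sizes only!)

def H(r, m):
    """Hash of commitment and message, reduced mod q."""
    return int(hashlib.sha256(f"{r}|{m}".encode()).hexdigest(), 16) % q

def sign(m, x, k):
    r = pow(g, k, p)              # k is signer-chosen: it can covertly encode bits
    e = H(r, m)
    s = (k - x * e) % q
    return e, s

def verify(m, e, s, y):
    r = (pow(g, s, p) * pow(y, e, p)) % p   # g^s * y^e = g^k when valid
    return e == H(r, m)

x = 23                            # private key (toy)
y = pow(g, x, p)                  # public key
e, s = sign("launch at dawn", x, k=77)
print("valid:", verify("launch at dawn", e, s, y))
```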

  14. Investigating the value of time and value of reliability for managed lanes.

    Science.gov (United States)

    2015-12-01

    This report presents a comprehensive study in Value of Time (VOT) and Value of Reliability (VOR) analysis in : the context of managed lane (ML) facilities. Combined Revealed Preference (RP) and Stated Preference (SP) : data were used to understand tr...

  15. An application of modulated poisson processes to the reliability analysis of repairable systems

    Energy Technology Data Exchange (ETDEWEB)

    Saldanha, Pedro L.C. [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil). Coordenacao de Reatores]. E-mail: saldanha@cnen.gov.br; Melo, P.F. Frutuoso e [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]. E-mail: frutuoso@con.ufrj.br; Noriega, Hector C. [Universidad Austral de Chile (UACh), Valdivia (Chile). Faculdad de Ciencias de la Ingeniaria]. E-mail: hnoriega@uach.cl

    2005-07-01

This paper discusses the application of the modulated power law process (MPLP) model to the rate of occurrence of failures of active repairable systems in reliability engineering. Traditionally, as far as maintenance policies are concerned, repairable systems are modeled in one of two ways: a pessimistic approach (the non-homogeneous Poisson process, NHPP) and a very optimistic approach (the renewal process, RP). It is important to build a generalized model that considers characteristics and properties of both the NHPP and RP models as particular cases. In practice, considering the pattern of times between failures, the MPLP appears more realistic for representing the occurrence of failures of repairable systems, in order to decide whether they can be modeled by a homogeneous or a non-homogeneous process. The study has shown that the model can be used to make decisions concerning the evaluation of the qualified life of plant equipment. By controlling and monitoring two of the three parameters of the MPLP model during equipment operation, it is possible to check whether and how the equipment is following the basis of its qualification process, and thereby identify how the effects of time, degradation and operation modes influence equipment performance. The discussion is illustrated by an application to the service water pumps of a typical PWR plant. (author)
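
    A simulation sketch of the process itself, following the usual MPLP formulation (increments of the transformed time Lambda(t) = (t/theta)^beta are Gamma(kappa, 1), with kappa = 1 recovering the ordinary power-law NHPP); all parameter values below are invented:

```python
import random

def simulate_mplp(theta, beta, kappa, horizon, rng):
    """Failure times of a modulated power-law process on [0, horizon]."""
    times, big_lambda = [], 0.0
    while True:
        big_lambda += rng.gammavariate(kappa, 1.0)   # modulated jump in Lambda-time
        t = theta * big_lambda ** (1.0 / beta)       # invert Lambda(t) = (t/theta)**beta
        if t > horizon:
            return times
        times.append(t)

rng = random.Random(42)
for kappa in (0.5, 1.0, 2.0):
    n = len(simulate_mplp(theta=200.0, beta=1.6, kappa=kappa, horizon=1000.0, rng=rng))
    print(f"kappa={kappa:3.1f}: {n} failures in 1000 h")
```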

  16. Application case study of AP1000 automatic depressurization system (ADS) for reliability evaluation by GO-FLOW methodology

    Energy Technology Data Exchange (ETDEWEB)

    Hashim, Muhammad, E-mail: hashimsajid@yahoo.com; Hidekazu, Yoshikawa, E-mail: yosikawa@kib.biglobe.ne.jp; Takeshi, Matsuoka, E-mail: mats@cc.utsunomiya-u.ac.jp; Ming, Yang, E-mail: myang.heu@gmail.com

    2014-10-15

Highlights: • Discussion of why AP1000 is equipped with an ADS system, in comparison with a conventional PWR. • Clarification of full and partial depressurization of the reactor coolant system by the ADS. • Application case study of the four-stage ADS for reliability evaluation in a LBLOCA. • The GO-FLOW tool is capable of evaluating the dynamic reliability of passive safety systems. • The calculated ADS reliability significantly increased the dynamic reliability of the PXS. - Abstract: The AP1000 nuclear power plant (NPP) uses passive means for its safety systems to ensure safety in the event of transients or severe accidents. One safety system unique to AP1000, compared with a conventional PWR, is the four-stage Automatic Depressurization System (ADS); the ADS originally works as an active safety system. In the present study, the authors first discuss the reasons why the four-stage ADS is added to the AP1000 plant, compared with a conventional PWR, from the standpoint of reliability. They then explain full and partial depressurization of the RCS by the four-stage ADS in the event of transients and loss-of-coolant accidents (LOCAs). Finally, an application case study of the four-stage ADS of AP1000 is conducted for reliability evaluation of the ADS under postulated conditions of full RCS depressurization during a large-break loss-of-coolant accident (LBLOCA) in one of the RCS cold legs. In this case study, the reliability evaluation is made by the GO-FLOW methodology to determine the influence of the ADS on the dynamic reliability of the passive core cooling system (PXS) of AP1000, i.e., what happens if the ADS fails or actuates successfully. GO-FLOW is a success-oriented reliability analysis tool capable of evaluating system reliability/unavailability as an alternative to Fault Tree Analysis (FTA) and Event Tree Analysis (ETA). Under these specific LBLOCA conditions, the GO-FLOW calculated reliability results indicated

  17. The curation of genetic variants: difficulties and possible solutions.

    Science.gov (United States)

    Pandey, Kapil Raj; Maden, Narendra; Poudel, Barsha; Pradhananga, Sailendra; Sharma, Amit Kumar

    2012-12-01

The curation of genetic variants from biomedical articles is required for various clinical and research purposes. Nowadays, the establishment of variant databases that include overall information about variants is becoming quite popular. These databases have immense utility, serving as user-friendly storehouses of variant information for information seekers. While manual curation is the gold-standard method for the curation of variants, it can be time-consuming on a large scale, necessitating automation. Curation of variants described in the biomedical literature may not be straightforward, mainly due to various nomenclature and expression issues. Although the current trend in paper writing is toward standard nomenclature, so that variants can easily be retrieved, the literature holds a massive store of variants under non-standard names, and the predominantly used online search engines may not be capable of finding them. For effective curation of variants, knowledge about the overall process of curation, the nature and types of difficulties in curation, and ways to tackle these difficulties during the task are crucial. Only through effective curation can variants be correctly interpreted. This paper presents the process and difficulties of curation of genetic variants, with possible solutions and suggestions from our work experience in the field, including literature support. The paper also highlights aspects of the interpretation of genetic variants and the importance of writing papers on variants following standard and retrievable methods. Copyright © 2012. Published by Elsevier Ltd.

  18. Human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1987-01-01

Concepts and techniques of human reliability have been developed and are used mostly in probabilistic risk assessment. Here, the major application of human reliability assessment has been to identify the human errors which have a significant effect on the overall safety of the system and to quantify the probability of their occurrence. Some of the major issues within human reliability studies are reviewed, and it is shown how these are applied to the assessment of human failures in systems. This is done under the following headings: models of human performance used in human reliability assessment; the nature of human error; classification of errors in man-machine systems; practical aspects; human reliability modelling in complex situations; quantification and examination of human reliability; judgement-based approaches; holistic techniques; and decision-analytic approaches. (UK)

  19. Emerging applications of genome-editing technology to examine functionality of GWAS-associated variants for complex traits.

    Science.gov (United States)

    Smith, Andrew J P; Deloukas, Panos; Munroe, Patricia B

    2018-04-13

    Over the last decade, genome-wide association studies (GWAS) have propelled the discovery of thousands of loci associated with complex diseases. The focus is now turning towards the function of these association signals, determining the causal variant(s) amongst those in strong linkage disequilibrium, and identifying their underlying mechanisms, such as long-range gene regulation. Genome-editing techniques utilising zinc-finger nucleases (ZFN), transcription activator-like effector nucleases (TALENs) and clustered regularly-interspaced short palindromic repeats with Cas9 nuclease (CRISPR-Cas9), are becoming the tools of choice to establish functionality for these variants, due to the ability to assess effects of single variants in vivo. This review will discuss examples of how these technologies have begun to aid functional analysis of GWAS loci for complex traits such as cardiovascular disease, type 2 diabetes, cancer, obesity and autoimmune disease. We focus on analysis of variants occurring within non-coding genomic regions, as these comprise the majority of GWAS variants, providing the greatest challenges to determining functionality, and compare editing strategies that provide different levels of evidence for variant functionality. The review describes molecular insights into some of these potentially causal variants, and how these may relate to the pathology of the trait, and look towards future directions for these technologies in post-GWAS analysis, such as base-editing.

  20. The Seismic Reliability of Offshore Structures Based on Nonlinear Time History Analyses

    International Nuclear Information System (INIS)

    Hosseini, Mahmood; Karimiyani, Somayyeh; Ghafooripour, Amin; Jabbarzadeh, Mohammad Javad

    2008-01-01

Regarding past earthquake damage to offshore structures, which are vital structures in the oil and gas industries, it is important that their seismic design be performed with very high reliability. Accepting Nonlinear Time History Analysis (NLTHA) as the most reliable seismic analysis method, this paper studies an offshore platform of jacket type with a height of 304 feet, a deck of 96 feet by 94 feet, and a weight of 290 million pounds. At first, some Push-Over Analyses (POA) were performed to recognize the more critical members of the jacket, based on the range of their plastic deformations. Then NLTHA were performed using the 3-component accelerograms of 100 earthquakes, covering a wide range of frequency content and normalized to three Peak Ground Acceleration (PGA) levels of 0.3 g, 0.65 g, and 1.0 g. Using the results of NLTHA, the damage and rupture probabilities of the critical members were studied to assess the reliability of the jacket structure. Since different structural members of the jacket have different effects on the stability of the platform, an 'importance factor' has been considered for each critical member based on its location and orientation in the structure, and the reliability of the whole structure has then been obtained by combining the reliabilities of the critical members, each with its specific importance factor
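
    The abstract does not give the combination formula, so the sketch below shows only one plausible, simple aggregation of NLTHA-estimated member damage probabilities with importance weights; all member names and numbers are invented:

```python
members = {            # member: (P(damage) estimated from NLTHA runs, importance)
    "leg_brace_K12": (0.040, 1.0),
    "jacket_leg_A1": (0.010, 3.0),
    "deck_girder_G3": (0.025, 2.0),
}
w_total = sum(w for _, w in members.values())
pf_weighted = sum(p * w for p, w in members.values()) / w_total
print(f"importance-weighted failure probability ~ {pf_weighted:.4f}")

# For comparison, a series-system (weakest-link) figure assuming independence:
survive = 1.0
for p, _ in members.values():
    survive *= (1.0 - p)
print(f"series-system failure probability       ~ {1 - survive:.4f}")
```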

  1. Measuring Passenger Travel Time Reliability using Smartcard Data

    NARCIS (Netherlands)

    Bagherian, M.; Cats, O.; van Oort, N.; Hickman, M

    2016-01-01

    Service reliability is a key performance measure for transit agencies in increasing their service quality and thus ridership. Conventional reliability metrics are established based on vehicle movements and thus do not adequately reflect passengers’ experience. In the past few years, the growing

  2. Application of Entry-Time Processes to Asset Management in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Nelson, Paul; Wang, Shuwen; Kee, Ernie J.

    2006-01-01

    The entry-time approach to dynamic reliability is based upon computational solution of the Chapman-Kolmogorov (generalized state-transition) equations underlying a certain class of marked point processes. Previous work has verified a particular finite-difference approach to computational solution of these equations. The objective of this work is to illustrate the potential application of the entry-time approach to risk-informed asset management (RIAM) decisions regarding maintenance or replacement of major systems within a plant. Results are presented in the form of plots, with replacement/maintenance period as a parameter, of expected annual revenue, along with annual variance and annual skewness as indicators of associated risks. Present results are for a hypothetical system, to illustrate the capability of the approach, but some considerations related to potential application of this approach to nuclear power plants are discussed. (authors)

  3. Intermediate-term medium-range earthquake prediction algorithm M8: A new spatially stabilized application in Italy

    International Nuclear Information System (INIS)

    Romashkova, L.L.; Kossobokov, V.G.; Peresan, A.; Panza, G.F.

    2001-12-01

    A series of experiments, based on the intermediate-term earthquake prediction algorithm M8, has been performed for the retrospective simulation of forward predictions in the Italian territory, with the aim of designing an experimental routine for real-time predictions. These experiments evidenced two main difficulties for the application of M8 in Italy. The first is due to the fact that regional catalogues are usually limited in space. The second concerns a certain arbitrariness and instability with respect to the positioning of the circles of investigation. Here we design a new scheme for the application of the algorithm M8, which is less subjective and less sensitive to the position of the circles of investigation. To perform this test, we consider a recent revision of the Italian catalogue, named UCI2001, composed of CCI1996, NEIC and ALPOR data for the period 1900-1985 and updated with NEIC data thereafter; this revision reduces the spatial heterogeneity of the data at the boundaries of Italy. The new variant of the M8 algorithm application reduces the number of spurious alarms and increases the reliability of predictions. As a result, three out of four earthquakes with magnitude M max larger than 6.0 are predicted in the retrospective simulation of the forward prediction during the period 1972-2001, with a space-time volume of alarms comparable to that obtained with the non-stabilized variant of the M8 algorithm in Italy. (author)

  4. Three-dimensional spatial analysis of missense variants in RTEL1 identifies pathogenic variants in patients with Familial Interstitial Pneumonia.

    Science.gov (United States)

    Sivley, R Michael; Sheehan, Jonathan H; Kropski, Jonathan A; Cogan, Joy; Blackwell, Timothy S; Phillips, John A; Bush, William S; Meiler, Jens; Capra, John A

    2018-01-23

    Next-generation sequencing of individuals with genetic diseases often detects candidate rare variants in numerous genes, but determining which are causal remains challenging. We hypothesized that the spatial distribution of missense variants in protein structures contains information about function and pathogenicity that can help prioritize variants of unknown significance (VUS) and elucidate the structural mechanisms leading to disease. To illustrate this approach in a clinical application, we analyzed 13 candidate missense variants in regulator of telomere elongation helicase 1 (RTEL1) identified in patients with Familial Interstitial Pneumonia (FIP). We curated pathogenic and neutral RTEL1 variants from the literature and public databases. We then used homology modeling to construct a 3D structural model of RTEL1 and mapped known variants onto this structure. We next developed a pathogenicity prediction algorithm based on proximity to known disease-causing and neutral variants and evaluated its performance with leave-one-out cross-validation. We further validated our predictions with segregation analyses, telomere lengths, and mutagenesis data from the homologous XPD protein. Our algorithm for classifying RTEL1 VUS based on spatial proximity to pathogenic and neutral variation accurately distinguished 7 known pathogenic from 29 neutral variants (ROC AUC = 0.85) in the N-terminal domains of RTEL1. Pathogenic proximity scores were also significantly correlated with effects on ATPase activity (Pearson r = -0.65, p = 0.0004) in XPD, a related helicase. Applying the algorithm to 13 VUS identified by sequencing RTEL1 in patients predicted five out of six disease-segregating VUS to be pathogenic. We provide structural hypotheses regarding how these mutations may disrupt RTEL1 ATPase and helicase function. Spatial analysis of missense variation accurately classified candidate VUS in RTEL1 and suggests how such variants cause disease. Incorporating

  5. CDKL5 variants

    Science.gov (United States)

    Kalscheuer, Vera M.; Hennig, Friederike; Leonard, Helen; Downs, Jenny; Clarke, Angus; Benke, Tim A.; Armstrong, Judith; Pineda, Mercedes; Bailey, Mark E.S.; Cobb, Stuart R.

    2017-01-01

    Objective: To provide new insights into the interpretation of genetic variants in a rare neurologic disorder, CDKL5 deficiency, in the contexts of population sequencing data and an updated characterization of the CDKL5 gene. Methods: We analyzed all known potentially pathogenic CDKL5 variants by combining data from large-scale population sequencing studies with CDKL5 variants from new and all available clinical cohorts and combined this with computational methods to predict pathogenicity. Results: The study has identified several variants that can be reclassified as benign or likely benign. With the addition of novel CDKL5 variants, we confirm that pathogenic missense variants cluster in the catalytic domain of CDKL5 and reclassify a purported missense variant as having a splicing consequence. We provide further evidence that missense variants in the final 3 exons are likely to be benign and not important to disease pathology. We also describe benign splicing and nonsense variants within these exons, suggesting that isoform hCDKL5_5 is likely to have little or no neurologic significance. We also use the available data to make a preliminary estimate of minimum incidence of CDKL5 deficiency. Conclusions: These findings have implications for genetic diagnosis, providing evidence for the reclassification of specific variants previously thought to result in CDKL5 deficiency. Together, these analyses support the view that the predominant brain isoform in humans (hCDKL5_1) is crucial for normal neurodevelopment and that the catalytic domain is the primary functional domain. PMID:29264392

  6. A new set of qualitative reliability criteria to aid inferences on palaeomagnetic dipole moment variations through geological time

    Directory of Open Access Journals (Sweden)

    Andrew John Biggin

    2014-10-01

    Full Text Available Records of reversal frequency support forcing of the geodynamo over geological timescales, but obtaining these for earlier times (e.g. the Precambrian) is a major challenge. Changes in the measured virtual (axial) dipole moment of the Earth, averaged over several millions of years or longer, also have the potential to constrain core and mantle evolution through deep time. There has been a wealth of recent innovations in palaeointensity methods, but there is, as yet, no comprehensive means for assessing the reliability of new and existing dipole moment data. Here we present a new set of largely qualitative reliability criteria for palaeointensity results at the site mean level, which we term QPI in reference to the long-standing Q criteria used for assessing palaeomagnetic poles. These represent the first attempt to capture the range of biasing agents applicable to palaeointensity measurements and to recognise the various approaches employed to obviate them. A total of 8 criteria are proposed and applied to 312 dipole moment estimates recently incorporated into the PINT global database. The number of these criteria fulfilled by a single dipole moment estimate (the QPI value) varies between 1 and 6 in the examined dataset and has a median of 3. Success rates for each of the criteria are highly variable, but each criterion was met by at least a few results. The new criteria will be useful for future studies as a means of gauging the reliability of new and published dipole moment estimates.

  7. Measuring reliability under epistemic uncertainty: Review on non-probabilistic reliability metrics

    Directory of Open Access Journals (Sweden)

    Kang Rui

    2016-06-01

    Full Text Available In this paper, a systematic review of non-probabilistic reliability metrics is conducted to assist the selection of appropriate reliability metrics to model the influence of epistemic uncertainty. Five frequently used non-probabilistic reliability metrics are critically reviewed, i.e., evidence-theory-based reliability metrics, interval-analysis-based reliability metrics, fuzzy-interval-analysis-based reliability metrics, possibility-theory-based reliability metrics (posbist reliability) and uncertainty-theory-based reliability metrics (belief reliability). It is pointed out that a qualified reliability metric that is able to consider the effect of epistemic uncertainty needs to (1) compensate for the conservatism in the estimates of the component-level reliability metrics caused by epistemic uncertainty, and (2) satisfy the duality axiom; otherwise it might lead to paradoxical and confusing results in engineering applications. The five commonly used non-probabilistic reliability metrics are compared with respect to these two properties, and the comparison can serve as a basis for the selection of appropriate reliability metrics.

  8. Role of network dynamics in shaping spike timing reliability

    International Nuclear Information System (INIS)

    Bazhenov, Maxim; Rulkov, Nikolai F.; Fellous, Jean-Marc; Timofeev, Igor

    2005-01-01

    We study the reliability of cortical neuron responses to periodically modulated synaptic stimuli. Simple map-based models of two different types of cortical neurons are constructed to replicate the intrinsic resonances of reliability found in experimental data and to explore the effects of those resonance properties on collective behavior in a cortical network model containing excitatory and inhibitory cells. We show that network interactions can enhance the frequency range of reliable responses and that the latter can be controlled by the strength of synaptic connections. The underlying dynamical mechanisms of reliability enhancement are discussed
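
    The map-based models referred to here are two-dimensional Rulkov-type maps. As a rough illustration only, the following is a minimal Python sketch of the widely used chaotic variant x(n+1) = alpha/(1 + x(n)^2) + y(n), y(n+1) = y(n) - mu*(x(n) + 1 - sigma); the parameter values are illustrative assumptions, not the paper's fitted models.

    import numpy as np

    def rulkov(alpha=4.3, sigma=0.001, mu=0.001, n_steps=5000):
        """Iterate the chaotic two-dimensional Rulkov map.

        x is the fast (spiking) variable, y the slow variable; the parameter
        values here are illustrative assumptions only.
        """
        x, y = -1.0, -3.0
        xs = np.empty(n_steps)
        for n in range(n_steps):
            # simultaneous update: both right-hand sides use the old (x, y)
            x, y = alpha / (1.0 + x * x) + y, y - mu * (x + 1.0 - sigma)
            xs[n] = x
        return xs

    trace = rulkov()
    print("spike count:", int(np.sum((trace[1:] > 0) & (trace[:-1] <= 0))))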

  9. 75 FR 15371 - Time Error Correction Reliability Standard

    Science.gov (United States)

    2010-03-29

    ... Electric Reliability Council of Texas (ERCOT) manages the flow of electric power to 22 million Texas customers. As the independent system operator for the region, ERCOT schedules power on an electric grid that... Coordinating Council (WECC) is responsible for coordinating and promoting bulk electric system reliability in...

  10. ESCAF - Failure simulation and reliability calculation device

    International Nuclear Information System (INIS)

    Laviron, A.; Berard, C.; Quenee, R.

    1979-01-01

    Reliability studies of nuclear power plant safety functions have, up to now, required the use of large computers. Being general-purpose machines, these computers are not well adapted to dealing with reliability problems at low cost. ESCAF has been developed as a substitute for large computers in order to save time and money. ESCAF is a small electronic device used in connection with a minicomputer. It allows complex system reliability analyses (qualitative and quantitative) to be performed and the influence of critical elements, such as common-cause failures, to be studied. In this paper, the device is described and its features are outlined: it is easy to implement, fast in operation, and cheap to run. It is applicable wherever high reliability is required

  11. Non-linear time variant model intended for polypyrrole-based actuators

    Science.gov (United States)

    Farajollahi, Meisam; Madden, John D. W.; Sassani, Farrokh

    2014-03-01

    Polypyrrole-based actuators are of interest due to their biocompatibility, low operating voltage and relatively high strain and force. Modeling and simulation are very important for predicting the behaviour of each actuator. To develop an accurate model, the electro-chemo-mechanical specifications of polypyrrole must be known. In this paper, a non-linear time-variant model of polypyrrole film is derived and proposed using a combination of an RC transmission line model and a state space representation. The model incorporates the potential-dependent ionic conductivity. A function of the ionic conductivity of polypyrrole vs. local charge is proposed and implemented in the non-linear model. Matching of the measured and simulated electrical responses suggests that the ionic conductivity of polypyrrole decreases significantly at negative potentials vs. silver/silver chloride and leads to reduced current in cyclic voltammetry (CV) tests. The next stage is to relate the distributed charging of the polymer to actuation via the strain-to-charge ratio. Further work is also needed to identify the ionic and electronic conductivities as well as the capacitance as a function of oxidation state so that a fully predictive model can be created.
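
    A minimal Python sketch of the RC transmission-line ingredient of such a model follows: a resistor-capacitor ladder driven by a step potential and integrated explicitly. The component values are assumptions; in the paper's non-linear model the ionic resistance would additionally depend on the local charge, which is omitted here.

    import numpy as np

    # N-segment RC ladder: series resistance R per segment, shunt capacitance C per node.
    N, R, C = 20, 10.0, 1e-3           # assumed example values
    dt, T = 1e-4, 2.0                  # explicit-Euler time step and horizon [s]
    v = np.zeros(N)                    # node potentials
    v_in = 1.0                         # step potential applied at the film surface

    for _ in range(int(T / dt)):
        left = np.concatenate(([v_in], v[:-1]))    # upstream neighbours
        right = np.concatenate((v[1:], [v[-1]]))   # open-circuit far end
        i_in = (left - v) / R                      # current into each node
        i_out = (v - right) / R                    # current out of each node
        v += dt * (i_in - i_out) / C               # C dv/dt = net current

    print("potential profile after charging:", np.round(v, 3))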

  12. ClinGen Pathogenicity Calculator: a configurable system for assessing pathogenicity of genetic variants.

    Science.gov (United States)

    Patel, Ronak Y; Shah, Neethu; Jackson, Andrew R; Ghosh, Rajarshi; Pawliczek, Piotr; Paithankar, Sameer; Baker, Aaron; Riehle, Kevin; Chen, Hailin; Milosavljevic, Sofia; Bizon, Chris; Rynearson, Shawn; Nelson, Tristan; Jarvik, Gail P; Rehm, Heidi L; Harrison, Steven M; Azzariti, Danielle; Powell, Bradford; Babb, Larry; Plon, Sharon E; Milosavljevic, Aleksandar

    2017-01-12

    The success of the clinical use of sequencing-based tests (from single genes to genomes) depends on the accuracy and consistency of variant interpretation. Aiming to improve the interpretation process through practice guidelines, the American College of Medical Genetics and Genomics (ACMG) and the Association for Molecular Pathology (AMP) have published standards and guidelines for the interpretation of sequence variants. However, manual application of the guidelines is tedious and prone to human error. Web-based tools and software systems may not only address this problem but also document reasoning and supporting evidence, thus enabling transparency of evidence-based reasoning and resolution of discordant interpretations. In this report, we describe the design, implementation, and initial testing of the Clinical Genome Resource (ClinGen) Pathogenicity Calculator, a configurable system and web service for the assessment of the pathogenicity of Mendelian germline sequence variants. The system allows users to enter the applicable ACMG/AMP-style evidence tags for a specific allele, with links to supporting data for each tag, and to generate a guideline-based pathogenicity assessment for the allele. Through automation and comprehensive documentation of evidence codes, the system facilitates more accurate application of the ACMG/AMP guidelines, improves standardization in variant classification, and facilitates collaborative resolution of discordances. The rules of reasoning are configurable with gene-specific or disease-specific guideline variations (e.g. cardiomyopathy-specific frequency thresholds and functional assays). The software is modular, equipped with robust application program interfaces (APIs), and available under a free open source license and as a cloud-hosted web service, thus facilitating both stand-alone use and integration with existing variant curation and interpretation systems. The Pathogenicity Calculator is accessible at http
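
    For readers unfamiliar with the guideline arithmetic being automated, here is a minimal Python sketch of a simplified subset of the published ACMG/AMP combining rules; the Calculator itself implements the full, configurable rule set, including benign evidence, which is omitted here.

    from collections import Counter

    def classify(tags):
        """Simplified subset of the ACMG/AMP combining rules (pathogenic side only).

        tags: evidence codes such as 'PVS1', 'PS3', 'PM2', 'PP1'; benign codes
        (BA/BS/BP) and several published combinations are intentionally omitted.
        """
        c = Counter(t[:2] for t in tags)           # count by strength class
        vs, s, m, p = c['PV'], c['PS'], c['PM'], c['PP']
        if (vs >= 1 and (s >= 1 or m >= 2 or (m == 1 and p == 1) or p >= 2)) or s >= 2:
            return 'Pathogenic'
        if (vs >= 1 and m == 1) or (s == 1 and m >= 1) or (s == 1 and p >= 2) \
                or m >= 3 or (m == 2 and p >= 2) or (m == 1 and p >= 4):
            return 'Likely pathogenic'
        return 'Uncertain significance (under this simplified subset)'

    print(classify(['PVS1', 'PS3']))                 # -> Pathogenic
    print(classify(['PM1', 'PM2', 'PP2', 'PP3']))    # -> Likely pathogenic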

  13. Reliability design of a critical facility: An application of PRA methods

    International Nuclear Information System (INIS)

    Souza Vieira Neto, A.; Souza Borges, W. de

    1987-01-01

    Although general agreement concerning the enforcement of reliability (probabilistic) design criteria for nuclear utilities is yet to be achieved, PRA methodology can still be used successfully as a project design and review tool aimed at improving a system's prospective performance or minimizing expected accident consequences. In this paper, the potential of such an application of PRA methods is examined in the special case of a critical design project currently being developed in Brazil. (orig.)

  14. Detection of ATM germline variants by the p53 mitotic centrosomal localization test in BRCA1/2-negative patients with early-onset breast cancer.

    Science.gov (United States)

    Prodosmo, Andrea; Buffone, Amelia; Mattioni, Manlio; Barnabei, Agnese; Persichetti, Agnese; De Leo, Aurora; Appetecchia, Marialuisa; Nicolussi, Arianna; Coppa, Anna; Sciacchitano, Salvatore; Giordano, Carolina; Pinnarò, Paola; Sanguineti, Giuseppe; Strigari, Lidia; Alessandrini, Gabriele; Facciolo, Francesco; Cosimelli, Maurizio; Grazi, Gian Luca; Corrado, Giacomo; Vizza, Enrico; Giannini, Giuseppe; Soddu, Silvia

    2016-09-06

    Variant ATM heterozygotes have an increased risk of developing cancer, cardiovascular diseases, and diabetes. The costs and time of sequencing and the complexity of ATM variants make large-scale, general-population screening not yet cost-effective. Recently, we developed a straightforward, rapid, and inexpensive test based on p53 mitotic centrosomal localization (p53-MCL) in peripheral blood mononuclear cells (PBMCs) that diagnoses mutant ATM zygosity and recognizes tumor-associated ATM polymorphisms. Fresh PBMCs from 496 cancer patients were analyzed by p53-MCL: 90 cases with familial BRCA1/2-positive and -negative breast and/or ovarian cancer, 337 with sporadic cancers (ovarian, lung, colon, and post-menopausal breast cancers), and 69 with breast/thyroid cancer. Variants were confirmed by ATM sequencing. A total of seven individuals with ATM variants were identified: 5/65 (7.7%) among breast cancer cases of familial breast and/or ovarian cancer and 2/69 (2.9%) among breast/thyroid cancer cases. No variant ATM carriers were found among the other cancer cases. Excluding a single case in which both BRCA1 and ATM were mutated, no p53-MCL alterations were observed in BRCA1/2-positive cases. These data validate p53-MCL as a reliable and specific test for germline ATM variants, confirm ATM as a breast cancer susceptibility gene, and highlight a possible association with breast/thyroid cancers.

  15. Reliability of the Load-Velocity Relationship Obtained Through Linear and Polynomial Regression Models to Predict the One-Repetition Maximum Load.

    Science.gov (United States)

    Pestaña-Melero, Francisco Luis; Haff, G Gregory; Rojas, Francisco Javier; Pérez-Castilla, Alejandro; García-Ramos, Amador

    2017-12-18

    This study aimed to compare the between-session reliability of the load-velocity relationship between (1) linear vs. polynomial regression models, (2) concentric-only vs. eccentric-concentric bench press variants, as well as (3) the within-participants vs. the between-participants variability of the velocity attained at each percentage of the one-repetition maximum (%1RM). The load-velocity relationship of 30 men (age: 21.2±3.8 y; height: 1.78±0.07 m, body mass: 72.3±7.3 kg; bench press 1RM: 78.8±13.2 kg) was evaluated by means of linear and polynomial regression models in the concentric-only and eccentric-concentric bench press variants on a Smith machine. Two sessions were performed with each bench press variant. The main findings were: (1) first-order polynomials (CV: 4.39%-4.70%) provided the load-velocity relationship with higher reliability than second-order polynomials (CV: 4.68%-5.04%); (2) the reliability of the load-velocity relationship did not differ between the concentric-only and eccentric-concentric bench press variants; (3) the within-participants variability of the velocity attained at each %1RM was markedly lower than the between-participants variability. Taken together, these results highlight that, regardless of the bench press variant considered, the individual determination of the load-velocity relationship by a linear regression model can be recommended to monitor and prescribe the relative load in the Smith machine bench press exercise.
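
    To make the recommended procedure concrete, the following is a minimal Python sketch of an individual first-order (linear) load-velocity fit used to predict the 1RM; the data points and the minimal-velocity threshold are invented for illustration.

    import numpy as np

    # Hypothetical session data: load [kg] and mean concentric velocity [m/s].
    loads = np.array([30.0, 45.0, 60.0, 75.0])
    velocities = np.array([1.20, 0.95, 0.68, 0.42])

    # First-order polynomial: velocity = a * load + b.
    a, b = np.polyfit(loads, velocities, 1)

    # Predict the 1RM as the load at an assumed minimal-velocity threshold.
    V_1RM = 0.17                      # assumed exercise-specific threshold [m/s]
    one_rm = (V_1RM - b) / a
    print(f"slope = {a:.4f}, intercept = {b:.3f}, predicted 1RM ~ {one_rm:.1f} kg")

    # The same line prescribes the velocity expected at any %1RM, e.g. 80%1RM:
    print(f"velocity at 80%1RM ~ {a * 0.8 * one_rm + b:.2f} m/s")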

  16. Reliability and sensitivity to change of the timed standing balance test in children with down syndrome

    Directory of Open Access Journals (Sweden)

    Vencita Priyanka Aranha

    2016-01-01

    Full Text Available Objective: To estimate the reliability and sensitivity to change of the timed standing balance test in children with Down syndrome (DS). Methods: This was a nonblinded comparison study with a convenience sample of children with DS (n = 9) aged 8–17 years. The main outcome measure was standing balance, assessed using the timed standing balance test: the time for which balance is maintained in four conditions, eyes open static, eyes closed static, eyes open dynamic, and eyes closed dynamic. Results: Relative reliability was excellent for all four conditions, with an Intraclass Correlation Coefficient (ICC) ranging from 0.91 to 0.93. The variation between repeated measurements for each condition was minimal, with a standard error of measurement (SEM) of 0.21–0.59 s, indicative of excellent absolute reliability. The sensitivity to change, as measured by the smallest real change (SRC), was 1.27 s for eyes open static, 1.63 s for eyes closed static, 0.58 s for eyes open dynamic, and 0.61 s for eyes closed dynamic. Conclusions: The timed standing balance test is easy to administer and sensitive to change, with strong absolute and relative reliability, an important first step in establishing its utility as a clinical balance measure in children with DS.
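
    The absolute-reliability quantities above follow from standard formulas, SEM = SD * sqrt(1 - ICC) and SRC = 1.96 * sqrt(2) * SEM; a minimal Python sketch with made-up numbers:

    import math

    def sem(sd, icc):
        """Standard error of measurement from between-subject SD and ICC."""
        return sd * math.sqrt(1.0 - icc)

    def src(sem_value, z=1.96):
        """Smallest real change (minimal detectable change) at 95% confidence."""
        return z * math.sqrt(2.0) * sem_value

    # Hypothetical example: balance times with SD = 2.0 s and ICC = 0.93.
    s = sem(2.0, 0.93)
    print(f"SEM ~ {s:.2f} s, SRC ~ {src(s):.2f} s")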

  17. Reliability in maintenance and design of elastomer sealed closures

    International Nuclear Information System (INIS)

    Lake, W.H.

    1978-01-01

    The methods of reliability analysis are considered for the maintenance and design of elastomer-sealed containment closures. Component reliability is used to establish a replacement schedule for system maintenance. Reliability data on elastomer seals are used to evaluate the common practice of annual replacement and to calculate component reliability values for several typical shipment time periods. System reliability methods are used to examine the relative merits of typical closure designs, including single-component and redundant-seal closures, with and without closure verification testing. The paper presents a general method for quantifying the merits of closure designs through the use of reliability analysis, which is a probabilistic technique. The reference list offers a general source of information in the field of reliability and should allow the procedures discussed in this paper to be extended to other design safety applications

  18. Reliability Analysis Study of Digital Reactor Protection System in Nuclear Power Plant

    International Nuclear Information System (INIS)

    Guo, Xiao Ming; Liu, Tao; Tong, Jie Juan; Zhao, Jun

    2011-01-01

    Digital I and C systems are generally believed to improve a plant's safety and reliability. The reliability analysis of digital I and C systems has become a research hotspot. The traditional fault tree method is one means of quantifying digital I and C system reliability. A review of the digital protection system evaluation for the advanced nuclear power plant AP1000 clarifies both the fault tree application and the analysis process for digital system reliability. A typical digital protection system intended for an advanced reactor has been developed, and its reliability evaluation is necessary for design demonstration. The construction of this typical digital protection system is introduced in the paper, and the process of applying FMEA and fault tree analysis to its reliability evaluation is described. Reliability data and bypass logic modeling are two points given special attention in the paper. Because time-sequence and feedback factors are not obviously present in the reactor protection system, the dynamic features of the digital system are not discussed
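
    As background, here is a minimal Python sketch of the fault-tree arithmetic that such an evaluation rests on, for a hypothetical two-train protection logic with independent basic events; the structure and probabilities are invented and are not the AP1000 or the paper's model.

    def and_gate(*p):
        """Probability that all independent inputs fail."""
        out = 1.0
        for q in p:
            out *= q
        return out

    def or_gate(*p):
        """Probability that at least one independent input fails."""
        out = 1.0
        for q in p:
            out *= (1.0 - q)
        return 1.0 - out

    # Hypothetical basic events per train: processor, power supply, output relay.
    train_a = or_gate(1e-4, 5e-5, 2e-5)
    train_b = or_gate(1e-4, 5e-5, 2e-5)
    ccf = 1e-6                    # common-cause failure of both trains

    # Top event: both trains fail independently, or a common-cause failure occurs.
    top = or_gate(and_gate(train_a, train_b), ccf)
    print(f"top-event probability ~ {top:.3e}")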

  19. Near real-time GPS applications for tsunami early warning systems

    Directory of Open Access Journals (Sweden)

    C. Falck

    2010-02-01

    Full Text Available GPS (Global Positioning System) technology is widely used for positioning applications. Many of them have high requirements with respect to precision, reliability or fast product delivery, but usually not all at the same time, as is the case for early warning applications. The tasks of the GPS-based components within the GITEWS project (German Indonesian Tsunami Early Warning System; Rudloff et al., 2009) are to support the determination of sea levels (measured onshore and offshore) and to detect co-seismic land mass displacements with the lowest possible latency (design goal: first reliable results after 5 min). The completed system was designed to fulfil these tasks in near real-time, rather than to meet scientific research requirements. The obtained data products (movements of GPS antennas) support the warning process in different ways. The measurements from GPS instruments on buoys allow the earliest possible detection or confirmation of tsunami waves on the ocean. Onshore GPS measurements are made collocated with tide gauges or seismological stations and give information about co-seismic land mass movements as recorded, e.g., during the great Sumatra-Andaman earthquake of 2004 (Subarya et al., 2006). This information is important for separating tsunami-caused sea height movements from apparent sea height changes at tide gauge locations (sensor station movement) and also as additional information about earthquake mechanisms, as this is essential information for predicting a tsunami (Sobolev et al., 2007).

    This article gives an end-to-end overview of the GITEWS GPS-component system, from the GPS sensors (GPS receiver with GPS antenna and auxiliary systems, either onshore or offshore to the early warning centre displays. We describe how the GPS sensors have been installed, how they are operated and the methods used to collect, transfer and process the GPS data in near real-time. This includes the sensor system design, the communication

  20. Assessments and applications to enhance human reliability and reduce risk during less-than-full-power operations

    International Nuclear Information System (INIS)

    Hannaman, G.W.; Singh, A.

    1992-01-01

    Study of events, interviews with plant personnel, and applications of risk studies indicate that the risk of a potential accident during less-than-full-power (LTFP) operation is becoming a greater fraction of the risk as improvements are made to the full-power operations. Industry efforts have been increased to reduce risk and the cost of shutdown operations. These efforts consider the development and application of advanced tools to help utilities proactively identify issues and develop contingencies and interventions to enhance reliability and reduce risk of low-power operations at nuclear power plants. The role for human reliability assessments is to help improve utility outage planning to better achieve schedule and risk control objectives. Improvements are expected to include intervention tools to identify and reduce human error, definition of new instructional modules, and prioritization of risk reduction issues for operators. The Electric Power Research Institute is sponsoring a project to address the identification and quantification of factors that affect human reliability during LTFP operation of nuclear power plants. The results of this project are expected to promote the development of proactively applied interventions and contingencies for enhanced human reliability during shutdown operations

  1. Quantitative reliability assessment for safety critical system software

    International Nuclear Information System (INIS)

    Chung, Dae Won; Kwon, Soon Man

    2005-01-01

    An essential issue in the replacement of old analogue I and C systems with computer-based digital systems in nuclear power plants is the quantitative software reliability assessment. Software reliability models have been successfully applied in many industrial applications, but have the unfortunate drawback of requiring data from which one can formulate a model. Software developed for safety-critical applications is frequently unable to produce such data, for at least two reasons. First, the software is frequently one-of-a-kind; second, it rarely fails. Safety-critical software is normally expected to pass every unit test, producing precious little failure data. The basic premise of the rare-events approach is that well-tested software does not fail under normal routine and input signals, which means that failures must be triggered by unusual input data and computer states. Failure data obtained under such testing cases and testing times should be the basis of the quantitative reliability assessment. We present a quantitative reliability assessment methodology for safety-critical software in such rare-failure cases in this paper
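
    One standard way to put a number on software that simply does not fail in testing (a generic rare-event bound, not necessarily this paper's exact method): with zero failures observed over a total test time T, the 100(1 - alpha)% upper confidence bound on a constant failure rate is -ln(alpha)/T. A minimal Python sketch with assumed figures:

    import math

    def failure_rate_upper_bound(test_hours, alpha=0.05):
        """Upper confidence bound on a constant failure rate after zero failures.

        Equivalent to the chi-square (2 dof) bound; alpha = 0.05 gives 95%.
        """
        return -math.log(alpha) / test_hours

    T = 10_000.0                     # assumed failure-free test hours
    lam = failure_rate_upper_bound(T)
    print(f"95% upper bound: {lam:.2e} failures/hour; "
          f"1000-h reliability >= {math.exp(-lam * 1000):.4f}")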

  2. Design for Reliability and Robustness Tool Platform for Power Electronic Systems – Study Case on Motor Drive Applications

    DEFF Research Database (Denmark)

    Vernica, Ionut; Wang, Huai; Blaabjerg, Frede

    2018-01-01

    ... conventional approach, mainly based on failure statistics from the field, the reliability evaluation of the power devices is still a challenging task. In order to address the given problem, a MATLAB based reliability assessment tool has been developed. The Design for Reliability and Robustness (DfR2) tool ... allows the user to easily investigate the reliability performance of the power electronic components (or sub-systems) under given input mission profiles and operating conditions. The main concept of the tool and its framework are introduced, highlighting the reliability assessment procedure for power ... semiconductor devices. Finally, a motor drive application is implemented and the reliability performance of the power devices is investigated with the help of the DfR2 tool, and the resulting reliability metrics are presented. ...

  3. Reliability and radiation effects in compound semiconductors

    CERN Document Server

    Johnston, Allan

    2010-01-01

    This book discusses reliability and radiation effects in compound semiconductors, which have evolved rapidly during the last 15 years. Johnston's perspective in the book focuses on high-reliability applications in space, but his discussion of reliability is applicable to high reliability terrestrial applications as well. The book is important because there are new reliability mechanisms present in compound semiconductors that have produced a great deal of confusion. They are complex, and appear to be major stumbling blocks in the application of these types of devices. Many of the reliability problems that were prominent research topics five to ten years ago have been solved, and the reliability of many of these devices has been improved to the level where they can be used for ten years or more with low failure rates. There is also considerable confusion about the way that space radiation affects compound semiconductors. Some optoelectronic devices are so sensitive to damage in space that they are very difficu...

  4. Comparison and evaluation of two exome capture kits and sequencing platforms for variant calling.

    Science.gov (United States)

    Zhang, Guoqiang; Wang, Jianfeng; Yang, Jin; Li, Wenjie; Deng, Yutian; Li, Jing; Huang, Jun; Hu, Songnian; Zhang, Bing

    2015-08-05

    To promote the clinical application of next-generation sequencing, it is important to obtain accurate and consistent variants of target genomic regions at low cost. Ion Proton, the latest semiconductor-based sequencing instrument from Life Technologies, is designed to provide investigators with an inexpensive platform for human whole exome sequencing with a rapid turnaround time. However, few studies have comprehensively compared and evaluated the accuracy of variant calling between Ion Proton and Illumina sequencing platforms such as HiSeq 2000, the most popular sequencing platform for the human genome. The Ion Proton sequencer combined with the Ion TargetSeq Exome Enrichment Kit makes up TargetSeq-Proton, whereas SureSelect-HiSeq is based on the Agilent SureSelect Human All Exon v4 Kit and the HiSeq 2000 sequencer. Here, we sequenced exonic DNA from four human blood samples using both TargetSeq-Proton and SureSelect-HiSeq. We then called variants in the exonic regions that overlapped between the two exome capture kits (33.6 Mb). The rates of shared variant loci called by the two sequencing platforms ranged from 68.0% to 75.3% across the four samples, whereas the concordance of co-detected variant loci reached 99%. Sanger sequencing validation revealed that the validated rate of concordant single nucleotide polymorphisms (SNPs) (91.5%) was higher than that of SNPs specific to TargetSeq-Proton (60.0%) or specific to SureSelect-HiSeq (88.3%). With regard to 1-bp small insertions and deletions (InDels), the Sanger-validated rates of concordant variants (100.0%) and SureSelect-HiSeq-specific variants (89.6%) were higher than that of TargetSeq-Proton-specific variants (15.8%). In the sequencing of exonic regions, combining the two sequencing strategies (SureSelect-HiSeq and TargetSeq-Proton) increased the variant-calling specificity for concordant variant loci and the sensitivity for variant loci called by either platform. However, for the

  5. Reliable design of electronic equipment an engineering guide

    CERN Document Server

    Natarajan, Dhanasekharan

    2014-01-01

    This book explains reliability techniques with examples from electronics design for the benefit of engineers. It presents the application of de-rating, FMEA, overstress analyses and reliability improvement tests for designing reliable electronic equipment. Adequate information is provided for designing a computerized reliability database system to support the application of the techniques by designers. Pedantic terms and the associated mathematics of the reliability engineering discipline are excluded for the benefit of comprehension and practical application. This book offers excellent support

  6. Design for Reliability and Robustness Tool Platform for Power Electronic Systems – Study Case on Motor Drive Applications

    DEFF Research Database (Denmark)

    Vernica, Ionut; Wang, Huai; Blaabjerg, Frede

    2018-01-01

    Because of the high cost of failure, the reliability performance of power semiconductor devices is becoming a more and more important and stringent factor in many energy conversion applications. Thus, the need for appropriate reliability analysis of the power electronics emerges. Due to its...

  7. Reliability assessment platform for the power semiconductor devices - Study case on 3-phase grid-connected inverter application

    DEFF Research Database (Denmark)

    Vernica, Ionut; Ma, Ke; Blaabjerg, Frede

    2017-01-01

    ... provide valuable reliability information based on given mission profiles and system specification is first developed and its main concept is presented. In order to facilitate the test and access to the loading and lifetime information of the power devices, a novel mission profile based stress emulator ... experimental setup is proposed and designed. The link between the stress emulator setup and the reliability tool software is highlighted. Finally, the reliability assessment platform is demonstrated on a 3-phase grid-connected inverter application study case.

  8. Human reliability analysis

    International Nuclear Information System (INIS)

    Dougherty, E.M.; Fragola, J.R.

    1988-01-01

    The authors present a treatment of human reliability analysis incorporating an introduction to probabilistic risk assessment for nuclear power generating stations. They treat the subject according to the framework established for general systems theory. The book draws upon reliability analysis, psychology, human factors engineering, and statistics, integrating elements of these fields within a systems framework. It provides a history of human reliability analysis and includes examples of the application of the systems approach

  9. Single-variant and multi-variant trend tests for genetic association with next-generation sequencing that are robust to sequencing error.

    Science.gov (United States)

    Kim, Wonkuk; Londono, Douglas; Zhou, Lisheng; Xing, Jinchuan; Nato, Alejandro Q; Musolf, Anthony; Matise, Tara C; Finch, Stephen J; Gordon, Derek

    2012-01-01

    As with any new technology, next-generation sequencing (NGS) has potential advantages and potential challenges. One advantage is the identification of multiple causal variants for disease that might otherwise be missed by SNP-chip technology. One potential challenge is misclassification error (as with any emerging technology) and the issue of power loss due to multiple testing. Here, we develop an extension of the linear trend test for association that incorporates differential misclassification error and may be applied to any number of SNPs. We call the statistic the linear trend test allowing for error, applied to NGS, or LTTae,NGS. The observed data are phenotypes for unrelated cases and controls, coverage, and the number of putative causal variants for every individual at all SNPs. We simulate data considering multiple factors (disease mode of inheritance, genotype relative risk, causal variant frequency, sequence error rate in cases, sequence error rate in controls, number of loci, and others) and evaluate type I error rate and power for each vector of factor settings. We compare our results with two recently published NGS statistics. Also, we create a fictitious disease model based on downloaded 1000 Genomes data for 5 SNPs and 388 individuals, and apply our statistic to those data. We find that the LTTae,NGS maintains the correct type I error rate in all simulations (differential and non-differential error), while the other statistics show large inflation in type I error for lower coverage. Power is approximately the same for all three statistics in the presence of non-differential error. Application of our statistic to the 1000 Genomes data suggests that, for the data downloaded, there is a 1.5% sequence misclassification rate over all SNPs. Finally, application of the multi-variant form of LTTae,NGS shows high power for a number of simulation settings, although it can have
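
    For orientation, the following is a minimal Python sketch of the classical Cochran-Armitage linear trend test that LTTae,NGS extends; the genotype counts are invented, and the published statistic additionally models differential sequencing misclassification and coverage, which this sketch does not.

    import math

    def linear_trend_test(cases, controls, weights=(0, 1, 2)):
        """Classical Cochran-Armitage trend test for a 2 x k genotype table."""
        n = [c + d for c, d in zip(cases, controls)]
        R, N = sum(cases), sum(n)
        S = N - R
        T = sum(w * (r - ni * R / N) for w, r, ni in zip(weights, cases, n))
        mean_w = sum(w * ni for w, ni in zip(weights, n)) / N
        var_T = (R * S / (N - 1)) * (
            sum(w * w * ni for w, ni in zip(weights, n)) / N - mean_w ** 2
        )
        return T / math.sqrt(var_T)      # approximately N(0,1) under H0

    # Hypothetical counts for 0, 1, 2 copies of the putative causal allele:
    z = linear_trend_test(cases=(120, 60, 20), controls=(150, 40, 10))
    print(f"Z = {z:.2f}")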

  10. Enhancing thermal reliability of fiber-optic sensors for bio-inspired applications at ultra-high temperatures

    Science.gov (United States)

    Kang, Donghoon; Kim, Heon-Young; Kim, Dae-Hyun

    2014-07-01

    The rapid growth of bio-(inspired) sensors has led to an improvement in modern healthcare and human-robot systems in recent years. Higher levels of reliability and better flexibility, essential features of these sensors, are very much required in many application fields (e.g. applications at ultra-high temperatures). Fiber-optic sensors, and fiber Bragg grating (FBG) sensors in particular, are being widely studied as suitable sensors for improved structural health monitoring (SHM) due to their many merits. To enhance the thermal reliability of FBG sensors, thermal sensitivity, generally expressed as αf + ξf and considered a constant, should be investigated more precisely. For this purpose, the governing equation of FBG sensors is modified using differential derivatives between the wavelength shift and the temperature change in this study. Through a thermal test ranging from RT to 900 °C, the thermal sensitivity of FBG sensors is successfully examined and this guarantees thermal reliability of FBG sensors at ultra-high temperatures. In detail, αf + ξf has a non-linear dependence on temperature and varies from 6.0 × 10-6 °C-1 (20 °C) to 10.6 × 10-6 °C-1 (650 °C). Also, FBGs should be carefully used for applications at ultra-high temperatures due to signal disappearance near 900 °C.
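
    Locally, the sensitivity relates the relative Bragg wavelength shift to temperature via d(lambda)/lambda0 = (alpha_f + xi_f) dT, so with a temperature-dependent sensitivity the total shift is its integral. Below is a minimal Python sketch that adopts a linear interpolation between the two quoted sensitivity values as a stand-in calibration (an assumption, not the paper's fitted curve) and inverts it numerically.

    from scipy.optimize import brentq

    LAM0, T0 = 1550.0, 20.0    # assumed reference Bragg wavelength [nm] and temp [C]

    def k(T):
        """Thermal sensitivity (alpha_f + xi_f) [1/C]: linear interpolation of the
        quoted 6.0e-6 at 20 C and 10.6e-6 at 650 C (an assumed calibration)."""
        return 6.0e-6 + (10.6e-6 - 6.0e-6) * (T - 20.0) / (650.0 - 20.0)

    def wavelength_shift(T):
        """Integrated shift; the trapezoid is exact for a linear k(T)."""
        return LAM0 * 0.5 * (k(T0) + k(T)) * (T - T0)

    def temperature_from_shift(d_lam):
        """Invert the calibration numerically."""
        return brentq(lambda T: wavelength_shift(T) - d_lam, T0, 900.0)

    shift = wavelength_shift(400.0)          # simulate a measurement at 400 C
    print(f"shift = {shift:.3f} nm -> T = {temperature_from_shift(shift):.1f} C")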

  11. An efficient particle swarm approach for mixed-integer programming in reliability-redundancy optimization applications

    International Nuclear Information System (INIS)

    Santos Coelho, Leandro dos

    2009-01-01

    Reliability-redundancy optimization problems can involve the selection of components with multiple choices and redundancy levels that produce maximum benefits, subject to cost, weight, and volume constraints. Many classical mathematical methods have failed in handling nonconvexities and nonsmoothness in reliability-redundancy optimization problems. As an alternative to the classical optimization approaches, meta-heuristics have been given much attention by many researchers due to their ability to find almost globally optimal solutions. One of these meta-heuristics is particle swarm optimization (PSO). PSO is a population-based heuristic optimization technique inspired by the social behavior of bird flocking and fish schooling. This paper presents an efficient PSO algorithm based on a Gaussian distribution and a chaotic sequence (PSO-GC) to solve reliability-redundancy optimization problems. In this context, two examples of reliability-redundancy design problems are evaluated. Simulation results demonstrate that the proposed PSO-GC is a promising optimization technique. PSO-GC performs well for the two examples of mixed-integer programming in reliability-redundancy applications considered in this paper. The solutions obtained by PSO-GC are better than the previously best-known solutions available in the recent literature
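
    To give the flavour of the approach, here is a minimal Python sketch of one way to combine a logistic-map chaotic sequence (driving the inertia weight) with Gaussian draws (for the acceleration terms) in a PSO solving a toy redundancy-allocation problem; the problem data, parameter ranges and penalty are invented, and this is a sketch in the spirit of PSO-GC rather than the paper's exact algorithm or benchmarks.

    import numpy as np

    rng = np.random.default_rng(1)
    r = np.array([0.80, 0.85, 0.90])     # component reliabilities (assumed)
    cost = np.array([2.0, 3.0, 4.0])     # unit costs (assumed)
    BUDGET = 25.0

    def fitness(x):
        n = np.clip(np.round(x), 1, 6)               # integer redundancy levels
        rel = np.prod(1.0 - (1.0 - r) ** n)          # series system of parallel groups
        penalty = max(0.0, n @ cost - BUDGET)        # cost-constraint violation
        return rel - 10.0 * penalty                  # penalized objective (maximize)

    P, D, ITER = 20, 3, 200
    x = rng.uniform(1, 6, (P, D))
    v = np.zeros((P, D))
    pbest, pval = x.copy(), np.array([fitness(p) for p in x])
    g = pbest[np.argmax(pval)]
    z = 0.48                                         # logistic-map state

    for _ in range(ITER):
        z = 4.0 * z * (1.0 - z)                      # chaotic sequence
        w = 0.1 + 0.6 * z                            # chaotic inertia weight
        c1 = np.abs(rng.normal(0.0, 1.0, (P, D)))    # Gaussian acceleration terms
        c2 = np.abs(rng.normal(0.0, 1.0, (P, D)))
        v = w * v + c1 * (pbest - x) + c2 * (g - x)
        x = np.clip(x + v, 1, 6)
        val = np.array([fitness(p) for p in x])
        better = val > pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmax(pval)]

    n = np.clip(np.round(g), 1, 6).astype(int)
    print("levels:", n, "-> system reliability:", np.prod(1.0 - (1.0 - r) ** n))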

  12. The timed "up and go" test: Reliability and validity in persons with unilateral lower limb amputation

    NARCIS (Netherlands)

    Schoppen, Tanneke; Boonstra, Antje; Groothoff, JW; de Vries, J; Goeken, LNH; Eisma, Willem

    Objective: To determine the interrater and intrarater reliability and the validity of the Timed "up and go" test as a measure of physical mobility in elderly patients with an amputation of the lower extremity. Design: To test interrater reliability, the test was performed for two observers at

  13. Sampling inspection for the evaluation of time-dependent reliability of deteriorating systems under imperfect defect detection

    International Nuclear Information System (INIS)

    Kuniewski, Sebastian P.; Weide, Johannes A.M. van der; Noortwijk, Jan M. van

    2009-01-01

    The paper presents a sampling-inspection strategy for the evaluation of the time-dependent reliability of deteriorating systems, where the deterioration is assumed to initiate at random times and at random locations. After initiation, defects weaken the system's resistance. The system becomes unacceptable when at least one defect reaches a critical depth. The defects are assumed to initiate at random times modeled as event times of a non-homogeneous Poisson process (NHPP) and to develop according to a non-decreasing time-dependent gamma process. The intensity rate of the NHPP is assumed to be a combination of a known time-dependent shape function and an unknown proportionality constant. When sampling inspection (i.e. inspection of a selected subregion of the system) reveals a number of defect initiations, Bayes' theorem can be used to update prior beliefs about the proportionality constant of the NHPP intensity rate to the posterior distribution. On the basis of a time- and space-dependent Poisson process for the defect initiation, an adaptive Bayesian model for sampling inspection is developed to determine the predictive probability distribution of the time to failure. A potential application is, for instance, the inspection of a large vessel or pipeline suffering pitting/localized corrosion in the oil industry. The possibility of imperfect defect detection is also incorporated in the model.
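
    A minimal Monte Carlo sketch of the forward model described above follows: defect initiations follow an NHPP with intensity c*w(t) (simulated by thinning) and each defect then grows by stationary gamma increments until one reaches the critical depth. All numerical values are assumptions, and the paper's Bayesian updating of c from sampling inspection is omitted.

    import numpy as np

    rng = np.random.default_rng(0)
    T_END, DT = 20.0, 0.25        # horizon [yr] and growth-time step
    C_RATE = 0.2                  # proportionality constant c of the NHPP intensity
    CRIT = 5.0                    # critical defect depth [mm]
    A, B = 1.2, 1.0               # gamma-process shape rate [1/yr] and scale [mm]

    def shape_fn(t):              # assumed known time-dependent shape function w(t)
        return 1.0 + 0.05 * t

    def failure_time():
        """One realization: NHPP initiations (thinning) + gamma-process growth."""
        lam_max = C_RATE * shape_fn(T_END)           # intensity bound for thinning
        t, starts = 0.0, []
        while True:
            t += rng.exponential(1.0 / lam_max)
            if t > T_END:
                break
            if rng.uniform() < C_RATE * shape_fn(t) / lam_max:
                starts.append(t)
        crossings = []
        for s in starts:                             # grow each defect after onset
            depth, age = 0.0, s
            while age < T_END and depth < CRIT:
                depth += rng.gamma(A * DT, B)        # stationary gamma increment
                age += DT
            if depth >= CRIT:
                crossings.append(age)
        return min(crossings) if crossings else np.inf

    times = np.array([failure_time() for _ in range(2000)])
    print("P(failure within 20 yr) ~", np.mean(times <= T_END))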

  14. G2LC: Resources Autoscaling for Real Time Bioinformatics Applications in IaaS

    Directory of Open Access Journals (Sweden)

    Rongdong Hu

    2015-01-01

    Full Text Available Cloud computing has started to change the way bioinformatics research is carried out. Researchers who have taken advantage of this technology can process larger amounts of data and speed up scientific discovery. The variability in data volume results in variable computing requirements. Therefore, bioinformatics researchers are pursuing more reliable and efficient methods for conducting sequencing analyses. This paper proposes an automated resource provisioning method, G2LC, for bioinformatics applications in IaaS. It enables applications to output their results in real time. Its main purpose is to guarantee application performance, while improving resource utilization. Real BLAST sequence-search data are used to evaluate the effectiveness of G2LC. Experimental results show that G2LC guarantees the application performance, while saving up to 20.14% of resources.

  15. Contrasting roles of the ABCG2 Q141K variant in prostate cancer

    Energy Technology Data Exchange (ETDEWEB)

    Sobek, Kathryn M. [Department of Urology, University of Pittsburgh School of Medicine, Pittsburgh, PA (United States); Cummings, Jessica L. [Department of Urology, University of Pittsburgh School of Medicine, Pittsburgh, PA (United States); Department of Critical Care Medicine, University of Pittsburgh, Pittsburgh, PA (United States); Bacich, Dean J. [Department of Urology, University of Pittsburgh School of Medicine, Pittsburgh, PA (United States); Department of Urology, University of Texas Health Science Center, San Antonio, TX (United States); O’Keefe, Denise S., E-mail: OKeefeD@uthscsa.edu [Department of Urology, University of Pittsburgh School of Medicine, Pittsburgh, PA (United States); Department of Urology, University of Texas Health Science Center, San Antonio, TX (United States)

    2017-05-01

    ABCG2 is a membrane transport protein that effluxes growth-promoting molecules, such as folates and dihydrotestosterone, as well as chemotherapeutic agents. Therefore it is important to determine how variants of ABCG2 affect the transporter function in order to determine whether modified treatment regimens may be necessary for patients harboring ABCG2 variants. Previous studies have demonstrated an association between the ABCG2 Q141K variant and overall survival after a prostate cancer diagnosis. We report here that in patients with recurrent prostate cancer, those who carry the ABCG2 Q141K variant had a significantly shorter time to PSA recurrence post-prostatectomy than patients homozygous for wild-type ABCG2 (P=0.01). Transport studies showed that wild-type ABCG2 was able to efflux more folic acid than the Q141K variant (P<0.002), suggesting that retained tumoral folate contributes to the decreased time to PSA recurrence in the Q141K variant patients. In a seemingly conflicting study, it was previously reported that docetaxel-treated Q141K variant prostate cancer patients have a longer survival time. We found this may be due to less efficient docetaxel efflux in cells with the Q141K variant versus wild-type ABCG2. In human prostate cancer tissues, confocal microscopy revealed that all genotypes had a mixture of cytoplasmic and plasma membrane staining, with noticeably less staining in the two homozygous KK patients. In conclusion, the Q141K variant plays contrasting roles in prostate cancer: 1) by decreasing folate efflux, increased intracellular folate levels result in enhanced tumor cell proliferation and therefore time to recurrence decreases; and 2) in patients treated with docetaxel, by decreasing its efflux, intratumoral docetaxel levels and tumor cell drug sensitivity increase and therefore patient survival time increases. Taken together, these data suggest that a patient's ABCG2 genotype may be important when determining a personalized treatment

  16. Towards a reliable animal model of migraine

    DEFF Research Database (Denmark)

    Olesen, Jes; Jansen-Olesen, Inger

    2012-01-01

    The pharmaceutical industry shows a decreasing interest in the development of drugs for migraine. One of the reasons for this could be the lack of reliable animal models for studying the effect of acute and prophylactic migraine drugs. The infusion of glyceryl trinitrate (GTN) is the best validated...... and most studied human migraine model. Several attempts have been made to transfer this model to animals. The different variants of this model are discussed as well as other recent models....

  17. Galileo Timing Applications

    Science.gov (United States)

    2007-11-01

    ... public bodies such as universities and research institutes. The user community analysis also includes a market analysis performed by a specialized company, covering companies and public institutions (e.g., universities, research laboratories) that work in several different application domains. [Table fragment: summary of application domains for the use of time in cryptography, across B2G, B2B and B2C applications, e.g. military waypoints, judicial reports, construction.]

  18. Time-variant coherence between heart rate variability and EEG activity in epileptic patients: an advanced coupling analysis between physiological networks

    International Nuclear Information System (INIS)

    Piper, D; Schiecke, K; Pester, B; Witte, H; Benninger, F; Feucht, M

    2014-01-01

    Time-variant coherence analysis between the heart rate variability (HRV) and the channel-related envelopes of adaptively selected EEG components was used as an indicator of (correlative) couplings between the central autonomic network (CAN) and the epileptic network before, during and after epileptic seizures. Two groups of patients were investigated, one with left and one with right hemispheric temporal lobe epilepsy. The individual EEG components were extracted by a signal-adaptive approach, multivariate empirical mode decomposition, and the envelopes of each resulting intrinsic mode function (IMF) were computed using the Hilbert transform. Two IMFs whose envelopes were strongly correlated with the HRV's low-frequency oscillation (HRV-LF; ≈0.1 Hz) before and after the seizure were identified. The frequency ranges of these IMFs correspond to the EEG delta band. The time-variant coherence was statistically quantified, and tensor decomposition of the time-frequency coherence maps was applied to explore the topography-time-frequency characteristics of the coherence analysis. The results support the hypothesis that couplings exist between the CAN, which controls the cardiovascular-cardiorespiratory system, and the 'epileptic neural network'. Additionally, our results confirm the hypothesis of a right hemispheric lateralization of the sympathetic cardiac control of the HRV-LF. (paper)
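
    A minimal Python sketch of the core coupling measure on synthetic signals follows: coherence between an HRV-like series and the Hilbert envelope of an oscillatory component. The study itself derives the components by multivariate EMD, evaluates the coherence in a time-resolved fashion and decomposes the resulting maps by tensor methods; all of that is omitted here.

    import numpy as np
    from scipy.signal import hilbert, coherence

    FS = 4.0                                   # common resampling rate [Hz]
    t = np.arange(0, 600, 1 / FS)              # ten minutes of data
    rng = np.random.default_rng(2)

    # Synthetic HRV-like series with a ~0.1 Hz LF oscillation, and a delta-band-like
    # component whose amplitude is modulated by the same rhythm (built-in coupling).
    lf = np.sin(2 * np.pi * 0.1 * t)
    hrv = lf + 0.5 * rng.standard_normal(t.size)
    imf = (1.0 + 0.8 * lf) * np.sin(2 * np.pi * 1.5 * t)

    # Envelope of the component via the Hilbert transform, then envelope-HRV coherence.
    env = np.abs(hilbert(imf))
    f, coh = coherence(hrv, env, fs=FS, nperseg=256)
    print(f"coherence at ~0.1 Hz: {coh[np.argmin(np.abs(f - 0.1))]:.2f}")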

  19. Reliability modeling of a hard real-time system using the path-space approach

    International Nuclear Information System (INIS)

    Kim, Hagbae

    2000-01-01

    A hard real-time system, such as a fly-by-wire system, fails catastrophically (e.g. losing stability) if its control inputs are not updated by its digital controller computer within a certain timing constraint called the hard deadline. To assess and validate such systems' reliability using a semi-Markov model that explicitly contains the deadline information, we propose a path-space approach deriving upper and lower bounds on the probability of system failure. These bounds are derived using only simple parameters, and they are especially suitable for highly reliable systems which should recover quickly. Analytical bounds are derived for the commonly encountered exponential and Weibull failure distributions, and have proven effective in numerical examples, while considering three repair strategies: repair-as-good-as-new, repair-as-good-as-old, and repair-better-than-old

  1. A combined Importance Sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models

    International Nuclear Information System (INIS)

    Echard, B.; Gayton, N.; Lemaire, M.; Relun, N.

    2013-01-01

    Applying reliability methods to a complex structure is often delicate for two main reasons. First, such a structure is, fortunately, designed with codified rules leading to a large safety margin, which means that failure is a low-probability event. Such a probability level is difficult to assess efficiently. Second, the structure's mechanical behaviour is modelled numerically in an attempt to reproduce the real response, and the numerical model tends to become more and more time-demanding as its complexity is increased to improve accuracy and to capture particular mechanical behaviour. As a consequence, performing a large number of model computations cannot be considered as a way to assess the failure probability. To overcome these issues, this paper proposes an original and easily implementable method called AK-IS, for active learning and Kriging-based Importance Sampling. This new method is based on the AK-MCS algorithm previously published by Echard et al. [AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Structural Safety 2011;33(2):145–54]. It associates the Kriging metamodel and its advantageous stochastic property with the Importance Sampling method to assess small failure probabilities. It enables the correction or validation of the FORM approximation with only very few mechanical model computations. The efficiency of the method is first demonstrated on two academic applications. It is then applied to assess the reliability of a challenging aerospace case study subjected to fatigue.
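
    A minimal Python sketch of the active-learning Kriging loop that AK-IS builds on is shown below, in its AK-MCS form on a toy limit state, with the U learning function and the U >= 2 stopping rule; the importance-sampling step of the full method, centred on the FORM design point, is omitted, and the limit state and settings are invented.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(3)

    def g(x):                                  # toy limit state: failure when g < 0
        return 7.0 - x[:, 0] ** 2 - 2.0 * x[:, 1]

    X_mc = rng.standard_normal((20_000, 2))    # Monte Carlo population
    idx = rng.choice(len(X_mc), 12, replace=False)
    X, y = X_mc[idx], g(X_mc[idx])             # small initial design of experiments

    while True:
        gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True).fit(X, y)
        mu, sd = gp.predict(X_mc, return_std=True)
        U = np.abs(mu) / np.maximum(sd, 1e-12)          # U learning function
        if U.min() >= 2.0:                              # sign of g trusted everywhere
            break
        best = np.argmin(U)                             # point most at risk of
        X = np.vstack([X, X_mc[best]])                  # misclassification: evaluate
        y = np.append(y, g(X_mc[best:best + 1]))        # it and refit

    print(f"Pf ~ {np.mean(mu < 0):.4f} with {len(X)} limit-state evaluations")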

  2. The transient M/G/1/0 queue: some bounds and approximations for light traffic with application to reliability

    Directory of Open Access Journals (Sweden)

    J. Ben Atkinson

    1995-01-01

    We consider the transient analysis of the M/G/1/0 queue, for which Pn(t) denotes the probability that there are no customers in the system at time t, given that there are n (n = 0, 1) customers in the system at time 0. The analysis, which is based upon coupling theory, leads to simple bounds on Pn(t) for the M/G/1/0 and M/PH/1/0 queues and improved bounds for the special case M/Er/1/0. Numerical results are presented for various values of the mean arrival rate λ to demonstrate the increasing accuracy of approximations based upon the above bounds in light traffic, i.e., as λ→0. An important area of application for the M/G/1/0 queue is as a reliability model for a single repairable component. Since most practical reliability problems have λ values that are small relative to the mean service rate, the approximations are potentially useful in that context. A duality relation between the M/G/1/0 and GI/M/1/0 queues is also described.
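
    In the reliability reading of this queue, a single component alternates exponential up-times (rate λ) with generally distributed repair times, and P0(t) is the probability that the component is up at time t. A brute-force Monte Carlo check of that quantity, with a lognormal repair time as an arbitrary stand-in for G, might look like this:

        import numpy as np

        rng = np.random.default_rng(1)

        def p0(t, lam=0.1, n_runs=20_000):
            """Estimate P(component up at time t | up and as-new at time 0)."""
            up_at_t = 0
            for _ in range(n_runs):
                clock, up = 0.0, True
                while True:
                    dwell = rng.exponential(1 / lam) if up else rng.lognormal(0.0, 0.5)
                    clock += dwell
                    if clock >= t:          # the state during this dwell covers time t
                        break
                    up = not up
                up_at_t += up
            return up_at_t / n_runs

        for t in (1.0, 5.0, 20.0):
            print(f"P0({t:g}) ≈ {p0(t):.3f}")  # tends to MTBF/(MTBF+MTTR) as t grows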

  3. On the reliability of seasonal climate forecasts

    Science.gov (United States)

    Weisheimer, A.; Palmer, T. N.

    2014-01-01

    Seasonal climate forecasts are being used increasingly across a range of application sectors. A recent UK governmental report asked: how good are seasonal forecasts on a scale of 1–5 (where 5 is very good), and how good can we expect them to be in 30 years' time? Seasonal forecasts are made from ensembles of integrations of numerical models of climate. We argue that ‘goodness’ should be assessed first and foremost in terms of the probabilistic reliability of these ensemble-based forecasts; reliable inputs are essential for any forecast-based decision-making. We propose that a ‘5’ should be reserved for systems that are not only reliable overall, but where, in particular, small ensemble spread is a reliable indicator of low ensemble forecast error. We study the reliability of regional temperature and precipitation forecasts of the current operational seasonal forecast system of the European Centre for Medium-Range Weather Forecasts, universally regarded as one of the world-leading operational institutes producing seasonal climate forecasts. A wide range of ‘goodness’ rankings, depending on region and variable (with summer forecasts of rainfall over Northern Europe performing exceptionally poorly), is found. Finally, we discuss the prospects of reaching ‘5’ across all regions and variables in 30 years' time. PMID:24789559
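
    Reliability in this probabilistic sense is commonly checked with a reliability (calibration) diagram: forecast probabilities are binned and compared with observed event frequencies. A minimal sketch on synthetic forecasts (the miscalibration model below is an arbitrary assumption for illustration, not ECMWF data):

        import numpy as np

        rng = np.random.default_rng(2)
        p_fc = rng.uniform(0, 1, 5000)                   # forecast event probabilities
        outcome = rng.uniform(0, 1, 5000) < p_fc**1.2    # a mildly miscalibrated system

        bins = np.linspace(0, 1, 11)
        idx = np.digitize(p_fc, bins) - 1
        for b in range(10):
            sel = idx == b
            if sel.any():
                print(f"forecast {bins[b]:.1f}-{bins[b + 1]:.1f}: "
                      f"observed {outcome[sel].mean():.2f}")
        # A reliable system prints observed frequencies close to the bin centres.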

  4. LED system reliability

    NARCIS (Netherlands)

    Driel, W.D. van; Yuan, C.A.; Koh, S.; Zhang, G.Q.

    2011-01-01

    This paper presents our effort to predict the system reliability of Solid State Lighting (SSL) applications. A SSL system is composed of a LED engine with micro-electronic driver(s) that supplies power to the optic design. Knowledge of system level reliability is not only a challenging scientific

  5. Test-Retest Reliability of Standard and Emotional Stroop Tasks: An Investigation of Color-Word and Picture-Word Versions

    Science.gov (United States)

    Strauss, Gregory P.; Allen, Daniel N.; Jorgensen, Melinda L.; Cramer, Stacey L.

    2005-01-01

    Previous studies have examined the reliability of scores derived from various Stroop tasks. However, few studies have compared reliability of more recently developed Stroop variants such as emotional Stroop tasks to standard versions of the Stroop. The current study developed four different single-stimulus Stroop tasks and compared test-retest…

  6. Reliability, validity and description of timed performance of the Jebsen-Taylor Test in patients with muscular dystrophies.

    Science.gov (United States)

    Artilheiro, Mariana Cunha; Fávero, Francis Meire; Caromano, Fátima Aparecida; Oliveira, Acary de Souza Bulle; Carvas, Nelson; Voos, Mariana Callil; Sá, Cristina Dos Santos Cardoso de

    2017-12-08

    The Jebsen-Taylor Test evaluates upper limb function by measuring timed performance on everyday activities. The test is used to assess and monitor the progression of patients with Parkinson disease, cerebral palsy, stroke and brain injury. The aims were to analyze the reliability, internal consistency and validity of the Jebsen-Taylor Test in people with Muscular Dystrophy, and to describe and classify the upper limb timed performance of people with Muscular Dystrophy. Fifty patients with Muscular Dystrophy were assessed. Non-dominant and dominant upper limb performances on the Jebsen-Taylor Test were filmed. Two raters evaluated timed performance for inter-rater reliability analysis. Test-retest reliability was investigated by using intraclass correlation coefficients. Internal consistency was assessed using the Cronbach alpha. Construct validity was assessed by comparing the Jebsen-Taylor Test with the Performance of Upper Limb. The internal consistency of the Jebsen-Taylor Test was good (Cronbach's α = 0.98). Inter-rater reliability was very high (ICC 0.903-0.999), except for writing (ICC 0.772-1.000). Strong correlations between the Jebsen-Taylor Test and the Performance of Upper Limb Module were found (rho = -0.712). The Jebsen-Taylor Test is a reliable and valid measure of timed performance for people with Muscular Dystrophy.
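
    Both statistics quoted above can be reproduced from a subjects-by-raters score matrix. A sketch on synthetic timings (the Shrout-Fleiss ICC(2,1) form, two-way random effects with absolute agreement for a single rater, is assumed here; the abstract does not state which ICC model the authors used):

        import numpy as np

        rng = np.random.default_rng(3)
        true = rng.normal(30, 8, size=50)                 # 50 subjects' 'true' times
        X = np.column_stack([true + rng.normal(0, 2, 50) for _ in range(2)])  # 2 raters

        def cronbach_alpha(X):
            k = X.shape[1]
            return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                                  / X.sum(axis=1).var(ddof=1))

        def icc_2_1(X):
            """ICC(2,1): two-way random effects, absolute agreement, single rater."""
            n, k = X.shape
            grand = X.mean()
            ms_r = k * ((X.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
            ms_c = n * ((X.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # raters
            sse = ((X - X.mean(axis=1, keepdims=True)
                      - X.mean(axis=0, keepdims=True) + grand) ** 2).sum()
            ms_e = sse / ((n - 1) * (k - 1))
            return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

        print(f"Cronbach's alpha: {cronbach_alpha(X):.3f}")
        print(f"ICC(2,1): {icc_2_1(X):.3f}")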

  7. Reliable RANSAC Using a Novel Preprocessing Model

    Directory of Open Access Journals (Sweden)

    Xiaoyan Wang

    2013-01-01

    Geometric assumption and verification with RANSAC has become a crucial step in local feature matching due to its wide application in biomedical feature analysis and vision computing. However, conventional RANSAC is very time-consuming due to redundant sampling, especially when dealing with numerous matching pairs. This paper presents a novel preprocessing model to extract a reduced set of reliable correspondences from the initial matching dataset. Both geometric model generation and verification are carried out on this reduced set, which leads to considerable speedups. Building on this, the paper proposes a reliable RANSAC framework using the preprocessing model, which was implemented and verified using Harris and SIFT features, respectively. Compared with traditional RANSAC, experimental results show that our method is more efficient.

  8. Multi-state time-varying reliability evaluation of smart grid with flexible demand resources utilizing Lz transform

    Science.gov (United States)

    Jia, Heping; Jin, Wende; Ding, Yi; Song, Yonghua; Yu, Dezhao

    2017-01-01

    With the expanding proportion of renewable energy generation and the development of smart grid technologies, flexible demand resources (FDRs) have been utilized as an approach to accommodating renewable energies. However, the multiple uncertainties of FDRs may influence the reliable and secure operation of the smart grid. Multi-state reliability models for a single FDR and for aggregated FDRs are proposed in this paper, accounting for the responsive abilities of FDRs and random failures of both FDR devices and the information system. The proposed reliability evaluation technique is based on the Lz transform method, which can formulate time-varying reliability indices. A modified IEEE-RTS is used to illustrate the proposed technique.
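
    The Lz transform represents each multi-state element as a set of (performance, probability) pairs at a given time t, and composition multiplies probabilities while combining performances. A minimal sketch with invented numbers for two FDRs (the states, probabilities and the sum-combination rule are illustrative assumptions, not the paper's models):

        from itertools import product

        def lz_compose(a, b, combine):
            """Compose two Lz-transform terms given as {performance: probability}."""
            out = {}
            for (ga, pa), (gb, pb) in product(a.items(), b.items()):
                g = combine(ga, gb)
                out[g] = out.get(g, 0.0) + pa * pb
            return out

        # Two FDRs at some time t; performance = curtailable load in MW.
        fdr1 = {0: 0.1, 5: 0.9}           # fails to respond with probability 0.1
        fdr2 = {0: 0.2, 3: 0.5, 6: 0.3}   # three response states

        agg = lz_compose(fdr1, fdr2, combine=lambda x, y: x + y)  # aggregated response
        print(agg)
        print("P(response >= 6 MW) =", sum(p for g, p in agg.items() if g >= 6))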

  9. The reliability of vertical jump tests between the Vertec and My Jump phone application.

    Science.gov (United States)

    Yingling, Vanessa R; Castro, Dimitri A; Duong, Justin T; Malpartida, Fiorella J; Usher, Justin R; O, Jenny

    2018-01-01

    The vertical jump is used to estimate sports performance capabilities and physical fitness in children, elderly, non-athletic and injured individuals. Different jump techniques and measurement tools are available to assess vertical jump height and peak power; however, their use is limited by access to laboratory settings, excessive cost and/or time constraints, thus making these tools oftentimes unsuitable for field assessment. A popular field test uses the Vertec and the Sargent vertical jump with countermovement; however, new low-cost, easy-to-use tools are becoming available, including the My Jump iOS mobile application (app). The purpose of this study was to assess the reliability of the My Jump relative to values obtained by the Vertec for the Sargent stand and reach vertical jump (VJ) test. One hundred and thirty-five healthy participants aged 18-39 years (94 males, 41 females) completed three maximal Sargent VJ with countermovement that were simultaneously measured using the Vertec and the My Jump. Jump heights were quantified for each jump and peak power was calculated using the Sayers equation. Four separate ICC estimates and their 95% confidence intervals were used to assess reliability. Two analyses (with jump height and calculated peak power as the dependent variables, respectively) were based on a single rater, consistency, two-way mixed-effects model, while two others (with jump height and calculated peak power as the dependent variables, respectively) were based on a single rater, absolute agreement, two-way mixed-effects model. Moderate to excellent reliability relative to the degree of consistency between the Vertec and My Jump values was found for jump height (ICC = 0.813; 95% CI [0.747-0.863]) and calculated peak power (ICC = 0.926; 95% CI [0.897-0.947]). However, poor to good reliability relative to absolute agreement for VJ height (ICC = 0.665; 95% CI [0.050-0.859]) and poor to excellent reliability relative to absolute agreement for peak power
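
    For reference, the peak-power step mentioned above is a single linear equation; a sketch using the commonly cited Sayers coefficients (the jumper's numbers below are hypothetical):

        def sayers_peak_power(jump_height_cm: float, body_mass_kg: float) -> float:
            """Sayers estimate of peak power (W) from jump height and body mass."""
            return 60.7 * jump_height_cm + 45.3 * body_mass_kg - 2055.0

        print(f"{sayers_peak_power(45.0, 75.0):.0f} W")  # hypothetical 75 kg jumper, 45 cm jump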

  10. V2X application-reliability analysis of data-rate and message-rate congestion control algorithms

    NARCIS (Netherlands)

    Math, C. Belagal; Li, H.; Heemstra de Groot, S.M.; Niemegeers, I.G.M.M.

    2017-01-01

    Intelligent Transportation Systems (ITS) require Vehicle-to-Everything (V2X) communication. In dense traffic, the communication channel may become congested, impairing the reliability of the ITS safety applications. Therefore, European Telecommunications Standard Institute (ETSI) demands

  11. Reliability analysis of neutron transport simulation using Monte Carlo method

    International Nuclear Information System (INIS)

    Souza, Bismarck A. de; Borges, Jose C.

    1995-01-01

    This work presents a statistical and reliability analysis of data obtained by computer simulation of the neutron transport process, using the Monte Carlo method. A general description of the method and its applications is presented. Several simulations, corresponding to slowing-down and shielding problems, have been accomplished. The influence of the physical dimensions of the materials and of the sample size on the reliability level of the results was investigated. The objective was to optimize the sample size, in order to obtain reliable results while optimizing computation time. (author). 5 refs, 8 figs

  12. Application of the error propagation theory in estimates of static formation temperatures in geothermal and petroleum boreholes

    International Nuclear Information System (INIS)

    Verma, Surendra P.; Andaverde, Jorge; Santoyo, E.

    2006-01-01

    We used the error propagation theory to calculate uncertainties in static formation temperature estimates in geothermal and petroleum wells from three widely used methods (line-source or Horner method; spherical and radial heat flow method; and cylindrical heat source method). Although these methods commonly use the ordinary least-squares linear regression model considered in this study, we also evaluated two variants of a weighted least-squares linear regression model for the actual relationship between the bottom-hole temperature and the corresponding time functions. Equations based on the error propagation theory were derived for estimating uncertainties in the time function of each analytical method. These uncertainties, in conjunction with those on bottom-hole temperatures, were used to estimate the individual weighting factors required for applying the two variants of the weighted least-squares regression model. Standard deviations and 95% confidence limits of the intercept were calculated for both types of linear regression. Applications showed that static formation temperatures computed with the spherical and radial heat flow method were generally greater (at the 95% confidence level) than those from the other two methods under study. When typical measurement errors of 0.25 h in time and 5 °C in bottom-hole temperature were assumed for the weighted least-squares model, the uncertainties in the estimated static formation temperatures were greater than those for the ordinary least-squares model. However, if these errors were smaller (about 1% in time and 0.5% in temperature measurements), the weighted least-squares linear regression model would generally provide smaller uncertainties for the estimated temperatures than the ordinary least-squares linear regression model. Therefore, the weighted model would be statistically correct and more appropriate for such applications. We also suggest that at least 30 precise and accurate BHT and time measurements along with
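
    The contrast between the two regression models is easy to see on a synthetic Horner plot, where the static formation temperature is the intercept of bottom-hole temperature versus the Horner time function and its uncertainty comes from the (weighted) least-squares covariance. The circulation time, noise level and temperatures below are invented for illustration:

        import numpy as np

        t_c = 5.0                                      # circulation time, h
        dt = np.array([6.0, 12.0, 18.0, 24.0, 30.0])   # shut-in times, h
        x = np.log((t_c + dt) / dt)                    # Horner time function
        bht = 120.0 - 25.0 * x + np.random.default_rng(4).normal(0, 0.5, x.size)

        def fit(x, y, w=None):
            """Least-squares line y = b0 + b1*x; returns (coefficients, std errors)."""
            W = np.diag(w) if w is not None else np.eye(x.size)
            A = np.column_stack([np.ones_like(x), x])
            cov = np.linalg.inv(A.T @ W @ A)
            beta = cov @ A.T @ W @ y
            if w is None:                              # OLS: scale by residual variance
                resid = y - A @ beta
                cov = cov * (resid @ resid) / (x.size - 2)
            return beta, np.sqrt(np.diag(cov))

        b_ols, se_ols = fit(x, bht)
        b_wls, se_wls = fit(x, bht, w=np.full_like(x, 1 / 0.5**2))  # sigma_T = 0.5 °C
        print(f"OLS: T_sf = {b_ols[0]:.1f} ± {se_ols[0]:.1f} °C")   # intercept at x -> 0
        print(f"WLS: T_sf = {b_wls[0]:.1f} ± {se_wls[0]:.1f} °C")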

  13. Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues

    Energy Technology Data Exchange (ETDEWEB)

    Ronald Laurids Boring

    2010-11-01

    This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.

  14. Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues

    International Nuclear Information System (INIS)

    Boring, Ronald Laurids

    2010-01-01

    This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.

  15. Multiplexed enrichment of rare DNA variants via sequence-selective and temperature-robust amplification

    Science.gov (United States)

    Wu, Lucia R.; Chen, Sherry X.; Wu, Yalei; Patel, Abhijit A.; Zhang, David Yu

    2018-01-01

    Rare DNA-sequence variants hold important clinical and biological information, but existing detection techniques are expensive, complex, allele-specific, or don’t allow for significant multiplexing. Here, we report a temperature-robust polymerase-chain-reaction method, which we term blocker displacement amplification (BDA), that selectively amplifies all sequence variants, including single-nucleotide variants (SNVs), within a roughly 20-nucleotide window by 1,000-fold over wild-type sequences. This allows for easy detection and quantitation of hundreds of potential variants originally at ≤0.1% in allele frequency. BDA is compatible with inexpensive thermocycler instrumentation and employs a rationally designed competitive hybridization reaction to achieve comparable enrichment performance across annealing temperatures ranging from 56 °C to 64 °C. To show the sequence generality of BDA, we demonstrate enrichment of 156 SNVs and the reliable detection of single-digit copies. We also show that the BDA detection of rare driver mutations in cell-free DNA samples extracted from the blood plasma of lung-cancer patients is highly consistent with deep sequencing using molecular lineage tags, with a receiver operator characteristic accuracy of 95%. PMID:29805844

  16. Proceeding of 35th domestic symposium on applications of structural reliability and risk assessment methods to nuclear power plants

    International Nuclear Information System (INIS)

    2005-06-01

    As the 35th domestic symposium of the Atomic Energy Research Committee of the Japan Welding Engineering Society, the symposium was held under the title 'Applications of structural reliability/risk assessment methods to nuclear energy'. Six speakers gave lectures titled 'Structural reliability and risk assessment methods', 'Risk-informed regulation of US nuclear energy and the role of probabilistic risk assessment', 'Reliability and risk assessment methods in chemical plants', 'Practical structural design methods based on reliability in architectural and civil areas', 'Maintenance activities based on reliability in thermal power plants' and 'LWR maintenance strategies based on Probabilistic Fracture Mechanics'. (T. Tanaka)

  17. Ultra-Reliable Communication in 5G Wireless Systems

    DEFF Research Database (Denmark)

    Popovski, Petar

    2014-01-01

    Wireless 5G systems will not only be “4G, but faster”. One of the novel features discussed in relation to 5G is Ultra-Reliable Communication (URC), an operation mode not present in today’s wireless systems. URC refers to the provision of a certain level of communication service almost 100% of the time. (…) Example URC applications include reliable cloud connectivity, critical connections for industrial automation and reliable wireless coordination among vehicles. This paper puts forward a systematic view on URC in 5G wireless systems. It starts by analyzing the fundamental mechanisms that constitute (…)-term URC (URC-S). The second dimension is represented by the type of reliability impairment that can affect the communication reliability in a given scenario. The main objective of this paper is to create the context for defining and solving the new engineering problems posed by URC in 5G.

  18. Quantitative determination of casein genetic variants in goat milk: Application in Girgentana dairy goat breed.

    Science.gov (United States)

    Montalbano, Maria; Segreto, Roberta; Di Gerlando, Rosalia; Mastrangelo, Salvatore; Sardina, Maria Teresa

    2016-02-01

    The study was conducted to develop a high-performance liquid chromatography (HPLC) method to quantify casein genetic variants (αs2-, β-, and κ-casein) in the milk of homozygous individuals of the Girgentana goat breed. For calibration experiments, pure genetic variants were extracted from individual milk samples of animals with known genotypes. The described HPLC approach was precise, accurate and highly suitable for quantification of goat casein genetic variants in homozygous individuals. The amount of each casein per allele was: αs2-casein A = 2.9 ± 0.8 g/L and F = 1.8 ± 0.4 g/L; β-casein C = 3.0 ± 0.8 g/L and C1 = 2.0 ± 0.7 g/L; and κ-casein A = 1.6 ± 0.3 g/L and B = 1.1 ± 0.2 g/L. A good correlation was found between the quantities of αs2-casein genetic variants A and F, and of β-casein C and C1, and those obtained with a previously described method. The most important result was obtained for κ-casein because, until now, no data had been available on the quantification of single genetic variants of this protein.

  19. Reliability over time of EEG-based mental workload evaluation during Air Traffic Management (ATM) tasks.

    Science.gov (United States)

    Arico, Pietro; Borghini, Gianluca; Di Flumeri, Gianluca; Colosimo, Alfredo; Graziani, Ilenia; Imbert, Jean-Paul; Granger, Geraud; Benhacene, Railene; Terenzi, Michela; Pozzi, Simone; Babiloni, Fabio

    2015-08-01

    Machine-learning approaches for mental workload (MW) estimation from a user's brain activity have expanded rapidly in the last decades. These techniques now make it possible to measure MW with high time resolution (e.g. a few seconds). Despite such advancements, one of the outstanding problems of these techniques regards their ability to maintain high reliability over time (e.g. high classification accuracy even across consecutive days) without performing any recalibration procedure. Such a characteristic is highly desirable in real-world applications, in which human operators could use such an approach without undergoing daily training of the device. In this work, we report that if a simple classifier is calibrated by using a small number of brain spectral features, among those strictly related to MW (i.e. frontal and occipital theta and parietal alpha rhythms), those features make the classifier performance stable over time. In other words, the discrimination accuracy achieved by the classifier does not degrade significantly across different days (i.e. up to one week). The methodology was tested on twelve Air Traffic Controller (ATCO) trainees while performing different Air Traffic Management (ATM) scenarios under three difficulty levels.

  20. Enhancing thermal reliability of fiber-optic sensors for bio-inspired applications at ultra-high temperatures

    International Nuclear Information System (INIS)

    Kang, Donghoon; Kim, Heon-Young; Kim, Dae-Hyun

    2014-01-01

    The rapid growth of bio-(inspired) sensors has led to an improvement in modern healthcare and human–robot systems in recent years. Higher levels of reliability and better flexibility, essential features of these sensors, are very much required in many application fields (e.g. applications at ultra-high temperatures). Fiber-optic sensors, and fiber Bragg grating (FBG) sensors in particular, are being widely studied as suitable sensors for improved structural health monitoring (SHM) due to their many merits. To enhance the thermal reliability of FBG sensors, the thermal sensitivity, generally expressed as α_f + ξ_f and considered a constant, should be investigated more precisely. For this purpose, the governing equation of FBG sensors is modified in this study using differential derivatives between the wavelength shift and the temperature change. Through a thermal test ranging from RT to 900 °C, the thermal sensitivity of FBG sensors is successfully examined, and this guarantees the thermal reliability of FBG sensors at ultra-high temperatures. In detail, α_f + ξ_f has a non-linear dependence on temperature and varies from 6.0 × 10⁻⁶ °C⁻¹ (20 °C) to 10.6 × 10⁻⁶ °C⁻¹ (650 °C). Also, FBGs should be used carefully in applications at ultra-high temperatures due to signal disappearance near 900 °C. (paper)
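
    The 'differential derivative' reading of the sensitivity amounts to estimating (1/λ)·dλ/dT pointwise from calibration data instead of assuming a constant. A sketch on synthetic wavelengths that mimic the reported 6.0-10.6 × 10⁻⁶ °C⁻¹ trend (the temperature grid and the 1550 nm base wavelength are assumptions, not the paper's data):

        import numpy as np

        T = np.linspace(20, 650, 15)                                  # °C
        s_true = 6.0e-6 + (10.6e-6 - 6.0e-6) * (T - 20) / (650 - 20)  # /°C, synthetic
        lam = 1550.0 * np.exp(np.cumsum(np.gradient(T) * s_true))     # nm; dλ/λ = s·dT

        s_est = np.gradient(lam, T) / lam    # differential derivative (1/λ) dλ/dT
        for Ti, si in zip(T[::7], s_est[::7]):
            print(f"T = {Ti:5.0f} °C  ->  alpha_f + xi_f ≈ {si * 1e6:.1f}e-6 /°C")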

  1. Validity and reliability of an application review process using dedicated reviewers in one stage of a multi-stage admissions model.

    Science.gov (United States)

    Zeeman, Jacqueline M; McLaughlin, Jacqueline E; Cox, Wendy C

    2017-11-01

    With increased emphasis placed on non-academic skills in the workplace, a need exists to identify an admissions process that evaluates these skills. This study assessed the validity and reliability of an application review process involving three dedicated application reviewers in a multi-stage admissions model. A multi-stage admissions model was utilized during the 2014-2015 admissions cycle. After advancing through the academic review, each application was independently reviewed by two dedicated application reviewers utilizing a six-construct rubric (written communication, extracurricular and community service activities, leadership experience, pharmacy career appreciation, research experience, and resiliency). Rubric scores were extrapolated to a three-tier ranking to select candidates for on-site interviews. Kappa statistics were used to assess interrater reliability. A three-facet Many-Facet Rasch Model (MFRM) determined reviewer severity, candidate suitability, and rubric construct difficulty. The kappa statistic for candidates' tier rank score (n = 388 candidates) was 0.692 with a perfect agreement frequency of 84.3%. There was substantial interrater reliability between reviewers for the tier ranking (kappa: 0.654-0.710). Highest construct agreement occurred in written communication (kappa: 0.924-0.984). A three-facet MFRM analysis explained 36.9% of variance in the ratings, with 0.06% reflecting application reviewer scoring patterns (i.e., severity or leniency), 22.8% reflecting candidate suitability, and 14.1% reflecting construct difficulty. Utilization of dedicated application reviewers and a defined tiered rubric provided a valid and reliable method to effectively evaluate candidates during the application review process. These analyses provide insight into opportunities for improving the application review process among schools and colleges of pharmacy.
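
    For reference, the agreement statistic used above is a one-liner with scikit-learn; the tier rankings below are hypothetical, not study data:

        from sklearn.metrics import cohen_kappa_score

        reviewer_a = [1, 2, 2, 3, 1, 2, 3, 3, 1, 2]   # hypothetical tier ranks (1-3)
        reviewer_b = [1, 2, 3, 3, 1, 2, 3, 2, 1, 2]
        print(f"kappa = {cohen_kappa_score(reviewer_a, reviewer_b):.3f}")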

  2. Reliability analysis of operator's monitoring behavior in digital main control room of nuclear power plants and its application

    International Nuclear Information System (INIS)

    Zhang Li; Hu Hong; Li Pengcheng; Jiang Jianjun; Yi Cannan; Chen Qingqing

    2015-01-01

    In order to build a quantitative model for analyzing operators' monitoring behavior reliability in the digital main control room of nuclear power plants, monitoring behavior reliability was divided into three parts, based on an analysis of the design characteristics of the digital main control room of a nuclear power plant, operators' monitoring behavior, and the monitoring process: information transfer reliability among screens, inside-screen information sampling reliability, and information detection reliability. A quantitative calculation model of information transfer reliability among screens was established based on Senders' monitoring theory; the inside-screen information sampling reliability model was established based on the theory of attention resource allocation; and, considering causality among performance shaping factors, a fuzzy Bayesian method was presented to quantify information detection reliability, with an example of application given. The results show that the established model of monitoring behavior reliability gives an objective description of the monitoring process, quantifies monitoring reliability, and overcomes the shortcomings of traditional methods. It therefore provides theoretical support for operators' monitoring behavior reliability analysis in the digital main control rooms of nuclear power plants and improves the precision of human reliability analysis. (authors)

  3. Application of AVK and selective encryption in improving performance of quantum cryptography and networks

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2006-07-01

    The subject of quantum cryptography has emerged as an important area of research. Reported theoretical and practical investigations have conclusively established reliable quantum key distribution (QKD) protocols with a high level of security. For perfect security, the implementation of a time-variant key is essential. The cost and operation involved in quantum key distribution of a time-variant key, from session to session or message to message, have yet to be addressed from an implementation angle, and this is understood to be hard with currently available technology. Besides, the disadvantages of quantum cryptanalysis, known as 'quantum cheating', and of quantum errors are demonstrated in the literature. This calls for an investigation into an affordable hybrid solution that uses QKD together with conventional classical methods of key distribution to implement a time-variant key. The paper proposes a hybrid solution towards this investigation. The solutions suggested will improve the performance of computer networks for the secure transport of data in general. (author)

  4. Improving Power Converter Reliability

    DEFF Research Database (Denmark)

    Ghimire, Pramod; de Vega, Angel Ruiz; Beczkowski, Szymon

    2014-01-01

    The real-time junction temperature monitoring of a high-power insulated-gate bipolar transistor (IGBT) module is important to increase the overall reliability of power converters for industrial applications. This article proposes a new method to measure the on-state collector-emitter voltage of a high-power IGBT module during converter operation, which may play a vital role in improving the reliability of the power converters. The measured voltage is used to estimate the module average junction temperature of the high- and low-voltage sides of a half-bridge IGBT separately in every fundamental (…) is measured in a wind power converter at a low fundamental frequency. To illustrate more, the test method as well as the performance of the measurement circuit are also presented. This measurement is also useful to indicate failure mechanisms such as bond wire lift-off and solder layer degradation (…)

  5. Research on reliability management systems for Nuclear Power Plant

    International Nuclear Information System (INIS)

    Maki, Nobuo

    2000-01-01

    An investigation of reliability management systems for Nuclear Power Plants (NPPs) has been performed based on national and international archived documents as well as on the current status of studies at the Idaho National Engineering and Environmental Laboratory (INEEL), US NPPs (McGuire, Seabrook), a French NPP (St. Laurent-des-Eaux), the Japan Atomic Energy Research Institute (JAERI), the Central Research Institute of Electric Power Industry (CRIEPI), and power plant manufacturers in Japan. As a result of the investigation, the following points were identified: (i) A reliability management system is composed of a maintenance management system to inclusively manage maintenance data, and an anomaly information and reliability data management system to extract data from the maintenance results stored in the maintenance management system and construct a reliability database. (ii) The maintenance management system, which is widely used among NPPs in the US and Europe, is an indispensable system for increasing maintenance reliability. (iii) Maintenance management methods utilizing reliability data, such as Reliability Centered Maintenance, are applied to NPP maintenance in the US and Europe and contribute to cost savings. Maintenance templates are effective in the application process. In addition, the following points were proposed on the design of the system: (i) A detailed database on the specifications of facilities and components is necessary for the effective use of the system. (ii) A demand database is indispensable for the application of the methods. (iii) Full-time database managers are important to maintain the quality of the reliability data. (author)

  6. Principles of Bridge Reliability

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Nowak, Andrzej S.

    The paper gives a brief introduction to the basic principles of structural reliability theory and its application to bridge engineering. Fundamental concepts like failure probability and reliability index are introduced. Ultimate as well as serviceability limit states for bridges are formulated, and as an example the reliability profile and a sensitivity analysis for a corroded reinforced concrete bridge are shown.
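
    For a linear limit state g = R − S with independent normal resistance and load, the two concepts reduce to closed form; a minimal sketch with invented values:

        from math import sqrt
        from scipy.stats import norm

        mu_r, sd_r = 250.0, 20.0   # resistance, e.g. kNm (illustrative values)
        mu_s, sd_s = 150.0, 25.0   # load effect

        beta = (mu_r - mu_s) / sqrt(sd_r**2 + sd_s**2)   # reliability index
        print(f"beta = {beta:.2f}, Pf = {norm.cdf(-beta):.2e}")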

  7. Reliability Capacity of Half-Duplex Channels with Strict Deadlines

    DEFF Research Database (Denmark)

    Costa, Rui; Roetter, Daniel Enrique Lucani; Vinhoza, Tiago

    2015-01-01

    A fundamental characterization of a half-duplex wireless system with packet losses under traffic with hard deadlines is instrumental to understanding and developing efficient, coding aware policies for real-time applications. We set forth the concept of reliability capacity with a limited number ...

  8. Reliability studies of a high-power proton accelerator for accelerator-driven system applications for nuclear waste transmutation

    Energy Technology Data Exchange (ETDEWEB)

    Burgazzi, Luciano [ENEA-Centro Ricerche 'Ezio Clementel', Advanced Physics Technology Division, Via Martiri di Monte Sole, 4, 40129 Bologna (Italy)]. E-mail: burgazzi@bologna.enea.it; Pierini, Paolo [INFN-Sezione di Milano, Laboratorio Acceleratori e Superconduttivita Applicata, Via Fratelli Cervi 201, I-20090 Segrate (MI) (Italy)

    2007-04-15

    The main effort of the present study is to analyze the availability and reliability of a high-performance linac (linear accelerator) conceived for Accelerator-Driven Systems (ADS) purposes and to suggest recommendations, in order both to meet the high operability goals and to satisfy the safety requirements dictated by the reactor system. A Reliability Block Diagram (RBD) approach has been considered for system modelling, according to the present level of definition of the design: component failure modes are assessed in terms of Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR), and reliability and availability figures are derived by applying current reliability algorithms. The lack of a well-established component database is pointed out as the main issue in the accelerator reliability assessment. The results, affected by the conservative character of the study, show a large margin for improvement in the prediction of accelerator reliability and availability figures. The paper outlines a viable path towards enhancing accelerator reliability and availability and delineates the most appropriate strategies. The improvement in reliability characteristics along this path is shown as well.
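
    The RBD arithmetic underlying such a study is compact: steady-state availability per block from MTBF and MTTR, then series/parallel composition. A sketch with invented figures (the real analysis models far more components and repair strategies):

        def avail(mtbf, mttr):                 # steady-state availability of one block
            return mtbf / (mtbf + mttr)

        def series(*blocks):                   # all blocks needed
            out = 1.0
            for a in blocks:
                out *= a
            return out

        def parallel(*blocks):                 # at least one block needed
            out = 1.0
            for a in blocks:
                out *= 1.0 - a
            return 1.0 - out

        rf = avail(mtbf=5_000, mttr=8)                         # hours, illustrative
        magnets = avail(mtbf=50_000, mttr=24)
        injector = parallel(avail(2_000, 4), avail(2_000, 4))  # redundant pair
        print(f"linac availability ≈ {series(rf, magnets, injector):.4f}")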

  9. Reliability studies of a high-power proton accelerator for accelerator-driven system applications for nuclear waste transmutation

    International Nuclear Information System (INIS)

    Burgazzi, Luciano; Pierini, Paolo

    2007-01-01

    The main effort of the present study is to analyze the availability and reliability of a high-performance linac (linear accelerator) conceived for Accelerator-Driven Systems (ADS) purposes and to suggest recommendations, in order both to meet the high operability goals and to satisfy the safety requirements dictated by the reactor system. A Reliability Block Diagram (RBD) approach has been considered for system modelling, according to the present level of definition of the design: component failure modes are assessed in terms of Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR), and reliability and availability figures are derived by applying current reliability algorithms. The lack of a well-established component database is pointed out as the main issue in the accelerator reliability assessment. The results, affected by the conservative character of the study, show a large margin for improvement in the prediction of accelerator reliability and availability figures. The paper outlines a viable path towards enhancing accelerator reliability and availability and delineates the most appropriate strategies. The improvement in reliability characteristics along this path is shown as well.

  10. Reliable allele detection using SNP-based PCR primers containing Locked Nucleic Acid: application in genetic mapping

    Directory of Open Access Journals (Sweden)

    Trognitz Friederike

    2007-02-01

    Background: The diploid Solanum caripense, a wild relative of potato and tomato, possesses valuable resistance to potato late blight, and we are interested in the genetic basis of this resistance. Due to extremely low levels of genetic variation within the S. caripense genome, it proved impossible to generate a dense genetic map and to assign individual Solanum chromosomes through the use of conventional chromosome-specific SSR, RFLP and AFLP markers, as well as gene- or locus-specific markers. The ease of detection of DNA polymorphisms depends on both the frequency and the form of sequence variation. The narrow genetic background of close relatives and inbreds complicates the detection of persisting, reduced polymorphism and is a challenge to the development of reliable molecular markers. Nonetheless, monomorphic DNA fragments representing conventional markers that are not directly usable can contain considerable variation at the level of single nucleotide polymorphisms (SNPs). This can be used for the design of allele-specific molecular markers. The reproducible detection of allele-specific markers based on SNPs has been a technical challenge. Results: We present a fast and cost-effective protocol for the detection of allele-specific SNPs by applying Sequence Polymorphism-Derived (SPD) markers. These markers proved highly efficient for fingerprinting of individuals possessing a homogeneous genetic background. SPD markers are obtained from within non-informative, conventional molecular marker fragments that are screened for SNPs to design allele-specific PCR primers. The method makes use of primers containing a single, 3'-terminal Locked Nucleic Acid (LNA) base. We demonstrate the applicability of the technique by the successful genetic mapping of allele-specific SNP markers derived from monomorphic Conserved Ortholog Set II (COSII) markers mapped to Solanum chromosomes, in S. caripense. By using SPD markers it was possible for the first time to map the S. caripense alleles

  11. An overall methodology for reliability prediction of mechatronic systems design with industrial application

    International Nuclear Information System (INIS)

    Habchi, Georges; Barthod, Christine

    2016-01-01

    We propose in this paper an overall ten-step methodology dedicated to the analysis and quantification of reliability during the design phase of a mechatronic system, considered a complex system. The ten steps of the methodology are detailed according to the downward side of the V-development cycle usually used for the design of complex systems. Two complementary phases of analysis cover the ten steps: qualitative analysis and quantitative analysis. The qualitative phase analyzes the functional and dysfunctional behavior of the system and then determines its different failure modes and degradation states, based on external and internal functional analysis, organic and physical implementation, and dependencies between components, with consideration of customer specifications and the mission profile. The quantitative phase is used to calculate the reliability of the system and its components, based on the qualitative behavior patterns, and considering data gathering and processing and reliability targets. A systemic approach is used to calculate the reliability of the system, taking into account the different technologies of a mechatronic system (mechanics, electronics, electrical, …), dependencies and interactions between components, and external influencing factors. To validate the methodology, the ten steps are applied to an industrial system, the smart actuator of Pack'Aero Company. - Highlights: • A ten-step methodology for reliability prediction of mechatronic systems design. • Qualitative and quantitative analysis for reliability evaluation using PN and RBD. • A dependency matrix proposal, based on the collateral and functional interactions. • Models consider mission profile, deterioration, interactions and influent factors. • Application and validation of the methodology on the “Smart Actuator” of PACK’AERO.

  12. Test-retest reliability of the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA).

    Science.gov (United States)

    Bégel, Valentin; Verga, Laura; Benoit, Charles-Etienne; Kotz, Sonja A; Bella, Simone Dalla

    2018-04-27

    Perceptual and sensorimotor timing skills can be comprehensively assessed with the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA). The battery has been used for testing rhythmic skills in healthy adults and patient populations (e.g., with Parkinson disease), showing sensitivity to timing and rhythm deficits. Here we assessed the test-retest reliability of the BAASTA in 20 healthy adults. Participants were tested twice with the BAASTA, implemented on a tablet interface, with a 2-week interval. They completed four perceptual tasks, namely duration discrimination, anisochrony detection with tones and with music, and the Beat Alignment Test (BAT). Moreover, they completed motor tasks via finger tapping, including unpaced and paced tapping with tones and music, synchronization-continuation, and adaptive tapping to a sequence with a tempo change. Despite high variability among individuals, the results showed stable test-retest reliability in most tasks. A slight but significant improvement from test to retest was found in tapping with music, which may reflect a learning effect. In general, the BAASTA was found to be a reliable tool for evaluating timing and rhythm skills.

  13. A Study on the Joint Reliability Importance with Applications to the Maintenance Policy

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Jung Sik; Kwon, Hong Je; Song, Mi Ja; Kim, Woong Kil; Yoong, Do Hwa [Seoul National Polytechnic University, Seoul (Korea, Republic of); Moon, Sin Myung; Cho, Bong Je; Moon, Jae Phil; Koo, Hoon Young; Lee, Jin Seung [Seoul National University, Seoul (Korea, Republic of)

    1997-09-01

    The objective of this project is to investigate the possibility of applying the Joint Reliability Importance (JRI) of two components to the establishment of system maintenance policy. Components are classified into reliability substitutes and reliability complements. If the sign of the JRI of two components is positive, they are called reliability complements; if the sign is negative, they are called reliability substitutes. In the case of reliability complements, one component becomes more important as the other one works, and in the case of reliability substitutes, one component becomes more important as the other one fails. Therefore, when preventive maintenance is carried out, two components which are reliability substitutes should not be maintained at the same time. Also, when corrective maintenance is carried out, we should not only repair the failed components but also pay attention to the functioning components which are reliability substitutes with respect to the failed components. The sign of the JRI of any two components in a series (parallel) system is positive (negative). Then, what is the sign for any two components in a k-out-of-n:G system? This project presents an idea for characterizing the k-out-of-n:G system by calculating the JRI of two components in that system, assuming that the reliabilities of all components are equal. In addition to the JRI of two components, the JRI of two gates is introduced in this project. An algorithm to compute the JRI of two gates is presented. A bridge system is considered, and the correlation of two min cut sets is illustrated by using the cut-set representation of the bridge system and calculating the JRI of two gates. 28 refs., 20 tabs., 32 figs. (author)
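
    The JRI itself is the second mixed difference of system reliability with respect to two component reliabilities. A sketch for the bridge system mentioned above (equal component reliabilities of 0.9 assumed; the path sets encode one common labelling of the bridge, not necessarily the report's):

        from itertools import product

        PATHS = [{0, 3}, {1, 4}, {0, 2, 4}, {1, 2, 3}]   # bridge minimal path sets

        def works(state):
            return any(all(state[c] for c in path) for path in PATHS)

        def reliability(p, pin=None):
            """System reliability; 'pin' forces chosen components up (1) or down (0)."""
            pin = pin or {}
            free = [c for c in range(5) if c not in pin]
            r = 0.0
            for bits in product((0, 1), repeat=len(free)):
                state = dict(pin)
                prob = 1.0
                for c, b in zip(free, bits):
                    state[c] = b
                    prob *= p[c] if b else 1.0 - p[c]
                if works(state):
                    r += prob
            return r

        p = [0.9] * 5
        i, j = 0, 1   # the two source-side edges of the bridge
        jri = (reliability(p, {i: 1, j: 1}) - reliability(p, {i: 1, j: 0})
               - reliability(p, {i: 0, j: 1}) + reliability(p, {i: 0, j: 0}))
        print(f"JRI({i},{j}) = {jri:+.4f}")   # negative sign: reliability substitutes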

  14. Measuring Passenger Travel Time Reliability Using Smart Card Data

    NARCIS (Netherlands)

    Bagherian, M.; Cats, O.; van Oort, N.; Hickman, M

    2016-01-01

    Service reliability is a key performance measure for transit agencies in increasing their service quality and thus ridership. Conventional reliability metrics are established based on vehicle movements and thus do not adequately reflect passengers' experience. In the past few years, the growing

  15. The application of cognitive models to the evaluation and prediction of human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.; Reason, J.T.

    1986-01-01

    The first section of the paper provides a brief overview of a number of important principles relevant to human reliability modeling that have emerged from cognitive models, and presents a synthesis of these approaches in the form of a Generic Error Modeling System (GEMS). The next section illustrates the application of GEMS to some well known nuclear power plant (NPP) incidents in which human error was a major contributor. The way in which design recommendations can emerge from analyses of this type is illustrated. The third section describes the use of cognitive models in the classification of human errors for prediction and data collection purposes. The final section addresses the predictive modeling of human error as part of human reliability assessment in Probabilistic Risk Assessment

  16. Re-Ranking Sequencing Variants in the Post-GWAS Era for Accurate Causal Variant Identification

    Science.gov (United States)

    Faye, Laura L.; Machiela, Mitchell J.; Kraft, Peter; Bull, Shelley B.; Sun, Lei

    2013-01-01

    Next generation sequencing has dramatically increased our ability to localize disease-causing variants by providing base-pair level information at costs increasingly feasible for the large sample sizes required to detect complex-trait associations. Yet, identification of causal variants within an established region of association remains a challenge. Counter-intuitively, certain factors that increase power to detect an associated region can decrease power to localize the causal variant. First, combining GWAS with imputation or low coverage sequencing to achieve the large sample sizes required for high power can have the unintended effect of producing differential genotyping error among SNPs. This tends to bias the relative evidence for association toward better genotyped SNPs. Second, re-use of GWAS data for fine-mapping exploits previous findings to ensure genome-wide significance in GWAS-associated regions. However, using GWAS findings to inform fine-mapping analysis can bias evidence away from the causal SNP toward the tag SNP and SNPs in high LD with the tag. Together these factors can reduce power to localize the causal SNP by more than half. Other strategies commonly employed to increase power to detect association, namely increasing sample size and using higher density genotyping arrays, can, in certain common scenarios, actually exacerbate these effects and further decrease power to localize causal variants. We develop a re-ranking procedure that accounts for these adverse effects and substantially improves the accuracy of causal SNP identification, often doubling the probability that the causal SNP is top-ranked. Application to the NCI BPC3 aggressive prostate cancer GWAS with imputation meta-analysis identified a new top SNP at 2 of 3 associated loci and several additional possible causal SNPs at these loci that may have otherwise been overlooked. This method is simple to implement using R scripts provided on the author's website. PMID:23950724

  17. Reliability analysis of reactor inspection robot(RIROB)

    International Nuclear Information System (INIS)

    Eom, H. S.; Kim, J. H.; Lee, J. C.; Choi, Y. R.; Moon, S. S.

    2002-05-01

    This report describes the method and the results of the reliability analysis of RIROB, developed at the Korea Atomic Energy Research Institute. There are many classical techniques and models for reliability analysis. These techniques and models have been widely used and validated in other industries such as aviation and the nuclear industry. Although they have been validated in real fields, they are still insufficient for complicated systems such as RIROB, which are composed of computers, networks, electronic parts, mechanical parts, and software. In particular, the application of these analysis techniques to the digital and software parts of complicated systems is immature at this time, so expert judgement plays an important role in evaluating the reliability of such systems. In this report we propose a method which combines diverse evidence relevant to reliability in order to evaluate the reliability of complicated systems such as RIROB. The proposed method combines diverse evidence and performs inference in a formal and quantitative way by using the benefits of Bayesian Belief Nets (BBN).

  18. Efficient utilization of rare variants for detection of disease-related genomic regions.

    Directory of Open Access Journals (Sweden)

    Lei Zhang

    2010-12-01

    When testing association between rare variants and diseases, an efficient analytical approach involves considering a set of variants in a genomic region as the unit of analysis. One factor complicating this approach is that the vast majority of rare variants in practical applications are believed to represent background neutral variation. As a result, analyzing a single set with all variants may not represent a powerful approach. Here, we propose two alternative strategies. In the first, we analyze the subsets of rare variants exhaustively. In the second, we categorize variants selectively into two subsets: one in which variants are overrepresented in cases, and the other in which variants are overrepresented in controls. When the proportion of neutral variants is moderate to large we show, by simulations, that both proposed strategies improve the statistical power over methods analyzing a single set with all variants. When applied to a real sequencing association study, the proposed methods consistently produce smaller p-values than their competitors. When applied to another real sequencing dataset, to study the difference in rare allele distributions between ethnic populations, the proposed methods detect the overrepresentation of variants between the CHB (Chinese Han in Beijing) and YRI (Yoruba people of Ibadan) populations with small p-values. Additional analyses suggest that there is no difference between the CHB and CHD (Chinese Han in Denver) datasets, as expected. Finally, when applied to the CHB and JPT (Japanese people in Tokyo) populations, existing methods fail to detect any difference, while the proposed methods detect differences in several regions.

  19. An Introduction To Reliability

    International Nuclear Information System (INIS)

    Park, Kyoung Su

    1993-08-01

    This book introduces reliability, covering the definition of reliability, requirements for reliability, the system life cycle and reliability, reliability and failure rate (overview, reliability characteristics, chance failures, failure rates that change over time, failure modes, and replacement), reliability in engineering design, reliability testing under failure-rate assumptions, the plotting of reliability data, prediction of system reliability, system maintenance, and failure analysis, including failure relays and the analysis of system safety.

  20. Adjoint sensitivity analysis procedure of Markov chains with applications on reliability of IFMIF accelerator-system facilities

    Energy Technology Data Exchange (ETDEWEB)

    Balan, I.

    2005-05-01

    This work presents the implementation of the Adjoint Sensitivity Analysis Procedure (ASAP) for Continuous-Time, Discrete-Space Markov chains (CTMC), as an alternative to other computationally expensive methods. In order to develop this procedure as an end product in reliability studies, the reliability of the physical systems is analyzed using a coupled Fault-Tree/Markov-chain technique: the abstraction of the physical system is performed using the Fault-Tree as the high-level interface, which is then automatically converted into a Markov chain. The resulting differential equations based on the Markov chain model are solved in order to evaluate the system reliability. Further sensitivity analyses, using ASAP applied to the CTMC equations, are performed to study the influence of uncertainties in input data on the reliability measures and to gain confidence in the final reliability results. The methods for generating the Markov chain and the ASAP for the Markov chain equations have been implemented in the new computer code system QUEFT/MARKOMAGS/MCADJSEN for reliability and sensitivity analysis of physical systems. The validation of this code system has been carried out using simple problems for which analytical solutions can be obtained. Typical sensitivity results show that the numerical solution using ASAP is robust, stable and accurate. The method and the code system developed during this work can be used further as an efficient and flexible tool to evaluate the sensitivities of reliability measures for any physical system analyzed using a Markov chain. Reliability and sensitivity analyses using these methods have been performed during this work for the IFMIF Accelerator System Facilities. The reliability studies using Markov chains have concentrated on the availability of the main subsystems of this complex physical system for a typical mission time. The sensitivity studies for two typical responses using ASAP have been
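
    The forward half of such an analysis, solving the CTMC state equations and reading off a reliability measure, fits in a few lines; the adjoint procedure then yields parameter sensitivities without re-solving per parameter. A sketch for a two-unit repairable system with invented rates (not the IFMIF model):

        import numpy as np
        from scipy.integrate import solve_ivp

        lam, mu = 1e-3, 1e-1    # failure and repair rates, 1/h (illustrative)
        # States: 0 = both units up, 1 = one up, 2 = both down (system failed).
        Q = np.array([[-2 * lam, 2 * lam, 0.0],
                      [mu, -(mu + lam), lam],
                      [0.0, mu, -mu]])

        # dp/dt = Q^T p, starting with both units up.
        sol = solve_ivp(lambda t, p: Q.T @ p, (0.0, 10_000.0),
                        [1.0, 0.0, 0.0], dense_output=True)
        for t in (100.0, 1_000.0, 10_000.0):
            print(f"A({t:>6.0f} h) = {1.0 - sol.sol(t)[2]:.6f}")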

  1. Capacity and reliability analyses with applications to power quality

    Science.gov (United States)

    Azam, Mohammad; Tu, Fang; Shlapak, Yuri; Kirubarajan, Thiagalingam; Pattipati, Krishna R.; Karanam, Rajaiah

    2001-07-01

    The deregulation of energy markets, the ongoing advances in communication networks, the proliferation of intelligent metering and protective power devices, and the standardization of software/hardware interfaces are creating a dramatic shift in the way facilities acquire and utilize information about their power usage. Currently available power management systems gather a vast amount of information in the form of power usage, voltages, currents, and their time-dependent waveforms from a variety of devices (for example, circuit breakers, transformers, energy and power quality meters, protective relays, programmable logic controllers, and motor control centers). What is lacking is an information processing and decision support infrastructure to turn this voluminous information into usable operational and management knowledge, in order to manage the health of the equipment and power quality, minimize downtime and outages, and optimize operations to improve productivity. This paper considers the problem of evaluating the capacity and reliability of power systems with very high availability requirements (e.g., systems providing energy to data centers and communication networks with desired availability of up to 0.9999999). The real-time capacity and margin analysis helps operators plan for additional loads and schedule repair/replacement activities. The reliability analysis, based on computationally efficient sums of disjoint products, enables analysts to decide the optimum levels of redundancy, and aids operators in prioritizing maintenance options for a given budget and monitoring the system for capacity margin. The resulting analytical and software tool is demonstrated on a sample data center.

  2. De novo assembly and next-generation sequencing to analyse full-length gene variants from codon-barcoded libraries.

    Science.gov (United States)

    Cho, Namjin; Hwang, Byungjin; Yoon, Jung-ki; Park, Sangun; Lee, Joongoo; Seo, Han Na; Lee, Jeewon; Huh, Sunghoon; Chung, Jinsoo; Bang, Duhee

    2015-09-21

    Interpreting epistatic interactions is crucial for understanding the evolutionary dynamics of complex genetic systems and unveiling the structure and function of genetic pathways. Although high-resolution mapping of en masse variant libraries enables molecular biologists to address genotype-phenotype relationships, long-read sequencing technology remains indispensable for assessing functional relationships between mutations that lie far apart. Here, we introduce JigsawSeq for multiplexed sequence identification of pooled gene variant libraries by combining a codon-based molecular barcoding strategy and de novo assembly of short-read data. We first validate JigsawSeq on small sub-pools and observe high precision and recall at various experimental settings. With extensive simulations, we then apply JigsawSeq to large-scale gene variant libraries to show that our method can be reliably scaled with next-generation sequencing. JigsawSeq may serve as a rapid screening tool for functional genomics and offer the opportunity to explore the evolutionary trajectories of protein variants.

  3. Predicting Flow Breakdown Probability and Duration in Stochastic Network Models: Impact on Travel Time Reliability

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Jing [ORNL; Mahmassani, Hani S. [Northwestern University, Evanston

    2011-01-01

    This paper proposes a methodology to produce random flow breakdown endogenously in a mesoscopic operational model, by capturing breakdown probability and duration. It builds on previous research findings that the probability of flow breakdown can be represented as a function of flow rate and that the duration can be characterized by a hazard model. By generating random flow breakdowns at various levels and capturing the traffic characteristics at the onset of the breakdown, the stochastic network simulation model provides a tool for evaluating travel time variability. The proposed model can be used for (1) providing reliability-related traveler information; (2) designing ITS (intelligent transportation systems) strategies to improve reliability; and (3) evaluating reliability-related performance measures of the system.
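
    The two random ingredients can be sketched directly: a flow-dependent breakdown probability and a random breakdown duration. The logistic form and Weibull duration below are illustrative assumptions standing in for the calibrated probability function and hazard model of the paper:

        import numpy as np

        rng = np.random.default_rng(5)

        def breakdown_prob(flow, f50=1_800.0, scale=150.0):
            """P(breakdown in the next interval) as a function of flow (veh/h/lane)."""
            return 1.0 / (1.0 + np.exp(-(flow - f50) / scale))

        def breakdown_duration(shape=1.5, scale_min=12.0):
            """Breakdown duration in minutes; Weibull stand-in for the hazard model."""
            return scale_min * rng.weibull(shape)

        flow = 1_900.0
        if rng.uniform() < breakdown_prob(flow):
            print(f"breakdown at {flow:.0f} veh/h/lane, lasting {breakdown_duration():.1f} min")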

  4. The Component Timed-Up-and-Go test: the utility and psychometric properties of using a mobile application to determine prosthetic mobility in people with lower limb amputations.

    Science.gov (United States)

    Clemens, Sheila M; Gailey, Robert S; Bennett, Christopher L; Pasquina, Paul F; Kirk-Sanchez, Neva J; Gaunaurd, Ignacio A

    2018-03-01

    The aim was to evaluate the reliability and validity of the Component Timed-Up-and-Go test, administered using a custom mobile application, for assessing prosthetic mobility in people with lower limb amputation. Design: cross-sectional. Setting: a national conference for people with limb loss. A total of 118 people with a non-vascular cause of lower limb amputation participated. Subjects had a mean age of 48 (±13.7) years and were an average of 10 years post amputation; 54% (n = 64) of subjects were male. Interventions: none. The Component Timed-Up-and-Go was administered using a mobile iPad application, generating a total time to complete the test and five component times capturing each subtask (sit-to-stand transitions, linear gait, turning) of the standard timed-up-and-go test. The outcome underwent test-retest reliability analysis using intraclass correlation coefficients (ICCs) and convergent validity analysis through correlation with self-report measures of balance and mobility. The Component Timed-Up-and-Go exhibited excellent test-retest reliability, with ICCs ranging from .86 to .98 for total and component times. Evidence of discriminative validity came from significant differences in mean total times between people with transtibial (10.1 (SD: ±2.3)) and transfemoral (12.76 (SD: ±5.1)) amputation, as well as significant differences in all five component times (P < .05). Convergent validity of the Component Timed-Up-and-Go was demonstrated through moderate correlation with the PLUS-M (r_s = -.56). The Component Timed-Up-and-Go is a reliable and valid clinical tool for detailed assessment of prosthetic mobility in people with non-vascular lower limb amputation. The iPad application provided a means to easily record data, contributing to clinical utility.
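
    The test-retest statistic used here, the intraclass correlation coefficient, can be computed directly from a subjects-by-trials score matrix. A minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single measures) is shown below; the record does not state which ICC form the study used, so that particular choice, like the toy data, is an assumption.

        import numpy as np

        def icc_2_1(X):
            # X: n_subjects x k_trials matrix of test scores
            n, k = X.shape
            grand = X.mean()
            ss_rows = k * ((X.mean(axis=1) - grand) ** 2).sum()
            ss_cols = n * ((X.mean(axis=0) - grand) ** 2).sum()
            ss_err = ((X - grand) ** 2).sum() - ss_rows - ss_cols
            msr = ss_rows / (n - 1)              # between-subjects mean square
            msc = ss_cols / (k - 1)              # between-trials mean square
            mse = ss_err / ((n - 1) * (k - 1))   # residual mean square
            return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

        # toy data: total test time (s) for 5 subjects over 2 trials
        times = np.array([[10.2, 10.5], [12.8, 12.6], [9.9, 10.1],
                          [14.0, 13.5], [11.2, 11.0]])
        print(icc_2_1(times))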

  5. Structural Reliability Analysis of Wind Turbines: A Review

    Directory of Open Access Journals (Sweden)

    Zhiyu Jiang

    2017-12-01

    The paper presents a detailed review of state-of-the-art research activities on structural reliability analysis of wind turbines between the 1990s and 2017. We describe the reliability methods, including first- and second-order reliability methods and simulation-based reliability methods, and show the procedure for, and application areas of, structural reliability analysis of wind turbines. Further, we critically review the various structural reliability studies on rotor blades, bottom-fixed support structures, floating systems, and mechanical and electrical components. Finally, future applications of structural reliability methods to wind turbine designs are discussed.

  6. Identification of hemoglobin variants by top-down mass spectrometry using selected diagnostic product ions.

    Science.gov (United States)

    Coelho Graça, Didia; Hartmer, Ralf; Jabs, Wolfgang; Beris, Photis; Clerici, Lorella; Stoermer, Carsten; Samii, Kaveh; Hochstrasser, Denis; Tsybin, Yury O; Scherl, Alexander; Lescuyer, Pierre

    2015-04-01

    Hemoglobin disorder diagnosis is a complex procedure combining several analytical steps. Due to the lack of specificity of the currently used protein analysis methods, the identification of uncommon hemoglobin variants (proteoforms) can be a hard task to accomplish. The aim of this work was to develop a mass spectrometry-based approach to quickly identify mutated protein sequences within globin chain variants. To reach this goal, a top-down electron transfer dissociation mass spectrometry method was developed for hemoglobin β chain analysis. A diagnostic product ion list was established with a color-code strategy that allows a mutation to be quickly and specifically localized in the hemoglobin β chain sequence. The method was applied to the analysis of rare hemoglobin β chain variants and an (A)γ-β fusion protein. The results showed that the developed data analysis process allows fast and reliable interpretation of top-down electron transfer dissociation mass spectra by nonexpert users in the clinical area.

  7. Derating design for optimizing reliability and cost with an application to liquid rocket engines

    International Nuclear Information System (INIS)

    Kim, Kyungmee O.; Roh, Taeseong; Lee, Jae-Woo; Zuo, Ming J.

    2016-01-01

    Derating is the operation of an item at a stress that is lower than its rated design value. Previous research has indicated that reliability can be increased through operational derating. In order to derate an item in field operation, however, an engineer must rate the design of the item at a stress level higher than the operational stress level, which increases the item's nominal failure rate and development costs. At present, there is no model available to quantify cost and reliability that considers design uprating as well as operational derating. In this paper, we establish the reliability expression in terms of the derating level, assuming that the nominal failure rate is constant with time for a fixed rated design value. The total development cost is expressed in terms of the rated design value and the number of tests necessary to demonstrate the reliability requirement. The properties of the optimal derating level are explained for maximizing reliability or for minimizing cost. As an example, the proposed model is applied to the design of liquid rocket engines. - Highlights: • Modeled the effect of derating design on reliability and development cost. • Discovered that derating design may reduce the cost of the reliability demonstration test. • Optimized the derating design parameter for reliability maximization or cost minimization.

  8. Application of reliability centered maintenance for nuclear power station in Japan

    International Nuclear Information System (INIS)

    Kumano, Haruyuki; Honda, Hironobu.

    1990-01-01

    The reliability centered maintenance (RCM) method has been widely used, with good results, by aviation companies in the U.S. to ensure effective preventive maintenance and management. In addition, the Electric Power Research Institute has been conducting studies and tests in an effort to apply the RCM method to nuclear power plants. The present report shows and discusses some results of a preliminary study aimed at the introduction of the RCM method to nuclear power plants in Japan. The history of the development and application of RCM is outlined first, and the procedure for its implementation is then described and discussed. The procedure consists of five major steps: collection of data, identification of system components, analysis of the functions of the system, selection of required tasks for preventive management, and packaging. Some actual examples of the application of RCM to nuclear power plants in the U.S. are described. Finally, the report discusses some major problems to be solved to permit the application of RCM to nuclear power plants in Japan. (N.K.)

  9. Reliability Testing Using the Vehicle Durability Simulator

    Science.gov (United States)

    2017-11-20

    …techniques are employed to reduce test and simulation time. Through application of these processes and techniques the reliability characteristics… remote parameter control (RPC) software. The software is specifically designed for the data collection, analysis, and simulation processes outlined in… the selection process for determining the desired runs for simulation. 4.3 Drive File Development. After the data have been reviewed and…

  10. Disparity Map Generation from Illumination Variant Stereo Images Using Efficient Hierarchical Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Viral H. Borisagar

    2014-01-01

    A novel hierarchical stereo matching algorithm is presented which produces a disparity map from an illumination-variant stereo pair. Illumination differences between two stereo images can lead to undesirable output. Stereo image pairs often experience illumination variations due to many factors, such as real and practical situations, spatially and temporally separated camera positions, environmental illumination fluctuation, and changes in the strength or position of the light sources. Window matching and dynamic programming techniques are employed for disparity map estimation, and a good quality disparity map is obtained along the optimized path. Homomorphic filtering is used as a preprocessing step to lessen illumination variation between the stereo images. Anisotropic diffusion is used to refine the disparity map and give a high quality disparity map as the final output. The robust performance of the proposed approach is suitable for real-life circumstances, where there will always be illumination variation between the images. The matching is carried out in a sequence of images representing the same scene at different resolutions; the hierarchical approach adopted decreases the computation time of the stereo matching problem. This algorithm can be helpful in applications like robot navigation, extraction of information from aerial surveys, 3D scene reconstruction, and military and security applications. The similarity measure SAD (sum of absolute differences) is often sensitive to illumination variation and produces unacceptable disparity map results for illumination-variant left and right images. Experimental results show that our proposed algorithm produces quality disparity maps for a wide range of both illumination-variant and illumination-invariant stereo image pairs.
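
    The basic SAD window-matching block on which the hierarchy builds can be sketched compactly; this is a minimal winner-take-all version that omits the paper's homomorphic filtering, dynamic programming optimization, and anisotropic diffusion refinement, with window size and disparity range chosen arbitrarily.

        import numpy as np

        def sad_disparity(left, right, max_disp=16, win=5):
            # left, right: grayscale images as 2-D arrays of equal shape
            h, w = left.shape
            half = win // 2
            disp = np.zeros((h, w), dtype=np.int32)
            for y in range(half, h - half):
                for x in range(half + max_disp, w - half):
                    patch = left[y-half:y+half+1, x-half:x+half+1].astype(float)
                    costs = [np.abs(patch - right[y-half:y+half+1,
                                                  x-d-half:x-d+half+1]).sum()
                             for d in range(max_disp)]
                    disp[y, x] = int(np.argmin(costs))  # winner-take-all choice
            return disp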

  11. Time-dependent reliability analysis and condition assessment of structures

    International Nuclear Information System (INIS)

    Ellingwood, B.R.

    1997-01-01

    Structures generally play a passive role in assurance of safety in nuclear plant operation, but are important if the plant is to withstand the effects of extreme environmental or abnormal events. Relative to mechanical and electrical components, structural systems and components would be difficult and costly to replace. While the performance of steel or reinforced concrete structures in service generally has been very good, their strengths may deteriorate during an extended service life as a result of changes brought on by an aggressive environment, excessive loading, or accidental loading. Quantitative tools for condition assessment of aging structures can be developed using time-dependent structural reliability analysis methods. Such methods provide a framework for addressing the uncertainties attendant to aging in the decision process.

  12. Human factors assessment of conflict resolution aid reliability and time pressure in future air traffic control.

    Science.gov (United States)

    Trapsilawati, Fitri; Qu, Xingda; Wickens, Chris D; Chen, Chun-Hsien

    2015-01-01

    Though it has been reported that air traffic controllers' (ATCos') performance improves with the aid of a conflict resolution aid (CRA), the effects of imperfect automation on CRA use are so far unknown. The main objective of this study was to examine the effects of imperfect automation on conflict resolution. Twelve students with ATC knowledge were instructed to complete ATC tasks in four conditions: reliable CRA, unreliable CRA with high time pressure, unreliable CRA with low time pressure, and manual operation. Participants were able to resolve the designated conflicts more accurately and faster in the reliable than in the unreliable CRA conditions. When comparing the unreliable CRA and manual conditions, the unreliable CRA still led to better conflict resolution performance and higher situation awareness. Surprisingly, high time pressure triggered better conflict resolution performance than the low time pressure condition. The findings from the present study highlight the importance of CRA in future ATC operations. Practitioner Summary: Conflict resolution aid (CRA) is a proposed automation decision aid in air traffic control (ATC). The present study found that CRA promoted air traffic controllers' performance even when it was not perfectly reliable. These findings highlight the importance of CRA in future ATC operations.

  13. Reliability analysis of software based safety functions

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1993-05-01

    The methods applicable to the reliability analysis of software-based safety functions are described in the report. Although the safety functions also include other components, the main emphasis in the report is on the reliability analysis of software. Checklist-type qualitative reliability analysis methods, such as failure mode and effects analysis (FMEA), are described, as well as software fault tree analysis. Safety analysis based on Petri nets is discussed. The most essential concepts and models of quantitative software reliability analysis are described. The most common software metrics and their combined use with software reliability models are discussed. The application of software reliability models in PSA is evaluated; it is observed that recent software reliability models do not directly produce the estimates needed in PSA. As a result of the study, some recommendations and conclusions are drawn: the need for formal methods in the analysis and development of software-based systems, the applicability of qualitative reliability engineering methods in connection with PSA, and the need to make the requirements for software-based systems and their analyses in the regulatory guides more precise. (orig.). (46 refs., 13 figs., 1 tab.)

  14. ToTem: a tool for variant calling pipeline optimization.

    Science.gov (United States)

    Tom, Nikola; Tom, Ondrej; Malcikova, Jitka; Pavlova, Sarka; Kubesova, Blanka; Rausch, Tobias; Kolarik, Miroslav; Benes, Vladimir; Bystry, Vojtech; Pospisilova, Sarka

    2018-06-26

    High-throughput bioinformatics analyses of next generation sequencing (NGS) data often require challenging pipeline optimization. The key problem is choosing appropriate tools and selecting the best parameters for optimal precision and recall. Here we introduce ToTem, a tool for automated pipeline optimization. ToTem is a stand-alone web application with a comprehensive graphical user interface (GUI). ToTem is written in Java and PHP with an underlying connection to a MySQL database. Its primary role is to automatically generate, execute and benchmark different variant calling pipeline settings. Our tool allows an analysis to be started from any level of the process, with the possibility of plugging in almost any tool or code. To prevent over-fitting of pipeline parameters, ToTem ensures their reproducibility by using cross-validation techniques that penalize the final precision, recall and F-measure. The results are interpreted as interactive graphs and tables, allowing an optimal pipeline to be selected based on the user's priorities. Using ToTem, we were able to optimize somatic variant calling from ultra-deep targeted gene sequencing (TGS) data and germline variant detection in whole genome sequencing (WGS) data. ToTem is a tool for automated pipeline optimization which is freely available as a web application at https://totem.software.

  15. Operational safety reliability research

    International Nuclear Information System (INIS)

    Hall, R.E.; Boccio, J.L.

    1986-01-01

    Operating reactor events such as the TMI accident and the Salem automatic-trip failures raised the concern that during a plant's operating lifetime the reliability of systems could degrade from the design level that was considered in the licensing process. To address this concern, NRC is sponsoring the Operational Safety Reliability Research project. The objectives of this project are to identify the essential tasks of a reliability program and to evaluate the effectiveness and attributes of such a reliability program for maintaining an acceptable level of safety during the operating lifetime of the plant.

  16. Numerically-quantified two dimensionality of microstructure evolution accompanying variant selection of FePd

    International Nuclear Information System (INIS)

    Ueshima, N; Yoshiya, M; Yasuda, H; Fukuda, T; Kakeshita, T

    2015-01-01

    Through three-dimensional (3D) simulations of microstructure evolution by phase-field modeling (PFM), microstructures have been quantified during their time evolution by an image processing technique, with particular attention to the shape of variants in the course of variant selection. It is found that the emerging variants exhibit planar shapes rather than 3D shapes due to the elastic field around the variants arising upon the disorder-to-order transition to the L1₀ phase. The two-dimensionality becomes more pronounced as variant selection proceeds. Although three equivalent variants compete for dominance under an external field, one of the three variants vanishes before the final competition occurs between the remaining variants, which can be explained by the elastic strain energy. These numerical analyses provide a better understanding of the microstructure evolution in a more quantitative manner, including the small influence of the third variant, and the results obtained confirm that the understanding of variant selection obtained from two-dimensional (2D) simulations by PFM is valid. (paper)

  17. Solving advanced multi-objective robust designs by means of multiple objective evolutionary algorithms (MOEA): A reliability application

    Energy Technology Data Exchange (ETDEWEB)

    Salazar A, Daniel E. [Division de Computacion Evolutiva (CEANI), Instituto de Sistemas Inteligentes y Aplicaciones Numericas en Ingenieria (IUSIANI), Universidad de Las Palmas de Gran Canaria. Canary Islands (Spain)]. E-mail: danielsalazaraponte@gmail.com; Rocco S, Claudio M. [Universidad Central de Venezuela, Facultad de Ingenieria, Caracas (Venezuela)]. E-mail: crocco@reacciun.ve

    2007-06-15

    This paper extends the approach proposed by the second author in [Rocco et al. Robust design using a hybrid-cellular-evolutionary and interval-arithmetic approach: a reliability application. In: Tarantola S, Saltelli A, editors. SAMO 2001: Methodological advances and useful applications of sensitivity analysis. Reliab Eng Syst Saf 2003;79(2):149-59 [special issue]]…

  18. Reliability, Convergent Validity and Time Invariance of Default Mode Network Deviations in Early Adult Major Depressive Disorder

    Directory of Open Access Journals (Sweden)

    Katie L. Bessette

    2018-06-01

    There is substantial variability across studies of default mode network (DMN) connectivity in major depressive disorder, and reliability and time-invariance are not reported. This study evaluates whether DMN dysconnectivity in remitted depression (rMDD) is reliable over time and symptom-independent, and explores convergent relationships with cognitive features of depression. A longitudinal study was conducted with 82 young adults free of psychotropic medications (47 rMDD, 35 healthy controls) who completed clinical structured interviews, neuropsychological assessments, and 2 resting-state fMRI scans across 2 study sites. Functional connectivity analyses from bilateral posterior cingulate and anterior hippocampal formation seeds in the DMN were conducted at both time points within a repeated-measures analysis of variance to compare groups and evaluate the reliability of group-level connectivity findings. Eleven hyper- (from posterior cingulate) and 6 hypo- (from hippocampal formation) connectivity clusters in rMDD were obtained, with moderate to adequate reliability in all but one cluster (ICCs ranged from 0.50 to 0.76 for 16 of 17). The significant clusters were reduced with a principal component analysis (5 components obtained) to explore these connectivity components, which were then correlated with cognitive features (rumination, cognitive control, learning and memory, and explicit emotion identification). At the exploratory level, for convergent validity, components consisting of posterior cingulate with cognitive control network hyperconnectivity in rMDD were related to cognitive control (inverse) and rumination (positive). Components consisting of anterior hippocampal formation with social emotional network and DMN hypoconnectivity were related to memory (inverse) and happy emotion identification (positive). Thus, time-invariant DMN connectivity differences exist early in the lifespan course of depression and are reliable. The nuanced results suggest a ventral…

  19. High-performance web services for querying gene and variant annotation.

    Science.gov (United States)

    Xin, Jiwen; Mark, Adam; Afrasiabi, Cyrus; Tsueng, Ginger; Juchler, Moritz; Gopal, Nikhil; Stupp, Gregory S; Putman, Timothy E; Ainscough, Benjamin J; Griffith, Obi L; Torkamani, Ali; Whetzel, Patricia L; Mungall, Christopher J; Mooney, Sean D; Su, Andrew I; Wu, Chunlei

    2016-05-06

    Efficient tools for data management and integration are essential for many aspects of high-throughput biology. In particular, annotations of genes and human genetic variants are commonly used but highly fragmented across many resources. Here, we describe MyGene.info and MyVariant.info, high-performance web services for querying gene and variant annotation information. These web services are currently accessed more than three million times per month. They also demonstrate a generalizable cloud-based model for organizing and querying biological annotation information. MyGene.info and MyVariant.info are provided as high-performance web services, accessible at http://mygene.info and http://myvariant.info. Both are offered free of charge to the research community.
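
    Both services expose simple REST endpoints returning JSON. A hedged Python sketch follows; the endpoint paths reflect the services' public documentation, while the particular gene, variant, and response fields used here are illustrative assumptions.

        import requests

        # gene annotation query via MyGene.info
        r = requests.get("http://mygene.info/v3/query",
                         params={"q": "symbol:CDK2", "species": "human"})
        print(r.json()["hits"][0])

        # variant annotation lookup via MyVariant.info (HGVS identifier)
        v = requests.get("http://myvariant.info/v1/variant/chr7:g.140453134T>C")
        print(v.json().get("dbsnp", {}).get("rsid"))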

  20. Time-variant partial directed coherence in analysis of the cardiovascular system. A methodological study

    International Nuclear Information System (INIS)

    Milde, T; Schwab, K; Walther, M; Eiselt, M; Witte, H; Schelenz, C; Voss, A

    2011-01-01

    Time-variant partial directed coherence (tvPDC) is used for the first time in a multivariate analysis of heart rate variability (HRV), respiratory movements (RMs) and (systolic) arterial blood pressure. It is shown that respiration-related HRV components which also occur at other frequencies besides the RM frequency (= respiratory sinus arrhythmia, RSA) can be identified. These additional components are known to be an effect of the 'half-the-mean-heart-rate-dilemma' ('cardiac aliasing' CA). These CA components may contaminate the entire frequency range of HRV and can lead to misinterpretation of the RSA analysis. TvPDC analysis of simulated and clinical data (full-term neonates and sedated patients) reveals these contamination effects and, in addition, the respiration-related CA components can be separated from the RSA component and the Traube–Hering–Mayer wave. It can be concluded that tvPDC can be beneficially applied to avoid misinterpretations in HRV analyses as well as to quantify partial correlative interaction properties between RM and RSA
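
    For reference, partial directed coherence is conventionally defined from the coefficients of a multivariate autoregressive model (the notation below is the standard Baccalá-Sameshima formulation, not copied from the paper; the time-variant version is obtained by letting the coefficient matrices vary with time, e.g. through adaptive estimation):

        X(t) = \sum_{r=1}^{p} A_r X(t-r) + \varepsilon(t), \qquad
        \bar{A}(f) = I - \sum_{r=1}^{p} A_r e^{-i 2\pi f r},

        \pi_{ij}(f) = \frac{\left|\bar{A}_{ij}(f)\right|}
                           {\sqrt{\sum_{k} \left|\bar{A}_{kj}(f)\right|^{2}}},

    so that \pi_{ij}(f) quantifies the directed influence from channel j to channel i at frequency f, normalized over all outflows of j; tvPDC replaces A_r by time-dependent estimates A_r(t).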

  1. RealCalc: a real-time Java calculation tool. Application to HVSR estimation

    Science.gov (United States)

    Hloupis, G.; Vallianatos, F.

    2009-04-01

    The Java computation platform is not a newcomer in the seismology field. It is mainly used for applications concerned with collecting, requesting, distributing and visualizing seismological data, because it is productive, safe and has low maintenance costs. Although it has very attractive characteristics for engineers, Java has not been used frequently in real-time applications, where predictability and reliability are required in reaction to real-world events. The main reasons for this are the absence of priority support (such as priority ceiling protocols to avoid priority inversion) and the use of automated memory management (the garbage collector). To overcome these problems a number of extensions have been proposed, with the Real Time Specification for Java (RTSJ) being the most promising and widely used one. In the current study we used the RTSJ to build an application that receives data continuously and provides estimations in real time. The application consists of four main modules: incoming data, preprocessing, estimation and publication. As an application example we present real-time HVSR estimation. Microtremor recordings are collected continuously by the incoming data module. The preprocessing module consists of a wavelet-based window selector tool which is applied to the incoming data stream in order to derive the most stationary parts. The estimation module provides all the necessary calculations according to user specifications. Finally, the publication module, in addition to presenting the results, calculates attributes and relevant statistics for each site (temporal variations, HVSR stability). Acknowledgements: This work is partially supported by the Greek General Secretariat of Research and Technology in the frame of Crete Regional Project 2000-2006 (M1.2): "TALOS: An integrated system of seismic hazard monitoring and management in the front of the Hellenic Arc", CRETE PEP7 (KP_7).

  2. The Evaluation of Real Time Milk Analyse Result Reliability in the Czech Republic

    Directory of Open Access Journals (Sweden)

    Oto Hanuš

    2016-01-01

    Good reliability of the results of regular analyses of milk composition could improve the health monitoring of dairy cows and herd management. The aim of this study was to analyse the measurement abilities and properties of the RT (Real Time) system (AfiLab = AfiMilk, an NIR (near infrared spectroscopy) measurement unit plus electrical conductivity (C) of milk by conductometry, with AfiFarm calibration and interpretation software) for the analysis of individual milk samples (IMSs). There were 2 × 30 IMSs in the experiment. The reference values (RVs) of milk components and properties (fat (F), proteins (P), lactose (L), C, and the somatic cell count (SCC)) were determined by conventional direct and indirect methods: conductometry (C); infrared spectroscopy, (1) with filter technology and (2) with Fourier transformation (F, P, L); and cell counting, (1) fluoro-opto-electronic in a film on a rotating disc and (2) by flow cytometry (SCC). The AfiLab (alternative) method showed less close relationships to the RVs than the relationships between the reference methods. This was expected. However, these relationships (r) were mostly significant: F from .597 to .738 (P ≤ 0.01 and ≤ 0.001); P from .284 to .787 (P > 0.05 and P ≤ 0.001); C .773 (P ≤ 0.001). Correlations (r) were not significant (P > 0.05) for L, from −.013 to .194, and SCC, from −.148 to −.133. Variability of the RVs explained the following percentages of variability in the AfiLab results: F up to 54.4%; P up to 61.9%; L only 3.8%; C up to 59.7%. The explanatory power (reliability) of AfiLab results for the animal increases with the regularity of their measurement (the principle of real-time application). Correlation values r (x minus 1.64 × sd, for a one-sided confidence interval at the 95% level) can be used for an alternative method in assessing calibration quality. These limits are F 0.564, P 0.784 and C 0.715, and they can be essential for the further implementation of this advanced technology of dairy herd management.

  3. Test-retest reliability of stride time variability while dual tasking in healthy and demented adults with frontotemporal degeneration

    Directory of Open Access Journals (Sweden)

    Herrmann Francois R

    2011-07-01

    Abstract. Background: Although the test-retest reliability of mean values of spatio-temporal gait parameters has been assessed while walking alone (i.e., single tasking), little is known about the test-retest reliability of stride time variability (STV) while performing an attention-demanding task (i.e., dual tasking). The objective of this study was to examine the immediate test-retest reliability of STV while single and dual tasking in cognitively healthy older individuals (CHI) and in demented patients with frontotemporal degeneration (FTD). Methods: Based on a cross-sectional design, 69 community-dwelling CHI (mean age 75.5 ± 4.3 years; 43.5% women) and 14 demented patients with FTD (mean age 65.7 ± 9.8 years; 6.7% women) walked alone (without performing an additional task; i.e., single tasking) and while counting backward (CB) aloud starting from 50 (i.e., dual tasking). Each subject completed two trials for all the testing conditions. The mean value and the coefficient of variation (CoV) of stride time while walking alone and while CB at self-selected walking speed were measured using GAITRite® and SMTEC® footswitch systems. Results: ICCs of the mean value in CHI under both walking conditions were higher than those of demented patients with FTD and indicated perfect reliability (ICC > 0.80). Reliability of the mean value was better while single tasking than while dual tasking in CHI (ICC = 0.96 under single task and ICC = 0.86 under dual task), whereas the opposite held in demented patients (ICC = 0.65 under single task and ICC = 0.81 under dual task). ICCs of the CoV were slight to poor whatever the group of participants and the walking condition. Conclusions: The immediate test-retest reliability of the mean value of stride time in single and dual tasking was good in older CHI as well as in demented patients with FTD. In contrast, the reliability of stride time variability was low in both groups of participants.

  4. Serum Total Tryptase Level Confirms Itself as a More Reliable Marker of Mast Cells Burden in Mast Cell Leukaemia (Aleukaemic Variant)

    Directory of Open Access Journals (Sweden)

    P. Savini

    2015-01-01

    Mast cell leukemia (MCL) is a very rare form of systemic mastocytosis (SM) with a short median survival of 6 months. We describe a case of a 65-year-old woman with the aleukaemic variant of MCL and a very high serum total tryptase level of 2255 μg/L at diagnosis, which occurred following an episode of hypotensive shock. She fulfilled the diagnostic criteria of SM, with a bone marrow smear infiltration of 50-60% of atypical mast cells (MCs). She tested negative for the KIT D816V mutation, without any sign of organ damage (no B- or C-findings) and with only a few mediator-related symptoms. She was treated with antihistamine alone and then with imatinib after the appearance of anemia. She maintained a stable tryptase level and a very indolent clinical course for twenty-two months; then, she suddenly progressed to acute MCL with a serum tryptase level of up to 12960 μg/L. The patient died due to haemorrhagic diathesis twenty-four months after diagnosis. This clinical case may represent an example of the chronic form of mast cell leukemia, described as an unpredictable disease, in which the serum total tryptase level has confirmed itself as a reliable marker of mast cell burden regardless of the presence of other signs or symptoms.

  5. Optimally Fortifying Logic Reliability through Criticality Ranking

    Directory of Open Access Journals (Sweden)

    Yu Bai

    2015-02-01

    With CMOS technology aggressively scaling towards the 22-nm node, modern FPGA devices face tremendous aging-induced reliability challenges due to bias temperature instability (BTI) and hot carrier injection (HCI). This paper presents a novel anti-aging technique at the logic level that is both scalable and applicable for VLSI digital circuits implemented with FPGA devices. The key idea is to prolong the lifetime of FPGA-mapped designs by strategically elevating the VDD values of some LUTs based on their modular criticality values. Although the idea of scaling VDD in order to improve either energy efficiency or circuit reliability has been explored extensively, our study distinguishes itself by approaching this challenge through an analytical procedure, and is therefore able to maximize the overall reliability of the target FPGA design by rigorously modeling the BTI-induced device reliability and optimally solving the VDD assignment problem. Specifically, we first develop a systematic framework to analytically model the reliability of an FPGA LUT (look-up table), which consists of both RAM memory bits and the associated switching circuit. We also, for the first time, establish the relationship between signal transition density and a LUT's reliability in an analytical way. This key observation further motivates us to define the modular criticality as the product of the signal transition density and the logic observability of each LUT. Finally, we analytically prove, for the first time, that the optimal way to improve the overall reliability of a whole FPGA device is to fortify individual LUTs according to their modular criticality. To the best of our knowledge, this work is the first to draw such a conclusion.
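
    A minimal sketch of the criticality-ranked fortification step is given below; the per-LUT transition densities, observabilities, VDD levels, and budget are hypothetical placeholders for quantities the paper derives analytically.

        # name: (signal transition density, logic observability)
        luts = {"lut0": (0.42, 0.90), "lut1": (0.10, 0.30),
                "lut2": (0.35, 0.70), "lut3": (0.05, 0.95)}

        # modular criticality = transition density x observability
        criticality = {n: d * o for n, (d, o) in luts.items()}

        budget = 2  # how many LUTs we can afford to run at elevated VDD
        elevated = sorted(criticality, key=criticality.get, reverse=True)[:budget]
        vdd = {n: (1.00 if n in elevated else 0.85) for n in luts}  # volts
        print(criticality, vdd)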

  6. Component reliability analysis for development of component reliability DB of Korean standard NPPs

    International Nuclear Information System (INIS)

    Choi, S. Y.; Han, S. H.; Kim, S. H.

    2002-01-01

    Reliability data for Korean NPPs that reflect plant-specific characteristics are necessary for PSA and risk-informed applications. We have performed a project to develop a component reliability DB and to calculate component reliability measures such as failure rate and unavailability. We collected component operation data and failure/repair data of Korean standard NPPs. We analyzed the failure data by developing a data analysis method that reflects the domestic data situation. We then compared the reliability results with generic data for foreign NPPs.

  7. Persistent trigeminal artery/persistent trigeminal artery variant and coexisting variants of the head and neck vessels diagnosed using 3 T MRA

    International Nuclear Information System (INIS)

    Bai, M.; Guo, Q.; Li, S.

    2013-01-01

    Aim: To report the prevalence and characteristic features of persistent trigeminal artery (PTA), PTA variant (PTAV), and other variants of the head and neck vessels, identified using magnetic resonance angiography (MRA). Materials and methods: The three-dimensional (3D) time-of-flight (TOF) MRA and 3D contrast-enhanced (CE) MRA images of 6095 consecutive patients who underwent 3 T MRA at Liaocheng People's Hospital from 1 September 2008 through 31 May 2012 were retrospectively reviewed and analysed. Thirty-two patients were excluded because of suboptimal image quality or internal carotid artery (ICA) occlusion. Results: The prevalence of both PTA and PTAV was 0.63% (PTA, 26 cases; PTAV, 12 cases). The prevalence of coexisting variants of the head and neck vessels in cases of PTA/PTAV was 52.6% (20 of 38 cases). The vascular variants that coexisted with PTA/PTAV were as follows: the intracranial arteries varied in 10 cases, the origin of the supra-aortic arteries varied in nine cases, the vertebral artery (VA) varied in 14 cases, and six cases displayed fenestrations. Fifteen of the 20 cases contained more than two types of variants. Conclusion: The prevalence of both PTA and PTAV was 0.63%. Although PTA and PTAV are rare vascular variants, they frequently coexist with other variants of the head and neck vessels. Multiple vascular variations can coexist in a single patient. Recognizing PTA, PTAV, and other variants of the head and neck vessels is crucial when planning a neuroradiological intervention or surgery. Recognizing the medial PTA is very important in clinical practice when performing trans-sphenoidal surgery on the pituitary, as failure to do so could result in massive haemorrhage.

  8. Energy/Reliability Trade-offs in Fault-Tolerant Event-Triggered Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Gan, Junhe; Gruian, Flavius; Pop, Paul

    2011-01-01

    …task, such that transient faults are tolerated, the timing constraints of the application are satisfied, and the energy consumed is minimized. Tasks are scheduled using fixed-priority preemptive scheduling, while replication is used for recovery from multiple transient faults. Addressing energy and reliability simultaneously is especially challenging, since lowering the voltage to reduce the energy consumption has been shown to increase the transient fault rate. We present a Tabu Search-based approach which uses an energy/reliability trade-off model to find reliable and schedulable implementations…

  9. Reliability of Interaural Time Difference-Based Localization Training in Elderly Individuals with Speech-in-Noise Perception Disorder.

    Science.gov (United States)

    Delphi, Maryam; Lotfi, M-Yones; Moossavi, Abdollah; Bakhshi, Enayatollah; Banimostafa, Maryam

    2017-09-01

    Previous studies have shown that interaural time difference (ITD) training can improve localization ability. Surprisingly little is, however, known about localization training vis-à-vis speech perception in noise based on interaural time differences in the envelope (ITD ENV). We sought to investigate the reliability of an ITD ENV-based training program for speech-in-noise perception among elderly individuals with normal hearing and speech-in-noise perception disorder. The present interventional study was performed during 2016. Sixteen elderly men between 55 and 65 years of age with a clinical diagnosis of normal hearing up to 2000 Hz and speech-in-noise perception disorder participated in this study. The localization training program was based on changes in ITD ENV. In order to evaluate the reliability of the training program, we performed speech-in-noise tests before the training program, immediately afterward, and at 2 months' follow-up. The reliability of the training program was analyzed using the Friedman test in the SPSS software. Statistically significant differences were shown in the mean scores of speech-in-noise perception between the 3 time points (P=0.001). The results also indicated no difference in the mean scores of speech-in-noise perception between the 2 time points of immediately after the training program and 2 months' follow-up (P=0.212). The present study showed the reliability of ITD ENV-based localization training in elderly individuals with speech-in-noise perception disorder.

  10. Significance of functional disease-causal/susceptible variants identified by whole-genome analyses for the understanding of human diseases.

    Science.gov (United States)

    Hitomi, Yuki; Tokunaga, Katsushi

    2017-01-01

    Human genome variation may cause differences in traits and disease risks. Disease-causal/susceptible genes and variants for both common and rare diseases can be detected by comprehensive whole-genome analyses, such as whole-genome sequencing (WGS) using next-generation sequencing (NGS) technology, and genome-wide association studies (GWAS). Here, in addition to the application of NGS as a whole-genome analysis method, we summarize approaches for the identification of functional disease-causal/susceptible variants among the abundant genetic variants in the human genome, and methods for evaluating their functional effects in human diseases, using NGS together with in silico and in vitro functional analyses. We also discuss the clinical application of functional disease-causal/susceptible variants to personalized medicine.

  11. An application of characteristic function in order to predict reliability and lifetime of aeronautical hardware

    Energy Technology Data Exchange (ETDEWEB)

    Żurek, Józef; Kaleta, Ryszard; Zieja, Mariusz [Air Force Institute of Technology ul. Księcia Bolesława 6 01-494 Warsaw (Poland)

    2016-06-08

    The forecasting of the reliability and life of aeronautical hardware requires recognition of the many and various destructive processes that deteriorate its health/maintenance status. The aging of the technical components of an aircraft as an armament system is of outstanding significance to the reliability and safety of the whole system. The aging process is usually induced by many different factors: mechanical, biological, climatic, or chemical ones. Aging is an irreversible process and considerably reduces the reliability and lifetime of aeronautical equipment. Application of the characteristic function of the aging process is suggested to predict the reliability and lifetime of aeronautical hardware. An increment in the values of diagnostic parameters is introduced; then, using the characteristic function and after some rearrangements, a partial differential equation is formulated. An analytical expression for the characteristic function of the aging process is a solution to this equation. With the inverse transformation applied, the density function of the aging of aeronautical hardware is found. Having found the density function, one can determine the equipment's reliability and lifetime. Data collected in service or delivered by life tests are used to attain this goal. Coefficients in this relationship are found using the likelihood function.

  12. An application of characteristic function in order to predict reliability and lifetime of aeronautical hardware

    International Nuclear Information System (INIS)

    Żurek, Józef; Kaleta, Ryszard; Zieja, Mariusz

    2016-01-01

    The forecasting of the reliability and life of aeronautical hardware requires recognition of the many and various destructive processes that deteriorate its health/maintenance status. The aging of the technical components of an aircraft as an armament system is of outstanding significance to the reliability and safety of the whole system. The aging process is usually induced by many different factors: mechanical, biological, climatic, or chemical ones. Aging is an irreversible process and considerably reduces the reliability and lifetime of aeronautical equipment. Application of the characteristic function of the aging process is suggested to predict the reliability and lifetime of aeronautical hardware. An increment in the values of diagnostic parameters is introduced; then, using the characteristic function and after some rearrangements, a partial differential equation is formulated. An analytical expression for the characteristic function of the aging process is a solution to this equation. With the inverse transformation applied, the density function of the aging of aeronautical hardware is found. Having found the density function, one can determine the equipment's reliability and lifetime. Data collected in service or delivered by life tests are used to attain this goal. Coefficients in this relationship are found using the likelihood function.

  13. A Stochastic Reliability Model for Application in a Multidisciplinary Optimization of a Low Pressure Turbine Blade Made of Titanium Aluminide

    Directory of Open Access Journals (Sweden)

    Christian Dresbach

    Currently, there are many research activities dealing with gamma titanium aluminide (γ-TiAl) alloys as new materials for low pressure turbine (LPT) blades. Even though the scatter in mechanical properties of such intermetallic alloys is more pronounced than in conventional metallic alloys, stochastic investigations of γ-TiAl alloys are very rare. For this reason, we analyzed the scatter in static and dynamic mechanical properties of the cast alloy Ti-48Al-2Cr-2Nb. It was found that this alloy shows a size effect in strength which is less pronounced than the size effect of brittle materials. A weakest-link approach is enhanced to describe a scalable size effect under multiaxial stress states and implemented in a post-processing tool for the reliability analysis of real components. The presented approach is a first applicable reliability model for semi-brittle materials. The developed reliability tool was integrated into a multidisciplinary optimization of the geometry of an LPT blade. Some processes of the optimization were distributed in a wide area network, so that specialized tools for each discipline could be employed. The optimization results show that it is possible to increase the aerodynamic efficiency and the structural mechanics reliability at the same time, while ensuring the blade can be manufactured in an investment casting process.
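
    The classical weakest-link (Weibull) relation that such approaches extend ties the failure probability to a stress integral over the component volume; in generic notation (an assumption for illustration, not the paper's enhanced multiaxial formulation):

        P_f = 1 - \exp\!\left[ -\frac{1}{V_0} \int_{V}
              \left( \frac{\sigma_{\mathrm{eq}}(\mathbf{x})}{\sigma_0} \right)^{m}
              \mathrm{d}V \right],

    so that under uniform uniaxial stress the characteristic strength scales as \sigma \propto (V_0/V)^{1/m}. A size effect less pronounced than in brittle materials, as reported for Ti-48Al-2Cr-2Nb, corresponds to a weaker volume dependence, which is what a "scalable" size-effect enhancement parameterizes.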

  14. Reliability evaluation of deregulated electric power systems for planning applications

    International Nuclear Information System (INIS)

    Ehsani, A.; Ranjbar, A.M.; Jafari, A.; Fotuhi-Firuzabad, M.

    2008-01-01

    In a deregulated electric power utility industry, in which a competitive electricity market can influence system reliability, market risks cannot be ignored. This paper (1) proposes an analytical probabilistic model for reliability evaluation of competitive electricity markets and (2) develops a methodology for incorporating the market reliability problem into HLII reliability studies. A Markov state space diagram is employed to evaluate the market reliability. Since the market is a continuously operated system, the concept of absorbing states is applied to it in order to evaluate the reliability. The market states are identified by using market performance indices, and the transition rates are calculated by using historical data. The key point in the proposed method is the concept that the reliability level of a restructured electric power system can be calculated using the availability of the composite power system (HLII) and the reliability of the electricity market. Two case studies are carried out on the Roy Billinton Test System (RBTS) to illustrate interesting features of the proposed methodology.
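
    A minimal sketch of the underlying Markov computation: given a generator matrix of transition rates estimated from historical market data, the long-run state probabilities solve pQ = 0 with the probabilities summing to one. The three-state model and the rate values below are illustrative assumptions only, not the paper's market model.

        import numpy as np

        # states: 0 = normal, 1 = stressed, 2 = failed; rates per hour
        Q = np.array([[-0.02,  0.015,  0.005],
                      [ 0.10, -0.14,   0.04 ],
                      [ 0.50,  0.00,  -0.50 ]])

        # steady state: solve p Q = 0 together with sum(p) = 1
        A = np.vstack([Q.T, np.ones(3)])
        b = np.array([0.0, 0.0, 0.0, 1.0])
        p, *_ = np.linalg.lstsq(A, b, rcond=None)
        print(p, "market availability:", p[0] + p[1])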

  15. Medical device reliability and associated areas

    National Research Council Canada - National Science Library

    Dhillon, Balbir S

    2000-01-01

    …Although the history of reliability engineering can be traced back to World War II, the application of reliability engineering concepts to medical devices is a fairly recent idea that goes back to the latter part of the 1960s, when many publications on medical device reliability emerged. Today, a large number of books on general reliability have been…

  16. NDE performance demonstration in the US nuclear power industry - applications, costs, lessons learned, and connection to NDE reliability

    International Nuclear Information System (INIS)

    Ammirato, F.

    1997-01-01

    Periodic inservice inspection (ISI) of nuclear power plant components is performed in the United States to satisfy legal commitments and to provide plant owners with reliable information for managing degradation. Performance demonstration provides credible evidence that ISI will fulfill its objectives. This paper examines the technical requirements for inspection and discusses how these technical needs are used to develop effective performance demonstration applications. NDE reliability is discussed with particular reference to its role in structural integrity assessments and its connection with performance demonstration. It is shown that the role of NDE reliability can range from very small to critical depending on the particular application and must be considered carefully in design of inspection techniques and performance demonstration programs used to qualify the inspection. Finally, the costs, benefits, and problems associated with performance demonstration are reviewed along with lessons learned from more than 15 years of performance demonstration experience in the US. (orig.)

  17. Application of Cold Chain Logistics Safety Reliability in Fresh Food Distribution Optimization

    OpenAIRE

    Zou Yifeng; Xie Ruhe

    2013-01-01

    Because the safety of fresh food continuously decreases during the distribution process, this study applied the safety reliability of food cold chain logistics to establish a routing optimization model for fresh food distribution with time windows, and solved the model using the MAX-MIN Ant System (MMAS), illustrated with a case analysis. The study shows that the proposed model and algorithm can effectively solve the problem of fresh food distribution routing optimization with time windows.

  18. Reliability Engineering for Service Oriented Architectures

    Science.gov (United States)

    2013-02-01

    Glossary excerpt: CORBA — Common Object Request Broker Architecture. Ecosystem — in software, an ecosystem is a set of applications and/or services that gradually build up over time… ESB — Enterprise Service Bus. Foreign — in an SOA context: any SOA, service or software which the owners of the calling software do not have control of, either… SOA — Service Oriented Architecture. SRE — Software Reliability Engineering. System Mode — many systems exhibit different modes of operation, e.g. the cockpit…

  19. RELIABILITY MODELING BASED ON INCOMPLETE DATA: OIL PUMP APPLICATION

    Directory of Open Access Journals (Sweden)

    Ahmed HAFAIFA

    2014-07-01

    Reliability analysis for industrial maintenance is now increasingly demanded by industry worldwide. Indeed, modern manufacturing facilities are equipped with data acquisition and monitoring systems, and these systems generate a large volume of data. These data can be used to inform future decisions affecting the health of the facilities and the state of the exploited equipment. However, in most practical cases the data used in reliability modelling are incomplete or not reliable. In this context, this work examines and treats incomplete, incorrect or aberrant data in the reliability modeling of an oil pump. The objective of this paper is to propose a suitable methodology for replacing the incomplete data using a regression method.
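
    The regression-based replacement of incomplete records can be sketched as follows: fit a regression on the complete rows, then predict the missing entries. The columns and values are hypothetical pump data, and ordinary least squares stands in for whatever regression form the paper adopts.

        import numpy as np

        # columns: operating hours, vibration level, time-to-failure (h);
        # NaN marks an incomplete time-to-failure entry
        data = np.array([[1200., 2.1, 340.],
                         [2500., 3.4, 210.],
                         [1800., 2.8, np.nan],
                         [3100., 4.0, 150.],
                         [ 900., 1.7, 420.]])

        complete = ~np.isnan(data[:, 2])
        X = np.c_[np.ones(complete.sum()), data[complete, :2]]  # with intercept
        y = data[complete, 2]
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # fit on complete rows only

        missing = np.isnan(data[:, 2])
        data[missing, 2] = np.c_[np.ones(missing.sum()), data[missing, :2]] @ beta
        print(data)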

  20. Improvement of the reliability graph with general gates to analyze the reliability of dynamic systems that have various operation modes

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Seung Ki [Div. of Research Reactor System Design, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); No, Young Gyu; Seong, Poong Hyun [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2016-04-15

    The safety of nuclear power plants is analyzed by probabilistic risk assessment, in which fault tree analysis, together with event tree analysis, is the most widely used method. One of the well-known disadvantages of the fault tree is that drawing a fault tree for a complex system is a very cumbersome task. Thus, several graphical modeling methods have been proposed for the convenient and intuitive modeling of complex systems. In this paper, the reliability graph with general gates (RGGG) method, one of the intuitive graphical modeling methods based on Bayesian networks, is improved for the reliability analysis of dynamic systems whose operation modes vary with time. A reliability matrix is proposed, and it is explained how to utilize the reliability matrix in the RGGG for various cases of operation mode changes. The proposed RGGG with a reliability matrix provides convenient and intuitive modeling of the various operation modes of complex systems, and can also be utilized with dynamic nodes that analyze the failure sequences of subcomponents. The combinatorial use of a reliability matrix with dynamic nodes is illustrated through an application to a shutdown cooling system in a nuclear power plant.

  1. Improvement of the reliability graph with general gates to analyze the reliability of dynamic systems that have various operation modes

    International Nuclear Information System (INIS)

    Shin, Seung Ki; No, Young Gyu; Seong, Poong Hyun

    2016-01-01

    The safety of nuclear power plants is analyzed by probabilistic risk assessment, in which fault tree analysis, together with event tree analysis, is the most widely used method. One of the well-known disadvantages of the fault tree is that drawing a fault tree for a complex system is a very cumbersome task. Thus, several graphical modeling methods have been proposed for the convenient and intuitive modeling of complex systems. In this paper, the reliability graph with general gates (RGGG) method, one of the intuitive graphical modeling methods based on Bayesian networks, is improved for the reliability analysis of dynamic systems whose operation modes vary with time. A reliability matrix is proposed, and it is explained how to utilize the reliability matrix in the RGGG for various cases of operation mode changes. The proposed RGGG with a reliability matrix provides convenient and intuitive modeling of the various operation modes of complex systems, and can also be utilized with dynamic nodes that analyze the failure sequences of subcomponents. The combinatorial use of a reliability matrix with dynamic nodes is illustrated through an application to a shutdown cooling system in a nuclear power plant

  2. Key attributes of the SAPHIRE risk and reliability analysis software for risk-informed probabilistic applications

    International Nuclear Information System (INIS)

    Smith, Curtis; Knudsen, James; Kvarfordt, Kellie; Wood, Ted

    2008-01-01

    The Idaho National Laboratory is a primary developer of probabilistic risk and reliability analysis (PRRA) tools, dating back over 35 years. Evolving from mainframe-based software, the current state of the practice has led to the creation of the SAPHIRE software. Currently, agencies such as the Nuclear Regulatory Commission, the National Aeronautics and Space Administration, the Department of Energy, and the Department of Defense use version 7 of the SAPHIRE software for many of their risk-informed activities. In order to better understand and appreciate the power of software as part of risk-informed applications, we need to recall that our current analysis and solution methods are built upon pioneering work done 30-40 years ago. We contrast this work with the current capabilities of the SAPHIRE analysis package. As part of this discussion, we provide information on both the typical features and the special analysis capabilities that are available. We also present the applications and results typically found with state-of-the-practice PRRA models. By providing both a high-level and a detailed look at the SAPHIRE software, we give a snapshot in time of the current use of software tools in a risk-informed decision arena.

  3. Reliability of Smartphone-Based Instant Messaging Application for Diagnosis, Classification, and Decision-making in Pediatric Orthopedic Trauma.

    Science.gov (United States)

    Stahl, Ido; Katsman, Alexander; Zaidman, Michael; Keshet, Doron; Sigal, Amit; Eidelman, Mark

    2017-07-11

    Smartphones have the ability to capture and send images, and their use has become common in the emergency setting for transmitting radiographic images with the intent to consult an off-site specialist. Our objective was to evaluate the reliability of smartphone-based instant messaging applications for the evaluation of various pediatric limb traumas, as compared with the standard method of viewing images on a workstation-based picture archiving and communication system (PACS). X-ray images of 73 representative cases of pediatric limb trauma were captured and transmitted to 5 pediatric orthopedic surgeons via the WhatsApp instant messaging application on an iPhone 6 smartphone. Evaluators were asked to diagnose, classify, and determine the course of treatment for each case on their personal smartphones. Following a 4-week interval, re-evaluation was conducted using the PACS. Intraobserver agreement was calculated for overall agreement and per fracture site. The overall results indicate "near perfect agreement" between interpretations of the radiographs on smartphones compared with the computer-based PACS, with κ of 0.84, 0.82, and 0.89 for diagnosis, classification, and treatment planning, respectively. Looking at the results per fracture site, we also found substantial to near perfect agreement. Smartphone-based instant messaging applications are reliable for the evaluation of a wide range of pediatric limb fractures. This method of obtaining an expert opinion from an off-site specialist is immediately accessible and inexpensive, making smartphones a powerful tool for doctors in the emergency department, primary care clinics, or remote medical centers, enabling timely and appropriate treatment for the injured child. This method is not a substitute for evaluation of the images in the standard method on a computer-based PACS, which should be performed before final decision-making.
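
    Agreement statistics of the kind reported here can be reproduced in a few lines; the sketch below computes Cohen's kappa for paired categorical readings (the diagnosis labels are invented for illustration, and the study's exact kappa variant for multiple raters is not stated in the record).

        import numpy as np

        def cohens_kappa(a, b, categories):
            # a, b: paired ratings, e.g. smartphone vs. workstation PACS reads
            n = len(a)
            idx = {c: i for i, c in enumerate(categories)}
            table = np.zeros((len(categories), len(categories)))
            for x, y in zip(a, b):
                table[idx[x], idx[y]] += 1
            po = np.trace(table) / n                            # observed agreement
            pe = (table.sum(0) * table.sum(1)).sum() / n ** 2   # chance agreement
            return (po - pe) / (1 - pe)

        smartphone = ["buckle", "greenstick", "complete", "buckle", "complete"]
        pacs       = ["buckle", "greenstick", "complete", "greenstick", "complete"]
        print(cohens_kappa(smartphone, pacs, ["buckle", "greenstick", "complete"]))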

  4. The Saccharomyces Genome Database Variant Viewer.

    Science.gov (United States)

    Sheppard, Travis K; Hitz, Benjamin C; Engel, Stacia R; Song, Giltae; Balakrishnan, Rama; Binkley, Gail; Costanzo, Maria C; Dalusag, Kyla S; Demeter, Janos; Hellerstedt, Sage T; Karra, Kalpana; Nash, Robert S; Paskov, Kelley M; Skrzypek, Marek S; Weng, Shuai; Wong, Edith D; Cherry, J Michael

    2016-01-04

    The Saccharomyces Genome Database (SGD; http://www.yeastgenome.org) is the authoritative community resource for the Saccharomyces cerevisiae reference genome sequence and its annotation. In recent years, we have moved toward increased representation of sequence variation and allelic differences within S. cerevisiae. The publication of numerous additional genomes has motivated the creation of new tools for their annotation and analysis. Here we present the Variant Viewer: a dynamic open-source web application for the visualization of genomic and proteomic differences. Multiple sequence alignments have been constructed across high quality genome sequences from 11 different S. cerevisiae strains and stored in the SGD. The alignments and summaries are encoded in JSON and used to create a two-tiered dynamic view of the budding yeast pan-genome, available at http://www.yeastgenome.org/variant-viewer. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. Factor VII deficiency: a novel missense variant and genotype-phenotype correlation in patients from Southern Italy.

    Science.gov (United States)

    Tiscia, Giovanni; Favuzzi, Giovanni; Chinni, Elena; Colaizzo, Donatella; Fischetti, Lucia; Intrieri, Mariano; Margaglione, Maurizio; Grandone, Elvira

    2017-01-01

    This study aimed to correlate genotype and phenotype in factor VII deficiency. Here, we present molecular and clinical findings of 10 patients with factor VII deficiency. From 2013 to 2016, 10 subjects were referred to our center because of a prolonged prothrombin time identified during routine or presurgery examinations or after a laboratory assessment of a bleeding episode. Mutation characterization was performed using the bioinformatics applications PROMO, SIFT, and PolyPhen-2. Structural changes in the factor VII protein were analyzed using the SPDB viewer tool. Of the 10 variants we identified, 1 was responsible for a novel missense change (c.1199G>C, p.Cys400Ser); in 2 cases we identified the c.-54G>A and c.509G>A (p.Arg170His) polymorphic variants in the 5'-upstream region of the factor VII gene and exon 6, respectively. To our knowledge, neither of these polymorphic variants has been described previously in factor VII-deficient patients. In silico predictions showed differences in binding sites for transcription factors caused by the c.-54G>A variant and a probable damaging effect of the p.Cys400Ser missense change on factor VII active conformation, leading to breaking of the Cys400-Cys428 disulfide bridge. Our findings further suggest that, independently of factor VII levels and of variants potentially affecting factor VII levels, environmental factors, e.g., trauma, could heavily influence the clinical phenotype of factor VII-deficient patients.

  6. Electre III method in assessment of variants of integrated urban public transport system in Cracow

    Directory of Open Access Journals (Sweden)

    Katarzyna SOLECKA

    2014-12-01

    There are many methods currently used for the assessment of urban public transport system development and operation, e.g., economic analyses (mostly Cost-Benefit Analysis, CBA, and Cost-Effectiveness Analysis, CEA), hybrid methods, measurement methods (e.g., surveys among passengers and measurements of traffic volume, vehicle capacity, etc.), and multicriteria decision aiding methods (multicriteria analysis). The main aim of multicriteria analysis is the choice of the most desirable solution from among alternative variants according to different criteria that are difficult to compare against one another. There are several multicriteria methods for the assessment of urban public transport system development and operation, e.g., AHP, ANP, Electre, Promethee, and Oreste. The paper presents an application of one of the most popular variant ranking methods, the Electre III method. The algorithm of Electre III is presented in detail, and its application to the assessment of variants of urban public transport system integration in Cracow is then shown. The final ranking of eight variants of integration of the urban public transport system in Cracow (from the best to the worst variant) was drawn up with the application of the Electre III method. For assessment purposes, 10 criteria were adopted: economic, technical, environmental, and social; they form a consistent criteria family. The problem was analyzed taking into account different points of view: city authorities, public transport operators, city units responsible for transport management, passengers, and other users. Separate preference models were created for all stakeholders.
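
    For readers unfamiliar with the mechanics, the core of Electre III is a pairwise concordance test with indifference and preference thresholds. The sketch below illustrates the standard partial and global concordance indices; the criteria values, weights, and thresholds are invented for illustration and are not taken from the Cracow study.

    ```python
    def partial_concordance(ga, gb, q, p):
        """Partial concordance c_j(a, b) for one maximised criterion:
        degree of support for 'a is at least as good as b', given
        indifference threshold q and preference threshold p (q <= p)."""
        if ga + q >= gb:          # difference within the indifference band
            return 1.0
        if ga + p <= gb:          # b strictly preferred: no support
            return 0.0
        return (ga + p - gb) / (p - q)  # linear interpolation in between

    def global_concordance(a, b, weights, q, p):
        """Weighted aggregation over all criteria (all maximised here)."""
        num = sum(w * partial_concordance(ga, gb, qj, pj)
                  for ga, gb, w, qj, pj in zip(a, b, weights, q, p))
        return num / sum(weights)

    # Toy comparison of two transport-integration variants on 3 criteria.
    a = [7.0, 5.0, 8.0]
    b = [6.5, 6.0, 8.0]
    print(global_concordance(a, b, weights=[0.5, 0.3, 0.2],
                             q=[0.5, 0.5, 0.5], p=[2.0, 2.0, 2.0]))  # 0.9
    ```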

  7. Protein variants in Hiroshima and Nagasaki: tales of two cities.

    Science.gov (United States)

    Neel, J V; Satoh, C; Smouse, P; Asakawa, J; Takahashi, N; Goriki, K; Fujita, M; Kageoka, T; Hazama, R

    1988-12-01

    The results of 1,465,423 allele product determinations based on blood samples from Hiroshima and Nagasaki, involving 30 different proteins representing 32 different gene products, are analyzed in a variety of ways, with the following conclusions: (1) Sibships and their parents are included in the sample. Our analysis reveals that statistical procedures designed to reduce the sample to equivalent independent genomes do not, in population comparisons, compensate for the familial cluster effect of rare variants. Accordingly, the data set was reduced to one representative of each sibship (937,427 allele products). (2) Both chi-square-type contrasts and a genetic distance measure (delta) reveal that rare variants (P < .01) are collectively as effective as polymorphisms in establishing genetic differences between the two cities. (3) We suggest that rare variants that individually exhibit significant intercity differences are probably the legacy of tribal private polymorphisms that occurred during prehistoric times. (4) Despite the great differences in the known histories of the two cities, both the overall frequency of rare variants and the number of different rare variants are essentially identical in the two cities. (5) The well-known differences in locus variability are confirmed, now after adjustment for sample size differences for the various locus products; in this large series we failed to detect variants at only three of 29 loci for which the sample size exceeded 23,000. (6) The number of alleles identified per locus correlates positively with subunit molecular weight. (7) Loci supporting genetic polymorphisms are characterized by more rare variants than are loci at which polymorphisms were not encountered. (8) Loci whose products do not appear to be essential for health support more variants than do loci the absence of whose product is detrimental to health. (9) There is a striking excess of rare variants over the expectation under the neutral mutation

  8. Incorporating temporal variation in seabird telemetry data: time variant kernel density models

    Science.gov (United States)

    Gilbert, Andrew; Adams, Evan M.; Anderson, Carl; Berlin, Alicia; Bowman, Timothy D.; Connelly, Emily; Gilliland, Scott; Gray, Carrie E.; Lepage, Christine; Meattey, Dustin; Montevecchi, William; Osenkowski, Jason; Savoy, Lucas; Stenhouse, Iain; Williams, Kathryn

    2015-01-01

    A key component of the Mid-Atlantic Baseline Studies project was tracking the individual movements of focal marine bird species (Red-throated Loon [Gavia stellata], Northern Gannet [Morus bassanus], and Surf Scoter [Melanitta perspicillata]) through the use of satellite telemetry. This element of the project was a collaborative effort with the Department of Energy (DOE), Bureau of Ocean Energy Management (BOEM), the U.S. Fish and Wildlife Service (USFWS), and Sea Duck Joint Venture (SDJV), among other organizations. Satellite telemetry is an effective and informative tool for understanding individual animal movement patterns, allowing researchers to mark an individual once and thereafter follow the movements of the animal in space and time. Aggregating telemetry data from multiple individuals can provide information about the spatial use and temporal movements of populations. Tracking data are three-dimensional, with the first two dimensions, X and Y, ordered along the third dimension, time. GIS software has many capabilities to store, analyze, and visualize the location information, but offers little or no support for visualizing the temporal data, and tools for processing temporal data are lacking. We explored several ways of analyzing the movement patterns using the spatiotemporal data provided by satellite tags. Here, we present the results of one promising method: time-variant kernel density analysis (Keating and Cherry, 2009). The goal of this chapter is to demonstrate new methods in spatial analysis to visualize and interpret tracking data for a large number of individual birds across time in the mid-Atlantic study area and beyond. In this chapter, we placed greater emphasis on analytical methods than on the behavior and ecology of the animals tracked. For more detailed examinations of the ecology and wintering habitat use of the focal species in the mid-Atlantic, see Chapters 20-22.
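
    As a rough illustration of the idea behind time-variant kernel density analysis, the sketch below down-weights each telemetry fix by its temporal distance from the time of interest before accumulating a spatial Gaussian kernel. It is a simplified stand-in for the Keating and Cherry (2009) estimator, and the bandwidths and array shapes are assumptions for illustration only.

    ```python
    import numpy as np

    def tv_kde(grid_xy, fixes_xy, fixes_t, t0, h_space=10.0, h_time=7.0):
        """Density surface at time t0 from telemetry fixes (x, y, t):
        a Gaussian product kernel in space, weighted by temporal
        proximity to t0, in the spirit of Keating and Cherry (2009)."""
        dt = (fixes_t - t0) / h_time
        w = np.exp(-0.5 * dt**2)                      # temporal weights, shape (N,)
        d2 = ((grid_xy[:, None, :] - fixes_xy[None, :, :])**2).sum(-1)
        k = np.exp(-0.5 * d2 / h_space**2)            # spatial kernel, shape (G, N)
        dens = (k * w).sum(axis=1)                    # weighted accumulation
        return dens / dens.sum()                      # normalise over the grid

    # grid_xy: (G, 2) evaluation points; fixes_xy: (N, 2); fixes_t: (N,) days.
    ```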

  9. Design for reliability: NASA reliability preferred practices for design and test

    Science.gov (United States)

    Lalli, Vincent R.

    1994-01-01

    This tutorial summarizes reliability experience from both NASA and industry and reflects engineering practices that support current and future civil space programs. These practices were collected from various NASA field centers and were reviewed by a committee of senior technical representatives from the participating centers (members are listed at the end). The material for this tutorial was taken from the publication issued by the NASA Reliability and Maintainability Steering Committee (NASA Reliability Preferred Practices for Design and Test. NASA TM-4322, 1991). Reliability must be an integral part of the systems engineering process. Although both disciplines must be weighed equally with other technical and programmatic demands, the application of sound reliability principles will be the key to the effectiveness and affordability of America's space program. Our space programs have shown that reliability efforts must focus on the design characteristics that affect the frequency of failure. Herein, we emphasize that these identified design characteristics must be controlled by applying conservative engineering principles.

  10. The Role of Constitutional Copy Number Variants in Breast Cancer

    Science.gov (United States)

    Walker, Logan C.; Wiggins, George A.R.; Pearson, John F.

    2015-01-01

    Constitutional copy number variants (CNVs) include inherited and de novo deviations from a diploid state at a defined genomic region. These variants contribute significantly to genetic variation and disease in humans, including breast cancer susceptibility. Identification of genetic risk factors for breast cancer in recent years has been dominated by the use of genome-wide technologies, such as single nucleotide polymorphism (SNP)-arrays, with a significant focus on single nucleotide variants. To date, these large datasets have been underutilised for generating genome-wide CNV profiles despite offering a massive resource for assessing the contribution of these structural variants to breast cancer risk. Technical challenges remain in determining the location and distribution of CNVs across the human genome due to the accuracy of computational prediction algorithms and resolution of the array data. Moreover, better methods are required for interpreting the functional effect of newly discovered CNVs. In this review, we explore current and future application of SNP array technology to assess rare and common CNVs in association with breast cancer risk in humans. PMID:27600231

  11. Advances in population surveillance for physical activity and sedentary behavior: reliability and validity of time use surveys.

    Science.gov (United States)

    van der Ploeg, Hidde P; Merom, Dafna; Chau, Josephine Y; Bittman, Michael; Trost, Stewart G; Bauman, Adrian E

    2010-11-15

    Many countries conduct regular national time use surveys, some of which date back as far as the 1960s. Time use surveys potentially provide more detailed and accurate national estimates of the prevalence of sedentary and physical activity behavior than more traditional self-report surveillance systems. In this study, the authors determined the reliability and validity of time use surveys for assessing sedentary and physical activity behavior. In 2006 and 2007, participants (n = 134) were recruited from work sites in the Australian state of New South Wales. Participants completed a 2-day time use diary twice, 7 days apart, and wore an accelerometer. The 2 diaries were compared for test-retest reliability, and comparison with the accelerometer determined concurrent validity. Participants with similar activity patterns during the 2 diary periods showed reliability intraclass correlations of 0.74 and 0.73 for nonoccupational sedentary behavior and moderate/vigorous physical activity, respectively. Comparison of the diary with the accelerometer showed Spearman correlations of 0.57-0.59 and 0.45-0.69 for nonoccupational sedentary behavior and moderate/vigorous physical activity, respectively. Time use surveys appear to be more valid for population surveillance of nonoccupational sedentary behavior and health-enhancing physical activity than more traditional surveillance systems. National time use surveys could be used to retrospectively study nonoccupational sedentary and physical activity behavior over the past 5 decades.

  12. Product Variant Master as a Means to Handle Variant Design

    DEFF Research Database (Denmark)

    Hildre, Hans Petter; Mortensen, Niels Henrik; Andreasen, Mogens Myrup

    1996-01-01

    be implemented in the CAD system I-DEAS. A precondition for a high degree of computer support is the identification of a product variant master from which new variants can be derived. This class platform defines how a product build-up fits certain production methods and the rules governing the determination of modules...

  13. Adaptation of the Godin Leisure-Time Exercise Questionnaire into Turkish: The Validity and Reliability Study

    Directory of Open Access Journals (Sweden)

    Emine Sari

    2016-01-01

    This study was conducted to determine whether the Turkish form of the "Leisure-Time Exercise Questionnaire" developed by Godin is a valid and reliable tool for diabetic patients in Turkey. The study was conducted as methodological research on 300 diabetic patients in Turkey. The linguistic equivalence of the questionnaire was assessed through the back-translation method, while its content validity was assessed through expert opinions. Cronbach's alpha was computed to assess the reliability of the questionnaire. The test-retest analysis and the correlation between independent observers were examined. The content validity index (CVI) was found to be .82 according to the expert assessments, and no statistical difference was found between them (Kendall's W=.17, p=.235). Cronbach's alpha was found to be α=.64, the result of the test-retest analysis was r=.97, and the correlation between independent observers (ICC) was .98. This study found that the Turkish form of the Leisure-Time Exercise Questionnaire is a valid and reliable tool that can be used to define and assess the exercise behaviors of Turkish diabetic patients.

  14. Challenges Regarding IP Core Functional Reliability

    Science.gov (United States)

    Berg, Melanie D.; LaBel, Kenneth A.

    2017-01-01

    For many years, intellectual property (IP) cores have been incorporated into field programmable gate array (FPGA) and application specific integrated circuit (ASIC) design flows. However, the use of large, complex IP cores was limited within products that required a high level of reliability. This is no longer the case. IP core insertion has become mainstream, including use in highly reliable products. Due to limited visibility and control, challenges exist when using IP cores that can subsequently compromise product reliability. We discuss these challenges and suggest potential solutions for IP insertion in critical applications.

  15. Reliability and safety engineering

    CERN Document Server

    Verma, Ajit Kumar; Karanki, Durga Rao

    2016-01-01

    Reliability and safety are core issues that must be addressed throughout the life cycle of engineering systems. Reliability and Safety Engineering presents an overview of the basic concepts, together with simple and practical illustrations. The authors present reliability terminology in various engineering fields, viz., electronics engineering, software engineering, mechanical engineering, structural engineering and power systems engineering. The book describes the latest applications in the area of probabilistic safety assessment, such as technical specification optimization, risk monitoring and risk informed in-service inspection. Reliability and safety studies must, inevitably, deal with uncertainty, so the book includes uncertainty propagation methods: Monte Carlo simulation, fuzzy arithmetic, Dempster-Shafer theory and probability bounds. Reliability and Safety Engineering also highlights advances in system reliability and safety assessment including dynamic system modeling and uncertainty management. Cas...

  16. A study of operational and testing reliability in software reliability analysis

    International Nuclear Information System (INIS)

    Yang, B.; Xie, M.

    2000-01-01

    Software reliability is an important aspect of any complex equipment today. Software reliability is usually estimated based on reliability models such as nonhomogeneous Poisson process (NHPP) models. Software systems improve during the testing phase, but normally do not change in the operational phase. Depending on whether the reliability is to be predicted for the testing phase or the operational phase, different measures should be used. In this paper, two different reliability concepts, namely the operational reliability and the testing reliability, are clarified and studied in detail. These concepts have been mixed up, or even misused, in some existing literature. Using a different reliability concept leads to different reliability values, and further to different reliability-based decisions. The difference between the estimated reliabilities is studied, and the effect on the optimal release time is investigated
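
    To make the NHPP setting concrete, the sketch below fits the classic Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) to cumulative failure counts by least squares and converts the fit into a testing-phase reliability over the next interval. The failure data and starting values are hypothetical; the paper does not prescribe this particular NHPP variant.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def goel_okumoto(t, a, b):
        """Mean cumulative number of failures by time t under the
        Goel-Okumoto NHPP model: m(t) = a * (1 - exp(-b t))."""
        return a * (1.0 - np.exp(-b * t))

    # Hypothetical test data: cumulative failures observed at weekly epochs.
    t = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
    n = np.array([12, 21, 27, 32, 35, 37, 39, 40], dtype=float)

    (a_hat, b_hat), _ = curve_fit(goel_okumoto, t, n, p0=[50.0, 0.3])

    # Reliability over the next interval dt at the end of test:
    # R(dt) = exp(-[m(t_end + dt) - m(t_end)]) for an NHPP.
    dt = 1.0
    expected_new = goel_okumoto(t[-1] + dt, a_hat, b_hat) - goel_okumoto(t[-1], a_hat, b_hat)
    print(f"a={a_hat:.1f}, b={b_hat:.2f}, R(dt) ~ {np.exp(-expected_new):.3f}")
    ```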

  17. Exact combinatorial reliability analysis of dynamic systems with sequence-dependent failures

    International Nuclear Information System (INIS)

    Xing Liudong; Shrestha, Akhilesh; Dai Yuanshun

    2011-01-01

    Many real-life fault-tolerant systems are subjected to sequence-dependent failure behavior, in which the order in which the fault events occur is important to the system reliability. Such systems can be modeled by dynamic fault trees (DFT) with priority-AND (pAND) gates. Existing approaches for the reliability analysis of systems subjected to sequence-dependent failures are typically state-space-based, simulation-based, or inclusion-exclusion-based methods. Those methods either suffer from the state-space explosion problem or require long computation times, especially when results with a high degree of accuracy are desired. In this paper, an analytical method based on sequential binary decision diagrams is proposed. The proposed approach can analyze the exact reliability of non-repairable dynamic systems subjected to sequence-dependent failure behavior. The proposed approach is also combinatorial and applicable to systems with arbitrary component time-to-failure distributions. The application and advantages of the proposed approach are illustrated through the analysis of several examples. - Highlights: → We analyze the sequence-dependent failure behavior using combinatorial models. → The method has no limitation on the type of time-to-failure distributions. → The method is analytical and based on sequential binary decision diagrams (SBDD). → The method is computationally more efficient than existing methods.
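
    The sequence dependency that a pAND gate encodes has a simple probabilistic reading: the gate fires by mission time t only if input A fails before input B and both have failed by t. Under independence this is an integral that can be evaluated for arbitrary lifetime distributions, as the abstract emphasizes. The sketch below evaluates it numerically; the Weibull parameters are invented for illustration, and the paper's actual SBDD algorithm is not reproduced here.

    ```python
    from scipy.integrate import quad
    from scipy.stats import weibull_min

    def pand_probability(dist_a, dist_b, t):
        """P(A fails before B and both fail by t) for independent
        components: integral of f_A(u) * (F_B(t) - F_B(u)) over [0, t].
        Works for arbitrary scipy.stats time-to-failure distributions."""
        integrand = lambda u: dist_a.pdf(u) * (dist_b.cdf(t) - dist_b.cdf(u))
        val, _ = quad(integrand, 0.0, t)
        return val

    # Example with Weibull lifetimes (any distributions would do).
    A = weibull_min(c=1.5, scale=1000.0)
    B = weibull_min(c=2.0, scale=1200.0)
    print(pand_probability(A, B, t=500.0))
    ```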

  18. Listeners' processing of a given reduced word pronunciation variant directly reflects their exposure to this variant: Evidence from native listeners and learners of French.

    Science.gov (United States)

    Brand, Sophie; Ernestus, Mirjam

    2018-05-01

    In casual conversations, words often lack segments. This study investigates whether listeners rely on their experience with reduced word pronunciation variants during the processing of single segment reduction. We tested three groups of listeners in a lexical decision experiment with French words produced either with or without word-medial schwa (e.g., /ʀəvy/ and /ʀvy/ for revue). Participants also rated the relative frequencies of the two pronunciation variants of the words. If the recognition accuracy and reaction times (RTs) for a given listener group correlate best with the frequencies of occurrence holding for that given listener group, recognition is influenced by listeners' exposure to these variants. Native listeners' relative frequency ratings correlated well with their accuracy scores and RTs. Dutch advanced learners' accuracy scores and RTs were best predicted by their own ratings. In contrast, the accuracy and RTs from Dutch beginner learners of French could not be predicted by any relative frequency rating; the rating task was probably too difficult for them. The participant groups showed behaviour reflecting their difference in experience with the pronunciation variants. Our results strongly suggest that listeners store the frequencies of occurrence of pronunciation variants, and consequently the variants themselves.

  19. eCD4-Ig variants that more potently neutralize HIV-1.

    Science.gov (United States)

    Fetzer, Ina; Gardner, Matthew R; Davis-Gardner, Meredith E; Prasad, Neha R; Alfant, Barnett; Weber, Jesse A; Farzan, Michael

    2018-03-28

    The HIV-1 entry inhibitor eCD4-Ig is a fusion of CD4-Ig and a coreceptor-mimetic peptide. eCD4-Ig is markedly more potent than CD4-Ig, with neutralization efficiencies approaching those of HIV-1 broadly neutralizing antibodies (bNAbs). However, unlike bNAbs, eCD4-Ig neutralizes all HIV-1, HIV-2 and SIV isolates that it has been tested against, suggesting that it may be useful in clinical settings where antibody escape is a concern. Here we characterize three new eCD4-Ig variants, each with different architectures and each utilizing D1.22, a stabilized form of CD4 domain 1. These variants were 10- to 20-fold more potent than our original eCD4-Ig variant, with a construct bearing four D1.22 domains (eD1.22-HL-Ig) exhibiting the greatest potency. However, this variant mediated less efficient antibody-dependent cell-mediated cytotoxicity (ADCC) activity than eCD4-Ig itself or several other eCD4-Ig variants, including the smallest variant (eD1.22-Ig). A variant with the same architecture as original eCD4-Ig (eD1.22-D2-Ig) showed modestly higher thermal stability and best prevented promotion of infection of CCR5-positive, CD4-negative cells. All three variants, and eCD4-Ig itself, mediated more efficient shedding of the HIV-1 envelope glycoprotein gp120 than did CD4-Ig. Finally, we show that only three D1.22 mutations contributed to the potency of eD1.22-D2-Ig, and that introduction of these changes into eCD4-Ig resulted in a variant 9-fold more potent than eCD4-Ig and 2-fold more potent than eD1.22-D2-Ig. These studies will assist in developing eCD4-Ig variants with properties optimized for prophylaxis, therapy, and cure applications. IMPORTANCE HIV-1 bNAbs have properties different from antiretroviral compounds. Specifically, antibodies can enlist immune effector cells to eliminate infected cells, whereas antiretroviral compounds simply interfere with various steps in the viral lifecycle. Unfortunately, HIV-1 is adept at evading antibody recognition, limiting the

  20. Reliability analysis techniques in power plant design

    International Nuclear Information System (INIS)

    Chang, N.E.

    1981-01-01

    An overview of reliability analysis techniques is presented as applied to power plant design. The key terms, power plant performance, reliability, availability and maintainability are defined. Reliability modeling, methods of analysis and component reliability data are briefly reviewed. Application of reliability analysis techniques from a design engineering approach to improving power plant productivity is discussed. (author)

  1. Reliability methods in nuclear power plant ageing management

    International Nuclear Information System (INIS)

    Simola, K.

    1999-01-01

    The aim of nuclear power plant ageing management is to maintain an adequate safety level throughout the lifetime of the plant. In ageing studies, the reliability of components, systems and structures is evaluated taking into account the possible time-dependent degradation. The phases of ageing analyses are generally the identification of critical components, identification and evaluation of ageing effects, and development of mitigation methods. This thesis focuses on the use of reliability methods and analyses of plant-specific operating experience in nuclear power plant ageing studies. The presented applications and method development have been related to nuclear power plants, but many of the approaches can also be applied outside the nuclear industry. The thesis consists of a summary and seven publications. The summary provides an overview of ageing management and discusses the role of reliability methods in ageing analyses. In the publications, practical applications and method development are described in more detail. The application areas at component and system level are motor-operated valves and protection automation systems, for which experience-based ageing analyses have been demonstrated. Furthermore, Bayesian ageing models for repairable components have been developed, and the management of ageing by improving maintenance practices is discussed. Recommendations for improvement of plant information management in order to facilitate ageing analyses are also given. The evaluation and mitigation of ageing effects on structural components is addressed by promoting the use of probabilistic modelling of crack growth, and developing models for evaluation of the reliability of inspection results. (orig.)

  3. Reliability of Interaural Time Difference-Based Localization Training in Elderly Individuals with Speech-in-Noise Perception Disorder

    Directory of Open Access Journals (Sweden)

    Maryam Delphi

    2017-09-01

    Background: Previous studies have shown that interaural time difference (ITD) training can improve localization ability. Surprisingly little is known, however, about localization training vis-à-vis speech perception in noise based on the interaural time difference in the envelope (ITD ENV). We sought to investigate the reliability of an ITD ENV-based training program for speech-in-noise perception among elderly individuals with normal hearing and a speech-in-noise perception disorder. Methods: The present interventional study was performed during 2016. Sixteen elderly men between 55 and 65 years of age with a clinical diagnosis of normal hearing up to 2000 Hz and a speech-in-noise perception disorder participated in this study. The localization training program was based on changes in ITD ENV. In order to evaluate the reliability of the training program, we performed speech-in-noise tests before the training program, immediately afterward, and at 2 months' follow-up. The reliability of the training program was analyzed using the Friedman test and the SPSS software. Results: Statistically significant differences were found in the mean scores of speech-in-noise perception between the 3 time points (P=0.001). The results also indicated no difference in the mean scores of speech-in-noise perception between immediately after the training program and the 2 months' follow-up (P=0.212). Conclusion: The present study showed the reliability of ITD ENV-based localization training in elderly individuals with speech-in-noise perception disorder.

  4. ARCHITECTURE AND RELIABILITY OF OPERATING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Stanislav V. Nazarov

    2018-03-01

    Progress in microprocessor production technology has significantly increased the reliability and performance of computer system hardware. The same cannot be said of the corresponding characteristics of software and its basis, the operating system (OS); the achievements of software engineering in this field are more modest. Both directions of OS improvement (increasing productivity and reliability) are connected with the development of effective structures for these systems. The functional complexity of an OS leads to the complexity of its architecture, which is further increased by the specialization of the operating system for the computer system's application area (complex scientific calculations, real time, information retrieval systems, automated and automatic control systems, etc.). This fact has led to the variety of modern OSs. The reliability of different OS structures can usually be estimated only from long-term field experiments or simulation modeling, which is most often unacceptable because of the time and funds required for such research. This survey attempts to evaluate the reliability of two main OS architectures: a large multi-layered modular kernel and a multiserver (client-server) system. Models of these systems are developed, represented by continuous-time Markov chains that are explored in the stationary mode by passing from Kolmogorov's system of differential equations to a system of linear algebraic equations.
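
    The stationary-mode analysis the survey describes reduces the Kolmogorov differential equations to the linear system πQ = 0 with Σπ_i = 1. A minimal sketch of that computation, assuming a toy three-state availability model whose generator values are invented, not taken from the survey:

    ```python
    import numpy as np

    def stationary_distribution(Q):
        """Stationary distribution pi of a continuous-time Markov chain:
        solve pi @ Q = 0 with sum(pi) = 1 by replacing one (redundant)
        balance equation with the normalisation constraint."""
        n = Q.shape[0]
        A = np.vstack([Q.T[:-1, :], np.ones(n)])
        b = np.zeros(n)
        b[-1] = 1.0
        return np.linalg.solve(A, b)

    # Toy 3-state model: OK -> degraded -> failed, with repair back to OK.
    Q = np.array([[-0.02,  0.02,  0.00],
                  [ 0.10, -0.11,  0.01],
                  [ 0.50,  0.00, -0.50]])
    pi = stationary_distribution(Q)
    print(pi, "availability ~", pi[0] + pi[1])
    ```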

  5. Human reliability analysis using event trees

    International Nuclear Information System (INIS)

    Heslinga, G.

    1983-01-01

    The shut-down procedure of a technologically complex installation such as a nuclear power plant consists of many human actions, some of which have to be performed several times. The procedure is regarded as a chain of modules of specific actions, some of which are analyzed separately. The analysis is carried out by making a Human Reliability Analysis event tree (HRA event tree) of each action, breaking down each action into small elementary steps. The application of event trees in human reliability analysis implies more difficulties than in the case of technical systems, where event trees have mainly been used until now. The most important reason is that the operator is able to recover from a wrong performance; memory influences play a significant role. In this study these difficulties are dealt with theoretically. The following conclusions can be drawn: (1) in principle, event trees may be used in human reliability analysis; (2) although in practice the operator will partly recover from his fault, theoretically this can be described as starting the whole event tree again; (3) compact formulas have been derived by which the probability of reaching a specific failure consequence on passing through the HRA event tree after several recoveries can be calculated. (orig.)
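
    Heslinga's compact formulas are not reproduced in the abstract, but the flavor of such a result can be shown with a deliberately simplified recovery model: assume each pass through the tree ends in success, in an unrecoverable failure (probability f), or in a recoverable error that restarts the tree (probability r), with at most k restarts. The closed form below is only this toy model, not the paper's derivation.

    ```python
    def failure_prob_with_recovery(f, r, k):
        """Probability of ending in the failure consequence when each pass
        fails unrecoverably with prob. f, triggers a restart with prob. r
        (r < 1), and succeeds otherwise; at most k restarts are allowed.
        Sum of the geometric series f * (1 + r + ... + r**k)."""
        return f * (1.0 - r**(k + 1)) / (1.0 - r)

    print(failure_prob_with_recovery(f=0.01, r=0.10, k=3))  # ~ 0.0111
    ```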

  6. Validity and Reliability of 2 Goniometric Mobile Apps: Device, Application, and Examiner Factors.

    Science.gov (United States)

    Wellmon, Robert H; Gulick, Dawn T; Paterson, Mark L; Gulick, Colleen N

    2016-12-01

    Smartphones are being used in a variety of practice settings to measure joint range of motion (ROM). A number of factors can affect the validity of the measurements generated. However, there are no studies examining smartphone-based goniometer applications focusing on measurement variability and error arising from the electromechanical properties of the device being used. To examine the concurrent validity and interrater reliability of 2 goniometric mobile applications (Goniometer Records, Goniometer Pro), an inclinometer, and a universal goniometer (UG). Nonexperimental, descriptive validation study. University laboratory. 3 physical therapists having an average of 25 y of experience. Three standardized angles (acute, right, obtuse) were constructed to replicate the movement of a hinge joint in the human body. Angular changes were measured and compared across 3 raters who used 3 different devices (UG, inclinometer, and the 2 goniometric apps installed on 3 different smartphones: Apple iPhone 5, LG Android, and Samsung SIII Android). Intraclass correlation coefficients (ICCs) and Bland-Altman plots were used to examine interrater reliability and concurrent validity. Interrater reliability for each of the smartphone apps, the inclinometer, and the UG was excellent (ICC = .995-1.000). Concurrent validity was also good (ICC = .998-.999). Based on the Bland-Altman plots, the means of the differences between the devices were low (range = -0.4° to 1.2°). This study identifies the error inherent in measurement that is independent of patient factors and due to the smartphone, the installed apps, and examiner skill. Less than 2° of measurement variability was attributable to those factors alone. The data suggest that 3 smartphones with the 2 installed apps are a viable substitute for a UG or an inclinometer when measuring angular changes that typically occur when examining ROM, and demonstrate the capacity of multiple examiners to accurately use smartphone-based goniometers.
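
    For readers who want to reproduce the agreement statistics, the sketch below computes a Bland-Altman bias and 95% limits of agreement for paired angle readings. The paired values are hypothetical, not the study's data; the ICCs would additionally require a variance-components model.

    ```python
    import numpy as np

    def bland_altman(x, y):
        """Bland-Altman agreement between two measurement methods:
        mean difference (bias) and 95% limits of agreement."""
        d = np.asarray(x, float) - np.asarray(y, float)
        bias = d.mean()
        loa = 1.96 * d.std(ddof=1)
        return bias, (bias - loa, bias + loa)

    # Hypothetical paired angle readings (degrees): app vs universal goniometer.
    app = [44.5, 90.2, 120.8, 45.1, 89.7]
    ug  = [45.0, 90.0, 120.0, 44.8, 90.5]
    print(bland_altman(app, ug))
    ```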

  7. Coverage and Rate of Downlink Sequence Transmissions with Reliability Guarantees

    DEFF Research Database (Denmark)

    Park, Jihong; Popovski, Petar

    2017-01-01

    Real-time distributed control is a promising application of 5G in which communication links should satisfy certain reliability guarantees. In this letter, we derive the closed-form maximum average rate when a device (e.g., an industrial machine) downloads a sequence of n operational commands through cell...

  8. Wavelet crosstalk matrix and its application to assessment of shift-variant imaging systems

    Energy Technology Data Exchange (ETDEWEB)

    Qi, Jinyi; Huesman, Ronald H.

    2002-11-01

    The objective assessment of image quality is essential for design of imaging systems. Barrett and Gifford [1] introduced the Fourier crosstalk matrix. Because it is diagonal for continuous linear shift-invariant imaging systems, the Fourier crosstalk matrix is a powerful technique for discrete imaging systems that are close to shift invariant. However, for a system that is intrinsically shift variant, Fourier techniques are not particularly effective. Because Fourier bases have no localization property, the shift-variance of the imaging system cannot be shown by the response of individual Fourier bases; rather, it is shown in the correlation between the Fourier coefficients. This makes the analysis and optimization quite difficult. In this paper, we introduce a wavelet crosstalk matrix based on wavelet series expansions. The wavelet crosstalk matrix allows simultaneous study of the imaging system in both the frequency and spatial domains. Hence it is well suited for shift variant systems. We compared the wavelet crosstalk matrix with the Fourier crosstalk matrix for several simulated imaging systems, namely the interior and exterior tomography problems, limited angle tomography, and a rectangular geometry positron emission tomograph. The results demonstrate the advantages of the wavelet crosstalk matrix in analyzing shift-variant imaging systems.

  10. Nonparametric predictive inference in reliability

    International Nuclear Information System (INIS)

    Coolen, F.P.A.; Coolen-Schrijner, P.; Yan, K.J.

    2002-01-01

    We introduce a recently developed statistical approach, called nonparametric predictive inference (NPI), to reliability. Bounds for the survival function for a future observation are presented. We illustrate how NPI can deal with right-censored data, and discuss aspects of competing risks. We present possible applications of NPI for Bernoulli data, and we briefly outline applications of NPI for replacement decisions. The emphasis is on the introduction and illustration of NPI in reliability contexts; detailed mathematical justifications are presented elsewhere
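
    The survival-function bounds NPI yields for one future observation follow from Hill's assumption A(n), which places probability 1/(n+1) in each interval between consecutive ordered observations. A minimal sketch for complete (uncensored) failure data; the paper's treatment of right censoring is more involved and is not attempted here.

    ```python
    import numpy as np

    def npi_survival_bounds(data, t):
        """NPI lower and upper survival probabilities for one future
        observation at time t, based on Hill's assumption A(n): mass
        1/(n+1) in each interval between consecutive ordered data points."""
        x = np.sort(np.asarray(data, float))
        n = len(x)
        i = np.searchsorted(x, t, side='right')   # number of data points <= t
        lower = (n - i) / (n + 1)
        upper = (n - i + 1) / (n + 1)
        return lower, min(upper, 1.0)

    # Five observed failure times (hours); bounds for survival past t = 500.
    print(npi_survival_bounds([120, 340, 560, 780, 1100], t=500.0))  # (0.5, 0.667)
    ```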

  11. Reliability demonstration test planning using bayesian analysis

    International Nuclear Information System (INIS)

    Chandran, Senthil Kumar; Arul, John A.

    2003-01-01

    In nuclear power plants, the reliability of all the safety systems is very critical from the safety viewpoint, and it is essential that the required reliability be met while satisfying the design constraints. Practical experience shows that the reliability of complex systems such as the Safety Rod Drive Mechanism is of the order of 10^-4 with an uncertainty factor of 10. Demonstrating the reliability of such systems is prohibitive in terms of cost and time, as the number of tests needed is very large. The purpose of this paper is to develop a Bayesian reliability demonstration testing procedure for exponentially distributed failure times with a gamma prior distribution on the failure rate, which can be easily and effectively used to demonstrate component/subsystem/system reliability conformance to stated requirements. The important questions addressed in this paper are: with zero failures, how long should one perform the tests, and how many components are required, to conclude with a given degree of confidence that the component under test meets the reliability requirement? The procedure is explained with an example, and it can also be extended to demonstrations that allow a larger number of failures. The approach presented is applicable for deriving test plans for demonstrating component failure rates of nuclear power plants, as failure data for similar components are becoming available from existing plants elsewhere. The advantages of this procedure are that the criterion upon which it is based is simple and pertinent; that the fitting of the prior distribution is an integral part of the procedure, based on information regarding two percentiles of this distribution; and that the procedure is straightforward and easy to apply in practice. (author)
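
    With a gamma(a, b) prior on the exponential failure rate and zero failures observed over a total accumulated test time T, conjugacy gives a gamma(a, b + T) posterior, so the zero-failure demonstration time solves P(λ ≤ λ_req | T) = confidence. A sketch of that calculation, with prior parameters and requirement chosen purely for illustration:

    ```python
    from scipy.stats import gamma
    from scipy.optimize import brentq

    def required_test_time(a, b, lam_req, confidence):
        """Total (units x hours) zero-failure test time T such that the
        gamma(a, b) prior on the exponential failure rate updates to a
        gamma(a, b + T) posterior with P(lambda <= lam_req) >= confidence."""
        post_conf = lambda T: gamma.cdf(lam_req, a, scale=1.0 / (b + T)) - confidence
        return brentq(post_conf, 0.0, 1e9)   # posterior confidence grows with T

    # Prior roughly centred near 1e-4/h; demonstrate 1e-4/h at 90% confidence.
    print(required_test_time(a=0.5, b=2.0e3, lam_req=1.0e-4, confidence=0.90))
    ```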

  12. Prediction of safety critical software operational reliability from test reliability using testing environment factors

    International Nuclear Information System (INIS)

    Jung, Hoan Sung; Seong, Poong Hyun

    1999-01-01

    It has been a critical issue to predict safety-critical software reliability in the nuclear engineering area. For many years, research has focused on the quantification of software reliability, and many models have been developed for this purpose. Most software reliability models estimate the reliability from the failure data collected during testing, assuming that the test environments well represent the operational profile. Users' interest, however, is in the operational reliability rather than the test reliability. Experience shows that the operational reliability is higher than the test reliability. With the assumption that the difference in reliability results from the change of environment from testing to operation, testing environment factors comprising an aging factor and a coverage factor are developed in this paper and used to predict the ultimate operational reliability from the testing-phase failure data, by incorporating test environments applied beyond the operational profile into the testing environment factors. The application results show that the proposed method can estimate the operational reliability accurately. (Author). 14 refs., 1 tab., 1 fig

  13. Mosquito bottlenecks alter viral mutant swarm in a tissue and time-dependent manner with contraction and expansion of variant positions and diversity.

    Science.gov (United States)

    Patterson, Edward I; Khanipov, Kamil; Rojas, Mark M; Kautz, Tiffany F; Rockx-Brouwer, Dedeke; Golovko, Georgiy; Albayrak, Levent; Fofanov, Yuriy; Forrester, Naomi L

    2018-01-01

    Viral diversity is theorized to play a significant role during virus infections, particularly for arthropod-borne viruses (arboviruses) that must infect both vertebrate and invertebrate hosts. To determine how viral diversity influences mosquito infection and dissemination, Culex taeniopus mosquitoes were infected with the Venezuelan equine encephalitis virus endemic strain 68U201. Bodies and legs/wings of the mosquitoes were collected individually and subjected to multi-parallel sequencing. Virus sequence diversity was calculated for each tissue. Greater diversity was seen in mosquitoes with successful dissemination versus those with no dissemination. Diversity across time revealed that bottlenecks influence diversity following dissemination to the legs/wings, but levels of diversity are restored by Day 12 post-dissemination. Specific minority variants were repeatedly identified across the mosquito cohort, some in nearly every tissue and time point, suggesting that certain variants are important in mosquito infection and dissemination. This study demonstrates that the interaction between the mosquito and the virus results in changes in diversity and the mutational spectrum, and may be essential for successful transition through the bottlenecks associated with arbovirus infection.
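
    The abstract does not state which diversity index the authors used; Shannon entropy of per-site variant frequencies is one common choice for summarizing mutant-swarm diversity, sketched below on invented frequencies.

    ```python
    import numpy as np

    def shannon_entropy(freqs):
        """Shannon entropy (in bits) of variant frequencies at one site;
        a common summary of viral mutant-swarm diversity."""
        p = np.asarray(freqs, float)
        p = p[p > 0] / p.sum()
        return float(-(p * np.log2(p)).sum())

    # Hypothetical variant frequencies at one genome position.
    print(shannon_entropy([0.90, 0.07, 0.03]))   # body: more diverse
    print(shannon_entropy([0.99, 0.01]))         # legs/wings after a bottleneck
    ```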

  14. Practical application of reliability engineering in detailed design and maintenance

    International Nuclear Information System (INIS)

    Barden, S.E.

    1975-01-01

    Modern plant systems are closely coupled combinations of sophisticated and expensive equipment, some important parts of which may be in the development stage (high technology sector), and simpler, crude but not necessarily cheap equipment (low technology sector). Manpower resources involved with such plant systems can also be placed in high and low technology categories (i.e. specialist design and construction staff, and production staff, respectively). Neither can operate effectively without the other, and both are equally important. A sophisticated on-line computer controlling plant or analysing fault symptoms is useless, if not unsafe, if the peripheral sensing and control equipment on the plant providing input data is poorly designed, inaccurate, and/or unreliable because of inadequate maintenance. Similarly, the designer can be misled and misinformed, and subsequent design evolution can be wrongly directed, if production records do not accurately reflect what is actually happening on the plant. The application of Reliability Technology can be counter-productive if it demands more effort in the collection of data than it saves in facilitating quick, correct engineering decisions and more accurate assessments of resource requirements. Reliability Engineering techniques must be simplified to make their use widely adopted in the important low technology sector, and established in all financial and contractual procedures associated with design specification and production management. This paper develops this theme with practical examples. (author)

  15. Assessment of physical server reliability in multi cloud computing system

    Science.gov (United States)

    Kalyani, B. J. D.; Rao, Kolasani Ramchand H.

    2018-04-01

    Business organizations nowadays function with more than one cloud provider. Spreading cloud deployment across multiple service providers creates room for competitive pricing that minimizes the burden on an enterprise's spending budget. To assess the software reliability of a multi-cloud application, a layered software reliability assessment paradigm is considered, with three levels of abstraction: the application layer, the virtualization layer, and the server layer. The reliability of each layer is assessed separately, and the layer reliabilities are combined to obtain the reliability of the multi-cloud computing application. In this paper, we focus on how to assess the reliability of the server layer, with the required algorithms, and explore the steps in the assessment of server reliability.
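
    The layer-combination step can be illustrated with the usual structure-function algebra: layers in series within one cloud path, redundant clouds in parallel. The combination rule below is an assumption made for illustration; the paper's exact model may differ.

    ```python
    def multicloud_reliability(r_app, r_virt, r_server_by_cloud):
        """Sketch of a layered assessment: application, virtualization and
        server layers in series within each cloud path; cloud paths combined
        in parallel (the system survives if at least one path works)."""
        all_paths_fail = 1.0
        for r_server in r_server_by_cloud:
            path = r_app * r_virt * r_server      # series within one cloud
            all_paths_fail *= (1.0 - path)        # every path must fail
        return 1.0 - all_paths_fail

    # Hypothetical layer reliabilities across two redundant clouds.
    print(multicloud_reliability(0.999, 0.995, [0.99, 0.98]))
    ```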

  16. Reliable Maintenance of Wireless Sensor Networks for Event-Detection Applications

    Institute of Scientific and Technical Information of China (English)

    胡四泉; 杨金阳; 王俊峰

    2011-01-01

    The reliability maintenance of a wireless sensor network is key to ensuring that alarm messages are delivered reliably and on time to the monitoring center in an event-detection application. Considering the unreliable links in wireless sensor networks and the network characteristics of event-detection applications, this paper proposes MPRRM, a multiple-path redundant reliability maintenance algorithm. Both analytical and simulation results show that the MPRRM algorithm is superior to previously published solutions in the metrics of reliability, false positive rate, latency, and message overhead.

  17. Reliability engineering theory and practice

    CERN Document Server

    Birolini, Alessandro

    2010-01-01

    Presenting a solid overview of reliability engineering, this volume enables readers to build and evaluate the reliability of various components, equipment and systems. Current applications are presented, and the text itself is based on the author's 30 years of experience in the field.

  18. Reliability of dc power supplies in nuclear power plant application

    International Nuclear Information System (INIS)

    Eisenhut, D.G.

    1978-01-01

    In June 1977 the reliability of dc power supplies at nuclear power facilities was questioned. It was postulated that a sudden gross failure of the redundant dc power supplies might occur during normal plant operation, and that this could lead to insufficient shutdown cooling of the reactor core. It was further suggested that this potential for insufficient cooling is great enough to warrant consideration of prompt remedies. The work described herein was part of the NRC staff's efforts aimed towards putting the performance of dc power supplies in proper perspective and was mainly directed towards the particular concern raised at that time. While the staff did not attempt to perform a systematic study of overall dc power supply reliability including all possible failure modes for such supplies, the work summarized herein describes how a probabilistic approach was used to supplement our more usual deterministic approach to reactor safety. Our evaluation concluded that the likelihood of dc power supply failures leading to insufficient shutdown cooling of the reactor core is sufficiently small as to not require any immediate action

  19. Reliability and criterion-related validity with a smartphone used in timed-up-and-go test

    OpenAIRE

    Galán-Mercant, Alejandro; Barón-López, Francisco Javier; Labajos-Manzanares, María T; Cuesta-Vargas, Antonio I

    2014-01-01

    Background The capacity to diagnose, quantify, and evaluate movement beyond the general confines of a clinical environment, under effectiveness conditions, may alleviate the rampant strain on limited, expensive, and highly specialized medical resources. The iPhone 4® mounts a three-dimensional accelerometer subsystem and highly robust software applications. The present study aimed to evaluate the reliability and concurrent criterion-related validity of the accelerations with an iPhone 4® in an Exte...

  20. A flexible and reusable software for real-time control applications at JET

    International Nuclear Information System (INIS)

    De Tommasi, G.; Piccolo, F.; Sartori, F.

    2005-01-01

    The fast growth of the JET real-time control network and the increasing demand for new systems were the triggers that started the development of the JETRT software framework. This new architecture is designed for maximum reuse and is particularly suited to the implementation of both real-time control and data acquisition systems in a complex experimental environment such as JET. Most of the software is the same in all applications, independent of the platform. The varying part is the project-specific algorithm, which is also compiled into a separate software component in order to achieve separation from the plant interface code. This design choice maximises software reliability, reduces development costs, and allows non-specialist programmers to contribute to the implementation of real-time projects. JETRT also provides an integrated set of debugging and testing tools, some of them well integrated with the Matlab environment. This feature, together with the framework's portability among different platforms, allows most of the test and validation phase to be performed on a desktop PC running Windows, significantly reducing the commissioning time of a new real-time system.

  1. Restoration of retinal images with space-variant blur

    Czech Academy of Sciences Publication Activity Database

    Marrugo, A.; Millán, M. S.; Šorel, Michal; Šroubek, Filip

    2014-01-01

    Roč. 19, č. 1 (2014), 016023-1-016023-12 ISSN 1083-3668 R&D Projects: GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : blind deconvolution * space-variant restoration * retinal image Subject RIV: JD - Computer Applications, Robotics Impact factor: 2.859, year: 2014 http://library.utia.cas.cz/separaty/2014/ZOI/sorel-0424586.pdf

  2. Splicing analysis of 14 BRCA1 missense variants classifies nine variants as pathogenic

    DEFF Research Database (Denmark)

    Ahlborn, Lise B; Dandanell, Mette; Steffensen, Ane Y

    2015-01-01

    Pathogenic germline mutations in the BRCA1 gene predispose carriers to early onset breast and ovarian cancer. Clinical genetic screening of BRCA1 often reveals variants with uncertain clinical significance, complicating patient and family management. Therefore, functional examinations are urgently needed to classify whether these uncertain variants are pathogenic or benign. In this study, we investigated 14 BRCA1 variants by in silico splicing analysis and mini-gene splicing assay. All 14 alterations were missense variants located within the BRCT domain of BRCA1 and had previously been examined by functional analysis at the protein level. Results from a validated mini-gene splicing assay indicated that nine BRCA1 variants resulted in splicing aberrations leading to truncated transcripts and thus can be considered pathogenic (c.4987A>T/p.Met1663Leu, c.4988T>A/p.Met1663Lys, c.5072C>T/p.Thr1691Ile, c...

  3. Modeling road traffic fatalities in India: Smeed's law, time invariance and regional specificity

    Directory of Open Access Journals (Sweden)

    Raj V. Ponnaluri

    2012-07-01

    Mathematical formulations linking road traffic fatalities to vehicle ownership, regional population, and economic growth continue to be developed against the backdrop of the Smeed and Andreassen models. Though a few attempts have been made, Smeed's law has not been fully tested in India. Using 1991-2009 panel data from all states, this work (a) developed generalized Smeed and Andreassen models; (b) evaluated whether traffic fatalities were impacted by structural changes; and (c) examined whether, in relation to the generalized model, the individual (time and regional) models are more relevant for application. Seven models (Smeed: original, generalized, time-variant, state-variant; Andreassen: generalized, time-variant, state-variant) were developed and tested for fit against the actual data. Results showed that the per-vehicle fatality rate closely resembled Smeed's formulation. The Chow test yielded a significant F-statistic, suggesting that the models for the four pre-defined time blocks are structurally different from the 19-year generalized model. The counterclockwise rotation of the log-linear form also suggested lower fatality rates. While new government policies, reduced vehicle operating speeds, better healthcare, and improved vehicle technology could be the factors, further research is required to understand the reasons for the fatality rate reductions. The intercepts and gradients of the time-series models showed high stability and varied only slightly from those of the 19-year generalized models, suggesting that the latter are pragmatic for application. Regional formulations, however, indicate that they may be more relevant for studying trends and tendencies. This research illustrates the robustness of Smeed's law, and provides evidence for time-invariance but state-specificity.
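
    For reference, the classic form of Smeed's law relates annual road deaths D to registered vehicles N and population P; the generalized models in the paper relax the fixed coefficient and exponent. In LaTeX:

    ```latex
    % Smeed (1949): annual road deaths D from vehicles N and population P.
    D = 0.0003\,\left(N P^{2}\right)^{1/3}
    \qquad\Longleftrightarrow\qquad
    \frac{D}{N} = 0.0003 \left(\frac{N}{P}\right)^{-2/3}
    ```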

  4. Application of rare variant transmission disequilibrium tests to epileptic encephalopathy trio sequence data

    DEFF Research Database (Denmark)

    2017-01-01

    The classic epileptic encephalopathies, including infantile spasms (IS) and Lennox-Gastaut syndrome (LGS), are severe seizure disorders that usually arise sporadically. De novo variants in genes mainly encoding ion channel and synaptic proteins have been found to account for over 15% of patients with IS or LGS. The contribution of autosomal recessive genetic variation, however, is less well understood. We implemented a rare variant transmission disequilibrium test (TDT) to search for autosomal recessive epileptic encephalopathy genes in a cohort of 320 outbred patient-parent trios that were generally […]. Although we did not find evidence of a role for individual autosomal recessive genes, our current sample is insufficiently powered to assess the overall role of autosomal recessive genotypes in an outbred epileptic encephalopathy population.
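    A rare variant TDT of this kind builds on the classic transmission disequilibrium statistic, which compares transmissions and non-transmissions of an allele from heterozygous parents. A minimal sketch of that core statistic (the study's actual test aggregates rare variants per gene and is more elaborate; the counts below are hypothetical):

```python
from scipy.stats import chi2

def tdt_statistic(transmitted: int, untransmitted: int) -> tuple[float, float]:
    """McNemar-type TDT: chi-square = (b - c)^2 / (b + c),
    where b/c count minor-allele transmissions/non-transmissions
    from heterozygous parents (here aggregated over a gene)."""
    b, c = transmitted, untransmitted
    if b + c == 0:
        return 0.0, 1.0
    stat = (b - c) ** 2 / (b + c)
    p = chi2.sf(stat, df=1)  # one degree of freedom
    return stat, p

# Hypothetical gene-level counts aggregated across trios
stat, p = tdt_statistic(transmitted=14, untransmitted=30)
print(f"TDT chi-square = {stat:.2f}, p = {p:.4f}")
```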

  5. Reliability and risk treatment centered maintenance

    International Nuclear Information System (INIS)

    Pexa, Martin; Hladik, Tomas; Ales, Zdenek; Legat, Vaclav; Muller, Miroslav; Valasek, Petr; Havlu, Vit

    2014-01-01

    We propose a new methodology for applying the well-known tools RCM, RBI and SIFpro with the aim of treating risks by means of suitable maintenance. The basis of the new methodology is the combined application of all three methods at the same time, not separately as is typical today. The proposed methodology suggests having a single managing team for reliability and risk treatment centered maintenance (RRTCM), employing the existing RCM, RBI and SIFpro tools concurrently. This approach allows a significant reduction in the duration of engineering activities. In the proposed methodology these activities are staged into five phases and structured to eliminate all duplication resulting from separate application of the three tools. The newly proposed methodology saves 45% to 50% of the engineering workload and yields significant financial savings.

  6. Reliability and risk treatment centered maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Pexa, Martin; Hladik, Tomas; Ales, Zdenek; Legat, Vaclav; Muller, Miroslav; Valasek, Petr [Czech University of Life Sciences Prague, Kamycka (Czech Republic); Havlu, Vit [Unipetrol A. S, Prague (Czech Republic)

    2014-10-15

    We propose a new methodology for applying the well-known tools RCM, RBI and SIFpro with the aim of treating risks by means of suitable maintenance. The basis of the new methodology is the combined application of all three methods at the same time, not separately as is typical today. The proposed methodology suggests having a single managing team for reliability and risk treatment centered maintenance (RRTCM), employing the existing RCM, RBI and SIFpro tools concurrently. This approach allows a significant reduction in the duration of engineering activities. In the proposed methodology these activities are staged into five phases and structured to eliminate all duplication resulting from separate application of the three tools. The newly proposed methodology saves 45% to 50% of the engineering workload and yields significant financial savings.

  7. ADRA2B deletion variant influences time-dependent effects of pre-learning stress on long-term memory.

    Science.gov (United States)

    Zoladz, Phillip R; Dailey, Alison M; Nagle, Hannah E; Fiely, Miranda K; Mosley, Brianne E; Brown, Callie M; Duffy, Tessa J; Scharf, Amanda R; Earley, McKenna B; Rorabaugh, Boyd R

    2017-04-01

    Extensive work over the past few decades has shown that certain genetic variations interact with life events to confer increased susceptibility to the development of psychological disorders. The deletion variant of the ADRA2B gene, which has been associated with enhanced emotional memory and heightened amygdala responses to emotional stimuli, might confer increased susceptibility to the development of post-traumatic stress disorder (PTSD) or related phenotypes by increasing the likelihood of traumatic memory formation. Thus, we examined whether this genetic variant would predict stress effects on learning and memory in a non-clinical sample. Two hundred and thirty-five individuals were exposed to the socially evaluated cold pressor test or a control condition immediately or 30 min prior to learning a list of words that varied in emotional valence and arousal level. Participants' memory for the words was tested immediately (recall) and 24 h after learning (recall and recognition), and saliva samples were collected to genotype participants for the ADRA2B deletion variant. Results showed that stress administered immediately before learning selectively enhanced long-term recall in deletion carriers. Stress administered 30 min before learning impaired recognition memory in male deletion carriers, while enhancing recognition memory in female deletion carriers. These findings provide additional evidence for the idea that ADRA2B deletion variant carriers have a sensitized stress response system, which results in amplified effects of stress on learning and memory. The accumulating evidence regarding this genetic variant implicates it as a susceptibility factor for traumatic memory formation and PTSD-related phenotypes.

  8. Anatomic variants of the pancreatic duct and their clinical relevance: an MR-guided study in the general population

    International Nuclear Information System (INIS)

    Buelow, Robin; Thiel, Robert; Thamm, Patrick; Messner, Philip; Hosten, Norbert; Kuehn, Jens-Peter; Simon, Peter; Lerch, Markus M.; Mayerle, Julia; Voelzke, Henry

    2014-01-01

    To investigate the frequency of pancreatic duct (PD) variants and their effect on pancreatic exocrine function in a population-based study using non-invasive secretin-stimulated magnetic resonance cholangiopancreatography (sMRCP). Nine hundred and ninety-five volunteers, 457 women and 538 men, aged 51.9 ± 13.4 years, underwent navigator-triggered, T2-weighted, 3D turbo spin echo MRCP on a 1.5 T system after administration of 1 unit/kg secretin. Two readers evaluated the images for PD variants. Pancreatic exocrine function and morphological signs of chronic pancreatitis, such as abnormalities of the main PD, side branch dilatation, and pancreatic cysts, were evaluated and related to PD variants using a Kruskal-Wallis test and post hoc analysis. Of all sMRCP examinations, 93.2% were of diagnostic quality. Interobserver reliability for the detection of PD variants was kappa = 0.752 (95% CI: 0.733–0.771). Normal PD variants were observed in 90.4% (n = 838/927), and pancreas divisum variants were identified in 9.6% (n = 89/927). Abnormalities of the main PD, side branch dilatation, and pancreatic cysts were observed in 2.4%, 16.6%, and 27.7%, respectively, and did not differ significantly between the pancreas divisum and non-divisum groups (P = 0.122; P = 0.152; P = 0.741). There was no association between PD variants and pancreatic exocrine function (P = 0.367). PD variants, including pancreas divisum, are not associated with morphological signs of chronic pancreatitis or restriction of pancreatic exocrine function. (orig.)

  9. Anatomic variants of the pancreatic duct and their clinical relevance: an MR-guided study in the general population

    Energy Technology Data Exchange (ETDEWEB)

    Buelow, Robin; Thiel, Robert; Thamm, Patrick; Messner, Philip; Hosten, Norbert; Kuehn, Jens-Peter [University Medicine, Ernst Moritz Arndt University Greifswald, Department of Radiology and Neuroradiology, Greifswald (Germany); Simon, Peter; Lerch, Markus M.; Mayerle, Julia [University Medicine, Ernst Moritz Arndt University Greifswald, Division of Gastroenterology and Department of Medicine A, Greifswald (Germany); Voelzke, Henry [University Medicine, Ernst Moritz Arndt University Greifswald, Institute for Community Medicine, Greifswald (Germany)

    2014-12-15

    To investigate the frequency of pancreatic duct (PD) variants and their effect on pancreatic exocrine function in a population-based study using non-invasive secretin-stimulated magnetic resonance cholangiopancreatography (sMRCP). Nine hundred and ninety-five volunteers, 457 women and 538 men, aged 51.9 ± 13.4 years, underwent navigator-triggered, T2-weighted, 3D turbo spin echo MRCP on a 1.5 T system after administration of 1 unit/kg secretin. Two readers evaluated the images for PD variants. Pancreatic exocrine function and morphological signs of chronic pancreatitis, such as abnormalities of the main PD, side branch dilatation, and pancreatic cysts, were evaluated and related to PD variants using a Kruskal-Wallis test and post hoc analysis. Of all sMRCP examinations, 93.2% were of diagnostic quality. Interobserver reliability for the detection of PD variants was kappa = 0.752 (95% CI: 0.733–0.771). Normal PD variants were observed in 90.4% (n = 838/927), and pancreas divisum variants were identified in 9.6% (n = 89/927). Abnormalities of the main PD, side branch dilatation, and pancreatic cysts were observed in 2.4%, 16.6%, and 27.7%, respectively, and did not differ significantly between the pancreas divisum and non-divisum groups (P = 0.122; P = 0.152; P = 0.741). There was no association between PD variants and pancreatic exocrine function (P = 0.367). PD variants, including pancreas divisum, are not associated with morphological signs of chronic pancreatitis or restriction of pancreatic exocrine function. (orig.)
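    Interobserver reliability of the kind reported above (kappa = 0.752) is typically Cohen's kappa computed from a rater-agreement table, κ = (p_o − p_e)/(1 − p_e). A minimal sketch (the 2×2 counts are hypothetical, chosen only to give a kappa near the reported value):

```python
import numpy as np

def cohens_kappa(table: np.ndarray) -> float:
    """Cohen's kappa from a square inter-rater agreement table.
    kappa = (p_o - p_e) / (1 - p_e), with p_o the observed agreement
    and p_e the agreement expected by chance from the marginals."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_o = np.trace(table) / n
    p_e = (table.sum(axis=0) * table.sum(axis=1)).sum() / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 2x2 table: both readers rating a variant present/absent
table = np.array([[70, 19],
                  [20, 818]])
print(f"kappa = {cohens_kappa(table):.3f}")  # ~0.76
```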

  10. A reliable sewage quality abnormal event monitoring system.

    Science.gov (United States)

    Li, Tianling; Winnel, Melissa; Lin, Hao; Panther, Jared; Liu, Chang; O'Halloran, Roger; Wang, Kewen; An, Taicheng; Wong, Po Keung; Zhang, Shanqing; Zhao, Huijun

    2017-09-15

    With the closing of the water loop through purified recycled water, wastewater becomes part of the source water, requiring a reliable wastewater quality monitoring system (WQMS) to manage the wastewater source and mitigate potential health risks. However, the development of a reliable WQMS is severely constrained by contamination and biofouling of sensors in the hostile analytical environment of wastewaters, especially raw sewage, which challenges the limits of existing sensing technologies. In this work, we report a technological solution that enables the development of a WQMS for real-time abnormal event detection with high reliability and practicality. A vectored high-flow hydrodynamic self-cleaning approach and a dual-sensor self-diagnostic concept are adopted to counter the critical sensor failure issues caused by contamination and biofouling and to ensure the integrity of the sensing data. The performance of the WQMS has been evaluated over a 3-year trial period at different sewage catchment sites across three Australian states. The developed WQMS is capable of operating continuously in raw sewage for up to 24 months without maintenance or failure, demonstrating high reliability and practicality. The demonstrated capability of the WQMS to reliably acquire real-time wastewater quality information is a substantial step toward effective wastewater source management. The reported self-cleaning and self-diagnostic concepts should be applicable to other online water quality monitoring systems, opening a new way to address the common reliability and stability issues caused by sensor contamination and biofouling.
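    The abstract does not spell out the dual-sensor self-diagnostic logic; one plausible minimal realization is to accept a reading only when two co-located sensors agree within a tolerance, and to flag an abnormal event against a rolling baseline. A sketch under those assumptions (all thresholds and data hypothetical):

```python
import statistics
from collections import deque

def validate_and_detect(s1: float, s2: float, baseline: deque,
                        agree_tol: float = 0.1, event_sigma: float = 3.0):
    """Dual-sensor self-diagnostic sketch: accept a reading only when
    the two co-located sensors agree within agree_tol (relative), then
    flag an abnormal event if the reading departs from a rolling
    baseline by more than event_sigma standard deviations."""
    mean_reading = (s1 + s2) / 2
    if abs(s1 - s2) > agree_tol * max(abs(mean_reading), 1e-9):
        return None, "sensor fault suspected"   # self-diagnostic trip
    if len(baseline) >= 10:
        mu = statistics.mean(baseline)
        sd = statistics.stdev(baseline) or 1e-9
        if abs(mean_reading - mu) > event_sigma * sd:
            return mean_reading, "abnormal event"
    baseline.append(mean_reading)
    return mean_reading, "normal"

baseline = deque(maxlen=100)
for s1, s2 in [(7.1, 7.0), (7.2, 7.1), (7.0, 7.1), (7.1, 7.2),
               (7.2, 7.2), (7.1, 7.0), (7.0, 7.0), (7.1, 7.1),
               (7.2, 7.1), (7.1, 7.1), (9.5, 9.4)]:
    print(validate_and_detect(s1, s2, baseline))
```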

  11. Application of a digital technique in evaluating the reliability of shade guides.

    Science.gov (United States)

    Cal, E; Sonugelen, M; Guneri, P; Kesercioglu, A; Kose, T

    2004-05-01

    There is a need for a reliable method for the quantification of tooth colour and the analysis of shade. The primary objective of this study was therefore to show the applicability of graphic software to colour analysis, and the secondary objective was to investigate the reliability of commercial shade guides produced by the same manufacturer using this digital technique. After confirming the reliability and reproducibility of the digital method using self-assessed coloured images, three shade guides from the same manufacturer were photographed in daylight and in a studio environment with a digital camera and saved in tagged image file format (TIFF). Colour analysis of each photograph was performed using the Adobe Photoshop 4.0 graphics program. Luminosity and red, green, blue (L and RGB) values of each shade tab of each shade guide were measured, and the data were subjected to statistical analysis using a repeated-measures ANOVA. The L and RGB values of the images taken in daylight differed significantly from those of the images taken in the studio environment (P < 0.05). In both environments, the luminosity and red values of the shade tabs were significantly different from each other (P < 0.05). It was concluded that, when the environmental conditions are kept constant, the Adobe Photoshop 4.0 colour analysis program can be used to analyse the colour of images. On the other hand, the results revealed that the accuracy of shade tabs widely used in colour matching should be readdressed.
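    The Photoshop measurement described above amounts to averaging channel values over a region of interest. A minimal sketch of the same measurement in Python (Pillow and numpy assumed; the file name, region box, and the Rec. 601 luminosity weights are illustrative assumptions, not taken from the study):

```python
import numpy as np
from PIL import Image

def shade_tab_values(path: str, box: tuple[int, int, int, int]):
    """Mean R, G, B and a Rec. 601-style luminosity
    (L = 0.299R + 0.587G + 0.114B) over a rectangular
    region of interest (left, upper, right, lower)."""
    region = np.asarray(Image.open(path).convert("RGB").crop(box),
                        dtype=float)
    r = region[..., 0].mean()
    g = region[..., 1].mean()
    b = region[..., 2].mean()
    luminosity = 0.299 * r + 0.587 * g + 0.114 * b
    return r, g, b, luminosity

# Hypothetical usage: one shade tab photographed under fixed lighting
# r, g, b, L = shade_tab_values("shade_tab_A2.tif", (120, 80, 220, 160))
```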

  12. Human factors perspective on the reliability of NDT in nuclear applications

    International Nuclear Information System (INIS)

    Bertovic, Marija; Mueller, Christina; Fahlbruch, Babette

    2013-01-01

    A series of research studies was conducted over the course of five years in the fields of in-service inspection (ISI) in nuclear power plants (NPPs) and inspection of manufactured components to be used for permanent nuclear waste disposal. This paper provides an overview of four research studies, presents selected experimental results and suggests ways to optimize the NDT process, procedures, and training. The experimental results showed that time pressure and mental workload negatively influence the quality of manual inspection performance. Notable influences included the organization of the working schedule, communication, procedures, supervision, and the demonstration task. A customized Failure Mode and Effects Analysis (FMEA) was used to identify potential human risks arising during the acquisition and evaluation of NDT data. Several preventive measures were suggested and discussed with respect to problems that could arise from their application. Experimental results show that implementing human redundancy in critical tasks, such as defect identification, as well as using an automated aid (software) to help operators decide on the existence and size of defects, could lead to other kinds of problems, namely social loafing and automation bias, that might affect the reliability of NDT in an undesired manner. Shifting focus from the operator, as the main source of errors, to the organization, as the underlying source, is a recommended approach to ensure safety. (orig.)
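    FMEA of the kind mentioned above conventionally ranks each failure mode by a risk priority number, RPN = severity × occurrence × detection. A minimal sketch (the failure modes and ratings are hypothetical examples, not taken from the studies):

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int    # 1-10
    occurrence: int  # 1-10
    detection: int   # 1-10 (10 = hardest to detect)

    @property
    def rpn(self) -> int:
        # Conventional FMEA risk priority number
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("defect missed during manual scan", 9, 4, 7),
    FailureMode("mis-sizing of detected indication", 7, 5, 5),
    FailureMode("probe coupling loss unnoticed", 6, 3, 6),
]
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {m.rpn:4d}  {m.description}")
```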

  13. Versatile, reprogrammable area pixel array detector for time-resolved synchrotron x-ray applications

    Energy Technology Data Exchange (ETDEWEB)

    Gruner, Sol [Cornell Univ., Ithaca, NY (United States)

    2010-05-01

    The final technical report for DOE grant DE-SC0004079 is presented. The goal of the grant was to perform research, development and application of novel imaging x-ray detectors so as to effectively utilize the high intensity and brightness of the national synchrotron radiation facilities to enable previously unfeasible time-resolved x-ray research. The report summarizes the development of the resultant imaging x-ray detectors. Two types of detector platforms were developed: The first is a detector platform (called a Mixed-Mode Pixel Array Detector, or MM-PAD) that can image continuously at over a thousand images per second while maintaining high efficiency for wide dynamic range signals ranging from 1 to hundreds of millions of x-rays per pixel per image. Research on an even higher dynamic range variant is also described. The second detector platform (called the Keck Pixel Array Detector) is capable of acquiring a burst of x-ray images at a rate of millions of images per second.

  14. Reliability Centered Maintenance - Methodologies

    Science.gov (United States)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  15. On reliability of singular-value decomposition in attractor reconstruction

    International Nuclear Information System (INIS)

    Palus, M.; Dvorak, I.

    1990-12-01

    The applicability of singular-value decomposition for reconstructing a strange attractor from a one-dimensional chaotic time series, as proposed by Broomhead and King, is extensively tested and discussed. Previously published doubts about its reliability are confirmed: singular-value decomposition, by nature a linear method, has only limited power when nonlinear structures are studied. (author). 29 refs, 9 figs
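    The Broomhead-King construction applies SVD to the trajectory matrix formed from delay vectors of the scalar series; the singular spectrum is then read as an indication of how many directions carry signal. A minimal sketch using a logistic-map series (the embedding window m = 10 is an arbitrary illustrative choice):

```python
import numpy as np

# Chaotic scalar series: logistic map at r = 4
x = np.empty(2000)
x[0] = 0.3
for i in range(1, len(x)):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

# Broomhead-King trajectory matrix: rows are delay vectors of length m
m = 10
traj = np.lib.stride_tricks.sliding_window_view(x, m)

# SVD of the centered trajectory matrix; the singular spectrum
# suggests an effective embedding dimension (a linear criterion,
# hence the limited power noted in the record above)
traj = traj - traj.mean(axis=0)
s = np.linalg.svd(traj, compute_uv=False)
print("normalized singular values:", (s / s[0]).round(4))
```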

  16. A weighted anisotropic variant of the Caffarelli-Kohn-Nirenberg inequality and applications

    Science.gov (United States)

    Bahrouni, Anouar; Rădulescu, Vicenţiu D.; Repovš, Dušan D.

    2018-04-01

    We present a weighted version of the Caffarelli-Kohn-Nirenberg inequality in the framework of variable exponents. Combining this inequality with a variant of the fountain theorem yields the existence of infinitely many solutions for a class of non-homogeneous problems with Dirichlet boundary condition.
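    For orientation, the classical constant-exponent Caffarelli-Kohn-Nirenberg inequality that such weighted variable-exponent versions generalize can be written as follows (a standard Catrina-Wang-type normalization; the precise hypotheses of the paper differ):

```latex
% Classical (constant-exponent) Caffarelli--Kohn--Nirenberg inequality:
% for N >= 3, -\infty < a < (N-2)/2, a <= b <= a + 1 and
% q = 2N / (N - 2 + 2(b - a)),
\left( \int_{\mathbb{R}^N} |x|^{-bq} \, |u|^{q} \, dx \right)^{2/q}
  \;\le\; C_{a,b} \int_{\mathbb{R}^N} |x|^{-2a} \, |\nabla u|^{2} \, dx,
\qquad u \in C_c^{\infty}(\mathbb{R}^N).
```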

  17. Reliability and mechanical design

    International Nuclear Information System (INIS)

    Lemaire, Maurice

    1997-01-01

    Many results in mechanical design are obtained from a model of physical reality and a numerical solution that leads to an evaluation of needs and resources. The goal of reliability analysis is to evaluate the confidence that can be placed in the chosen design through the calculation of a probability of failure linked to the retained scenario. Two types of analysis are proposed: sensitivity analysis and reliability analysis. Approximate methods are applicable to problems of reliability, availability, maintainability and safety (RAMS).
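    The probability of failure referred to above is P_f = P(g(X) <= 0) for a limit state function g of the random design variables X. A minimal crude Monte Carlo sketch (the resistance-minus-load limit state and its distributions are illustrative assumptions, not from the record):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical limit state g(X) = R - S (resistance minus load);
# failure is the event g(X) <= 0 and P_f = P(g(X) <= 0).
n = 1_000_000
R = rng.normal(10.0, 1.0, n)   # resistance, illustrative N(10, 1)
S = rng.normal(7.0, 1.5, n)    # load, illustrative N(7, 1.5)

pf = np.mean(R - S <= 0.0)
beta = -norm.ppf(pf)           # generalized reliability index
print(f"P_f ~= {pf:.5f}, beta ~= {beta:.3f}")

# Analytic check for this linear Gaussian case:
# R - S ~ N(3, sqrt(1 + 1.5**2)), so beta = 3 / sqrt(3.25) ~= 1.664
```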

  18. Reliability issues at the LHC

    CERN Multimedia

    CERN. Geneva. Audiovisual Unit; Gillies, James D

    2002-01-01

    The lectures on reliability issues at the LHC are focused on five main modules over five days. Module 1: Basic Elements in Reliability Engineering - basic terms, definitions and methods, from components up to the system and the plant, common cause failures and human factor issues. Module 2: Interrelations of Reliability & Safety (R&S) - reliability and the risk-informed approach, living models, risk monitoring. Module 3: The Ideal R&S Process for Large-Scale Systems - from R&S goals, via implementation in the system, to proof of compliance. Module 4: Some Applications of R&S at the LHC - master logic, anatomy of risk, cause-consequence diagrams, decomposition and aggregation of the system. Module 5: Lessons Learned from R&S Application in Various Technologies - success stories, pitfalls, constraints in data and methods, and inherent limitations, as experienced in aviation, space, process, nuclear, offshore and transport systems and plants. The lectures will reflect in summary the compromise in…

  19. Biochemical characterization of the GM2 gangliosidosis B1 variant

    Directory of Open Access Journals (Sweden)

    Tutor J.C.

    2004-01-01

    The deficiency of the A isoenzyme of ß-hexosaminidase (Hex) produced by different mutations of the gene that codes for the alpha subunit (Tay-Sachs disease) has two variants with enzymological differences: the B variant consists of the absence of the Hex A isoenzyme, and the B1 variant produces a Hex A isoenzyme that is inactive towards the GM2 ganglioside and negatively charged synthetic substrates. In contrast to the early childhood form of the B variant, the B1 variant appears at a later clinical stage (3 to 7 years of age), with neurodegenerative symptoms leading to the death of the patient in the second decade of life. The most frequent mutation responsible for the GM2 gangliosidosis B1 variant is R178H, which has a widespread geographic and ethnic distribution. The highest incidence has been described in Portugal, which has been suggested as the point of origin of this mutation. Biochemical characterization of this lysosomal disease is carried out using negatively charged, alpha subunit-specific synthetic sulfated substrates, since Hex A isoenzyme heat-inactivation assays are not applicable. However, the determination of the apparent activation energy of Hex using the neutral substrate 3,3'-dichlorophenolsulfonphthaleinyl N-acetyl-ß-D-glucosaminide may offer a valid alternative. The presence of an alpha subunit in the alphaß heterodimer Hex A means that its activation energy (41.8 kJ/mol) is significantly lower than that of the ßß homodimer Hex B (75.1 kJ/mol); however, as the mutation inactivates the alpha subunit, the Hex A of the B1 variant presents an activation energy similar to that of the Hex B isoenzyme.
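    The apparent activation energy mentioned above is obtained from an Arrhenius plot: the slope of ln(activity) against 1/T equals -Ea/R. A minimal sketch of that regression (temperatures and activities are hypothetical, chosen only to land near the Hex A figure):

```python
import numpy as np

R = 8.314  # gas constant, J / (mol K)

# Hypothetical Hex activity measured at several temperatures (K)
T = np.array([293.15, 298.15, 303.15, 308.15, 310.15])
activity = np.array([0.52, 0.68, 0.88, 1.13, 1.24])  # arbitrary units

# Arrhenius: ln(k) = ln(A) - Ea / (R T)  ->  slope = -Ea / R
slope, intercept = np.polyfit(1.0 / T, np.log(activity), 1)
Ea_kJ = -slope * R / 1000.0
print(f"apparent activation energy ~= {Ea_kJ:.1f} kJ/mol")
```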

  20. Application of genetic algorithm for reliability allocation in nuclear power plants

    International Nuclear Information System (INIS)

    Yang, Joon-Eon; Hwang, Mee-Jung; Sung, Tae-Yong; Jin, Youngho

    1999-01-01

    Reliability allocation is an optimization process that minimizes total plant costs subject to overall plant safety goal constraints. It was applied to determine the reliability characteristics of reactor systems, subsystems, major components and plant procedures that are consistent with a set of top-level performance goals: the core melt frequency, acute fatalities and latent fatalities. Reliability allocation can be performed to improve the design, operation and safety of new and/or existing nuclear power plants. It is a difficult multi-objective as well as global optimization problem. The genetic algorithm, known as one of the most powerful tools for such optimization problems, is applied to the reliability allocation problem of a typical pressurized water reactor in this article. One of the main difficulties of reliability allocation is defining realistic objective functions; hence, in order to optimize the reliability of the system, the cost of improving and/or degrading the reliability of the system should be included in the allocation process. We used techniques derived from value-impact analysis to define a realistic objective function in this article.
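    A minimal sketch of a genetic algorithm for this kind of allocation, under simplifying assumptions that are not the article's actual formulation (a series system, a convex cost-of-reliability model, and a penalty for missing the system reliability goal; all parameters hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

GOAL = 0.99          # system reliability goal (series system)
N_COMP = 4           # number of components
COST_K = np.array([1.0, 2.0, 1.5, 3.0])  # hypothetical cost weights

def cost(r):
    # Convex cost model: cost grows steeply as reliability -> 1
    return np.sum(COST_K / (1.0 - r), axis=-1)

def fitness(pop):
    r_sys = np.prod(pop, axis=1)                  # series system
    penalty = 1e6 * np.maximum(GOAL - r_sys, 0)   # goal violation
    return -(cost(pop) + penalty)                 # GA maximizes fitness

pop = rng.uniform(0.95, 0.9999, size=(200, N_COMP))
for gen in range(300):
    f = fitness(pop)
    # Tournament selection between random pairs
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((f[i] > f[j])[:, None], pop[i], pop[j])
    # Uniform crossover with a shifted copy, then Gaussian mutation
    mask = rng.random(pop.shape) < 0.5
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))
    children += rng.normal(0.0, 0.002, pop.shape)
    pop = np.clip(children, 0.90, 0.99999)

best = pop[np.argmax(fitness(pop))]
print("allocated reliabilities:", best.round(5))
print("system reliability:", np.prod(best).round(5),
      "cost:", cost(best).round(2))
```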