Sample records for reliable computer system

  1. Computer system reliability: safety and usability

    Dhillon, BS


    Computer systems have become an important element of the world economy, with billions of dollars spent each year on development, manufacture, operation, and maintenance. Combining coverage of computer system reliability, safety, usability, and other related topics into a single volume, Computer System Reliability: Safety and Usability eliminates the need to consult many different and diverse sources in the hunt for the information required to design better computer systems. After presenting introductory aspects of computer system reliability such as safety, usability-related facts and figures,

  2. Reliable computer systems: design and evaluation

    Siewiorek, Daniel


    Enhance your hardware/software reliability. Enhancement of system reliability has been a major concern of computer users and designers, and this major revision of the 1982 classic meets users' continuing need for practical information on this pressing topic. Included are case studies of reliable systems from manufacturers such as Tandem, Stratus, IBM, and Digital, as well as coverage of special systems such as the Galileo Orbiter fault protection system and AT&T telephone switching processors.

  3. Computer System Reliability Allocation Method and Supporting Tool


    This paper presents a computer system reliability allocation method based on statistical theory and Markov chains, which can be used to allocate reliability to subsystems, hybrid systems, and software modules. A relevant supporting tool built by us is also introduced.

  4. Reliability and safety analysis of redundant vehicle management computer system

    Shi Jian; Meng Yixuan; Wang Shaoping; Bian Mengmeng; Yan Dungong


    Redundant techniques are widely adopted in vehicle management computers (VMC) to ensure that a VMC has high reliability and safety. At the same time, they give the VMC special characteristics, e.g., failure correlation, event simultaneity, and failure self-recovery. Accordingly, reliability and safety analysis of the redundant VMC system (RVMCS) becomes more difficult. To address the difficulties in RVMCS reliability modeling, this paper adopts generalized stochastic Petri nets to establish the reliability and safety models of the RVMCS. The paper then analyzes RVMCS operating states and potential threats to the flight control system. It is verified by simulation that the reliability of a VMC is not the product of hardware reliability and software reliability, and that interactions between hardware and software faults can noticeably reduce the real reliability of a VMC. Furthermore, failure-undetected states and false-alarm states inevitably exist in an RVMCS because of limited fault-monitoring coverage and the false-alarm probability of fault monitoring devices (FMD). An RVMCS operating in some failure-undetected states poses fatal threats to the safety of the flight control system, and an RVMCS operating in some false-alarm states noticeably reduces the system's utility. The results obtained in this paper can guide reliable VMC and efficient FMD designs, and the methods adopted can also be used to analyze the reliability of other intelligent systems.

  5. Reliability computation from reliability block diagrams

    Chelson, P. O.; Eckstein, E. Y.


    Computer program computes system reliability for very general class of reliability block diagrams. Four factors are considered in calculating probability of system success: active block redundancy, standby block redundancy, partial redundancy, and presence of equivalent blocks in the diagram.
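    The redundancy structures named in this record (active parallel, partial/k-out-of-n redundancy, series composition) have simple closed forms for independent blocks. The sketch below is illustrative only, not the program described in the record:

```python
from math import comb

def series(*rs):
    """Series structure: all blocks must work, R = product of block reliabilities."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def active_parallel(*rs):
    """Active redundancy: the system fails only if every block fails."""
    q = 1.0
    for r in rs:
        q *= 1.0 - r
    return 1.0 - q

def k_of_n(k, n, r):
    """Partial redundancy: at least k of n identical blocks must work."""
    return sum(comb(n, j) * r**j * (1 - r)**(n - j) for j in range(k, n + 1))

# Two parallel 0.9 blocks in series with a single 0.95 block:
print(series(active_parallel(0.9, 0.9), 0.95))  # ≈ 0.9405
```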

  6. Reliability analysis and design of on-board computer system for small stereo mapping satellite

    马秀娟; 曹喜滨; 马兴瑞


    The on-board computer system for a small satellite is required to be high in reliability, light in weight, small in volume and low in power consumption. This paper describes the on-board computer system with the advantages of both centralized and distributed systems, analyzes its reliability, and briefs the key techniques used to improve its reliability.

  7. Design for reliability information and computer-based systems

    Bauer, Eric


    "System reliability, availability and robustness are often not well understood by system architects, engineers and developers. They often don't understand what drives customer's availability expectations, how to frame verifiable availability/robustness requirements, how to manage and budget availability/robustness, how to methodically architect and design systems that meet robustness requirements, and so on. The book takes a very pragmatic approach of framing reliability and robustness as a functional aspect of a system so that architects, designers, developers and testers can address it as a concrete, functional attribute of a system, rather than an abstract, non-functional notion"--Provided by publisher.

  8. Workload, Performance and Reliability of Digital Computing Systems.


    Proschan. Mathematical Theory of Reliability. John Wiley & Sons, 1965. [Bazaraa 79] M.S. Bazaraa and C.M. Shetty. Nonlinear Programming: Theory and... exercised. System software failures are due to: a) the (static) input data to a program module presents some peculiarities that the program is not able of... available, this is a typical nonlinear programming problem, subject to nonlinear inequality constraints. Since this problem will have to be solved

  9. A new method for computing the reliability of consecutive k-out-of-n:F systems

    Gökdere Gökhan


    In many physical systems, such as those encountered in telecommunications, the design of integrated circuits, microwave relay stations, oil pipeline systems, vacuum systems in accelerators, computer ring networks, and spacecraft relay stations, reliability evaluation has applied consecutive k-out-of-n system models. These systems are characterized by logical connections among components placed in lines or circles. In the literature, a great deal of attention has been paid to the reliability evaluation of consecutive k-out-of-n systems. In this paper, we propose a new method to compute the reliability of consecutive k-out-of-n:F systems with n linearly and circularly arranged components. The proposed method provides a simple way of determining the system failure probability. We also provide R code based on the proposed method to compute the reliability of linear and circular systems with a great number of components.
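    For the linear case, the reliability of a consecutive k-out-of-n:F system can be computed with a short dynamic programme over the length of the current failure run. This is a generic textbook sketch assuming i.i.d. components, not the authors' method or their R code:

```python
def rel_linear_consec_kofn_f(n, k, q):
    """Reliability of a linear consecutive k-out-of-n:F system: n i.i.d.
    components, each failing with probability q; the system fails iff at
    least k consecutive components fail."""
    p = 1.0 - q
    # dist[j] = probability the current run of consecutive failures is j (j < k)
    dist = [0.0] * k
    dist[0] = 1.0
    for _ in range(n):
        nxt = [0.0] * k
        for j, prob in enumerate(dist):
            nxt[0] += prob * p          # component works: run resets to 0
            if j + 1 < k:
                nxt[j + 1] += prob * q  # component fails: run grows
            # a run reaching k is a system failure; that mass drops out
        dist = nxt
    return sum(dist)

print(rel_linear_consec_kofn_f(2, 2, 0.1))  # both must fail: 1 - 0.01 = 0.99
```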

  10. Ultra-reliable computer systems: an integrated approach for application in reactor safety systems

    Chisholm, G.H.


    Improvements in the operation and maintenance of nuclear reactors can be realized by applying computers in reactor control systems. In the context of this paper, a reactor control system encompasses the control aspects of the Reactor Safety System (RSS). Equipment qualification for application in reactor safety systems requires a rigorous demonstration of reliability. For the purposes of this paper, the reliability demonstration is divided into two categories: demonstration of compliance with (a) environmental and (b) functional design constraints. This paper presents an approach to the reliability demonstration of a computer-based RSS with respect to functional design constraints only. It is postulated here that design for compliance with environmental design constraints is a reasonably definitive problem within the realm of available technology, whereas the demonstration of compliance with functional design constraints, as described herein, is an extension of available technology and requires development.

  11. A support architecture for reliable distributed computing systems

    Dasgupta, Partha; Leblanc, Richard J., Jr.


    The Clouds project is well underway toward its goal of building a unified distributed operating system supporting the object model. The operating system design uses the object concept to structure software at all levels of the system. The basic operating system has been developed, and work is in progress to build a usable system.

  12. Stochastic data-flow graph models for the reliability analysis of communication networks and computer systems

    Chen, D.J.


    The literature is abundant with combinatorial reliability analyses of communication networks and fault-tolerant computer systems. However, it is very difficult to formulate reliability indexes using combinatorial methods. These limitations have led to the development of time-dependent reliability analysis using stochastic processes. In this research, time-dependent reliability-analysis techniques using data-flow graphs (DFG) are developed. The chief advantages of DFG models over other models are their compactness, structural correspondence with the systems, and general amenability to direct interpretation, which makes it possible to verify the correspondence of the data-flow graph representation to the actual system. Several DFG models are developed and used to analyze the reliability of communication networks and computer systems. Specifically, stochastic data-flow graphs (SDFG), in both discrete-time and continuous-time variants, are developed and used to compute the time-dependent reliability of communication networks and computer systems. The repair and coverage phenomena of communication networks are also analyzed using SDFG models.

  13. The application of emulation techniques in the analysis of highly reliable, guidance and control computer systems

    Migneault, Gerard E.


    Emulation techniques can be a solution to a difficulty that arises in the analysis of the reliability of guidance and control computer systems for future commercial aircraft. Described here is the difficulty, the lack of credibility of reliability estimates obtained by analytical modeling techniques. The difficulty is an unavoidable consequence of the following: (1) a reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Use of emulation techniques for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques is then discussed. Finally several examples of the application of emulation techniques are described.

  14. An Algorithm for Optimized Time, Cost, and Reliability in a Distributed Computing System

    Pankaj Saxena


    Distributed Computing System (DCS) refers to multiple computer systems working on a single problem. A distributed system consists of a collection of autonomous computers connected through a network, which enables the computers to coordinate their activities and share the resources of the system. In distributed computing, a single problem is divided into many parts, and each part is solved by a different computer; as long as the computers are networked, they can communicate with each other to solve the problem. A DCS consists of multiple software components on multiple computers that run as a single system. The computers in a distributed system can be physically close together and connected by a local network, or geographically distant and connected by a wide area network. The ultimate goal of distributed computing is to maximize performance in a time-effective, cost-effective, and reliability-effective manner. In a DCS, the whole workload is divided into small, independent units called tasks, which are allocated to the available processors. The system also ensures fault tolerance and enables resource accessibility in the event that one of the components fails. The problem addressed is that of assigning tasks to a distributed computing system; the assignment of task modules is done statically. Given a set of communicating tasks to be executed on a set of processors in a distributed system, the question is to which processor each task should be assigned to obtain more reliable results in less time and at lower cost. In this paper, an efficient algorithm for task allocation with optimum time, cost, or reliability is presented for the case where the number of tasks exceeds the number of processors.
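    As a toy illustration of static task assignment, the sketch below brute-forces the minimum-cost assignment of tasks to processors. It is exponential in the number of tasks and optimizes cost only; it is not the authors' algorithm, which also treats time and reliability:

```python
from itertools import product

def best_assignment(exec_cost):
    """exec_cost[t][p] = cost of running task t on processor p.
    Returns (assignment tuple, total cost) minimizing total execution cost
    by exhaustive search over all processor choices per task."""
    n_tasks = len(exec_cost)
    n_procs = len(exec_cost[0])
    best, best_cost = None, float("inf")
    for assign in product(range(n_procs), repeat=n_tasks):
        cost = sum(exec_cost[t][p] for t, p in enumerate(assign))
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost

costs = [[4, 2], [3, 5], [1, 6]]   # 3 tasks, 2 processors
print(best_assignment(costs))      # ((1, 0, 0), 6)
```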

  15. Reliability Evaluation of Distributed Computer Systems Subject to Imperfect Coverage and Dependent Common-Cause Failures

    Liudong Xing


    Imperfect coverage (IPC) occurs when a component failure causes extensive damage due to inadequate fault detection, fault location, or fault recovery. Common-cause failures (CCF) are multiple dependent component failures within a system due to a shared root cause. Both imperfect coverage and common-cause failures can exist in distributed computer systems, can contribute significantly to the overall system unreliability, and can complicate the reliability analysis. In this study, we propose an efficient approach to the reliability analysis of distributed computer systems (DCS) with both IPC and CCF. The proposed methodology decouples the effects of IPC and CCF from the combinatorics of the solution, making the approach applicable to the computationally efficient binary decision diagram (BDD) based method for the reliability analysis of DCS. We provide a concrete analysis of an example DCS to illustrate the application and advantages of our approach. Because it considers IPC and CCF, our approach can evaluate a wider class of DCS than existing approaches. Owing to the nature of the BDD and the separation of IPC and CCF from the solution combinatorics, our approach has high computational efficiency and is easy to implement, so it can be applied to the accurate reliability analysis of large-scale DCS subject to IPC and CCF. DCS without IPC or CCF appear as special cases of our approach.
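    To see how imperfect coverage erodes redundancy, consider the textbook two-unit example below. This is a hedged illustration of the IPC concept only, not the BDD-based method of the record:

```python
def duplex_with_coverage(r_m, c):
    """Two-unit active redundancy with imperfect coverage c: a single-unit
    failure is survived only with probability c (the fault is detected,
    located and recovered); an uncovered failure brings the system down.
    r_m is the reliability of one unit."""
    return r_m**2 + 2 * c * r_m * (1 - r_m)

print(duplex_with_coverage(0.9, 1.0))   # perfect coverage: 0.99
print(duplex_with_coverage(0.9, 0.95))  # coverage 0.95: 0.981
```

Even a 5% coverage gap costs most of the redundancy benefit, which is why IPC matters in the analysis above.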

  16. Computer aided reliability, availability, and safety modeling for fault-tolerant computer systems with commentary on the HARP program

    Shooman, Martin L.


    Many of the most challenging reliability problems of our present decade involve complex distributed systems such as interconnected telephone switching computers, air traffic control centers, aircraft and space vehicles, and local area and wide area computer networks. In addition to the challenge of complexity, modern fault-tolerant computer systems require very high levels of reliability, e.g., avionic computers with MTTF goals of one billion hours. Most analysts find that it is too difficult to model such complex systems without computer aided design programs. In response to this need, NASA has developed a suite of computer aided reliability modeling programs beginning with CARE 3 and including a group of new programs such as: HARP, HARP-PC, Reliability Analysts Workbench (Combination of model solvers SURE, STEM, PAWS, and common front-end model ASSIST), and the Fault Tree Compiler. The HARP program is studied and how well the user can model systems using this program is investigated. One of the important objectives will be to study how user friendly this program is, e.g., how easy it is to model the system, provide the input information, and interpret the results. The experiences of the author and his graduate students who used HARP in two graduate courses are described. Some brief comparisons were made with the ARIES program which the students also used. Theoretical studies of the modeling techniques used in HARP are also included. Of course no answer can be any more accurate than the fidelity of the model, thus an Appendix is included which discusses modeling accuracy. A broad viewpoint is taken and all problems which occurred in the use of HARP are discussed. Such problems include: computer system problems, installation manual problems, user manual problems, program inconsistencies, program limitations, confusing notation, long run times, accuracy problems, etc.

  17. Enlightenment on Computer Network Reliability From Transportation Network Reliability

    Hu Wenjun; Zhou Xizhao


    Referring to transportation network reliability problem, five new computer network reliability definitions are proposed and discussed. They are computer network connectivity reliability, computer network time reliability, computer network capacity reliability, computer network behavior reliability and computer network potential reliability. Finally strategies are suggested to enhance network reliability.

  18. Design and reliability, availability, maintainability, and safety analysis of a high availability quadruple vital computer system

    Ping TAN; Wei-ting HE; Jia LIN; Hong-ming ZHAO; Jian CHU


    With the development of high-speed railways in China, more than 2000 high-speed trains will be put into use. The safety and efficiency of railway transportation is increasingly important. We have designed a high availability quadruple vital computer (HAQVC) system based on an analysis of the architecture of the traditional double 2-out-of-2 system and the 2-out-of-3 system. The HAQVC system is a system with high availability and safety, with prominent characteristics such as a brand-new internal architecture, high efficiency, a reliable data interaction mechanism, and an operation state change mechanism. The hardware of the vital CPU is based on ARM7 with a real-time embedded safe operating system (ES-OS). A Markov modeling method is designed to evaluate the reliability, availability, maintainability, and safety (RAMS) of the system. In this paper, we demonstrate that the HAQVC system is more reliable than the all voting triple modular redundancy (AVTMR) system and the double 2-out-of-2 system. Thus, the design can be used for specific application systems, such as airplane or high-speed railway systems.

  19. Reliable Quantum Computers

    Preskill, J


    The new field of quantum error correction has developed spectacularly since its origin less than two years ago. Encoded quantum information can be protected from errors that arise due to uncontrolled interactions with the environment. Recovery from errors can work effectively even if occasional mistakes occur during the recovery procedure. Furthermore, encoded quantum information can be processed without serious propagation of errors. Hence, an arbitrarily long quantum computation can be performed reliably, provided that the average probability of error per quantum gate is less than a certain critical value, the accuracy threshold. A quantum computer storing about 10^6 qubits, with a probability of error per quantum gate of order 10^{-6}, would be a formidable factoring engine. Even a smaller, less accurate quantum computer would be able to perform many useful tasks. (This paper is based on a talk presented at the ITP Conference on Quantum Coherence and Decoherence, 15-18 December 1996.)

  20. Reliability in the utility computing era: Towards reliable Fog computing

    Madsen, Henrik; Burtschy, Bernard; Albeanu, G.


    This paper considers current paradigms in computing and outlines the most important aspects concerning their reliability. The Fog computing paradigm is considered as a non-trivial extension of the Cloud, and the reliability of networks of smart devices is discussed. Combining the reliability requirements of the grid and cloud paradigms with those of networks of sensors and actuators, it follows that designing a reliable Fog computing platform is feasible.

  1. The reliability model of the fault-tolerant computing system with triple-modular redundancy based on the independent nodes

    Rahman, P. A.; Bobkova, E. Yu


    This paper deals with a reliability model of a restorable non-stop computing system with triple-modular redundancy based on independent computing nodes, taking into consideration the finite time for node activation and different node failure rates in the active and passive states. The generalized reliability model obtained by the authors, and the calculation formulas for the reliability indices of a system based on identical and independent computing nodes with a given threshold for the number of active nodes at which the system is considered operable, are also discussed. Finally, the application of the generalized model to the particular case of a non-stop restorable computing system with triple-modular redundancy based on independent nodes is presented, together with calculation examples for the reliability indices.
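    For contrast with the restorable model above, the classic non-repairable triple-modular-redundancy result with a perfect voter is a one-line formula. This is the standard textbook baseline, not the authors' generalized model:

```python
from math import exp

def r_tmr(r):
    """Ideal TMR with a perfect voter: the system works while at least
    2 of 3 identical modules work, so R = 3r^2 - 2r^3."""
    return 3 * r**2 - 2 * r**3

# With exponential module lifetimes, module reliability at time t is e^{-lam*t}.
lam, t = 1e-4, 1000.0            # illustrative failure rate and mission time
r = exp(-lam * t)
print(r_tmr(r))                  # exceeds the single-module reliability r here
```

For short missions r_tmr(r) > r; the curves cross at r = 0.5, below which TMR is actually worse than a single module.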

  2. Multidisciplinary System Reliability Analysis

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)


    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer, and fluid flow disciplines.

  3. Design and analysis of the reliability of on-board computer system based on Markov-model

    MA Xiu-juan; CAO Xi-bin; ZHAO Guo-liang


    An on-board computer system should have such advantages as light weight, small volume and low power to meet the demands of micro-satellites. This paper, based on specific characteristics of the Stereo Mapping Micro-Satellite (SMMS), describes the on-board computer system with its advantage of having centralized and distributed control in the same system and analyzes its reliability based on a Markov model in order to provide a theoretical foundation for a reliable design. The on-board computer system has been put into use in the principle prototype model of the Stereo Mapping Micro-Satellite and has already been debugged. All indexes meet the requirements of the design.
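    The flavor of such Markov reliability analyses can be seen in the simplest repairable model: a single unit with constant failure rate lam and repair rate mu. This generic two-state result is far simpler than the satellite model analyzed in the record:

```python
def steady_state_availability(lam, mu):
    """Steady-state availability of a single repairable unit modelled as a
    two-state (up/down) Markov chain: A = mu / (lam + mu)."""
    return mu / (lam + mu)

# Illustrative rates: one failure per 10,000 h, repairs taking ~10 h on average
print(steady_state_availability(1e-4, 1e-1))  # ≈ 0.999001
```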

  4. Non-binary decomposition trees - a method of reliability computation for systems with known minimal paths/cuts

    Malinowski, Jacek


    A coherent system with independent components and known minimal paths (cuts) is considered. In order to compute its reliability, a tree structure T is constructed whose nodes contain the modified minimal paths (cuts) and numerical values. The value of a non-leaf node is a function of its child nodes' values. The values of leaf nodes are calculated from a simple formula. The value of the root node is the system's failure probability (reliability). Subsequently, an algorithm computing the system's failure probability (reliability) is constructed. The algorithm scans all nodes of T using a stack structure for this purpose. The nodes of T are alternately put on and removed from the stack, their data being modified in the process. Once the algorithm has terminated, the stack contains only the final modification of the root node of T, and its value is equal to the system's failure probability (reliability).
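    For small systems, a brute-force alternative to such tree methods is inclusion-exclusion over the minimal path sets. The sketch below assumes independent components and is exponential in the number of paths, which is exactly the blow-up that more structured methods like the one above avoid:

```python
from itertools import combinations

def reliability_from_min_paths(paths, rel):
    """System reliability by inclusion-exclusion: the system works iff all
    components of at least one minimal path work.
    paths: list of sets of component ids; rel: dict id -> reliability."""
    total = 0.0
    for m in range(1, len(paths) + 1):
        for combo in combinations(paths, m):
            union = set().union(*combo)   # event: every component in the union works
            term = 1.0
            for c in union:
                term *= rel[c]
            total += (-1) ** (m + 1) * term
    return total

# Two disjoint minimal paths {1,2} and {3,4}:
r = {1: 0.9, 2: 0.9, 3: 0.8, 4: 0.8}
print(reliability_from_min_paths([{1, 2}, {3, 4}], r))  # ≈ 0.9316
```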

  5. Reliability and Availability of Cloud Computing

    Bauer, Eric


    A holistic approach to service reliability and availability of cloud computing. Reliability and Availability of Cloud Computing provides IS/IT system and solution architects, developers, and engineers with the knowledge needed to assess the impact of virtualization and cloud computing on service reliability and availability. It reveals how to select the most appropriate design for reliability diligence to assure that user expectations are met. Organized in three parts (basics, risk analysis, and recommendations), this resource is accessible to readers of diverse backgrounds and experience levels.

  6. Beyond redundancy how geographic redundancy can improve service availability and reliability of computer-based systems

    Bauer, Eric; Eustace, Dan


    "While geographic redundancy can obviously be a huge benefit for disaster recovery, it is far less obvious what benefit is feasible and likely for more typical non-catastrophic hardware, software, and human failures. Georedundancy and Service Availability provides both a theoretical and practical treatment of the feasible and likely benefits of geographic redundancy for both service availability and service reliability. The text provides network/system planners, IS/IT operations folks, system architects, system engineers, developers, testers, and other industry practitioners with a general discussion about the capital expense/operating expense tradeoff that frames system redundancy and georedundancy"--

  7. Reliability history of the Apollo guidance computer

    Hall, E. C.


    The Apollo guidance computer was designed to provide the computation necessary for guidance, navigation and control of the command module and the lunar landing module of the Apollo spacecraft. The computer was designed using the technology of the early 1960's and the production was completed by 1969. During the development, production, and operational phase of the program, the computer has accumulated a very interesting history which is valuable for evaluating the technology, production methods, system integration, and the reliability of the hardware. The operational experience in the Apollo guidance systems includes 17 computers which flew missions and another 26 flight type computers which are still in various phases of prelaunch activity including storage, system checkout, prelaunch spacecraft checkout, etc. These computers were manufactured and maintained under very strict quality control procedures with requirements for reporting and analyzing all indications of failure. Probably no other computer or electronic equipment with equivalent complexity has been as well documented and monitored. Since it has demonstrated a unique reliability history, it is important to evaluate the techniques and methods which have contributed to the high reliability of this computer.

  8. Effect of Maintenance on Computer Network Reliability

    Rima Oudjedi Damerdji


    At the time of the new information technologies, computer networks are inescapable in any large organization, where they are organized so as to form powerful internal means of communication. In a dependability context, the reliability parameter proves fundamental for evaluating the performance of such systems. In this paper, we study the reliability evaluation of a real computer network through three reliability models. The computer network considered (a set of PCs and a server, interconnected) is located in a company in western Algeria dedicated to the production of ammonia and fertilizers. The results permit a comparison of the three models to determine the reliability model most appropriate to the studied network and thus contribute to improving the quality of the network. In order to anticipate system failures and improve the reliability and availability of the network, we must put in place an adequate and effective maintenance policy based on a new model of the most common competing risks in maintenance, the Alert-Delay model. Finally, dependability measures such as MTBF and reliability are calculated to assess the effectiveness of the maintenance strategies and thus validate the Alert-Delay model.

  9. LED system reliability

    Driel, W.D. van; Yuan, C.A.; Koh, S.; Zhang, G.Q.


    This paper presents our effort to predict the system reliability of Solid State Lighting (SSL) applications. A SSL system is composed of a LED engine with micro-electronic driver(s) that supplies power to the optic design. Knowledge of system level reliability is not only a challenging scientific ex

  10. Reliability of fluid systems

    Kopáček Jaroslav


    This paper focuses on the importance of determining reliability, especially in complex fluid systems for demanding production technology. The initial criterion for assessing reliability is the failure of an object (element), which is seen as a random variable whose data (values) can be processed using the mathematical methods of probability theory and statistics. The basic indicators of reliability are defined, along with their application in calculations for serial, parallel and backed-up systems. For illustration, calculation examples of reliability indicators are given for various elements of the system and for a selected pneumatic circuit.
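    The serial/parallel/backed-up distinction in this record can be illustrated with exponential lifetimes. These are generic textbook formulas (two identical units, perfect switching for the standby case), not the paper's own examples:

```python
from math import exp

def r_cold_standby_two(lam, t):
    """Backed-up (cold standby) pair with perfect switching: the system
    lifetime is the sum of two Exp(lam) lifetimes, so
    R(t) = e^{-lam*t} * (1 + lam*t)."""
    return exp(-lam * t) * (1 + lam * t)

def r_parallel_two(lam, t):
    """Two active-parallel units: R(t) = 1 - (1 - e^{-lam*t})^2."""
    return 1 - (1 - exp(-lam * t)) ** 2

lam, t = 0.001, 500.0
print(r_cold_standby_two(lam, t), r_parallel_two(lam, t))
# cold standby beats active parallel for identical units, since the
# spare does not age while it waits
```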

  11. Measurement System Reliability Assessment

    Kłos Ryszard


    Decision-making in problem situations is based on up-to-date and reliable information. A great deal of information is subject to rapid change, hence it may be outdated or manipulated and enforce erroneous decisions. It is crucial to be able to assess the obtained information. In order to ensure its reliability, it is best to obtain it with one's own measurement process. In such a case, assessing the reliability of the measurement system is crucial. The article describes a general approach to assessing the reliability of measurement systems.

  12. A Computation Infrastructure for Knowledge-Based Development of Reliable Software Systems


    Lecture Notes in Computer Science, pages 449-465, 2005. Mark Bickford and David Guaspari, A Programming Logic for... Proving in Higher Order Logics, volume 2152 of Lecture Notes in Computer Science, pages 105-120. Springer Verlag, 2001. [CAB+86] Robert L. Constable... checking and model checking. In Rajeev Alur and Thomas A. Henzinger, editors, Computer-Aided Verification, volume 1102 of Lecture Notes in Computer Science

  13. Reliability of a computer-aided detection system in detecting lung metastases compared to manual palpation during surgery.

    Schramm, Alexandra; Wormanns, Dag; Leschber, Gunda; Merk, Johannes


    For the resection of lung metastases, computed tomography (CT) is needed to determine the operative strategy. A computer-aided detection (CAD) system, a software tool for the automated detection of lung nodules, analyses the CT scans in addition to the radiologists and clearly marks lesions. The aim of this feasibility study was to evaluate the reliability of CAD in detecting lung metastases. Preoperative CT scans of 18 patients who underwent surgery for suspected lung metastases were analysed with CAD (September-December 2009). During surgery all suspected lesions were traced and resected. Histological examination was performed and the results were compared to the radiologically suspicious nodules. Radiological analysis assisted by CAD detected 64 nodules (mean 3.6, range 1-7). During surgery 91 nodules (mean 5.0, range 1-11) were resected, yielding 27 additionally palpated nodules. Histologically, all these additional nodules were benign. In contrast, all 30 nodules shown to be metastases by histological studies were correctly described by CAD. The CAD system is a sensitive and useful tool for finding pulmonary lesions. It detects more and smaller lesions than conventional radiological analysis. In this feasibility study we were able to show a greater reliability of the CAD analysis. A further, prospective study to confirm these data is ongoing.

  14. Hawaii Electric System Reliability

    Loose, Verne William [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Silva Monroy, Cesar Augusto [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]


    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers’ views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers’ views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  15. Hawaii electric system reliability.

    Silva Monroy, Cesar Augusto; Loose, Verne William


    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  16. Photovoltaic system reliability

    Maish, A.B.; Atcitty, C. [Sandia National Labs., NM (United States)]; Greenberg, D. [Ascension Technology, Inc., Lincoln Center, MA (United States)]; and others


    This paper discusses the reliability of several photovoltaic projects including SMUD's PV Pioneer project, various projects monitored by Ascension Technology, and the Colorado Parks project. System times-to-failure range from 1 to 16 years, and maintenance costs range from 1 to 16 cents per kilowatt-hour. Factors contributing to the reliability of these systems are discussed, and practices are recommended that can be applied to future projects. This paper also discusses the methodology used to collect and analyze PV system reliability data.

  17. Reliability and Validity of a Computer-Based Knowledge Mapping System To Measure Content Understanding.

    Herl, H. E.; O'Neil, H. F., Jr.; Chung, G. K. W. K.; Schacter, J.


    Presents results from two computer-based knowledge-mapping studies developed by the National Center for Research on Evaluation, Standards, and Student Testing (CRESST): in one, middle and high school students constructed group maps while collaborating over a network, and in the second, students constructed individual maps while searching the Web.…

  18. Reliability of Power Electronic Converter Systems

    Topics covered include: …-link capacitance in power electronic converter systems; wind turbine systems; smart control strategies for improved reliability of power electronics systems; lifetime modelling; power module lifetime test and state monitoring; tools for performance and reliability analysis of power electronics systems; fault-tolerant adjustable speed drive systems; mission profile oriented reliability design in wind turbine and photovoltaic systems; reliability of power conversion systems in photovoltaic applications; power supplies for computers; and high-power converters. Reliability of Power Electronic Converter Systems is essential reading for researchers, professionals and students working with power electronics and their applications, particularly those specializing in the development and application of power electronic converters and systems.

  19. The reliability of an easy measuring method for abutment convergence angle with a computer-aided design (CAD) system

    Seo, Yong-Joon; Kwon, Taek-Ka; Han, Jung-Suk; Lee, Jai-Bong; Kim, Sung-Hun


    PURPOSE: The purpose of this study was to evaluate the intra-rater and inter-rater reliability of three different measurement methods: a drawing protractor, a digital protractor after tracing, and a CAD system. MATERIALS AND METHODS: Twenty-four artificial abutments that had been prepared by dental students were used in this study. Three dental students measured the convergence angles by each method three times. Bland-Altman plots were applied to examine the overall reliability by comparing the traditional tracing method with the new method using the CAD system. Intraclass correlation coefficients (ICC) were used to evaluate intra-rater and inter-rater reliability. RESULTS: All three methods exhibited high intra-rater and inter-rater reliability (ICC>0.80, P<.05). Measurements with the CAD system showed the highest intra-rater reliability, and improved inter-rater reliability compared with the traditional tracing methods. CONCLUSION: Based on the results of this study, the CAD system may be an easy and reliable tool for measuring the abutment convergence angle. PMID:25006382

  20. Component reliability for electronic systems

    Bajenescu, Titu-Marius I


    The main reason for the premature breakdown of today's electronic products (computers, cars, tools, appliances, etc.) is the failure of the components used to build these products. Today professionals are looking for effective ways to minimize the degradation of electronic components to help ensure longer-lasting, more technically sound products and systems. This practical book offers engineers specific guidance on how to design more reliable components and build more reliable electronic systems. Professionals learn how to optimize a virtual component prototype, accurately monitor product reliability during the entire production process, and add the burn-in and selection procedures that are the most appropriate for the intended applications. Moreover, the book helps system designers ensure that all components are correctly applied, margins are adequate, wear-out failure modes are prevented during the expected duration of life, and system interfaces cannot lead to failure.

  1. Computation of posterior distribution in Bayesian analysis – application in an intermittently used reliability system

    V. S. S. Yadavalli


    Bayesian estimation is presented for the stationary rate of disappointments, D∞, for two models (with different specifications) of intermittently used systems. The random variables in the system are considered to be independently exponentially distributed. Jeffreys' prior is assumed for the unknown parameters in the system. Inference about D∞ is constrained in both models by the complex and non-linear definition of D∞. Monte Carlo simulation is used to derive the posterior distribution of D∞ and subsequently the highest posterior density (HPD) intervals. A numerical example in which the Bayes estimates and the HPD intervals are determined illustrates these results. The illustration is extended to determine the frequentist properties of this Bayes procedure, by calculating coverage proportions for each of these HPD intervals, assuming fixed values for the parameters.
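
    The Monte Carlo and HPD machinery described above can be sketched in a few lines of Python. This is a generic illustration, not the paper's model: the Gamma draws standing in for the posterior of D∞, and all numbers, are invented.

```python
import random
import math

random.seed(0)

# Pretend posterior draws for a rate parameter (a stand-in for D-infinity).
draws = sorted(random.gammavariate(5.0, 1.0 / 10.0) for _ in range(20000))

def hpd_interval(sorted_draws, mass=0.95):
    """Shortest interval containing `mass` of the sorted samples."""
    n = len(sorted_draws)
    k = int(math.ceil(mass * n))
    best = (sorted_draws[0], sorted_draws[k - 1])
    for i in range(n - k + 1):
        lo, hi = sorted_draws[i], sorted_draws[i + k - 1]
        if hi - lo < best[1] - best[0]:
            best = (lo, hi)
    return best

lo, hi = hpd_interval(draws)
point_estimate = sum(draws) / len(draws)   # Monte Carlo posterior mean
```

    Repeating this for many fixed parameter values and counting how often the true value lands inside (lo, hi) gives the coverage proportions the paper reports.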

  2. System Reliability Analysis: Foundations.


    Performance formulas for systems subject to preventive maintenance are given. SYSTEM RELIABILITY ANALYSIS: FOUNDATIONS, Richard E… The network reliability in this case is P{s can communicate with the terminal t} = h(p); the polynomial expansion of h(p) is garbled in the source. For undirected networks, the basic reference is A. Satyanarayana and Kevin Wood (1982). For directed networks, the basic reference is Avinash…

  3. Toward formal analysis of ultra-reliable computers: A total systems approach

    Chisholm, G.H.; Kljaich, J.; Smith, B.T.; Wojcik, A.S.


    This paper describes the application of modeling and analysis techniques to software that is designed to execute on a four-channel version of the Charles Stark Draper Laboratory (CSDL) Fault-Tolerant Processor, referred to as the Draper FTP. The software performs sensor validation of four independent measures (signals) from the primary pumps of the Experimental Breeder Reactor-II operated by Argonne National Laboratory-West, and from the validated signals formulates a flow trip signal for the reactor safety system. 11 refs., 4 figs.

  4. Expert system aids reliability

    Johnson, A.T. [Tennessee Gas Pipeline, Houston, TX (United States)]


    Quality and reliability are key requirements in the energy transmission industry. Tennessee Gas Co., a division of El Paso Energy, has applied Gensym's G2 object-oriented expert system programming language as a standard tool for maintaining and improving quality and reliability in pipeline operation. Tennessee created a small team of gas controllers and engineers to develop a Proactive Controller's Assistant (ProCA) that provides recommendations for operating the pipeline more efficiently, reliably and safely. The controllers' pipeline operating knowledge is recreated in G2 in the form of rules and procedures in ProCA. Two G2 programmers supporting the gas control room add information to the ProCA knowledge base daily. The result is a dynamic, constantly improving system that supports not only the pipeline controllers in their operations, but also the measurement and communications departments' requests for special studies. The Proactive Controller's Assistant development focuses on the following areas: alarm management; pipeline efficiency; reliability; fuel efficiency; and controller development.

  5. Advances in reliability and system engineering

    Davim, J


    This book presents original studies describing the latest research and developments in the area of reliability and systems engineering. It helps the reader identify gaps in the current knowledge and presents fruitful areas for further research in the field. Among others, this book covers reliability measures, reliability assessment of multi-state systems, optimization of multi-state systems, continuous multi-state systems, new computational techniques applied to multi-state systems, and probabilistic and non-probabilistic safety assessment.

  6. Load Control System Reliability

    Trudnowski, Daniel [Montana Tech of the Univ. of Montana, Butte, MT (United States)]


    This report summarizes the results of the Load Control System Reliability project (DOE Award DE-FC26-06NT42750). The original grant was awarded to Montana Tech in April 2006. Follow-on DOE awards and expansions of the project scope occurred in August 2007, January 2009, April 2011, and April 2013. In addition to the DOE monies, the project also included matching funds from the states of Montana and Wyoming. Project participants included Montana Tech, the University of Wyoming, Montana State University, NorthWestern Energy, Inc., and MSE. Research focused on two areas: real-time power-system load control methodologies, and power-system measurement-based stability-assessment operation and control tools. The majority of effort was focused on area 2. Results from the research include: development of fundamental power-system dynamic concepts, control schemes, and signal-processing algorithms; many papers (including two prize papers) in leading journals and conferences and leadership of IEEE activities; one patent; participation in major actual-system testing in the western North American power system; prototype power-system operation and control software installed and tested at three major North American control centers; and the incubation of a new commercial-grade operation and control software tool. Work under this grant certainly supported the DOE-OE goals in the area of “Real Time Grid Reliability Management.”

  7. Ultimately Reliable Pyrotechnic Systems

    Scott, John H.; Hinkel, Todd


    This paper presents the methods by which NASA has designed, built, tested, and certified pyrotechnic devices for high-reliability operation in extreme environments, and illustrates the potential applications in the oil and gas industry. NASA's extremely successful application of pyrotechnics is built upon documented procedures and test methods that have been maintained and developed since the Apollo Program. Standards are managed and rigorously enforced for performance margins, redundancy, lot sampling, and personnel safety. The pyrotechnics utilized in spacecraft include such devices as small initiators and detonators with the power of a shotgun shell, detonating cord systems for explosive energy transfer across many feet, precision linear shaped charges for breaking structural membranes, and booster charges to actuate valves and pistons. NASA's pyrotechnics program is one of the more successful in the history of human spaceflight. No pyrotechnic device developed in accordance with NASA's Human Spaceflight standards has ever failed in flight use. NASA's pyrotechnic initiators work reliably at temperatures as low as -420 F. Each of the 135 Space Shuttle flights fired 102 of these initiators, some setting off multiple pyrotechnic devices, without a single failure. The recent landing on Mars of the Curiosity rover fired 174 of NASA's pyrotechnic initiators to complete the famous '7 minutes of terror.' Even after traveling through extreme radiation and thermal environments on the way to Mars, every one of them worked. These initiators have fired on the surface of Titan. NASA's design controls, procedures, and processes produce the most reliable pyrotechnics in the world. Application of pyrotechnics designed and procured in this manner could enable the energy industry's emergency equipment, such as shutoff valves and deep-sea blowout preventers, to be left in place for years in extreme environments and still be relied upon to function when needed, thus greatly enhancing…

  8. The process group approach to reliable distributed computing

    Birman, Kenneth P.


    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  9. Telecommunications system reliability engineering theory and practice

    Ayers, Mark L


    "Increasing system complexity require new, more sophisticated tools for system modeling and metric calculation. Bringing the field up to date, this book provides telecommunications engineers with practical tools for analyzing, calculating, and reporting availability, reliability, and maintainability metrics. It gives the background in system reliability theory and covers in-depth applications in fiber optic networks, microwave networks, satellite networks, power systems, and facilities management. Computer programming tools for simulating the approaches presented, using the Matlab software suite, are also provided"

  10. Exact reliability quantification of highly reliable systems with maintenance

    Bris, Radim, E-mail: radim.bris@vsb.c [VSB-Technical University Ostrava, Faculty of Electrical Engineering and Computer Science, Department of Applied Mathematics, 17. listopadu 15, 70833 Ostrava-Poruba (Czech Republic)]


    When a system is composed of highly reliable elements, exact reliability quantification may be problematic because computer accuracy is limited. Inaccuracy can arise in different ways; for example, an error may be made when subtracting two numbers that are very close to each other, or in the process of summing many numbers of very different magnitudes. The basic objective of this paper is to find a procedure that eliminates the errors made by a PC when calculations close to the error limit are executed. The highly reliable system is represented by a directed acyclic graph composed of terminal nodes (i.e., highly reliable input elements), internal nodes representing subsystems, and edges that bind all of these nodes. Three admissible unavailability models of terminal nodes are introduced, including both corrective and preventive maintenance. The algorithm for exact unavailability calculation of terminal nodes builds on the merits of MATLAB, a high-performance language for technical computing. The system unavailability quantification procedure applied to a graph structure, which considers both independent and dependent (i.e., repeatedly occurring) terminal nodes, is based on a combinatorial principle. This principle requires summation of many very different non-negative numbers, which may be a source of inaccuracy. That is why another algorithm, for exact summation of such numbers, is designed in the paper. The summation procedure uses the benefits of a special number system with base 2^32. Computational efficiency of the new computing methodology is compared with advanced simulation software. Various calculations on systems from the references are performed to emphasize the merits of the methodology.
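
    The summation hazard the paper guards against is easy to demonstrate. The sketch below is a simplified stand-in that uses Python's exact rational arithmetic instead of the paper's radix-2^32 scheme, but it exhibits the same floating-point loss:

```python
from fractions import Fraction

# Summing many non-negative numbers of very different magnitudes loses
# precision in floating point: each tiny term is smaller than half an
# ulp of the running sum and is absorbed without effect.
terms = [1.0] + [1e-17] * 100

naive = 0.0
for t in terms:
    naive += t            # every 1e-17 is swallowed by the 1.0

# Exact rational accumulation keeps all the mass (Fraction(float) is exact).
exact = sum(Fraction(t) for t in terms)

lost = float(exact) - naive   # the mass the float sum silently dropped
```

    In an unavailability calculation this dropped mass is exactly the kind of small-probability contribution the paper's exact summation algorithm preserves.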

  11. High-reliability computing for the smarter planet

    Quinn, Heather M. [Los Alamos National Laboratory]; Graham, Paul [Los Alamos National Laboratory]; Manuzzato, Andrea [Univ. of Padova]; Dehon, Andre [Univ. of Pennsylvania]; Carter, Nicholas [Intel Corporation]


    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability is…

  12. Fault tolerant computing systems

    Randell, B


    Fault tolerance involves the provision of strategies for error detection, damage assessment, fault treatment and error recovery. A survey is given of the different sorts of strategies used in highly reliable computing systems, together with an outline of recent research on the problems of providing fault tolerance in parallel and distributed computing systems. (15 refs).

  13. Reliability Growth in Space Life Support Systems

    Jones, Harry W.


    A hardware system's failure rate often increases over time due to wear and aging, but not always. Some systems instead show reliability growth, a decreasing failure rate with time, due to effective failure analysis and remedial hardware upgrades. Reliability grows when failure causes are removed by improved design. A mathematical reliability growth model allows the reliability growth rate to be computed from the failure data. The space shuttle was extensively maintained, refurbished, and upgraded after each flight and it experienced significant reliability growth during its operational life. In contrast, the International Space Station (ISS) is much more difficult to maintain and upgrade and its failure rate has been constant over time. The ISS Carbon Dioxide Removal Assembly (CDRA) reliability has slightly decreased. Failures on ISS and with the ISS CDRA continue to be a challenge.
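
    The reliability growth rate mentioned above can be computed from failure data with the standard Crow-AMSAA (power-law NHPP) estimators. The article does not give its model, so this is an assumed illustration with made-up failure times:

```python
import math

# Crow-AMSAA sketch: failure intensity u(t) = lam * beta * t**(beta - 1).
# beta < 1 means a falling failure rate over time, i.e. reliability growth;
# beta > 1 means wear-out. Standard time-truncated MLEs are used below.

failure_times = [50.0, 180.0, 500.0, 1100.0, 2400.0]  # hypothetical, hours
T = 3000.0                                            # total test time

n = len(failure_times)
beta_hat = n / sum(math.log(T / t) for t in failure_times)
lam_hat = n / T ** beta_hat

# Estimated failure intensity at the end of the test period:
intensity_now = lam_hat * beta_hat * T ** (beta_hat - 1.0)
growing = beta_hat < 1.0   # True -> failure rate decreasing with time
```

    Fitting the same model to a constant-failure-rate history, like the ISS data described above, would yield beta close to 1, i.e. no growth.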

  14. Reliability and diagnostic of modular systems

    J. Kohlas


    Reliability and diagnostics are in general treated as two separate problems, yet the two are in fact closely related. Here, this relation is considered in the simple case of modular systems. We show how the computation of reliability and diagnostics can be done efficiently within the same Bayesian network induced by the modularity of the system's structure function.
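
    The structure-function view of a modular system lends itself to a short sketch. This illustration uses plain series/parallel algebra with made-up component reliabilities; it does not build the Bayesian network the paper describes:

```python
# Reliability of a modular system computed module by module through its
# structure function: a series system of two modules, each module being
# a parallel pair of components (independent failures assumed).

def parallel(*rs):
    """Reliability of components in parallel (at least one must work)."""
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

def series(*rs):
    """Reliability of components in series (all must work)."""
    r_total = 1.0
    for r in rs:
        r_total *= r
    return r_total

module_a = parallel(0.9, 0.9)     # 1 - 0.1 * 0.1  = 0.99
module_b = parallel(0.95, 0.8)    # 1 - 0.05 * 0.2 = 0.99
system_r = series(module_a, module_b)
```

    The same modular decomposition is what lets reliability and diagnosis share one graphical model: each module's reliability is computed once and reused.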

  15. Multi-Disciplinary System Reliability Analysis

    Mahadevan, Sankaran; Han, Song


    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. To this end, the mechanical equivalence between system behavior models in different disciplines is investigated. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer and fluid flow disciplines.

  16. Innovations in power systems reliability

    Santora, Albert H; Vaccaro, Alfredo


    Electrical grids are among the world's most reliable systems, yet they still face a host of issues, from aging infrastructure to questions of resource distribution. Here is a comprehensive and systematic approach to tackling these contemporary challenges.

  17. Modeling and Simulation Reliable Spacecraft On-Board Computing

    Park, Nohpill


    The proposed project will investigate modeling- and simulation-driven testing and fault tolerance schemes for spacecraft on-board computing, thereby achieving reliable spacecraft telecommunication. A spacecraft communication system has inherent capabilities of providing multipoint and broadcast transmission, connectivity between any two distant nodes within a wide-area coverage, quick network configuration/reconfiguration, rapid allocation of space segment capacity, and distance-insensitive cost. To realize the capabilities mentioned above, both the size and cost of the ground-station terminals have to be reduced by using a reliable, high-throughput, fast and cost-effective on-board computing system, which has been known to be a critical contributor to the overall performance of space mission deployment. Controlled vulnerability of mission data (measured in sensitivity), improved performance (measured in throughput and delay) and fault tolerance (measured in reliability) are some of the most important features of these systems. The system should be thoroughly tested and diagnosed before a fault tolerance scheme is employed in it. Testing and fault tolerance strategies should be driven by accurate performance models (i.e. throughput, delay, reliability and sensitivity) to find an optimal solution in terms of reliability and cost. The modeling and simulation tools will be integrated with a system architecture module, a testing module and a module for fault tolerance, all of which interact through a central graphical user interface.

  18. Reliability Management for Information System

    李睿; 俞涛; 刘明伦


    An integrated intelligent management approach is presented to help organizations manage the many heterogeneous resources in their information systems. A general architecture for managing information system reliability is proposed, and the architecture is described from two aspects: a process model and a hierarchical model. Data mining techniques are used in the data analysis. A data analysis system applicable to real-time data analysis is developed by applying improved data mining to the critical processes. The framework of integrated management for information system reliability based on real-time data mining is illustrated, and the development of integrated, intelligent management of information systems is discussed.

  19. On Bayesian System Reliability Analysis

    Soerensen Ringi, M.


    The view taken in this thesis is that reliability, the probability that a system will perform a required function for a stated period of time, depends on a person's state of knowledge. Reliability changes as this state of knowledge changes, i.e. when new relevant information becomes available. Most existing models for system reliability prediction are developed in a classical framework of probability theory, and they overlook some information that is always present. Probability is just an analytical tool to handle uncertainty, based on judgement and subjective opinions. It is argued that the Bayesian approach gives a much more comprehensive understanding of the foundations of probability than the so-called frequentist school. A new model for system reliability prediction is given in two papers. The model captures the fact that component failures are dependent because of a shared operational environment. The suggested model also naturally permits learning from failure data of similar components in non-identical environments. 85 refs.
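
    The Bayesian idea above, reliability as a state of knowledge updated by data, can be illustrated with a minimal conjugate sketch. This is not the thesis's model; the Beta prior and the test counts are invented:

```python
# Beta-Binomial update of a component's success probability: the prior
# encodes the current state of knowledge, and each batch of pass/fail
# test results shifts it. All numbers here are hypothetical.

a, b = 1.0, 1.0            # uniform Beta(1, 1) prior: no strong opinion

successes, fails = 48, 2   # observed test outcomes (made up)
a_post, b_post = a + successes, b + fails

prior_mean = a / (a + b)                     # 0.5 before any data
posterior_mean = a_post / (a_post + b_post)  # 49 / 52, roughly 0.942
```

    Feeding in further data, e.g. from similar components in other environments, simply repeats the update, which is the learning behavior the abstract describes.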

  20. Bayesian system reliability assessment under fuzzy environments

    Wu, H.-C


    The Bayesian system reliability assessment under fuzzy environments is proposed in this paper. In order to apply the Bayesian approach, the fuzzy parameters are assumed to be fuzzy random variables with fuzzy prior distributions. The (conventional) Bayes estimation method is used to construct the fuzzy Bayes point estimator of system reliability by invoking the well-known 'Resolution Identity' theorem of fuzzy set theory. We also provide computational procedures to evaluate the membership degree of any given Bayes point estimate of system reliability. To achieve this, we transform the original problem into a nonlinear programming problem, which is then divided into four subproblems to simplify the computation. Finally, the subproblems can be solved using any commercial optimizer, e.g. GAMS or LINGO.

  1. Evaluation of the Reliability of a Fault-Tolerant Computer System by the Markov State Graph Method (应用马尔科夫状态图法进行可靠性评估)



    In this paper, the reliability of an actual repairable, hardware fault-tolerant computer system is evaluated by the Markov state graph method. The majority voting method and the single store method are used to evaluate the reliability and usability of the fault-tolerant system; evaluation data are obtained for each of the two fault-tolerance schemes, and the practical data are analyzed to compare their advantages, disadvantages, and best ranges of application.
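
    A minimal Markov state-graph calculation of the kind described above, the steady-state availability of a two-unit repairable system, might look like this. The failure and repair rates and the single-repairman assumption are invented for illustration:

```python
# States: 2 = both units up, 1 = one up, 0 = both down.
# Transitions: 2 -> 1 at rate 2*lam, 1 -> 0 at rate lam (both units active);
# repairs 1 -> 2 and 0 -> 1 at rate mu (single repairman).
# Birth-death balance equations give:
#   p1 = (2*lam/mu) * p2,   p0 = (lam/mu) * p1.

lam = 0.001   # failures per hour per unit (hypothetical)
mu = 0.1      # repairs per hour (hypothetical)

rho1 = 2.0 * lam / mu
rho0 = lam / mu

p2 = 1.0 / (1.0 + rho1 + rho1 * rho0)   # normalization: p0 + p1 + p2 = 1
p1 = rho1 * p2
p0 = rho0 * p1

availability = p2 + p1   # system is up while at least one unit is up
```

    Larger state graphs, e.g. with voting logic or imperfect coverage, are solved the same way: write one balance equation per state and normalize.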

  2. Computer systems

    Olsen, Lola


    In addition to the discussions, Ocean Climate Data Workshop hosts gave participants an opportunity to hear about, see, and test for themselves some of the latest computer tools now available for those studying climate change and the oceans. Six speakers described computer systems and their functions. The introductory talks were followed by demonstrations to small groups of participants and some opportunities for participants to get hands-on experience. After this familiarization period, attendees were invited to return during the course of the Workshop and have one-on-one discussions and further hands-on experience with these systems. Brief summaries or abstracts of introductory presentations are addressed.

  3. Resilient computer system design

    Castano, Victor


    This book presents a paradigm for designing new generation resilient and evolving computer systems, including their key concepts, elements of supportive theory, methods of analysis and synthesis of ICT with new properties of evolving functioning, as well as implementation schemes and their prototyping. The book explains why new ICT applications require a complete redesign of computer systems to address challenges of extreme reliability, high performance, and power efficiency. The authors present a comprehensive treatment for designing the next generation of computers, especially addressing safety-critical, autonomous, real-time, military, banking, and wearable health care systems. The book describes design solutions for a new computer system, the evolving reconfigurable architecture (ERA), that is free from drawbacks inherent in current ICT and related engineering models, and pursues simplicity, reliability, and scalability principles of design implemented through redundancy and re-configurability, targeted for energy-…

  4. Recent advances in computational structural reliability analysis methods

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.


    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known, and it usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
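The probabilistic notion of reliability described above can be illustrated with a minimal Monte Carlo sketch for a single failure mode, using a limit-state function g = R - S; the normal distributions and parameters below are hypothetical, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical capacity R and demand S (arbitrary units), both assumed normal.
R = rng.normal(loc=300.0, scale=30.0, size=n)   # resistance
S = rng.normal(loc=200.0, scale=40.0, size=n)   # load effect

g = R - S                    # limit-state function: failure when g < 0
pf = np.mean(g < 0.0)        # Monte Carlo estimate of failure probability
reliability = 1.0 - pf
```

Here R - S is N(100, 50), so the exact failure probability is about 0.0228; the sampled estimate quantifies a reliability level that a safety-factor design never makes explicit.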

  5. Reliability measures of a computer system with priority to PM over the H/W repair activities subject to MOT and MRT

    Ashish Kumar


    This paper evaluates reliability measures of a computer system of two identical units with independent failures of h/w and s/w components. Initially one unit is operative and the other is kept as a spare in cold standby. A single server visits the system immediately whenever needed. The server conducts preventive maintenance of a unit after a maximum operation time. If the server is unable to repair the h/w components within a maximum repair time, the components in the unit are immediately replaced by new ones. However, the s/w components are replaced only at their failure. Priority is given to preventive maintenance over repair activities of the h/w. The time to failure of the components follows a negative exponential distribution, whereas the distributions of preventive maintenance, repair, and replacement times are arbitrary. Expressions for some important reliability measures of system effectiveness are derived using a semi-Markov process and the regenerative point technique. The graphical behavior of the results is also shown for a particular case.
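As a rough illustration of the kind of model the abstract describes, the steady-state availability of a two-unit cold-standby system with a single repair server can be computed from a continuous-time Markov chain. The purely exponential model and the rates below are assumptions for illustration; the paper's semi-Markov model with preventive maintenance and repair priorities is richer:

```python
import numpy as np

lam, mu = 0.01, 0.5   # assumed failure and repair rates (per hour)

# CTMC generator for a two-unit cold-standby system with one repair server:
# state 0: both units good, 1: one failed/in repair, 2: both failed (down)
Q = np.array([
    [-lam,         lam,  0.0],
    [  mu, -(lam + mu),  lam],
    [ 0.0,          mu,  -mu],
])

# Solve pi @ Q = 0 with sum(pi) = 1 by replacing one balance equation.
A = Q.T.copy()
A[-1, :] = 1.0                 # normalization row
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)

availability = pi[0] + pi[1]   # system is up unless both units have failed
```

With these rates the chain spends almost all of its time in state 0, and the cold standby pushes steady-state availability above 0.999.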

  6. System Reliability Analysis of Redundant Condition Monitoring Systems

    YI Pengxing; HU Youming; YANG Shuzi; WU Bo; CUI Feng


    The development and application of new reliability models and methods are presented to analyze the system reliability of complex condition monitoring systems. The methods include a method for analyzing failure modes of a type of redundant condition monitoring system (RCMS) using a fault tree model, Markov modeling techniques for analyzing the system reliability of RCMS, and methods for estimating Markov model parameters. Furthermore, a computing case is investigated and conclusions are summarized. Results show that the method proposed here is practical and valuable for designing condition monitoring systems and their maintenance.

  7. A Research Roadmap for Computation-Based Human Reliability Analysis

    Boring, Ronald [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis [Idaho National Lab. (INL), Idaho Falls, ID (United States); Groth, Katrina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)


    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermal-hydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  9. Secure computing on reconfigurable systems

    Fernandes Chaves, R.J.


    This thesis proposes a Secure Computing Module (SCM) for reconfigurable computing systems. The SCM provides a protected and reliable computational environment, where data security and protection against malicious attacks to the system are assured. The SCM is strongly based on encryption algorithms and on the

  11. Defining Requirements for Improved Photovoltaic System Reliability

    Maish, A.B.


    Reliable systems are an essential ingredient of any technology progressing toward commercial maturity and large-scale deployment. This paper defines reliability as meeting system functional requirements, and then develops a framework to understand and quantify photovoltaic system reliability based on initial and ongoing costs and system value. The core elements necessary to achieve reliable PV systems are reviewed. These include appropriate system design, satisfactory component reliability, and proper installation and servicing. Reliability status, key issues, and present needs in system reliability are summarized for four application sectors.

  12. Software reliability and safety in nuclear reactor protection systems

    Lawrence, J.D. [Lawrence Livermore National Lab., CA (United States)


    Planning the development, use, and regulation of computer systems in nuclear reactor protection systems in such a way as to enhance reliability and safety is a complex issue. This report is one of a series of reports from the Computer Safety and Reliability Group, Lawrence Livermore National Laboratory, that investigates different aspects of computer software in reactor protection systems. There are two central themes in the report. First, software considerations cannot be fully understood in isolation from computer hardware and application considerations. Second, the process of engineering reliability and safety into a computer system requires activities to be carried out throughout the software life cycle. The report discusses the many activities that can be carried out during the software life cycle to improve the safety and reliability of the resulting product. The viewpoint is primarily that of the assessor, or auditor.

  13. Algorithmic mechanisms for reliable crowdsourcing computation under collusion.

    Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A; Pareja, Daniel


    We consider a computing system where a master processor assigns a task for execution to worker processors that may collude. We model the workers' decision of whether to comply (compute the task) or not (return a bogus result to save the computation cost) as a game among workers. That is, we assume that workers are rational in a game-theoretic sense. We identify analytically the parameter conditions for a unique Nash Equilibrium where the master obtains the correct result. We also evaluate experimentally mixed equilibria aiming to attain better reliability-profit trade-offs. For a wide range of parameter values that may be used in practice, our simulations show that, in fact, both master and workers are better off using a pure equilibrium where no worker cheats, even under collusion, and even for colluding behaviors that involve deviating from the game.
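The flavor of the equilibrium analysis can be sketched with a simplified one-shot payoff comparison. The verification-and-fine scheme, function names, and parameters below are hypothetical illustrations; the paper's game, with collusion and mixed equilibria, is more general:

```python
# Illustrative payoffs for a one-shot master-worker game (hypothetical model).
# Each worker either computes (cost c, reward r for a correct result) or
# cheats (zero cost, fine f if caught). The master verifies with probability p.

def comply_payoff(r, c):
    return r - c                      # a correct answer is always accepted

def cheat_payoff(r, f, p):
    # caught (prob p): pay the fine; not caught (prob 1 - p): collect reward
    return p * (-f) + (1.0 - p) * r

def compliance_is_equilibrium(r, c, f, p):
    """Complying is a (weakly) dominant choice when it pays at least as much."""
    return comply_payoff(r, c) >= cheat_payoff(r, f, p)

def min_verification_prob(r, c, f):
    # r - c >= (1 - p) r - p f  rearranges to  p >= c / (r + f)
    return c / (r + f)
```

For example, with r=10, c=2, f=10, honest computing is sustained whenever the master verifies with probability at least c/(r+f) = 0.1.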

  15. The 747 primary flight control systems reliability and maintenance study


    The major operational characteristics of the 747 Primary Flight Control Systems (PFCS) are described. Results of reliability analysis for separate control functions are presented. The analysis makes use of a NASA computer program which calculates the reliability of redundant systems. Costs for maintaining the 747 PFCS in airline service are assessed. The reliabilities and costs will provide a baseline for use in trade studies of future flight control system designs.

  16. An introduction to reliable quantum computation

    Aliferis, Panos


    This is an introduction to software methods of quantum fault tolerance. Broadly speaking, these methods describe strategies for using the noisy hardware components of a quantum computer to perform computations while continually monitoring and actively correcting the hardware faults. We discuss parallels and differences with similar methods for ordinary digital computation, we discuss some of the noise models used in designing and analyzing noisy quantum circuits, and we sketch the logic of some of the central results in this area of research.

  17. System Reliability of Timber Structures

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard


    For reduction of the risk of collapse in the event of loss of structural element(s), a structural engineer may take necessary steps to design a collapse-resistant structure that is insensitive to accidental circumstances, e.g. by incorporating characteristics like redundancy, ties, ductility, key elements, alternate load path(s), etc. in the structural design. In general these characteristics can have a positive influence on the system reliability of a structure; however, in the Eurocodes ductility is only awarded for concrete and steel structures but not for timber structures. It is well-known that structural systems can redistribute internal forces due to ductility of a connection, i.e. some additional loads can be carried by the structure. The same effect is also possible for reinforced concrete structures and structures of steel. However, for timber structures codes do not award that ductility...
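The benefit of redundancy and alternate load paths mentioned above can be quantified with elementary series/parallel reliability algebra; the component reliabilities below are illustrative:

```python
import math

def parallel(reliabilities):
    """Redundant (parallel) group: fails only if every member fails."""
    q = math.prod(1.0 - r for r in reliabilities)
    return 1.0 - q

def series(reliabilities):
    """Series chain: fails if any member fails (no alternate load path)."""
    return math.prod(reliabilities)

# Three members of reliability 0.95 in series, vs. the same chain where
# each member is duplicated to provide an alternate load path.
r_series = series([0.95, 0.95, 0.95])               # ~0.857
r_redundant = series([parallel([0.95, 0.95])] * 3)  # ~0.993
```

Doubling each member raises the chain's reliability from roughly 0.857 to roughly 0.993, which is the kind of system-level gain a code could reward.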

  18. Computer programming and computer systems

    Hassitt, Anthony


    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses the automatic electronic digital computers, symbolic language, Reverse Polish Notation, and the translation of Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten

  19. A fast, reliable algorithm for computing frequency responses of state space models

    Wette, Matt


    Computation of frequency responses for large order systems described by time invariant state space systems often provides a bottleneck in control system analysis. It is shown that banding the A-matrix in the state space model can effectively reduce the computation time for such systems while maintaining reliability in the results produced.
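A direct way to compute the frequency response of a state-space model is one linear solve per frequency point. This sketch uses a dense solve; for a large banded A-matrix, as the abstract suggests, the same solve can exploit the band structure (e.g. a banded factorization) to cut the per-frequency cost:

```python
import numpy as np

def freq_response(A, B, C, D, freqs):
    """Evaluate G(jw) = C (jwI - A)^{-1} B + D at each frequency in freqs."""
    n = A.shape[0]
    I = np.eye(n)
    return [C @ np.linalg.solve(1j * w * I - A, B) + D for w in freqs]

# Single-state check: A=-1, B=C=1, D=0 gives G(jw) = 1/(1 + jw)
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]); D = np.array([[0.0]])
G = freq_response(A, B, C, D, [0.0, 1.0])
```

At w=0 the gain is 1, and at w=1 the magnitude is 1/sqrt(2), matching the analytic first-order response.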

  20. The reliability of tablet computers in depicting maxillofacial radiographic landmarks

    Tadinada, Aditya; Mahdian, Mina; Sheth, Sonam; Chandhoke, Taranpreet K.; Gopalakrishna, Aadarsh; Potluri, Anitha; Yadav, Sumit [University of Connecticut School of Dental Medicine, Farmington (United States)


    This study was performed to evaluate the reliability of the identification of anatomical landmarks in panoramic and lateral cephalometric radiographs on a standard medical grade picture archiving communication system (PACS) monitor and a tablet computer (iPad 5). A total of 1000 radiographs, including 500 panoramic and 500 lateral cephalometric radiographs, were retrieved from the de-identified dataset of the archive of the Section of Oral and Maxillofacial Radiology of the University of Connecticut School of Dental Medicine. Major radiographic anatomical landmarks were independently reviewed by two examiners on both displays. The examiners initially reviewed ten panoramic and ten lateral cephalometric radiographs using each imaging system, in order to verify interoperator agreement in landmark identification. The images were scored on a four-point scale reflecting the diagnostic image quality and exposure level of the images. Statistical analysis showed no significant difference between the two displays regarding the visibility and clarity of the landmarks in either the panoramic or cephalometric radiographs. Tablet computers can reliably show anatomical landmarks in panoramic and lateral cephalometric radiographs.

  1. Reliability analysis of an associated system

    陈长杰; 魏一鸣; 蔡嗣经


    Based on the engineering reliability of large complex systems and the distinct characteristics of soft systems, new concepts and theory concerning medium elements and the associated system are developed, and a reliability logic model of the associated system is provided. Through field investigation of the trial operation, the engineering reliability of the paste fill system in No.2 mine of Jinchuan Non-ferrous Metallic Corporation is analyzed using the theory of the associated system.

  2. Composite system reliability evaluation by stochastic calculation of system operation

    Haubrick, H.-J.; Hinz, H.-J.; Landeck, E. [Dept. of Power Systems and Power Economics (Germany)


    This report describes a newly developed probabilistic approach for steady-state composite system reliability evaluation and its exemplary application to a bulk power test system. The new computer program, called PHOENIX, takes into consideration transmission limitations, outages of lines and power stations and, as a central element, a highly sophisticated model of the dispatcher performing remedial actions after disturbances. The kernel of the new method is a procedure for optimal power flow calculation that has been specially adapted for use in reliability evaluations under the above mentioned conditions. (author) 11 refs., 8 figs., 1 tab.

  3. 75 FR 71625 - System Restoration Reliability Standards


    ... Energy Regulatory Commission 18 CFR Part 40 System Restoration Reliability Standards November 18, 2010... to approve Reliability Standards EOP-001-1 (Emergency Operations Planning), EOP- 005-2 (System... modifications to proposed EOP-005-2 and EOP-006-2. The proposed Reliability Standards require that...

  4. Multi-Agent System for Resource Reliability


    Phase 1 research and development of a prototype for network resource reliability has laid the groundwork for the Phase 2 implementation of MASRR, a Multi-Agent System for Resource Reliability, and its eventual commercialization.

  5. Reliability of computer memories in radiation environment

    Fetahović Irfan S.


    The aim of this paper is to examine the radiation hardness of magnetic (Toshiba MK4007 GAL) and semiconductor (AT 27C010 EPROM and AT 28C010 EEPROM) computer memories. The magnetic memories were examined in a neutron radiation field, and the semiconductor memories in a gamma radiation field. The obtained results show a high radiation hardness of the magnetic memories. On the other side, the semiconductor memories are significantly more sensitive, and radiation can lead to important damage to their functionality. [Projekat Ministarstva nauke Republike Srbije, br. 171007]

  6. Quality and Reliability of Missile System

    Mr. Prahlada


    A missile system is a single-shot weapon system that requires very high quality and reliability. Therefore, quality and reliability have to be built into the system from design through testing and evaluation. In this paper, the technological challenges encountered during development of an operational missile system are presented, along with the factors considered to build quality and reliability through design, manufacture, assembly, testing, and the sharing of knowledge with other aerospace agencies, industries, and institutions.

  7. Distribution System Reliability Evaluation Taking Circuit Capacity into Consideration


    Distribution system reliability evaluation using the method of connectivity ignores the effect of operation constraints. This paper presents an approach that includes the effect of circuit capacity. Reliability evaluation of distribution systems with parallel circuits generally requires load flow solutions. The proposed approach combines the Z-matrix contingency method with DC load flow for a much faster direct solution. Three different methods for distribution system reliability evaluation have been incorporated into a computer program. The program was validated using two distribution systems connected to the IEEE-RTS and another sample distribution system.
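A DC load flow of the kind combined with the contingency method above reduces to one linear solve in the bus susceptance matrix. This toy 3-bus sketch (the line data, capacities, and loads are assumed, in per unit) computes bus angles, line flows, and capacity violations:

```python
import numpy as np

# Toy 3-bus DC load flow (bus 0 is the slack bus). Assumed line data, p.u.
lines = [(0, 1, 10.0, 1.5),   # (from, to, susceptance b, capacity limit)
         (1, 2, 10.0, 1.5),
         (0, 2, 10.0, 1.5)]
P = np.array([0.0, -1.0, -0.5])   # net injections: loads at buses 1 and 2

n = 3
Bbus = np.zeros((n, n))
for f, t, b, _ in lines:
    Bbus[f, f] += b; Bbus[t, t] += b
    Bbus[f, t] -= b; Bbus[t, f] -= b

# Solve the reduced system (slack row/column dropped); theta_slack = 0.
theta = np.zeros(n)
theta[1:] = np.linalg.solve(Bbus[1:, 1:], P[1:])

flows = np.array([b * (theta[f] - theta[t]) for f, t, b, _ in lines])
overloaded = [i for i, (_, _, _, cap) in enumerate(lines)
              if abs(flows[i]) > cap]
```

The slack bus supplies the total load of 1.5 p.u. through its two outgoing lines, and `overloaded` lists any line whose flow exceeds its capacity, which is exactly the check a capacity-aware reliability evaluation needs per contingency.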

  8. Towards early software reliability prediction for computer forensic tools (case study).

    Abu Talib, Manar


    Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component based system. It is used, for instance, to analyze the reliability of the state machines of real time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
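An architecture-based reliability computation in the spirit described above can be sketched with a Cheung-style discrete-time Markov model; the component reliabilities and transition probabilities below are invented purely for illustration:

```python
import numpy as np

# Assumed component reliabilities and control-flow transition probabilities.
R = np.array([0.99, 0.98, 0.97])       # components 0..2; component 2 exits
P = np.array([
    [0.0, 0.7, 0.3],                   # from component 0
    [0.2, 0.0, 0.8],                   # from component 1
    [0.0, 0.0, 0.0],                   # exit component: execution terminates
])

Qhat = np.diag(R) @ P                  # a step succeeds AND control transfers
S = np.linalg.inv(np.eye(3) - Qhat)    # sums reliability mass over all paths
system_reliability = S[0, 2] * R[2]    # reach the exit, and the exit runs OK
```

S accumulates the probability of every failure-free execution path from the start component to the exit, so the tool's reliability follows directly from the component values and the architecture's transition matrix.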

  9. On reliability optimization for power generation systems


    The reliability level of a power generation system is an important concern of both electricity producers and electricity consumers: a high reliability level results in additional utility cost, while a low reliability level results in additional consumer cost, so the optimum reliability level should be determined such that the total cost reaches its minimum. Four optimization models for power generation system reliability are constructed, and proven efficient solutions for these models are given.
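The cost trade-off described above can be sketched numerically: choose illustrative (entirely hypothetical) cost curves for the utility and for consumer outages, then search for the reliability level that minimizes their sum:

```python
import numpy as np

# Hypothetical cost curves versus generation-system reliability level R:
# the utility's cost grows steeply as R -> 1, while the consumers'
# expected outage cost falls linearly. The optimum minimizes their sum.
R = np.linspace(0.90, 0.9999, 2000)
utility_cost = 50.0 / (1.0 - R)        # e.g. added capacity and maintenance
outage_cost = 2.0e6 * (1.0 - R)        # expected interruption damage

total = utility_cost + outage_cost
R_opt = R[np.argmin(total)]
```

For these curves the analytic optimum satisfies 50/(1-R)^2 = 2e6, i.e. R = 0.995; the grid search lands on the nearest sample point.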

  10. Computed tomography for the detection of distal radioulnar joint instability: normal variation and reliability of four CT scoring systems in 46 patients

    Wijffels, Mathieu; Krijnen, Pieta; Schipper, Inger [Leiden University Medical Center, Department of Surgery-Trauma Surgery, P.O. Box 9600, Leiden (Netherlands); Stomp, Wouter; Reijnierse, Monique [Leiden University Medical Center, Department of Radiology, P.O. Box 9600, Leiden (Netherlands)


    The diagnosis of distal radioulnar joint (DRUJ) instability is clinically challenging. Computed tomography (CT) may aid in the diagnosis, but the reliability and normal variation for DRUJ translation on CT have not been established in detail. The aim of this study was to evaluate inter- and intraobserver agreement and normal ranges of CT scoring methods for determination of DRUJ translation in both posttraumatic and uninjured wrists. Patients with a conservatively treated, unilateral distal radius fracture were included. CT scans of both wrists were evaluated independently, by two readers using the radioulnar line method, subluxation ratio method, epicenter method and radioulnar ratio method. The inter- and intraobserver agreement was assessed and normal values were determined based on the uninjured wrists. Ninety-two wrist CTs (mean age: 56.5 years, SD: 17.0, mean follow-up 4.2 years, SD: 0.5) were evaluated. Interobserver agreement was best for the epicenter method [ICC = 0.73, 95 % confidence interval (CI) 0.65-0.79]. Intraobserver agreement was almost perfect for the radioulnar line method (ICC = 0.82, 95 % CI 0.77-0.87). Each method showed a wide normal range for normal DRUJ translation. Normal range for the epicenter method is -0.35 to -0.06 in pronation and -0.11 to 0.19 in supination. DRUJ translation on CT in pro- and supination can be reliably evaluated in both normal and posttraumatic wrists, however with large normal variation. The epicenter method seems the most reliable. Scanning of both wrists might be helpful to prevent the radiological overdiagnosis of instability. (orig.)

  11. Towards Reliable Integrated Services for Dependable Systems

    Schiøler, Henrik; Ravn, Anders Peter; Izadi-Zamanabadi, Roozbeh


    Reliability issues for various technical systems are discussed and focus is directed towards distributed systems, where communication facilities are vital to maintain system functionality. Reliability in communication subsystems is considered as a resource to be shared among a number of logical...


    Chokchai " Box" Leangsuksun


    Our project is a multi-institutional research effort that exploits the interplay of reliability, availability, and serviceability (RAS) aspects to solve resilience issues in high-end scientific computing on the next generation of supercomputers. Results lie in the following tracks: failure prediction in a large-scale HPC; investigation of reliability issues and mitigation techniques, including in GPGPU-based HPC systems; and HPC resilience runtime and tools.

  14. Three-dimensional imaging of the spine using the EOS system: is it reliable? A comparative study using computed tomography imaging

    Al-Aubaidi, Z.; Lebel, D.; Oudjhane, K.


    The aim of this study was to evaluate the precision of three-dimensional geometry compared with computed tomography (CT) images. This retrospective study included patients who had undergone both imaging of the spine using the EOS imaging system and CT scanning of the spine. The apical vertebral o...

  15. Solid State Lighting Reliability Components to Systems

    Fan, XJ


    Solid State Lighting Reliability: Components to Systems begins with an explanation of the major benefits of solid state lighting (SSL) when compared to conventional lighting systems including but not limited to long useful lifetimes of 50,000 (or more) hours and high efficacy. When designing effective devices that take advantage of SSL capabilities the reliability of internal components (optics, drive electronics, controls, thermal design) take on critical importance. As such a detailed discussion of reliability from performance at the device level to sub components is included as well as the integrated systems of SSL modules, lamps and luminaires including various failure modes, reliability testing and reliability performance. This book also: Covers the essential reliability theories and practices for current and future development of Solid State Lighting components and systems Provides a systematic overview for not only the state-of-the-art, but also future roadmap and perspectives of Solid State Lighting r...

  16. PV Systems Reliability Final Technical Report.

    Lavrova, Olga [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Flicker, Jack David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Johnson, Jay [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Armijo, Kenneth Miguel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gonzalez, Sigifredo [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schindelholz, Eric John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sorensen, Neil R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Yang, Benjamin Bing-Yeh [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)


    The continued exponential growth of photovoltaic technologies paves a path to a solar-powered world, but requires continued progress toward low-cost, high-reliability, high-performance photovoltaic (PV) systems. High reliability is an essential element in achieving low-cost solar electricity by reducing operation and maintenance (O&M) costs and extending system lifetime and availability, but these attributes are difficult to verify at the time of installation. Utilities, financiers, homeowners, and planners are demanding this information in order to evaluate their financial risk as a prerequisite to large investments. Reliability research and development (R&D) is needed to build market confidence by improving product reliability and by improving predictions of system availability, O&M cost, and lifetime. This project is focused on understanding, predicting, and improving the reliability of PV systems. The two areas being pursued include PV arc-fault and ground fault issues, and inverter reliability.

  17. Reliability of large and complex systems

    Kolowrocki, Krzysztof


    Reliability of Large and Complex Systems, previously titled Reliability of Large Systems, is an innovative guide to the current state and reliability of large and complex systems. In addition to revised and updated content on the complexity and safety of large and complex mechanisms, this new edition looks at the reliability of nanosystems, a key research topic in nanotechnology science. The author discusses the importance of safety investigation of critical infrastructures that have aged or have been exposed to varying operational conditions. This reference provides an asympt

  18. Bayesian Missile System Reliability from Point Estimates


    This report (October 2014) applies the Maximum Entropy Principle (MEP) to convert point estimates to probability distributions to be used as priors for Bayesian reliability analysis of missile data, and illustrates this approach by applying the priors to a Bayesian reliability model of a missile system.

  19. Reliability Architecture for Collaborative Robot Control Systems in Complex Environments

    Liang Tang


    Many different kinds of robot systems have been successfully deployed in complex environments, and research into collaborative control systems between different robots, which can be seen as hybrid internetware safety-critical systems, has become essential. This paper discusses ways to construct a robust and secure reliability architecture for collaborative robot control systems in complex environments. First, an indication system for evaluating the real-time reliability of hybrid internetware systems is established. Next, a dynamic collaborative reliability model for components of hybrid internetware systems is proposed. Then, a reliable, adaptive and evolutionary computation method for hybrid internetware systems is proposed, and a timing consistency verification solution for collaborative robot control internetware applications is studied. Finally, a multi-level security model supporting dynamic resource allocation is established.

  20. Multi-hop routing mechanism for reliable sensor computing.

    Chen, Jiann-Liang; Ma, Yi-Wei; Lai, Chia-Ping; Hu, Chia-Cheng; Huang, Yueh-Min


    Current research on routing in wireless sensor computing concentrates on increasing the service lifetime, enabling scalability for large numbers of sensors, and supporting fault tolerance for battery exhaustion and broken nodes. A sensor node is naturally exposed to various sources of unreliable communication channels and node failures. Sensor nodes have many failure modes, and each failure degrades the network performance. This work develops a novel mechanism, called the Reliable Routing Mechanism (RRM), based on a hybrid cluster-based routing protocol to specify the most reliable routing path for sensor computing. Table-driven intra-cluster routing and on-demand inter-cluster routing are combined by changing the relationship between clusters. Applying a reliable routing mechanism in sensor computing can improve routing reliability, maintain low packet loss, minimize management overhead, and save energy consumption. Simulation results indicate that the reliability of the proposed RRM mechanism is around 25% higher than that of the Dynamic Source Routing (DSR) and ad hoc On-demand Distance Vector routing (AODV) mechanisms.
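The idea of selecting the most reliable path can be sketched with a standard reduction: maximizing the product of per-hop delivery probabilities is equivalent to running Dijkstra's algorithm on -log(reliability) edge weights. The link data below are hypothetical; RRM's cluster-based protocol is more involved:

```python
import heapq
import math

def most_reliable_path(links, src, dst):
    """Dijkstra on -log(link reliability): minimizing the sum of -log(r)
    maximizes the product of per-hop delivery probabilities."""
    graph = {}
    for u, v, r in links:
        graph.setdefault(u, []).append((v, -math.log(r)))
        graph.setdefault(v, []).append((u, -math.log(r)))
    dist, prev, seen = {src: 0.0}, {}, set()
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], math.exp(-dist[dst])

# Hypothetical links annotated with per-hop delivery reliability
links = [("A", "B", 0.99), ("B", "C", 0.90), ("A", "C", 0.80)]
path, rel = most_reliable_path(links, "A", "C")
```

The two-hop route A-B-C wins here because 0.99 x 0.90 = 0.891 beats the direct link's 0.80, which is the kind of trade-off a reliability-aware routing mechanism must make.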


    Popescu V.S.


    Full Text Available Power distribution systems are basic parts of power systems, and their reliability is at present a key issue for power engineering development that requires special attention. Operation of distribution systems is accompanied by a number of factors that randomly produce a large number of unplanned interruptions. Research has shown that the predominant factors with a significant influence on the reliability of distribution systems are weather conditions (39.7%), defects in equipment (25%) and unknown random factors (20.1%). The article studies the influence of this random behavior and presents reliability estimates for predominantly rural electrical distribution systems.


    Adrian Stere PARIS


    Full Text Available Manufacturing systems are characterized by three fundamental properties: performance, cost, and dependability. Analyzing a system from a pure performance viewpoint tends to be optimistic, since it ignores the failure/repair behaviour of the system; pure availability analysis, on the other hand, tends to be too conservative, since performance considerations are not taken into account. Dependability of a manufacturing system is the ability to deliver service (products) that can justifiably be trusted; improving it raises product quality and reduces production losses. Performance engineering addresses sustainability along with other factors such as quality, reliability, maintainability and safety. The current paper introduces a framework for fault analysis of the production process and for availability evaluation, especially in Flexible Manufacturing Systems (FMSs). Generally, computing dependability measures of repairable systems with typical failure and repair processes is difficult by either analytical or numerical methods, so mathematical models and application software stand in good stead for this kind of problem. Recent advances in neural networks show that they can be used in applications that involve prediction, including manufacturing.

  3. Design for Reliability of Power Electronic Systems

    Wang, Huai; Ma, Ke; Blaabjerg, Frede


    Advances in power electronics enable efficient and flexible processing of electric power in the application of renewable energy sources, electric vehicles, adjustable-speed drives, etc. More and more efforts are devoted to better power electronic systems in terms of reliability to ensure high...... on a 2.3 MW wind power converter is discussed with emphasis on the reliability-critical components, the IGBTs. Different aspects of improving the reliability of the power converter are mapped. Finally, the challenges and opportunities to achieve more reliable power electronic systems are addressed....

  4. Research on assembly reliability control technology for computer numerical control machine tools

    Yan Ran


    Full Text Available Nowadays, although more and more companies focus on improving the quality of computer numerical control machine tools, their reliability control remains an unsolved problem. Since assembly reliability control is very important to product reliability assurance in China, a new key-assembly-process extraction method for computer numerical control machine tools is proposed, based on the integration of quality function deployment; failure mode, effects, and criticality analysis; and fuzzy theory. Firstly, assembly faults and the assembly reliability control flow of computer numerical control machine tools are studied. Secondly, quality function deployment; failure mode, effects, and criticality analysis; and fuzzy theory are integrated to build a scientific extraction model, by which the key assembly processes meeting both customer functional demands and the failure data distribution can be extracted; an example is given to illustrate the correctness and effectiveness of the method. Finally, an assembly reliability monitoring system is established based on the key assembly processes to realize and simplify this method.

  5. Reliability of power electronic converter systems

    Chung, Henry Shu-hung; Blaabjerg, Frede; Pecht, Michael


    This book outlines current research into the scientific modeling, experimentation, and remedial measures for advancing the reliability, availability, system robustness, and maintainability of Power Electronic Converter Systems (PECS) at different levels of complexity.

  6. Sensitivity Analysis for the System Reliability Function


    The unique feature of the approach is that sample data collected on K independent replications using a specified component reliability are reused across the component reliabilities under consideration. The polynomial-time algorithm of Agrawal and Satyanarayana for exact reliability computation of series-parallel systems exemplifies an alternative to the Monte Carlo method. As an example, for the s-t connectedness problem, edge-disjoint minimal s-t paths of the graph G are considered.

  7. Specification for reliability data management system



    This document contains the objectives, scope, and a brief delineation of the design, performance, and definition of the Reliability Data Management System (RDMS). The General Electric Company - Fast Breeder Reactor Department (GE-FBRD) has responsibility for the design and implementation of the RDMS, and this document describes and specifies its requirements. The RDMS is currently focused on obtaining data for the Clinch River Breeder Reactor Plant (CRBRP) Shutdown System and Shutdown Heat Removal System, but has been designed with the capability and flexibility to accommodate additional CRBRP and Liquid Metal Fast Breeder Reactor (LMFBR) reliability data.

  8. Electronic System Reliability and Effectiveness.


    From June 24 to July 3, 1985, the featured speaker was Professor A. Satyanarayana of the Computer Science Department, Stevens Institute of Technology, New Jersey, who lectured on Nonseparability Testing and the Factoring Algorithm.

  9. The ATS F&G systems reliability program.

    Doyle, H.


    Assurance of reliability, quality, and proper testing requires a large coordinating effort and a means for connecting the various areas involved. All parts used on the spacecraft are required to meet strict specifications and consequently must be approved by the systems reliability manager. The parts program has access to a computer data bank into which all information concerning nonstandard parts approval requests has been stored. Through the data bank, the system collects and distributes timely information concerning quality and reliability to all departments that may be concerned.

  10. Simulation methods for reliability and availability of complex systems

    Faulin Fajardo, Javier; Martorell Alsina, Sebastián Salvador; Ramirez-Marquez, Jose Emmanuel; Martorell, Sebastian


    This text discusses the use of computer simulation-based techniques and algorithms to determine reliability and/or availability levels in complex systems and to help improve these levels both at the design stage and during the system operating stage.

  11. 基于计算机仿真的风电机组系统可靠性综合%System Reliability Synthesis of Wind Turbine Based on Computer Simulation

    郭建英; 孙永全; 王铭义; 丁喜波


    Reliability assessment that relies solely on life testing is subject to many constraints, and synthesizing the reliability of a complex system whose units follow different life distributions is an intractable problem. To solve these problems, a numerical method based on computer simulation is proposed. The original reliability information of the units is used fully, and simulation testing replaces life testing: on the basis of the system reliability model, logical operations generate a sufficient number of simulated system-life values; the simulated data are then analyzed to infer the distribution of the system life, perform goodness-of-fit tests, and estimate the point values and confidence intervals of the distribution parameters and reliability measures. For engineering practicality, the entire procedure is implemented on a computer. This approach effectively solves the many difficulties of multi-level reliability synthesis for complex systems and has been applied to the system reliability synthesis and prediction of wind turbines.
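
    As a rough illustration of the simulation step this record describes — not the authors' code, and with invented distributions and parameters — one can draw unit lives from different distributions, combine them through the system's reliability logic (here a simple series model), and read reliability estimates off the empirical sample:

```python
import random

def simulate_series_system_life(n_samples, seed=1):
    """Draw unit lives from two different distributions and take the
    minimum, i.e. series reliability logic: the system dies with its
    first failed unit."""
    rng = random.Random(seed)
    lives = []
    for _ in range(n_samples):
        unit_a = rng.expovariate(1 / 2000.0)       # exponential, MTTF 2000 h
        unit_b = rng.weibullvariate(2500.0, 1.8)   # Weibull: scale 2500 h, shape 1.8
        lives.append(min(unit_a, unit_b))
    return lives

def reliability_at(lives, t):
    """Empirical point estimate of R(t) from the simulated sample."""
    return sum(1 for life in lives if life > t) / len(lives)

lives = simulate_series_system_life(20000)
r_1000 = reliability_at(lives, 1000.0)   # analytically about 0.50 here
```

    The record's full procedure additionally fits a distribution to the simulated lives and runs goodness-of-fit tests before quoting parameter and reliability confidence intervals; those steps are omitted in this sketch.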

  12. Some Computer Algorithms to Implement a Reliability Shorthand.



  13. Reliability Survey of Military Acquisition Systems


    ...software-intensive sensor and weapons systems, ensuring that there are no open Category 1 or 2 deficiency reports prior to operational testing (OT). Survey items include: Is a reliability growth curve used to develop intermediate reliability goals? Are the reliability growth goals linked to operational tests (e.g., IOT&E, FOT&E)? What is the reliability growth potential? Did your program have an operational test in FY12, and what type of operational test was it (DT/OT, OA/LUT, IOT&E)?

  14. System Reliability Assessment of Offshore Pipelines

    Mustaffa, Z.


    The title of this thesis, System Reliability Assessment of Offshore Pipelines, portrays the application of probabilistic methods in assessing the reliability of these structures. The main intention of this thesis is to identify, apply and judge the suitability of the probabilistic methods in evaluating...

  15. Reliability Based Optimization of Structural Systems

    Sørensen, John Dalsgaard


    The optimization problem to design structural systems such that the reliability is satisfactory during the whole lifetime of the structure is considered in this paper. Some of the quantities modelling the loads and the strength of the structure are modelled as random variables. The reliability...

  16. Reliability assessment of continuous mining technological systems


    This article analyzes the reliability parameters of a continuous extraction system using various approaches, such as Monte Carlo simulation and stress-strength interference, with the purpose of quickly estimating the overall efficiency of the equipment of such industrial systems.

  17. Production Facility System Reliability Analysis Report

    Dale, Crystal Buchanan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Klein, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    This document describes the reliability, maintainability, and availability (RMA) modeling of the Los Alamos National Laboratory (LANL) design for the Closed Loop Helium Cooling System (CLHCS) planned for the NorthStar accelerator-based 99Mo production facility. The current analysis incorporates a conceptual helium recovery system, beam diagnostics, and prototype control system into the reliability analysis. The results from the 1000 hr blower test are addressed.

  18. Leveraging Cloud Technology to Provide a Responsive, Reliable and Scalable Backend for the Virtual Ice Sheet Laboratory Using the Ice Sheet System Model and Amazon's Elastic Compute Cloud

    Perez, G. L.; Larour, E. Y.; Halkides, D. J.; Cheng, D. L. C.


    The Virtual Ice Sheet Laboratory (VISL) is a Cryosphere outreach effort by scientists at the Jet Propulsion Laboratory (JPL) in Pasadena, CA, Earth and Space Research (ESR) in Seattle, WA, and the University of California at Irvine (UCI), with the goal of providing interactive lessons for K-12 and college level students, while conforming to STEM guidelines. At the core of VISL is the Ice Sheet System Model (ISSM), an open-source project developed jointly at JPL and UCI whose main purpose is to model the evolution of the polar ice caps in Greenland and Antarctica. By using ISSM, VISL students have access to state-of-the-art modeling software that is being used to conduct scientific research by users all over the world. However, providing this functionality is by no means simple. The modeling of ice sheets in response to sea and atmospheric temperatures, among many other possible parameters, requires significant computational resources. Furthermore, this service needs to be responsive and capable of handling burst requests produced by classrooms of students. Cloud computing providers represent a burgeoning industry. With major investments by tech giants like Amazon, Google and Microsoft, it has never been easier or more affordable to deploy computational elements on-demand. This is exactly what VISL needs and ISSM is capable of. Moreover, this is a promising alternative to investing in expensive and rapidly devaluing hardware.

  19. System reliability with correlated components: Accuracy of the Equivalent Planes method

    Roscoe, K.; Diermanse, F.; Vrouwenvelder, A.C.W.M.


    Computing system reliability when system components are correlated presents a challenge because it usually requires solving multi-fold integrals numerically, which is generally infeasible due to the computational cost. In Dutch flood defense reliability modeling, an efficient method for computing the...
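
    The Equivalent Planes method itself is not reproduced in this record. As a baseline for the quantity it approximates, a brute-force Monte Carlo estimate for a two-component series system with correlated standard-normal limit states (the reliability indices and correlation below are illustrative, not from the paper) might look like:

```python
import math
import random

def series_failure_prob(beta1, beta2, rho, n=200000, seed=7):
    """Monte Carlo failure probability of a two-component series system
    whose limit states are standard normals with correlation rho;
    component i fails when its normal falls below -beta_i."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        if z1 < -beta1 or z2 < -beta2:
            fails += 1
    return fails / n

p_independent = series_failure_prob(2.0, 2.0, 0.0)
p_correlated = series_failure_prob(2.0, 2.0, 0.9)
```

    Positive correlation shrinks the union of the two failure events, so assuming independence overestimates series-system failure probability; capturing this effect without the sampling cost is what efficient methods like Equivalent Planes aim for.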

  20. Operational reliability of standby safety systems

    Grant, G.M.; Atwood, C.L.; Gentillon, C.D. [Idaho National Engineering Lab., Idaho Falls, ID (United States)] [and others


    The Idaho National Engineering Laboratory (INEL) is evaluating the operational reliability of several risk-significant standby safety systems based on the operating experience at US commercial nuclear power plants from 1987 through 1993. The reliability assessed is the probability that the system will perform its Probabilistic Risk Assessment (PRA) defined safety function. The quantitative estimates of system reliability are expected to be useful in risk-based regulation. This paper is an overview of the analysis methods and the results of the high pressure coolant injection (HPCI) system reliability study. Key characteristics include (1) descriptions of the data collection and analysis methods, (2) the statistical methods employed to estimate operational unreliability, (3) a description of how the operational unreliability estimates were compared with typical PRA results, both overall and for each dominant failure mode, and (4) a summary of results of the study.

  1. Reliability of microtechnology interconnects, devices and systems

    Liu, Johan; Sarkka, Jussi; Tegehall, Per-Erik; Andersson, Cristina


    This text discusses the reliability of microtechnology products from the bottom up, beginning with devices and extending to systems. It covers many topics, and it addresses specific failure modes in solder and conductive adhesives at great length.


    A. Stepanov


    Full Text Available The paper considers ways of increasing the operational reliability of dump trucks, with the aim of improving the effectiveness of rock-haulage transportation systems at coal mines.

  3. a Reliability Evaluation System of Association Rules

    Chen, Jiangping; Feng, Wanshu; Luo, Minghai


    In mining association rules, evaluation of the rules is highly important because it directly affects the usability and applicability of the mining output. In this paper, the concept of reliability was imported into association rule evaluation. The reliability of association rules was defined as the degree to which the rules accord with the mined data set. This degree comprises three levels of measurement, namely, the accuracy, completeness, and consistency of the rules. To show its effectiveness, the "accuracy-completeness-consistency" reliability evaluation system was applied to two very different data sets: a basket simulation data set and a multi-source lightning data fusion. Results show that the reliability evaluation system works well on both the simulation data set and the actual problem. The three-dimensional reliability evaluation can effectively detect useless rules to be screened out and add missing rules, thereby improving the reliability of the mining results. Furthermore, the proposed reliability evaluation system is applicable to many research fields; using it in analysis can help obtain more accurate, complete, and consistent association rules.
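
    The paper's exact definitions are not given in this record, but its accuracy and completeness dimensions are closely related to the classical confidence and recall of a rule. A minimal sketch under that assumption (the measure names and example transactions are ours):

```python
def rule_metrics(transactions, antecedent, consequent):
    """Measures for an association rule A -> C over a transaction list:
    'accuracy' is taken as classical confidence and 'completeness' as
    the fraction of C-transactions the rule reaches."""
    a, c = set(antecedent), set(consequent)
    n = len(transactions)
    n_a = sum(1 for t in transactions if a <= set(t))
    n_c = sum(1 for t in transactions if c <= set(t))
    n_ac = sum(1 for t in transactions if (a | c) <= set(t))
    return {
        "support": n_ac / n,
        "accuracy": n_ac / n_a if n_a else 0.0,       # confidence of A -> C
        "completeness": n_ac / n_c if n_c else 0.0,   # recall of C
    }

m = rule_metrics(
    [["milk", "bread"], ["milk", "bread", "eggs"], ["bread"], ["milk"]],
    ["milk"], ["bread"])
```

    The third dimension, consistency, would additionally compare rules against each other for contradictions, which requires the rule set as a whole rather than one rule at a time.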

  4. Reliable Fluid Power Pitch Systems

    Liniger, Jesper; Pedersen, Henrik Clemmensen; Soltani, Mohsen


    The key objectives of wind turbine manufactures and buyers are to reduce the Total Cost of Ownership and Total Cost of Energy. Among others, low downtime of a wind turbine is important to increase the amount of energy produced during its lifetime. Historical data indicate that pitch systems...

  5. Optimization of life support systems and their systems reliability

    Fan, L. T.; Hwang, C. L.; Erickson, L. E.


    The identification, analysis, and optimization of life support systems and subsystems have been investigated. For each system or subsystem considered, the procedure involves establishing a set of system equations (a mathematical model) based on theory and experimental evidence; analyzing and simulating the model; optimizing the operation, control, and reliability; analyzing the sensitivity of the system based on the model; and, if possible, experimentally verifying the theoretical and computational results. Research activities include: (1) modeling of air flow in a confined space; (2) review of several different gas-liquid contactors utilizing centrifugal force; (3) review of carbon dioxide reduction contactors in space vehicles and other enclosed structures; (4) application of modern optimal control theory to environmental control of confined spaces; (5) optimal control of a class of nonlinear diffusional distributed parameter systems; (6) optimization of the system reliability of life support systems and subsystems; (7) modeling, simulation and optimal control of the human thermal system; and (8) analysis and optimization of the water-vapor electrolysis cell.

  6. Reliability of Naval Radar Systems


    For airborne radars, reliability data are drawn from 3-M (Maintenance and Material Management) program reports and from Naval Air Systems Command RISE (Readiness Improvement Summary) reports. The remainder of the excerpt lists radar characteristics such as transmit pulse length, compressed pulse length, antenna weight (excluding pedestal), and beam positioning techniques.

  7. Demand Response For Power System Reliability: FAQ

    Kirby, Brendan J [ORNL


    Demand response is the most underutilized power system reliability resource in North America. Technological advances now make it possible to tap this resource to both reduce costs and improve reliability. Misconceptions concerning response capabilities tend to force loads to provide responses that they are less able to provide and often prohibit them from providing the most valuable reliability services. Fortunately, this is beginning to change, with some ISOs making more extensive use of load response. This report is structured as a series of short questions and answers that address load response capabilities and power system reliability needs. Its objective is to further the use of responsive load as a bulk power system reliability resource providing the fastest and most valuable ancillary services.

  8. Evaluation of Network Reliability for Computer Networks with Multiple Sources

    Yi-Kuei Lin


    Full Text Available Evaluating the reliability of a network with multiple sources to multiple sinks is a critical issue from the perspective of quality management. Because previous network models define paths unrealistically, existing models are not appropriate for real-world computer networks such as the Taiwan Advanced Research and Education Network (TWAREN). This paper proposes a modified stochastic-flow network model to evaluate the network reliability of a practical computer network with multiple sources where data are transmitted through several light paths (LPs). Network reliability is defined as the probability of delivering a specified amount of data from the sources to the sink. It is taken as a performance index to measure the service level of TWAREN. This paper studies the network reliability of the international portion of TWAREN from two sources (Taipei and Hsinchu) to one sink (New York) that passes through submarine and land-surface cables between Taiwan and the United States.
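
    The stochastic-flow model and light-path routing of this record cannot be reconstructed from the abstract. A drastically simplified connectivity-only analogue — hypothetical link availabilities, tiny network, reachability in place of flow capacity — that computes exact multi-source reliability by state enumeration could look like:

```python
from itertools import product

def multi_source_reliability(links, sources, sink):
    """Exact probability that the sink is reachable from at least one
    source, by enumerating every link up/down state (exponential in the
    number of links, so only viable for tiny networks)."""
    edges = list(links)
    total = 0.0
    for state in product([True, False], repeat=len(edges)):
        prob = 1.0
        up = []
        for edge, alive in zip(edges, state):
            prob *= links[edge] if alive else 1.0 - links[edge]
            if alive:
                up.append(edge)
        # breadth-first search over the surviving undirected links
        frontier, seen = list(sources), set(sources)
        while frontier:
            node = frontier.pop()
            for u, v in up:
                for here, there in ((u, v), (v, u)):
                    if here == node and there not in seen:
                        seen.add(there)
                        frontier.append(there)
        if sink in seen:
            total += prob
    return total

# Two sources feed a shared node x, which feeds the sink t.
r = multi_source_reliability(
    {("s1", "x"): 0.9, ("s2", "x"): 0.9, ("x", "t"): 0.8},
    {"s1", "s2"}, "t")   # 0.8 * (1 - 0.1 * 0.1) = 0.792
```

    The record's model additionally requires the surviving network to carry a specified amount of data, so each state would be checked against a flow demand rather than bare connectivity.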

  9. Site reliability engineering how Google runs production systems

    Petoff, Jennifer; Jones, Chris; Murphy, Niall Richard


    The overwhelming majority of a software system’s lifespan is spent in use, not in design or implementation. So, why does conventional wisdom insist that software engineers focus primarily on the design and development of large-scale computing systems? In this collection of essays and articles, key members of Google’s Site Reliability Team explain how and why their commitment to the entire lifecycle has enabled the company to successfully build, deploy, monitor, and maintain some of the largest software systems in the world. You’ll learn the principles and practices that enable Google engineers to make systems more scalable, reliable, and efficient—lessons directly applicable to your organization. This book is divided into four sections: Introduction—Learn what site reliability engineering is and why it differs from conventional IT industry practices Principles—Examine the patterns, behaviors, and areas of concern that influence the work of a site reliability engineer (SRE) Practices...

  10. Reliability-oriented energy storage sizing in wind power systems

    Qin, Zian; Liserre, Marco; Blaabjerg, Frede;


    Energy storage can be used to suppress the power fluctuations in wind power systems, and thereby reduce the thermal excursion and improve the reliability. Since the cost of the energy storage in large power application is high, it is crucial to have a better understanding of the relationship...... between the size of the energy storage and the reliability benefit it can generate. Therefore, a reliability-oriented energy storage sizing approach is proposed for the wind power systems, where the power, energy, cost and the control strategy of the energy storage are all taken into account....... With the proposed approach, the computational effort is reduced and the impact of the energy storage system on the reliability of the wind power converter can be quantified....

  11. A Sensitivity Analysis on Component Reliability from Fatigue Life Computations


    MTL TR 92-5: A Sensitivity Analysis on Component Reliability from Fatigue Life Computations, by Donald M. Neal, William T. Matthews, Mark G. Vangel, and Trevor Rudalevige.




    Full Text Available The introduction of pervasive and mobile devices has led to immense growth in real-time distributed processing. In such a context, the reliability of the computing environment is very important. Reliability is the probability that the devices, links, processes, programs and files work efficiently for a specified period of time and under specified conditions. Distributed systems are available as conventional ring networks, clusters and agent-based systems, and this work focuses on the reliability of such systems. These networks are heterogeneous and scalable in nature. Several factors must be considered in reliability estimation: application-related factors such as algorithms, data-set sizes, memory usage patterns, input-output, communication patterns, task granularity and load balancing; hardware-related factors such as processor architecture, memory hierarchy, input-output configuration and network; and software-related factors such as the operating system, compiler, communication protocols, libraries and preprocessor performance. In estimating the reliability of a system, performance estimation is an important aspect, and reliability analysis is approached using probability.

  13. Computer system identification

    Lesjak, Borut


    The concept of computer system identity in computer science bears just as much importance as does the identity of an individual in a human society. Nevertheless, the identity of a computer system is incomparably harder to determine, because there is no standard system of identification we could use and, moreover, a computer system during its life-time is quite indefinite, since all of its regular and necessary hardware and software upgrades soon make it almost unrecognizable: after a number o...

  14. Reliability of photovoltaic systems: A field report

    Thomas, M. G.; Fuentes, M. K.; Lashway, C.; Black, B. D.

    Performance studies and field measurements of photovoltaic systems indicate a 1 to 2% per year degradation in array energy production. The cause of much of the degradation has been identified as soiling, failed modules, and failures in interconnections. System performance evaluation continues to be complicated by the poor reliability of some power conditioning hardware, which has greatly diminished system availability, and by inconsistent field ratings. Nevertheless, the current system reliability is consistent with degradation of less than 10% in 5 years and with estimates of less than 10% per year of the energy value for O&M.

  15. Reliability of photovoltaic systems - A field report

    Thomas, M. G.; Fuentes, M. K.; Lashway, C.; Black, B. D.

    Performance studies and field measurements of photovoltaic systems indicate a 1-2-percent/yr degradation in array energy production. The cause for much of the degradation has been identified as soiling, failed modules, and failures in interconnections. System performance evaluation continues to be complicated by the poor reliability of some power conditioning hardware (which greatly diminished system availability) and by inconsistent field ratings. Nevertheless, the current system reliability is consistent with degradation of less than 10 percent in 5 years and with estimates of less than 10 percent/yr of the energy value for O&M.

  16. Reliability Analysis of Random Vibration Transmission Path Systems

    Wei Zhao


    Full Text Available Vibration transmission path systems are generally composed of the vibration source, the vibration transfer path, and the vibration receiving structure. The transfer path is the medium of vibration transmission, and its randomness greatly influences transfer reliability. In this paper, based on matrix calculus, the generalized second-moment technique, and stochastic finite element theory, an effective approach for the transfer reliability of vibration transfer path systems is provided. The transfer reliability of a vibration transfer path system with uncertain path parameters, including path mass and path stiffness, is analyzed theoretically and computed numerically, and the corresponding mathematical expressions are derived. This provides a theoretical foundation for the dynamic design of vibration systems in practical projects, so that random path parameters can be taken into account when solving random problems for vibration transfer path systems, helping to avoid system resonance failure.
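
    The generalized second-moment technique used in the paper is not detailed in the record. Its simplest relative, the first-order second-moment (FOSM) reliability index for a capacity-minus-demand safety margin, can be sketched as follows (the numbers are illustrative, not from the paper):

```python
import math

def fosm_reliability_index(mu_capacity, sd_capacity, mu_demand, sd_demand):
    """First-order second-moment index beta = E[M] / sd(M) for the
    safety margin M = capacity - demand, with independent variables."""
    mu_margin = mu_capacity - mu_demand
    sd_margin = math.sqrt(sd_capacity ** 2 + sd_demand ** 2)
    return mu_margin / sd_margin

beta = fosm_reliability_index(10.0, 1.5, 6.0, 2.0)   # 4.0 / 2.5 = 1.6
```

    The paper's approach generalizes this by propagating means and variances of the random path parameters through a stochastic finite element model rather than through a scalar margin.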

  17. Space Shuttle Program Primary Avionics Software System (PASS) Success Legacy - Quality and Reliability Data

    Orr, James K.; Peltier, Daryl


    This slide presentation reviews the avionics software system on board the space shuttle, with particular emphasis on quality and reliability. The Primary Avionics Software System (PASS) provides automatic and fly-by-wire control of critical shuttle systems and executes in redundant computers. Charts show the number of space shuttle flights over time, PASS's development history, and other indicators of the reliability of the system's development. The achieved reliability of the system is also compared with its predicted reliability.

  18. Design for Reliability of Power Electronic Systems

    Wang, Huai; Ma, Ke; Blaabjerg, Frede


    availability, long lifetime, sufficient robustness, low maintenance cost and low cost of energy. However, the reliability predictions are still dominantly according to outdated models and terms, such as MIL-HDBK-217F handbook models, Mean-Time-To-Failure (MTTF), and Mean-Time-Between-Failures (MTBF......Advances in power electronics enable efficient and flexible processing of electric power in the application of renewable energy sources, electric vehicles, adjustable-speed drives, etc. More and more efforts are devoted to better power electronic systems in terms of reliability to ensure high......). A collection of methodologies based on Physics-of-Failure (PoF) approach and mission profile analysis are presented in this paper to perform reliability-oriented design of power electronic systems. The corresponding design procedures and reliability prediction models are provided. Further on, a case study...

  19. System Reliability for LED-Based Products

    Davis, J Lynn; Mills, Karmann; Lamvik, Michael; Yaga, Robert; Shepherd, Sarah D; Bittle, James; Baldasaro, Nick; Solano, Eric; Bobashev, Georgiy; Johnson, Cortina; Evans, Amy


    Results from accelerated life tests (ALT) on mass-produced commercially available 6” downlights are reported along with results from commercial LEDs. The luminaires capture many of the design features found in modern luminaires. In general, a systems perspective is required to understand the reliability of these devices, since LED failure is rare. In contrast, components such as drivers, lenses, and reflectors are more likely to impact luminaire reliability than the LEDs.

  20. Software engineering practices for control system reliability

    S. K. Schaffner; K. S White


    This paper discusses software engineering practices used to improve control system reliability. The authors begin with a brief discussion of the Software Engineering Institute's Capability Maturity Model (CMM), a framework for evaluating and improving key practices used to enhance software development and maintenance capabilities. The software engineering processes developed and used by the Controls Group at the Thomas Jefferson National Accelerator Facility (Jefferson Lab), using the Experimental Physics and Industrial Control System (EPICS) for accelerator control, are described. Examples are given of how these procedures have been used to minimize control system downtime and improve reliability. While the examples are primarily drawn from experience with EPICS, the practices are equally applicable to any control system. Specific issues addressed include resource allocation, developing reliable software lifecycle processes, and risk management.

  1. Tensor computations in computer algebra systems

    Korolkova, A V; Sevastyanov, L A


    This paper considers three types of tensor computations. On their basis, we attempt to formulate criteria that must be satisfied by a computer algebra system dealing with tensors. We briefly overview the current state of tensor computations in different computer algebra systems. The tensor computations are illustrated with appropriate examples implemented in specific systems: Cadabra and Maxima.
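
    The symbolic examples in Cadabra and Maxima are not reproduced in this record. The most elementary of the tensor computation types — plain component arithmetic — can be illustrated in pure Python by lowering an index with a metric (the Minkowski metric is chosen here purely for illustration):

```python
def lower_index(metric, vector):
    """Component arithmetic for v_i = g_ij * v^j: contract the metric's
    second index with the vector's single upper index."""
    return [sum(g_ij * v_j for g_ij, v_j in zip(row, vector))
            for row in metric]

# Minkowski metric diag(-1, 1, 1, 1) lowering the index of a 4-vector
eta = [[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
v_lower = lower_index(eta, [2, 1, 0, 3])   # -> [-2, 1, 0, 3]
```

    Symbolic systems such as those surveyed go further, handling abstract indices, symmetries, and simplification rules rather than numeric components.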

  2. Reliability analysis of flood defence systems

    Steenbergen, H.M.G.M.; Lassing, B.L.; Vrouwenvelder, A.C.W.M.; Waarts, P.H.


    In recent years an advanced program for the reliability analysis of flood defence systems has been under development. This paper describes the global data requirements for the application and the setup of the models. The analysis generates the probability of system failure and the contribution of ea

  3. Reliable High Performance Peta- and Exa-Scale Computing

    Bronevetsky, G


As supercomputers become larger and more powerful, they are growing increasingly complex. This is reflected both in the exponentially increasing number of components in HPC systems (LLNL is currently installing the 1.6 million core Sequoia system) and in the wide variety of software and hardware components that a typical system includes. At this scale it becomes infeasible to make each component sufficiently reliable to prevent regular faults somewhere in the system, or to account for all possible cross-component interactions. The resulting faults and instability cause HPC applications to crash, perform sub-optimally or even produce erroneous results. As supercomputers continue to approach Exascale performance and full-system reliability becomes prohibitively expensive, we will require novel techniques to bridge the gap between the lower reliability provided by hardware systems and users' unchanging need for consistent performance and reliable results. Previous research on HPC system reliability has developed various techniques for tolerating and detecting various types of faults. However, these techniques have seen very limited practical applicability, both because of our poor understanding of how real systems are affected by complex faults such as soft-fault-induced bit flips or performance degradations, and because prior work has generally focused on analyzing the behavior of entire software/hardware systems, both during normal operation and in the face of faults. Because such behaviors are extremely complex, such studies have only produced coarse behavioral models of limited sets of software/hardware stacks. Since this provides little insight into the many different system stacks and applications used in practice, this work has had little real-world impact.
My project addresses this problem by developing a modular methodology to analyze the behavior of applications and systems during both normal and faulty

  4. Diagnostic reliability of MMPI-2 computer-based test interpretations.

    Pant, Hina; McCabe, Brian J; Deskovitz, Mark A; Weed, Nathan C; Williams, John E


Reflecting the common use of the MMPI-2 to provide diagnostic considerations, computer-based test interpretations (CBTIs) also typically offer diagnostic suggestions. However, these diagnostic suggestions can vary widely across different CBTI programs, even for identical MMPI-2 profiles. The present study evaluated the diagnostic reliability of 6 commercially available CBTIs using a 20-item Q-sort task developed for this study. Four raters each sorted diagnostic classifications based on these 6 CBTI reports for 20 MMPI-2 profiles. Two questions were addressed. First, do users of CBTIs understand the diagnostic information contained within the reports similarly? Overall, diagnostic sorts of the CBTIs showed moderate inter-interpreter diagnostic reliability (mean r = .56), with sorts for the 1/2/3 profile showing the highest inter-interpreter diagnostic reliability (mean r = .67). Second, do different CBTI programs vary with respect to diagnostic suggestions? Diagnostic sorts of the CBTIs had a mean inter-CBTI diagnostic reliability of r = .56, indicating moderate but not strong agreement across CBTIs in terms of diagnostic suggestions. The strongest inter-CBTI diagnostic agreement was found for sorts of the 1/2/3 profile CBTIs (mean r = .71). Limitations and future directions are discussed. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  5. Distributed computer control systems

    Suski, G.J.


    This book focuses on recent advances in the theory, applications and techniques for distributed computer control systems. Contents (partial): Real-time distributed computer control in a flexible manufacturing system. Semantics and implementation problems of channels in a DCCS specification. Broadcast protocols in distributed computer control systems. Design considerations of distributed control architecture for a thermal power plant. The conic toolset for building distributed systems. Network management issues in distributed control systems. Interprocessor communication system architecture in a distributed control system environment. Uni-level homogenous distributed computer control system and optimal system design. A-nets for DCCS design. A methodology for the specification and design of fault tolerant real time systems. An integrated computer control system - architecture design, engineering methodology and practical experience.

  6. Rater reliability and concurrent validity of the Keyboard Personal Computer Style instrument (K-PeCS).

    Baker, Nancy A; Cook, James R; Redfern, Mark S


This paper describes the inter-rater and intra-rater reliability, and the concurrent validity, of an observational instrument, the Keyboard Personal Computer Style instrument (K-PeCS), which assesses stereotypical postures and movements associated with computer keyboard use. Three trained raters independently rated the video clips of 45 computer keyboard users to ascertain inter-rater reliability, and then re-rated a sub-sample of 15 video clips to ascertain intra-rater reliability. Concurrent validity was assessed by comparing the ratings obtained using the K-PeCS to scores developed from a 3D motion analysis system. The overall K-PeCS had excellent reliability [inter-rater: intra-class correlation coefficient (ICC)=.90; intra-rater: ICC=.92]. Most individual items on the K-PeCS had good to excellent reliability, although six items fell below ICC=.75. Those K-PeCS items that were assessed for concurrent validity compared favorably to the motion analysis data for all but two items. These results suggest that most items on the K-PeCS can be used to reliably document computer keyboarding style.

  7. Reliability-Based Optimization of Series Systems of Parallel Systems

    Enevoldsen, I.; Sørensen, John Dalsgaard

Reliability-based design of structural systems is considered. In particular, systems where the reliability model is a series system of parallel systems are analysed. A sensitivity analysis for this class of problems is presented. Direct and sequential optimization procedures to solve the optimization problems are described. Numerical tests indicate that a sequential technique called the bounds iteration method (BIM) is particularly fast and stable.
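The series-of-parallel structure named in this abstract can be sketched with a short example. This is an illustrative toy under the assumption of independent components with known survival probabilities (the paper itself treats sensitivity analysis within a structural reliability framework; the finite-difference sensitivity below merely stands in for that idea):

```python
def parallel_reliability(ps):
    # A parallel subsystem fails only if every one of its components fails.
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

def series_of_parallel(subsystems):
    # A series arrangement works only if every parallel subsystem works.
    r = 1.0
    for ps in subsystems:
        r *= parallel_reliability(ps)
    return r

def sensitivity(subsystems, i, j, h=1e-6):
    # Finite-difference sensitivity of system reliability
    # with respect to component j of subsystem i.
    bumped = [list(ps) for ps in subsystems]
    bumped[i][j] += h
    return (series_of_parallel(bumped) - series_of_parallel(subsystems)) / h

system = [[0.9, 0.9], [0.8, 0.8]]      # two parallel pairs in series
print(series_of_parallel(system))      # 0.99 * 0.96 = 0.9504
print(sensitivity(system, 0, 0))       # ~0.096 = (1 - 0.9) * 0.96
```

Analytically, ∂R/∂p₀₀ = (1 − p₀₁) · R₂ = 0.1 · 0.96 = 0.096, which the finite difference reproduces.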

  8. On Dependability of Computing Systems

    XU Shiyi


With the rapid development and wide application of computing systems, on which ever more reliance is placed, dependable systems are more important than ever. This paper first gives informal but precise definitions characterizing the various attributes of dependability of computing systems, and then explains the importance of, and the relationships among, all the attributes. Dependability is introduced as a global concept that subsumes the usual attributes of reliability, availability, maintainability, safety and security. The basic definitions given here are then commented on and supplemented by detailed material and additional explanations in the subsequent sections. The presentation has been structured as follows so as to draw the reader's attention to the important attributes of dependability:
* Search for a small number of concise concepts enabling the dependability attributes to be expressed as clearly as possible.
* Use of terms that are identical, or as close as possible, to those commonly used today.
This paper is also intended to provoke interest in designing dependable computing systems.

  9. Simulation modeling of reliability and efficiency of mine ventilation systems

    Ushakov, V.K. (Moskovskii Gornyi Institut (USSR))


    Discusses a method developed by the MGI institute for computerized simulation of operation of ventilation systems used in deep underground coal mines. The modeling is aimed at assessment of system reliability and efficiency (probability of failure-free operation and stable air distribution). The following stages of the simulation procedure are analyzed: development of a scheme of the ventilation system (type, aerodynamic characteristics and parameters that describe system elements, e.g. ventilation tunnels, ventilation equipment, main blowers etc., dynamics of these parameters depending among others on mining and geologic conditions), development of mathematical models that describe system characteristics as well as external factors and their effects on the system, development of a structure of the simulated ventilation system, development of an algorithm, development of the final computer program for simulation of a mine ventilation system. Use of the model for forecasting reliability of air supply and efficiency of mine ventilation is discussed. 2 refs.

10. DRD—Design of a Diagnosis-Based Highly Reliable Distributed Computer System

    左德承; 高巍; 杨孝宗


This paper presents a highly reliable distributed computer system based on fail-silent node failure behavior, and uses two effective ways to reduce fault detection latency: exploiting the limited fault-detection capability of a single processor, and adding comparisons of task signature states. Through diagnosis of faulty nodes, all faulty processors are removed from system service. To make the system more reliable, the faulty nodes must be recovered or the system must be reconfigured.

  11. Sustainable, Reliable Mission-Systems Architecture

    O'Neil, Graham; Orr, James K.; Watson, Steve


A mission-systems architecture based on a highly modular infrastructure utilizing open-standards hardware and software interfaces as the enabling technology is essential for affordable and sustainable space exploration programs. This mission-systems architecture requires (a) robust communication between heterogeneous systems, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, and verification of systems, and (e) minimal sustaining engineering. This paper proposes such an architecture. Lessons learned from the Space Shuttle program and Earthbound complex engineered systems are applied to define the model. Technology projections reaching out 5 years are made to refine model details.

  12. Optimal reliability allocation for large software projects through soft computing techniques

    Madsen, Henrik; Albeanu, Grigore; Albu, Razvan-Daniel


or maximizing the system reliability subject to budget constraints. These kinds of optimization problems were considered in the literature in both deterministic and stochastic frameworks. Recently, the intuitionistic-fuzzy optimization approach was considered as a successful soft computing modelling approach. Firstly, a review of existing soft computing approaches to optimization is given. The main section extends the results, considering self-organizing migrating algorithms for solving intuitionistic-fuzzy optimization problems attached to complex fault-tolerant software architectures which proved...

  13. Reliability of an interactive computer program for advance care planning.

    Schubart, Jane R; Levi, Benjamin H; Camacho, Fabian; Whitehead, Megan; Farace, Elana; Green, Michael J


Despite widespread efforts to promote advance directives (ADs), completion rates remain low. Making Your Wishes Known: Planning Your Medical Future (MYWK) is an interactive computer program that guides individuals through the process of advance care planning, explaining health conditions and interventions that commonly involve life or death decisions, helps them articulate their values/goals, and translates users' preferences into a detailed AD document. The purpose of this study was to demonstrate that (in the absence of major life changes) the AD generated by MYWK reliably reflects an individual's values/preferences. English speakers ≥30 years old completed MYWK twice, 4 to 6 weeks apart. Reliability indices were assessed for three AD components: General Wishes; Specific Wishes for treatment; and Quality-of-Life values (QoL). Twenty-four participants completed the study. Both the Specific Wishes and QoL scales had high internal consistency in both time periods (Kuder-Richardson Formula 20 [KR-20]=0.83-0.95, and 0.86-0.89). Test-retest reliability was perfect for General Wishes (κ=1), high for QoL (Pearson's correlation coefficient=0.83), but lower for Specific Wishes (Pearson's correlation coefficient=0.57). MYWK generates an AD where General Wishes and QoL (but not Specific Wishes) statements remain consistent over time.
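The two test-retest indices quoted above have standard textbook forms. As a hedged illustration (hypothetical ratings, not the study's data), Pearson's correlation for repeated scale scores and Cohen's kappa for repeated categorical choices can be computed as:

```python
import math

def pearson(x, y):
    # Pearson's correlation coefficient between two score lists.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cohens_kappa(a, b):
    # Cohen's kappa for two binary rating vectors (0/1), chance-corrected.
    n = len(a)
    po = sum(1 for x, y in zip(a, b) if x == y) / n      # observed agreement
    pa1, pb1 = sum(a) / n, sum(b) / n
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)               # chance agreement
    return (po - pe) / (1 - pe)

print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))       # perfectly linear -> 1.0
print(cohens_kappa([1, 0, 1, 0], [1, 0, 1, 0]))  # identical answers -> 1.0
```

A kappa of 1, as reported for General Wishes, means every repeated answer matched exactly after correcting for chance agreement.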

  14. Generating function approach to reliability analysis of structural systems


The generating function approach is an important tool for performance assessment of multi-state systems. Aiming at strength reliability analysis of structural systems, the generating function approach is introduced and developed. Static reliability models of statically determinate and indeterminate systems, and fatigue reliability models, are built by constructing special generating functions, which are used to describe probability distributions of strength (resistance), stress (load) and fatigue life, and by defining composition operators of generating functions and performance structure functions thereof. When composition operators are executed, computational cost can be substantially reduced by collecting like terms. The results of theoretical analysis and numerical simulation show that the generating function approach can be widely used for probability modeling of large complex systems with hierarchical structures due to its unified form, compact expression, computer-program realizability and high universality. Because the new method considers twin loads giving rise to component failure dependency, it can provide a theoretical reference and act as a powerful tool for static and dynamic reliability analysis of civil engineering structures and mechanical equipment systems with multi-mode damage coupling.
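The "composition operators" and "collecting like terms" described above can be sketched with the universal generating function (u-function) device standard in multi-state reliability; the operators and numbers below are illustrative, not taken from the paper:

```python
from collections import defaultdict

def compose(u1, u2, op):
    # Combine two u-functions (dict: performance level -> probability)
    # with a structure operator, collecting like terms as we go.
    out = defaultdict(float)
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            out[op(g1, g2)] += p1 * p2
    return dict(out)

# Two components, each described by capacity -> probability.
c1 = {0: 0.1, 5: 0.9}
c2 = {0: 0.2, 5: 0.8}

# Series: performance is limited by the weakest element (min).
series = compose(c1, c2, min)                    # ~{0: 0.28, 5: 0.72}
# Parallel: capacities add.
parallel = compose(c1, c2, lambda a, b: a + b)   # ~{0: 0.02, 5: 0.26, 10: 0.72}
```

Collecting like terms keeps the dictionary small: four product terms collapse to two states in series, as the abstract notes for large hierarchical systems.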

  15. System Reliability for Offshore Wind Turbines

    Marquez-Dominguez, Sergio; Sørensen, John Dalsgaard


    are considered for reliability verification according to international design standards of OWTs. System effects become important for each substructure with many potential fatigue hot spots. Therefore, in this paper a framework for system effects is presented. This information can be e.g. no detection of cracks......E). In consequence, a rational treatment of uncertainties is done in order to assess the reliability of critical details in OWTs. Limit state equations are formulated for fatigue critical details which are not influenced by wake effects generated in offshore wind farms. Furthermore, typical bi-linear S-N curves...... in inspections or measurements from condition monitoring systems. Finally, an example is established to illustrate the practical application of this framework for jacket type wind turbine substructure considering system effects....

  16. A new simulation estimator of system reliability

    Sheldon M. Ross


A basic identity is proven and applied to obtain new simulation estimators concerning (a) system reliability and (b) a multi-valued system. We show that the variance of this new estimator is often of the order α² when the usual raw estimator has variance of the order α, where α is small. We also indicate how this estimator can be combined with the standard variance reduction techniques of antithetic variables, stratified sampling and importance sampling.
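Ross's estimator itself rests on a conditioning identity not reproduced here, but the antithetic-variables combination mentioned at the end is easy to sketch for a simple series system (all numbers illustrative):

```python
import random

def simulate(u, p):
    # Series system: works only if every component survives, i.e. its
    # uniform draw exceeds that component's failure probability.
    return all(ui > pi for ui, pi in zip(u, p))

def antithetic_estimate(p, n, seed=0):
    # Pair each uniform vector U with its antithetic partner 1 - U and
    # average the two indicator outcomes, reducing estimator variance.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = [rng.random() for _ in p]
        anti = [1.0 - ui for ui in u]
        total += 0.5 * (simulate(u, p) + simulate(anti, p))
    return total / n

p = [0.05, 0.10, 0.02]            # component failure probabilities
exact = 0.95 * 0.90 * 0.98        # 0.8379, for comparison
print(antithetic_estimate(p, 20000), exact)
```

Because an outcome and its antithetic partner are negatively correlated, each averaged pair contributes less variance than two independent draws would.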

  17. The Process Group Approach to Reliable Distributed Computing


under DARPA/NASA grant NAG-2-593, and by grants from IBM, HP, Siemens, GTE and Hitachi. ...system, but could make it harder to administer and less reliable. A theme of the paper will be that one overcomes this intrinsic problem by standardizing

  18. ALMA correlator computer systems

    Pisano, Jim; Amestica, Rodrigo; Perez, Jesus


    We present a design for the computer systems which control, configure, and monitor the Atacama Large Millimeter Array (ALMA) correlator and process its output. Two distinct computer systems implement this functionality: a rack- mounted PC controls and monitors the correlator, and a cluster of 17 PCs process the correlator output into raw spectral results. The correlator computer systems interface to other ALMA computers via gigabit Ethernet networks utilizing CORBA and raw socket connections. ALMA Common Software provides the software infrastructure for this distributed computer environment. The control computer interfaces to the correlator via multiple CAN busses and the data processing computer cluster interfaces to the correlator via sixteen dedicated high speed data ports. An independent array-wide hardware timing bus connects to the computer systems and the correlator hardware ensuring synchronous behavior and imposing hard deadlines on the control and data processor computers. An aggregate correlator output of 1 gigabyte per second with 16 millisecond periods and computational data rates of approximately 1 billion floating point operations per second define other hard deadlines for the data processing computer cluster.

  19. Reliability in Warehouse-Scale Computing: Why Low Latency Matters

    Nannarelli, Alberto


    , the limiting factor of these warehouse-scale data centers is the power dissipation. Power is dissipated not only in the computation itself, but also in heat removal (fans, air conditioning, etc.) to keep the temperature of the devices within the operating ranges. The need to keep the temperature low within......Warehouse sized buildings are nowadays hosting several types of large computing systems: from supercomputers to large clusters of servers to provide the infrastructure to the cloud. Although the main target, especially for high-performance computing, is still to achieve high throughput...

  20. Fault-tolerant search algorithms reliable computation with unreliable information

    Cicalese, Ferdinando


    Why a book on fault-tolerant search algorithms? Searching is one of the fundamental problems in computer science. Time and again algorithmic and combinatorial issues originally studied in the context of search find application in the most diverse areas of computer science and discrete mathematics. On the other hand, fault-tolerance is a necessary ingredient of computing. Due to their inherent complexity, information systems are naturally prone to errors, which may appear at any level - as imprecisions in the data, bugs in the software, or transient or permanent hardware failures. This book pr

  1. Reliable, Memory Speed Storage for Cluster Computing Frameworks


    specification API that can capture computations in many of today’s popular data-parallel computing models, e.g., MapReduce and SQL. We also ported the Hadoop ...runs on. We present solutions for priority and weighted fair sharing, the most common policies in systems like Hadoop and Dryad [45, 27]. Priority Based...understand its own configuration. For example, in Hadoop , configurations are kept in HadoopConf, while Spark stores these in SparkEnv. Therefore, their wrap

  2. Program listing for the reliability block diagram computation program of JPL Technical Report 32-1543

    Chelson, P. O.; Eckstein, R. E.


    The computer program listing for the reliability block diagram computation program described in Reliability Computation From Reliability Block Diagrams is given. The program is written in FORTRAN 4 and is currently running on a Univac 1108. Each subroutine contains a description of its function.
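The original FORTRAN listing is not reproduced here, but the core computation, evaluating a reliability block diagram, can be sketched as a recursive walk over nested series/parallel blocks (a simplification; the JPL program handled more general diagrams):

```python
def rbd_reliability(block):
    # A block is either a float (component reliability) or a tuple
    # ('series' | 'parallel', [sub-blocks]).
    if isinstance(block, float):
        return block
    kind, subs = block
    rs = [rbd_reliability(b) for b in subs]
    if kind == 'series':
        # Series: all sub-blocks must work.
        r = 1.0
        for x in rs:
            r *= x
        return r
    # Parallel: fails only if every sub-block fails.
    q = 1.0
    for x in rs:
        q *= (1.0 - x)
    return 1.0 - q

# A component in series with a redundant pair and a third component.
diagram = ('series', [0.99, ('parallel', [0.9, 0.9]), 0.95])
print(rbd_reliability(diagram))   # 0.99 * 0.99 * 0.95 = 0.931095
```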

  3. Computer controlled antenna system

    Raumann, N. A.


    The application of small computers using digital techniques for operating the servo and control system of large antennas is discussed. The advantages of the system are described. The techniques were evaluated with a forty foot antenna and the Sigma V computer. Programs have been completed which drive the antenna directly without the need for a servo amplifier, antenna position programmer or a scan generator.

  4. 76 FR 42534 - Mandatory Reliability Standards for Interconnection Reliability Operating Limits; System...


... Reliability Operating Limits; System Restoration Reliability Standards AGENCY: Federal Energy Regulatory... necessary to analyze and monitor Interconnection Reliability Operating Limits (IROL) within its Wide-Area... Interconnection Reliability Operating Limits, Order No. 748, 134 FERC ¶ 61,213 (2011). \\2\\ The term...

  5. Mass and Reliability System (MaRS)

    Barnes, Sarah


The Safety and Mission Assurance (S&MA) Directorate is responsible for mitigating risk, providing system safety, and lowering risk for space programs from ground to space. The S&MA is divided into 4 divisions: the Space Exploration Division (NC), the International Space Station Division (NE), the Safety & Test Operations Division (NS), and the Quality and Flight Equipment Division (NT). The interns, Arun Aruljothi and I, will be working with the Risk & Reliability Analysis Branch under the NC Division. The mission of this division is to identify, characterize, diminish, and communicate risk by implementing an efficient and effective assurance model. The team utilizes Reliability and Maintainability (R&M) and Probabilistic Risk Assessment (PRA) to ensure decisions concerning risks are informed, vehicles are safe and reliable, and program/project requirements are realistic and realized. This project pertains to the Orion mission, so it is geared toward long-duration human space flight programs. For space missions, payload is a critical concept; balancing what hardware can be replaced by components versus by Orbital Replacement Units (ORUs) or subassemblies is key. For this effort a database was created that combines mass and reliability data, called the Mass and Reliability System, or MaRS. U.S. International Space Station (ISS) components are used as reference parts in the MaRS database. Using ISS components as a platform is beneficial because of the historical context and the environmental similarities to a space flight mission. MaRS combines several systems: the International Space Station PART system for failure data, the Vehicle Master Database (VMDB) for ORUs and components, the Maintenance & Analysis Data Set (MADS) for operation hours and other pertinent data, and the Hardware History Retrieval System (HHRS) for unit weights. MaRS is populated using a Visual Basic application. Once populated, the Excel spreadsheet comprises information on ISS components including

  6. Attacks on computer systems

    Dejan V. Vuletić


Computer systems are a critical component of human society in the 21st century. The economic sector, defense, security, energy, telecommunications, industrial production, finance and other vital infrastructure depend on computer systems that operate at local, national or global scales. A particular problem is that, due to the rapid development of ICT and the unstoppable growth of its application in all spheres of human society, their vulnerability and exposure to very serious potential dangers increase. This paper analyzes some typical attacks on computer systems.

  7. Practical reliability and uncertainty quantification in complex systems : final report.

    Grace, Matthew D.; Ringland, James T.; Marzouk, Youssef M. (Massachusetts Institute of Technology, Cambridge, MA); Boggs, Paul T.; Zurn, Rena M.; Diegert, Kathleen V. (Sandia National Laboratories, Albuquerque, NM); Pebay, Philippe Pierre; Red-Horse, John Robert (Sandia National Laboratories, Albuquerque, NM)


The purpose of this project was to investigate the use of Bayesian methods for the estimation of the reliability of complex systems. The goals were to find methods for dealing with continuous data, rather than simple pass/fail data; to avoid assumptions of specific probability distributions, especially Gaussian, or normal, distributions; to compute not only an estimate of the reliability of the system, but also a measure of the confidence in that estimate; to develop procedures to address time-dependent or aging aspects in such systems, and to use these models and results to derive optimal testing strategies. The system is assumed to be a system of systems, i.e., a system with discrete components that are themselves systems. Furthermore, the system is 'engineered' in the sense that each node is designed to do something and that we have a mathematical description of that process. In the time-dependent case, the assumption is that we have a general, nonlinear, time-dependent function describing the process. The major results of the project are described in this report. In summary, we developed a sophisticated mathematical framework based on modern probability theory and Bayesian analysis. This framework encompasses all aspects of epistemic uncertainty and easily incorporates steady-state and time-dependent systems. Based on Markov chain Monte Carlo methods, we devised a computational strategy for general probability density estimation in the steady-state case. This enabled us to compute a distribution of the reliability from which many questions, including confidence, could be addressed. We then extended this to the time domain and implemented procedures to estimate the reliability over time, including the use of the method to predict the reliability at a future time. Finally, we used certain aspects of Bayesian decision analysis to create a novel method for determining an optimal testing strategy, e.g., we can estimate the 'best' location to
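The report's framework handles continuous data and deliberately avoids simple pass/fail models, so the following is only the most elementary illustration of the core idea of computing a *distribution* of the reliability: a conjugate Beta-Binomial update on hypothetical pass/fail test results.

```python
import random

def posterior_reliability(successes, trials, a=1.0, b=1.0, n_draws=100000, seed=1):
    # Conjugate update: with a Beta(a, b) prior and binomial test data,
    # the posterior over reliability is Beta(a + successes, b + failures).
    rng = random.Random(seed)
    a_post = a + successes
    b_post = b + trials - successes
    draws = sorted(rng.betavariate(a_post, b_post) for _ in range(n_draws))
    mean = sum(draws) / n_draws
    lo = draws[int(0.025 * n_draws)]   # approximate 95% credible interval
    hi = draws[int(0.975 * n_draws)]
    return mean, (lo, hi)

# 48 successes in 50 trials, uniform prior.
mean, (lo, hi) = posterior_reliability(successes=48, trials=50)
print(mean, (lo, hi))   # posterior mean = 49/52, roughly 0.94
```

Having the whole posterior, rather than a point estimate, is what lets questions of confidence be addressed directly, as the abstract describes.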

  8. Reliability of electrical power systems for coal mines

    Razgil' deev, G.I.; Kovalev, A.P.; Serkyuk, L.I.


This paper presents a method for evaluating the reliability of comprehensive mining power systems. The system's reliability is influenced by the selectivity of maximum protection, ground-short protection, the subdivision of the circuitry, the character of the system's failures, etc.

  9. Fault tolerant highly reliable inertial navigation system

    Jeerage, Mahesh; Boettcher, Kevin

    This paper describes a development of failure detection and isolation (FDI) strategies for highly reliable inertial navigation systems. FDI strategies are developed based on the generalized likelihood ratio test (GLRT). A relationship between detection threshold and false alarm rate is developed in terms of the sensor parameters. A new method for correct isolation of failed sensors is presented. Evaluation of FDI performance parameters, such as false alarm rate, wrong isolation probability, and correct isolation probability, are presented. Finally a fault recovery scheme capable of correcting false isolation of good sensors is presented.
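The paper's GLRT-based FDI scheme is not reproduced here; as a loose illustration of the detect-and-isolate idea for redundant sensors (hypothetical readings, and a fixed residual threshold standing in for the GLRT threshold/false-alarm trade-off):

```python
import statistics

def isolate_failed_sensor(readings, threshold):
    # For redundant sensors measuring the same quantity, compute each
    # sensor's residual against the median of the others; flag the sensor
    # with the largest residual exceeding the threshold, else None.
    suspects = []
    for i, r in enumerate(readings):
        others = readings[:i] + readings[i + 1:]
        residual = abs(r - statistics.median(others))
        if residual > threshold:
            suspects.append((residual, i))
    if not suspects:
        return None
    return max(suspects)[1]

print(isolate_failed_sensor([10.1, 10.0, 13.7, 9.9], threshold=1.0))  # 2
```

Raising the threshold lowers the false-alarm rate but also lowers the probability of detecting a small fault, the same trade-off the paper formalizes in terms of sensor parameters.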

  10. Reliable neuronal systems: the importance of heterogeneity.

    Johannes Lengler

For every engineer it goes without saying: in order to build a reliable system we need components that consistently behave precisely as they should. It is also well known that neurons, the building blocks of brains, do not satisfy this constraint. Even neurons of the same type come with huge variances in their properties, and these properties also vary over time. Synapses, the connections between neurons, are highly unreliable in forwarding signals. In this paper we argue that both these facts add variance to neuronal processes, and that this variance is not a handicap of neural systems; instead, predictable and reliable functional behavior of neural systems depends crucially on this variability. In particular, we show that higher variance allows a recurrently connected neural population to react more sensitively to incoming signals, and to process them faster and more energy-efficiently. This, for example, challenges the general assumption that the intrinsic variability of neurons in the brain is a defect that has to be overcome by synaptic plasticity in the process of learning.

  11. Software Reliability Cases: The Bridge Between Hardware, Software and System Safety and Reliability

    Herrmann, D.S.; Peercy, D.E.


    High integrity/high consequence systems must be safe and reliable; hence it is only logical that both software safety and software reliability cases should be developed. Risk assessments in safety cases evaluate the severity of the consequences of a hazard and the likelihood of it occurring. The likelihood is directly related to system and software reliability predictions. Software reliability cases, as promoted by SAE JA 1002 and 1003, provide a practical approach to bridge the gap between hardware reliability, software reliability, and system safety and reliability by using a common methodology and information structure. They also facilitate early insight into whether or not a project is on track for meeting stated safety and reliability goals, while facilitating an informed assessment by regulatory and/or contractual authorities.

  12. Reliable Date-Replication Using Grid Computing Tools

    Sonnick, D


The LHCb detector at CERN is a physics experiment to measure rare b-decays after the collision of protons in the Large Hadron Collider ring. The measured collisions are called "events". These events contain the data necessary to analyze and reconstruct the decays. The events are sent to speed-optimized writer processes which write them into files on a local hard-disk cluster. Because space on the hard-disk cluster is limited, the data needs to be replicated to a long-term storage system. This diploma thesis presents the design and implementation of software which replicates the data in a reliable manner. In addition, this software registers the data in special databases to prepare the subsequent analyses and reconstructions. Because the software used in the LHCb experiment is still under development, there is a special need for reliability to deal with error situations or inconsistent data. The subject of this diploma thesis was also presented at the "17th ...

  13. Digital System Reliability Test for the Evaluation of safety Critical Software of Digital Reactor Protection System

    Hyun-Kook Shin


A new Digital Reactor Protection System (DRPS) based on a VME-bus single-board computer has been developed by KOPEC to prevent software common-mode failure (CMF) inside the digital system. The new DRPS has been proved to be an effective digital safety system for preventing CMF by Defense-in-Depth and Diversity (DID&D) analysis. However, for practical use in nuclear power plants, performance and reliability tests are essential for digital system qualification. In this study, a single channel of a DRPS prototype has been manufactured for the evaluation of DRPS capabilities. Integrated functional tests were performed and the system reliability was analyzed and tested. The results of the reliability test show that the application software of the DRPS has very high reliability compared with analog reactor protection systems.

  14. Dependent systems reliability estimation by structural reliability approach

    Kostandyan, Erik; Sørensen, John Dalsgaard


    ) and the component lifetimes follow some continuous and non-negative cumulative distribution functions. An illustrative example utilizing the proposed method is provided, where damage is modeled by a fracture mechanics approach with correlated components and a failure assessment diagram is applied for failure...... identification. Application of the proposed method can be found in many real world systems....

  15. The validity and reliability of computed tomography orbital volume measurements.

    Diaconu, Silviu C; Dreizin, David; Uluer, Mehmet; Mossop, Corey; Grant, Michael P; Nam, Arthur J


    Orbital volume calculations allow surgeons to design patient-specific implants to correct volume deficits. It is estimated that changes as small as 1 ml in orbital volume can lead to enophthalmos. Awareness of the limitations of orbital volume computed tomography (CT) measurements is critical to differentiate between true volume differences and measurement error. The aim of this study is to analyze the validity and reliability of CT orbital volume measurements. A total of 12 cadaver orbits were scanned using a standard CT maxillofacial protocol. Each orbit was dissected to isolate the extraocular muscles, fatty tissue, and globe. The empty bony orbital cavity was then filled with sculpting clay. The volumes of the muscle, fat, globe, and clay (i.e., bony orbital cavity) were then individually measured via water displacement. The CT-derived volumes, measured by manual segmentation, were compared to the direct measurements to determine validity. The difference between CT orbital volume measurements and physically measured volumes is not negligible. Globe volumes have the highest agreement with 95% of differences between -0.5 and 0.5 ml, bony volumes are more likely to be overestimated with 95% of differences between -1.8 and 2.6 ml, whereas extraocular muscle volumes have poor validity and should be interpreted with caution. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
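    The agreement intervals quoted above are limits-of-agreement statistics. A minimal sketch of how such bounds are computed, Bland-Altman style, is shown below; the globe volumes are invented for illustration and are not the study's cadaver data.

```python
import statistics

def limits_of_agreement(method_a, method_b):
    """Bland-Altman-style 95% limits of agreement between two
    measurement methods: mean difference +/- 1.96 * SD of differences."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    mean_diff = statistics.fmean(diffs)
    sd_diff = statistics.stdev(diffs)
    return mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff

# Hypothetical globe volumes in ml (CT segmentation vs. water displacement)
ct_ml     = [7.1, 6.8, 7.4, 7.0, 6.9, 7.2]
direct_ml = [7.0, 6.9, 7.3, 7.1, 6.8, 7.1]
lo, hi = limits_of_agreement(ct_ml, direct_ml)
```

    If the resulting interval is wider than a clinically meaningful change (such as the roughly 1 ml enophthalmos threshold cited above), measured differences of that size cannot be distinguished from measurement error.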

  16. A cost modelling system for cloud computing

    Ajeh, Daniel; Ellman, Jeremy; Keogh, Shelagh


    An advance in technology unlocks new opportunities for organizations to increase their productivity, efficiency and process automation while reducing the cost of doing business. The emergence of cloud computing addresses these prospects through the provision of agile systems that are scalable, flexible and reliable, as well as cost effective. Cloud computing has made hosting and deployment of computing resources cheaper and easier, with no up-front charges but pay per-use flexible payme...

  17. Demand Response as a System Reliability Resource

    Eto, Joseph H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division; Lewis, Nancy Jo [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division; Watson, David [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division; Kiliccote, Sila [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division; Auslander, David [Univ. of California, Berkeley, CA (United States); Paprotny, Igor [Univ. of California, Berkeley, CA (United States); Makarov, Yuri [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)


    The Demand Response as a System Reliability Resource project consists of six technical tasks: • Task 2.1. Test Plan and Conduct Tests: Contingency Reserves Demand Response (DR) Demonstration—a pioneering demonstration of how existing utility load-management assets can provide an important electricity system reliability resource known as contingency reserve. • Task 2.2. Participation in Electric Power Research Institute (EPRI) IntelliGrid—technical assistance to the EPRI IntelliGrid team in developing use cases and other high-level requirements for the architecture. • Task 2.3. Research, Development, and Demonstration (RD&D) Planning for Demand Response Technology Development—technical support to the Public Interest Energy Research (PIER) Program on five topics: Sub-task 1. PIER Smart Grid RD&D Planning Document; Sub-task 2. System Dynamics of Programmable Controllable Thermostats; Sub-task 3. California Independent System Operator (California ISO) DR Use Cases; Sub-task 4. California ISO Telemetry Requirements; and Sub-task 5. Design of a Building Load Data Storage Platform. • Task 2.4. Time Value of Demand Response—research that will enable California ISO to take better account of the speed of the resources that it deploys to ensure compliance with reliability rules for frequency control. • Task 2.5. System Integration and Market Research: Southern California Edison (SCE)—research and technical support for efforts led by SCE to conduct demand response pilot demonstrations to provide a contingency reserve service (known as non-spinning reserve) through a targeted sub-population of aggregated residential and small commercial customers enrolled in SCE’s traditional air conditioning (AC) load cycling program, the Summer Discount Plan. • Task 2.6. Demonstrate Demand Response Technologies: Pacific Gas and Electric (PG&E)—research and technical support for efforts led by PG&E to conduct a demand response pilot demonstration to provide non

  18. Integrated Reliability and Risk Analysis System (IRRAS)

    Russell, K D; McKay, M K; Sattison, M.B. Skinner, N.L.; Wood, S T [EG and G Idaho, Inc., Idaho Falls, ID (United States); Rasmuson, D M [Nuclear Regulatory Commission, Washington, DC (United States)


    The Integrated Reliability and Risk Analysis System (IRRAS) is a state-of-the-art, microcomputer-based probabilistic risk assessment (PRA) model development and analysis tool to address key nuclear plant safety issues. IRRAS is an integrated software tool that gives the user the ability to create and analyze fault trees and accident sequences using a microcomputer. This program provides functions that range from graphical fault tree construction to cut set generation and quantification. Version 1.0 of the IRRAS program was released in February of 1987. Since that time, many user comments and enhancements have been incorporated into the program providing a much more powerful and user-friendly system. This version has been designated IRRAS 4.0 and is the subject of this Reference Manual. Version 4.0 of IRRAS provides the same capabilities as Version 1.0 and adds a relational data base facility for managing the data, improved functionality, and improved algorithm performance.
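    The cut set quantification step that tools like IRRAS automate can be illustrated with the standard rare-event (min-cut) upper bound: the top-event probability is bounded by the sum, over minimal cut sets, of the product of their basic-event probabilities. The tiny fault tree below is hypothetical and not taken from IRRAS.

```python
def cut_set_upper_bound(cut_sets, basic_event_prob):
    """Rare-event (min-cut) upper bound on the top-event probability:
    sum over minimal cut sets of the product of basic-event probabilities."""
    total = 0.0
    for cut_set in cut_sets:
        p = 1.0
        for event in cut_set:
            p *= basic_event_prob[event]
        total += p
    return total

# Hypothetical tree: top event occurs if A and B both fail, or C fails alone
probs = {"A": 1e-2, "B": 2e-2, "C": 1e-4}
minimal_cut_sets = [{"A", "B"}, {"C"}]
p_top = cut_set_upper_bound(minimal_cut_sets, probs)  # 1e-2 * 2e-2 + 1e-4
```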

  19. Extrapolation Method for System Reliability Assessment

    Qin, Jianjun; Nishijima, Kazuyoshi; Faber, Michael Havbro


    The present paper presents a new scheme for probability integral solution for system reliability analysis, which takes basis in the approaches by Naess et al. (2009) and Bucher (2009). The idea is to evaluate the probability integral by extrapolation, based on a sequence of MC approximations....... The scheme is extended so that it can be applied to cases where the asymptotic property may not be valid and/or the random variables are not normally distributed. The performance of the scheme is investigated by four principal series and parallel systems and some practical examples. The results indicate...... of integrals with scaled domains. The performance of this class of approximation depends on the approach applied for the scaling and the functional form utilized for the extrapolation. A scheme for this task is derived here taking basis in the theory of asymptotic solutions to multinormal probability integrals...


    Yi Pengxing; Yang Shuzi; Du Runsheng; Wu Bo; Liu Shiyuan


    Taking into account the whole system structure and the uncertainty of component reliability estimation, a system reliability estimation method based on probability and statistical theory for distributed monitoring systems is presented. The variance and confidence intervals of the system reliability estimate are obtained by expressing system reliability as a linear sum of products of higher-order moments of component reliability estimates when the number of component or system survivals obeys a binomial distribution. The eigenfunction of the binomial distribution is used to determine the moments of the component reliability estimates, and a symbolic matrix which can facilitate the search for explicit system reliability estimates is proposed. Furthermore, an application case is used to illustrate the procedure; with the help of this example, issues such as the applicability of this estimation model and measures to improve the system reliability of monitoring systems are discussed.
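    A minimal sketch of the underlying idea, under simplifying assumptions not made by the paper (a pure series system, independent component tests, and plug-in moments of the binomial estimators); the survival counts below are invented.

```python
def series_reliability_estimate(survivals, trials):
    """Point estimate and (plug-in) variance of the series-system
    reliability estimator R = prod R_i built from independent binomial
    component tests: Var = prod E[R_i^2] - prod E[R_i]^2, with
    E[R_i^2] = p_i^2 + p_i * (1 - p_i) / n_i under the binomial model."""
    estimate = 1.0
    second_moment = 1.0
    for s, n in zip(survivals, trials):
        p = s / n                                  # component estimate
        estimate *= p
        second_moment *= p * p + p * (1.0 - p) / n
    return estimate, second_moment - estimate * estimate

# Hypothetical data: three components, each tested 100 times
est, var = series_reliability_estimate([95, 98, 90], [100, 100, 100])
```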

  1. Computer network defense system

    Urias, Vincent; Stout, William M. S.; Loverro, Caleb


    A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of the virtual machines from an operating network in a deception network, forming a group of cloned virtual machines, when the group of the virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of the cloned virtual machines as if the group of the cloned virtual machines were in the operating network. The computer system moves the network connections used by the adversary from the group of the virtual machines in the operating network to the group of the cloned virtual machines, thereby protecting the group of the virtual machines from actions performed by the adversary.

  2. Reliability of decision-support systems for nuclear emergency management

    Ionescu, Tudor B.


    Decision support systems for nuclear emergency management (DSNE) are currently used worldwide to assist decision makers in taking emergency response countermeasures in case of accidental releases of radioactive materials from nuclear facilities. The present work has been motivated by the fact that, up until now, DSNE systems have not been regarded as safety-critical software systems, such as the embedded software currently used in vehicles and aircraft. The core of any DSNE system is represented by the different simulation codes linked together to form the dispersion simulation workflow. These codes require input emission and meteorological data to produce forecasts of the atmospheric dispersion of radioactive pollutants and other substances. However, the reliability of the system depends not only on the trustworthiness of the measured (or generated) input data but also on the reliability of the simulation codes used. The main goal of this work is to improve the reliability of DSNE systems by adapting current state-of-the-art methods from the domain of software reliability engineering to the case of atmospheric dispersion simulation codes. The approach is based on the design-by-diversity principle for improving the reliability of codes and the trustworthiness of results, as well as on a flexible fault-tolerant workflow scheduling algorithm for ensuring the maximum availability of the system. The author's contribution is represented by (i) an acceptance test for dispersion simulation results, (ii) an adjudication algorithm (voter) based on comparing taxonomies of dispersion simulation results, and (iii) a feedback-control based fault-tolerant workflow scheduling algorithm. These tools provide means for the continuous verification of dispersion simulation codes while tolerating timing faults caused by disturbances in the underlying computational environment, and will thus help increase the reliability and trustworthiness of DSNE systems in mission-critical

  3. Diagnostics and reliability of pipeline systems

    Timashev, Sviatoslav


    The book contains solutions to fundamental problems which arise due to the logic of development of specific branches of science related to pipeline safety, but which are mainly subordinate to the needs of pipeline transportation. The book addresses important but not yet solved aspects of reliability and safety assurance of pipeline systems, which are vital not only for the oil and gas industry and, more generally, the fuel and energy industries, but also for virtually all contemporary industries and technologies. The volume will be useful to specialists and experts in the field of diagnostics/inspection, monitoring, reliability and safety of critical infrastructures. First and foremost, it will be useful to decision makers: operators of different types of pipelines, pipeline diagnostics/inspection vendors, designers of in-line inspection (ILI) tools, and industrial and ecological safety specialists, as well as to researchers and graduate students.

  4. Reliability analysis and updating of deteriorating systems with subset simulation

    Schneider, Ronald; Thöns, Sebastian; Straub, Daniel


    Bayesian updating of the system deterioration model. The updated system reliability is then obtained through coupling the updated deterioration model with a probabilistic structural model. The underlying high-dimensional structural reliability problems are solved using subset simulation, which...

  5. Computer system operation

    Lee, Young Jae; Lee, Hae Cho; Lee, Ho Yeun; Kim, Young Taek; Lee, Sung Kyu; Park, Jeong Suk; Nam, Ji Wha; Kim, Soon Kon; Yang, Sung Un; Sohn, Jae Min; Moon, Soon Sung; Park, Bong Sik; Lee, Byung Heon; Park, Sun Hee; Kim, Jin Hee; Hwang, Hyeoi Sun; Lee, Hee Ja; Hwang, In A. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)


    The report described the operation and the trouble shooting of main computer and KAERINet. The results of the project are as follows; 1. The operation and trouble shooting of the main computer system. (Cyber 170-875, Cyber 960-31, VAX 6320, VAX 11/780). 2. The operation and trouble shooting of the KAERINet. (PC to host connection, host to host connection, file transfer, electronic-mail, X.25, CATV etc.). 3. The development of applications -Electronic Document Approval and Delivery System, Installation the ORACLE Utility Program. 22 tabs., 12 figs. (Author) .new.

  6. A Method of Reliability Allocation of a Complicated Large System

    WANG Zhi-sheng; QIN Yuan-yuan; WANG Dao-bo


    Aiming at the problem of reliability allocation for a complicated large system, a new approach is proposed. Reliability allocation should be treated as a decision-making process; the more information that is used when apportioning a reliability index, the more reasonable the resulting allocation. Reliability allocation for a complicated large system consists of two processes: a reliability information reporting process from bottom to top, and a reliability index apportioning process from top to bottom. A typical example illustrates the concrete process of the reliability allocation algorithms.
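    The top-down apportioning step can be sketched under a common simplifying assumption (a series system, with weights such as relative complexities reported by the bottom-up information process); the target and weights below are hypothetical.

```python
def allocate_series(target, weights):
    """Apportion a series-system reliability target to subsystems as
    R_i = target ** (w_i / sum(w)), so that the product of the
    allocated reliabilities equals the target; larger weights
    (e.g. more complex subsystems) receive laxer requirements."""
    total = sum(weights)
    return [target ** (w / total) for w in weights]

# Hypothetical: system target 0.99 split over subsystems of complexity 1:2:3
allocated = allocate_series(0.99, [1, 2, 3])
```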

  7. Remote computer monitors corrosion protection system

    Kendrick, A.

    Effective corrosion protection with electrochemical methods requires some form of routine monitoring that provides reliable data free of human error. A test installation of a remote computer-controlled monitoring system for electrochemical corrosion protection is described. The unit can handle up to six channel inputs; each channel comprises three analog signals and one digital signal. The operation of the system is discussed.

  8. 77 FR 7526 - Interpretation of Protection System Reliability Standard


    ... Protection System maintenance and testing standard that were identified by the NOPR within the Reliability... Reliability Standards development process to address gaps in the Protection System maintenance and testing... Protection System maintenance and testing standard that were identified by the NOPR within the Reliability...

  9. 75 FR 81152 - Interpretation of Protection System Reliability Standard


    ... Systems that affect the reliability of the BES. The program shall include: R1.1. Maintenance and testing... provide a complete framework for maintenance and testing of equipment necessary to ensure the reliability... maintenance and testing of Protection Systems affecting the reliability of the Bulk-Power System. 13. If...

  10. Modular System Modeling for Quantitative Reliability Evaluation of Technical Systems

    Stephan Neumann


    Full Text Available In modern times, it is necessary to offer reliable products to match the statutory directives concerning product liability and the high expectations of customers for durable devices. Furthermore, to maintain a high competitiveness, engineers need to know as accurately as possible how long their product will last and how to influence the life expectancy without expensive and time-consuming testing. As the components of a system are responsible for the system reliability, this paper introduces and evaluates calculation methods for life expectancy of common machine elements in technical systems. Subsequently, a method for the quantitative evaluation of the reliability of technical systems is proposed and applied to a heavy-duty power shift transmission.
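    A minimal quantitative sketch in this spirit, assuming Weibull lifetime models for the machine elements and a series structure (the shape and scale parameters below are invented, not values from the paper):

```python
import math

def weibull_survival(t, shape, scale):
    """Weibull reliability function R(t) = exp(-(t / scale) ** shape)."""
    return math.exp(-((t / scale) ** shape))

def series_system_reliability(t, components):
    """A series system survives to time t only if every component does."""
    r = 1.0
    for shape, scale in components:
        r *= weibull_survival(t, shape, scale)
    return r

# Hypothetical transmission: bearing, gear, shaft (shape, scale in hours)
parts = [(1.5, 20000.0), (2.0, 50000.0), (1.2, 80000.0)]
r_5000h = series_system_reliability(5000.0, parts)
```

    The series assumption makes the system reliability the product of the component reliabilities, so it is always below the weakest component's reliability at the same time.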

  11. Reliability-Based Control Design for Uncertain Systems

    Crespo, Luis G.; Kenny, Sean P.


    This paper presents a robust control design methodology for systems with probabilistic parametric uncertainty. Control design is carried out by solving a reliability-based multi-objective optimization problem where the probability of violating design requirements is minimized. Simultaneously, failure domains are optimally enlarged to enable global improvements in the closed-loop performance. To enable an efficient numerical implementation, a hybrid approach for estimating reliability metrics is developed. This approach, which integrates deterministic sampling and asymptotic approximations, greatly reduces the numerical burden associated with complex probabilistic computations without compromising the accuracy of the results. Examples using output-feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.

  12. Reliability-based design optimization of multiphysics, aerospace systems

    Allen, Matthew R.

    Aerospace systems are inherently plagued by uncertainties in their design, fabrication, and operation. Safety factors and expensive testing at the prototype level traditionally account for these uncertainties. Reliability-based design optimization (RBDO) can drastically decrease life-cycle development costs by accounting for the stochastic nature of the system response in the design process. The reduction in cost is amplified for conceptually new designs, for which no accepted safety factors currently exist. Aerospace systems often operate in environments dominated by multiphysics phenomena, such as the fluid-structure interaction of aeroelastic wings or the electrostatic-mechanical interaction of sensors and actuators. The analysis of such phenomena is generally complex and computationally expensive, and therefore is usually simplified or approximated in the design process. However, this leads to significant epistemic uncertainties in modeling, which may dominate the uncertainties for which the reliability analysis was intended. Therefore, the goal of this thesis is to present a RBDO framework that utilizes high-fidelity simulation techniques to minimize the modeling error for multiphysics phenomena. A key component of the framework is an extended reduced order modeling (EROM) technique that can analyze various states in the design or uncertainty parameter space at a reduced computational cost, while retaining characteristics of high-fidelity methods. The computational framework is verified and applied to the RBDO of aeroelastic systems and electrostatically driven sensors and actuators, utilizing steady-state analysis and design criteria. The framework is also applied to the design of electrostatic devices with transient criteria, which requires the use of the EROM technique to overcome the computational burden of multiple transient analyses.

  13. Virtualization of Legacy Instrumentation Control Computers for Improved Reliability, Operational Life, and Management.

    Katz, Jonathan E


    Laboratories tend to be amenable environments for long-term reliable operation of scientific measurement equipment. Indeed, it is not uncommon to find equipment 5, 10, or even 20+ years old still being routinely used in labs. Unfortunately, the Achilles heel for many of these devices is the control/data acquisition computer. Often these computers run older operating systems (e.g., Windows XP) and, while they might only use standard network, USB or serial ports, they require proprietary software to be installed. Even if the original installation disks can be found, reinstalling is a burdensome process fraught with "gotchas" that can derail it: lost license keys, incompatible hardware, forgotten configuration settings, etc. If you have legacy instrumentation running, the computer is the ticking time bomb waiting to put a halt to your operation. In this chapter, I describe how to virtualize your currently running control computer. This virtualized computer "image" is easy to maintain, easy to back up, and easy to redeploy. I have used this multiple times in my own lab to greatly improve the robustness of my legacy devices. After completing the steps in this chapter, you will have your original control computer as well as a virtual instance of that computer, with all the software installed, ready to control your hardware should your original computer ever be decommissioned.

  14. Optimal reliability design method for remote solar systems

    Suwapaet, Nuchida

    A unique optimal reliability design algorithm is developed for remote communication systems. The algorithm deals with either minimizing the unavailability of the system within a fixed cost or minimizing the cost of the system with an unavailability constraint. The unavailability of the system is a function of three possible failure occurrences: individual component breakdown, solar energy deficiency (loss of load probability), and satellite/radio transmission loss. The three mathematical models of component failure, solar power failure, and transmission failure are combined and formulated as a nonlinear programming optimization problem with binary decision variables, such as the number and type (or size) of photovoltaic modules, batteries, radios, antennas, and controllers. The three possible failures are identified and integrated in a computer algorithm to generate the parameters for the optimization algorithm. The optimization algorithm is implemented with a branch-and-bound solution technique in MS Excel Solver. The algorithm is applied to a case study design for an actual system that will be set up in remote mountainous areas of Peru. The automated algorithm is verified with independent calculations. The optimal results from minimizing the unavailability of the system with the cost constraint case and minimizing the total cost of the system with the unavailability constraint case are consistent with each other. The tradeoff feature in the algorithm allows designers to observe results of 'what-if' scenarios of relaxing constraint bounds, thus obtaining the most benefit from the optimization process. An example of this approach applied to an existing communication system in the Andes shows dramatic improvement in reliability for little increase in cost. The algorithm is a real design tool, unlike other existing simulation design tools. The algorithm should be useful for other stochastic systems where component reliability, random supply and demand, and communication are
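    A hedged sketch of this kind of optimization: exhaustive enumeration stands in here for the branch-and-bound search described above, with invented per-unit unavailabilities and costs; each component group is treated as parallel redundant units with independent failures.

```python
from itertools import product

def optimise_redundancy(unavail, cost, budget, max_units=4):
    """Choose how many redundant units of each component type to
    install (1..max_units) so that total cost stays within budget and
    system unavailability is minimised. The system is a series of
    parallel groups: group i fails only if all n_i units fail.
    Exhaustive enumeration is a stand-in for branch-and-bound."""
    best = None
    for counts in product(range(1, max_units + 1), repeat=len(unavail)):
        total_cost = sum(c * n for c, n in zip(cost, counts))
        if total_cost > budget:
            continue
        availability = 1.0
        for u, n in zip(unavail, counts):
            availability *= 1.0 - u ** n
        u_sys = 1.0 - availability
        if best is None or u_sys < best[0]:
            best = (u_sys, counts, total_cost)
    return best

# Hypothetical: PV module, battery, radio (per-unit unavailability, cost)
u_sys, counts, total_cost = optimise_redundancy(
    [0.05, 0.10, 0.02], [400, 150, 250], budget=1500)
```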

  15. Development of Reliable Life Support Systems

    Carter, Layne


    The life support systems on the International Space Station (ISS) are the culmination of an extensive effort encompassing development, design, and test to provide the highest possible confidence in their operation on ISS. Many years of development testing are initially performed to identify the optimum technology and the optimum operational approach. The success of this development program depends on the accuracy of the system interfaces. The critical interfaces include the specific operational environment, the composition of the waste stream to be processed, and the quality of the product. Once the development program is complete, a detailed system schematic is built based on the specific design requirements, followed by component procurement, assembly, and acceptance testing. A successful acceptance test again depends on accurately simulating the anticipated environment on ISS. The ISS Water Recovery System (WRS) provides an excellent example of where this process worked, as well as lessons learned that can be applied to the success of future missions. More importantly, ISS has provided a test bed to identify these design issues. Mechanical design issues have included an unreliable harmonic drive train in the Urine Processor's fluids pump, and seals in the Water Processor's Catalytic Reactor with insufficient life at the operational temperature. Systems issues have included elevated calcium in crew urine (due to a microgravity effect) that resulted in precipitation at the desired water recovery rate, and the presence of an organosilicon compound (dimethylsilanediol) in the condensate that is not well removed by the water treatment process. Modifications to the WRS to address these issues are either complete (and now being evaluated on ISS) or are currently in work to ensure the WRS has the required reliability before embarking on a mission to Mars.

  16. Computer Vision Systems

    Gunasekaran, Sundaram

    Food quality is of paramount consideration for all consumers, and its importance is perhaps second only to food safety. By some definitions, food safety is also incorporated into the broad categorization of food quality. Hence, the need for careful and accurate evaluation of food quality is at the forefront of research and development both in academia and industry. Among the many available methods for food quality evaluation, computer vision has proven to be the most powerful, especially for nondestructively extracting and quantifying many features that have direct relevance to food quality assessment and control. Furthermore, computer vision systems serve to rapidly evaluate the most readily observable food quality attributes: external characteristics such as color, shape, size, and surface texture. In addition, it is now possible, using advanced computer vision technologies, to “see” inside a food product and/or package to examine important quality attributes ordinarily unavailable to human evaluators. With rapid advances in electronic hardware and other associated imaging technologies, the cost-effectiveness and speed of computer vision systems have greatly improved, and many practical systems are already in place in the food industry.

  17. Fault and Defect Tolerant Computer Architectures: Reliable Computing with Unreliable Devices


    and polished using chemical-mechanical polishing (CMP) (Diagram 3). 5. Wet etching is done using hot H3PO4, then chemical dry etching is used to... modelled as a diode with a switchable threshold (i.e., turn-on) voltage. The switches are set or reset by electrochemical reduction or oxidation of the... characterizing the reliability of the overall system are examined. Key definitions: an error is a manifestation of a fault in the system, in

  18. Computational systems chemical biology.

    Oprea, Tudor I; May, Elebeoba E; Leitão, Andrei; Tropsha, Alexander


    There is a critical need for improving the level of chemistry awareness in systems biology. The data and information related to modulation of genes and proteins by small molecules continue to accumulate at the same time as simulation tools in systems biology and whole body physiologically based pharmacokinetics (PBPK) continue to evolve. We called this emerging area at the interface between chemical biology and systems biology systems chemical biology (SCB) (Nat Chem Biol 3: 447-450, 2007).The overarching goal of computational SCB is to develop tools for integrated chemical-biological data acquisition, filtering and processing, by taking into account relevant information related to interactions between proteins and small molecules, possible metabolic transformations of small molecules, as well as associated information related to genes, networks, small molecules, and, where applicable, mutants and variants of those proteins. There is yet an unmet need to develop an integrated in silico pharmacology/systems biology continuum that embeds drug-target-clinical outcome (DTCO) triplets, a capability that is vital to the future of chemical biology, pharmacology, and systems biology. Through the development of the SCB approach, scientists will be able to start addressing, in an integrated simulation environment, questions that make the best use of our ever-growing chemical and biological data repositories at the system-wide level. This chapter reviews some of the major research concepts and describes key components that constitute the emerging area of computational systems chemical biology.

  19. Resilience assessment and evaluation of computing systems

    Wolter, Katinka; Vieira, Marco


    The resilience of computing systems includes their dependability as well as their fault tolerance and security. It defines the ability of a computing system to perform properly in the presence of various kinds of disturbances and to recover from any service degradation. These properties are immensely important in a world where many aspects of our daily life depend on the correct, reliable and secure operation of often large-scale distributed computing systems. Wolter and her co-editors grouped the 20 chapters from leading researchers into seven parts: an introduction and motivating examples,

  20. Response and Reliability Problems of Dynamic Systems

    Nielsen, Søren R. K.

    The present thesis consists of selected parts of the work performed by the author on stochastic dynamics and reliability theory of dynamically excited structures, primarily during the period 1986-1996.

  1. Reliability program requirements for aeronautical and space system contractors


    General reliability program requirements for NASA contracts involving the design, development, fabrication, test, and/or use of aeronautical and space systems, including critical ground support equipment, are prescribed. These requirements call for (1) thorough planning and effective management of the reliability effort; (2) definition of the major reliability tasks and their place as an integral part of the design and development process; (3) planning and evaluating the reliability of the system and its elements (including effects of software interfaces) through a program of analysis, review, and test; and (4) timely status indication by formal documentation and other reporting to facilitate control of the reliability program.

  2. Optimal Bi-Objective Redundancy Allocation for Systems Reliability and Risk Management

    Govindan, Kannan; Jafarian, Ahmad; Azbari, Mostafa E.


    In the big data era, systems reliability is critical to effective systems risk management. In this paper, a novel multiobjective approach, with hybridization of a known algorithm called NSGA-II and an adaptive population-based simulated annealing (APBSA) method is developed to solve the systems...... of domination; and 4) data envelopment analysis. The computational studies have shown that the proposed algorithm is an effective approach for systems reliability and risk management....

  3. High Reliability Oscillators for Terahertz Systems Project

    National Aeronautics and Space Administration — To develop reliable THz sources with high power and high DC-RF efficiency, Virginia Diodes, Inc. will develop a thorough understanding of the complex interactions...

  4. System reliability analysis for kinematic performance of planar mechanisms

    ZHANG YiMin; HUANG XianZhen; ZHANG XuFang; HE XiangDong; WEN BangChun


    Based on the reliability and mechanism kinematic accuracy theories, we propose a general methodology for system reliability analysis of kinematic performance of planar mechanisms. The loop closure equations are used to estimate the kinematic performance errors of planar mechanisms. Reliability and system reliability theories are introduced to develop the limit state functions (LSF) for failure of kinematic performance qualities. The statistical fourth moment method and the Edgeworth series technique are used on system reliability analysis for kinematic performance of planar mechanisms, which relax the restrictions of probability distribution of design variables. Finally, the practicality, efficiency and accuracy of the proposed method are demonstrated by numerical examples.

  5. Reliability Estimations of Control Systems Affected by Several Interference Sources

    Deng Bei-xing; Jiang Ming-hu; Li Xing


    In order to establish the sufficient and necessary condition under which arbitrarily reliable systems cannot be constructed from function elements subject to interference sources, it is important to expand the set of interference sources possessing this property. In this paper, models of two types of interference sources are proposed: an interference source possessing real input vectors, and a constant reliable interference source. We study the reliability of systems affected by these interference sources, and a lower bound on the reliability is presented. The results show that arbitrarily reliable systems can in fact be constructed from elements affected by the above interference sources.

  6. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks

    Pyle, Ryan; Rosenbaum, Robert


    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.

  7. System reliability analysis of layered soil slopes using fully specified slip surfaces and genetic algorithms

    Zeng, Peng; Jiménez Rodríguez, Rafael; Jurado Piña, Rafael


    This paper presents a new approach to identify the fully specified representative slip surfaces (RSSs) of layered soil slopes and to compute their system probability of failure, Pf,s. Spencer's method is used to compute the factors of safety of trial slip surfaces, and the First Order Reliability Method (FORM) is employed to efficiently evaluate their reliability. A custom-designed Genetic Algorithm (GA) is developed to search all the RSSs in only one GA optimization. Taking advantage of the ...

  8. Reliability of lower limb alignment measures using an established landmark-based method with a customized computer software program.

    Sled, Elizabeth A; Sheehy, Lisa M; Felson, David T; Costigan, Patrick A; Lam, Miu; Cooke, T Derek V


    The objective of the study was to evaluate the reliability of frontal plane lower limb alignment measures using a landmark-based method by (1) comparing inter- and intra-reader reliability between measurements of alignment obtained manually with those using a computer program, and (2) determining inter- and intra-reader reliability of computer-assisted alignment measures from full-limb radiographs. An established method for measuring alignment was used, involving selection of 10 femoral and tibial bone landmarks. (1) To compare manual and computer methods, we used digital images and matching paper copies of five alignment patterns simulating healthy and malaligned limbs drawn using AutoCAD. Seven readers were trained in each system. Paper copies were measured manually and repeat measurements were performed daily for 3 days, followed by a similar routine with the digital images using the computer. (2) To examine the reliability of computer-assisted measures from full-limb radiographs, 100 images (200 limbs) were selected as a random sample from 1,500 full-limb digital radiographs which were part of the Multicenter Osteoarthritis Study. Three trained readers used the software program to measure alignment twice from the batch of 100 images, with two or more weeks between batch handling. Manual and computer measures of alignment showed excellent agreement (intraclass correlations [ICCs] 0.977-0.999 for computer analysis; 0.820-0.995 for manual measures). The computer program applied to full-limb radiographs produced alignment measurements with high inter- and intra-reader reliability (ICCs 0.839-0.998). In conclusion, alignment measures using a bone landmark-based approach and a computer program were highly reliable between multiple readers.

  9. LMI approach to reliable H∞ control of linear systems

    Yao Bo; Wang Fuzhong


    This paper is concerned with the reliable design problem for linear systems. A model of actuator faults more practical than complete outage is considered. An LMI approach to designing reliable controllers is presented for the case of actuator faults that can be modeled by a scaling factor. The resulting control systems are reliable in that they provide guaranteed asymptotic stability and H∞ performance when some control component (actuator) faults occur. A numerical example is given to illustrate the design procedure and its effectiveness. Furthermore, the optimal standard controller and the optimal reliable controller are compared to show the necessity of reliable control.

  10. Reliable Provisioning of Spot Instances for Compute-intensive Applications

    Voorsluys, William


    Cloud computing providers are now offering their unused resources for leasing in the spot market, which has been considered the first step towards a full-fledged market economy for computational resources. Spot instances are virtual machines (VMs) available at lower prices than their standard on-demand counterparts. These VMs will run for as long as the current price is lower than the maximum bid price users are willing to pay per hour. Spot instances have been increasingly used for executing compute-intensive applications. In spite of an apparent economical advantage, due to an intermittent nature of biddable resources, application execution times may be prolonged or they may not finish at all. This paper proposes a resource allocation strategy that addresses the problem of running compute-intensive jobs on a pool of intermittent virtual machines, while also aiming to run applications in a fast and economical way. To mitigate potential unavailability periods, a multifaceted fault-aware resource provisioning ...

  11. Fatigue Reliability of Offshore Wind Turbine Systems

    Marquez-Dominguez, Sergio; Sørensen, John Dalsgaard


    Optimization of the design of offshore wind turbine substructures with respect to fatigue loads is an important issue in offshore wind energy. A stochastic model is developed for assessing the fatigue failure reliability. This model can be used for direct probabilistic design and for calibration...... of appropriate partial safety factors / fatigue design factors (FDF) for steel substructures of offshore wind turbines (OWTs). The fatigue life is modeled by the SN approach. Design and limit state equations are established based on the accumulated fatigue damage. The acceptable reliability level for optimal...

  12. Computational area measurement of orbital floor fractures: Reliability, accuracy and rapidity

    Schouman, Thomas; Courvoisier, Delphine S.; Imholz, Benoit; Van Issum, Christopher; Scolozzi, Paolo (University Hospital and Faculty of Medicine of Geneva, 1211 Genève, Switzerland)


    Objective: To evaluate the reliability, accuracy and rapidity of a specific computational method for assessing the orbital floor fracture area on a CT scan. Method: A computer assessment of the area of the fracture, as well as that of the total orbital floor, was determined on CT scans taken from ten patients. The ratio of the fracture's area to the orbital floor area was also calculated. The test–retest precision of measurement calculations was estimated using the Intraclass Correlation Coefficient (ICC) and Dahlberg's formula to assess the agreement across observers and across measures. The time needed for the complete assessment was also evaluated. Results: The Intraclass Correlation Coefficient across observers was 0.92 [0.85;0.96], and the precision of the measures across observers was 4.9%, according to Dahlberg's formula. The mean time needed to make one measurement was 2 min and 39 s (range, 1 min and 32 s to 4 min and 37 s). Conclusion: This study demonstrated that (1) the area of the orbital floor fracture can be rapidly and reliably assessed by using a specific computer system directly on CT scan images; (2) this method has the potential of being routinely used to standardize the post-traumatic evaluation of orbital fractures.
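    Dahlberg's formula used in the abstract above is the standard error measure for double determinations; a minimal sketch, with made-up measurement pairs, might look like:

```python
from math import sqrt

def dahlberg_error(pairs):
    """Dahlberg's double-determination error for n repeated measurement
    pairs: sqrt(sum(d_i^2) / (2n)), where d_i is the difference between
    the first and second measurement of case i."""
    n = len(pairs)
    return sqrt(sum((a - b) ** 2 for a, b in pairs) / (2 * n))

# Hypothetical repeated area measurements (cm^2) for two cases:
print(dahlberg_error([(10.0, 12.0), (8.0, 8.0)]))  # → 1.0
```

    The function name and the sample values are ours; the formula itself is the one the study cites.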

  13. An adaptive neuro fuzzy model for estimating the reliability of component-based software systems

    Kirti Tyagi


    Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs), much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable. A number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, known as an adaptive neuro fuzzy inference system (ANFIS), that is based on these two basic elements of soft computing, and we compare its performance with that of a plain FIS (fuzzy inference system) for different data sets.

  14. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    Gallarno, George [Christian Brothers University]; Rogers, James H. [ORNL]; Maxwell, Don E. [ORNL]


    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large-scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large-scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  15. Reliability of digital reactor protection system based on extenics.

    Zhao, Jing; He, Ya-Nan; Gu, Peng-Fei; Chen, Wei-Hua; Gao, Feng


    After the Fukushima nuclear accident, the safety of nuclear power plants (NPPs) has drawn widespread concern. The reliability of the reactor protection system (RPS) is directly related to the safety of NPPs; however, it is difficult to accurately evaluate the reliability of a digital RPS. Methods based on estimated probabilities carry uncertainties, cannot reflect the reliability status of the RPS dynamically, and provide little support for maintenance and troubleshooting. In this paper, a quantitative reliability analysis method based on extenics is proposed for the digital (safety-critical) RPS, by which the relationship between the reliability and the response time of the RPS is constructed. As an example, the reliability of the RPS for a CPR1000 NPP is modeled and analyzed by the proposed method. The results show that the proposed method can estimate the RPS reliability effectively and provide support for maintenance and troubleshooting of digital RPS systems.

  16. Nested Transactions: An Approach to Reliable Distributed Computing.


    Undoubtedly such universal use of computers and rapid exchange of information will have a dramatic impact: social, economic, and political. Distributed...level transaction, these committed inferiors are successful inferiors of the top-level transaction, too. Therefore q will indeed get a commit

  17. Integrated Software Architecture-Based Reliability Prediction for IT Systems

    Brosch, Franz


    With the increasing importance of reliability in business and industrial IT systems, new techniques for architecture-based software reliability prediction are becoming an integral part of the development process. This dissertation thesis introduces a novel reliability modelling and prediction technique that considers the software architecture with its component structure, control and data flow, recovery mechanisms, its deployment to distributed hardware resources and the system's usage p...

  18. The Computational Sensorimotor Systems Laboratory

    Federal Laboratory Consortium — The Computational Sensorimotor Systems Lab focuses on the exploration, analysis, modeling and implementation of biological sensorimotor systems for both scientific...

  19. Intraobserver and intermethod reliability for using two different computer programs in preoperative lower limb alignment analysis

    Mohamed Kenawey


    Conclusion: Computer-assisted lower limb alignment analysis is reliable whether using a graphics editing program or specialized planning software. However, slightly higher variability can be expected for angles away from the knee joint.

  20. Test–retest reliability and validity of self-reported duration of computer use at work

    IJmker, S.; Leijssen, J.N.M.; Blatter, B.M.; Beek, A.J. van der; Mechelen, W. van; Bongers, P.M.


    This study evaluated the test–retest reliability and the validity of self-reported duration of computer use at work. Test–retest reliability was studied among 81 employees of a research department of a university medical center. The employees filled out a web-based questionnaire twice with an in-bet...

  1. Computing the SKT Reliability of Acyclic Directed Networks Using Factoring Method

    KONG Fanjia; WANG Guangxing


    This paper presents a factoring algorithm for computing source-to-K terminal (SKT) reliability, the probability that a source s can send a message to a specified set of terminals K, in acyclic directed networks (AD-networks) in which both nodes and edges can fail. Based on the pivotal decomposition theorem, a new formula is derived for computing the SKT reliability of AD-networks. By establishing a topological property of AD-networks, it is shown that the SKT reliability of AD-networks can be computed by recursively applying this formula. Two new reliability-preserving reductions are also introduced. The recursion tree generated by the presented algorithm has at most 2^(|V| - |K| - |C|) leaf nodes, where |V| and |K| are the numbers of nodes and terminals, respectively, while |C| is the number of nodes satisfying some specified conditions. The computational complexity of the new algorithm is O(|E||V|·2^(|V| - |K| - |C|)) in the worst case, where |E| is the number of edges. For source-to-all-terminal (SAT) reliability, its computational complexity is O(|E|). Comparison of the new algorithm with existing ones indicates that the new algorithm is more efficient for computing the SKT reliability of AD-networks.
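    The pivotal decomposition (factoring) idea behind such algorithms can be illustrated at a small scale. The sketch below handles only single-terminal reliability with perfect nodes and failing edges; the paper's algorithm additionally handles node failures, terminal sets K, and reliability-preserving reductions, and all names here are ours:

```python
def st_reliability(edges, s, t):
    """Exact s-t reliability by pivotal decomposition (factoring):
    R(G) = p_e * R(G | e up) + (1 - p_e) * R(G | e down).
    `edges` is a list of ((u, v), p) pairs; nodes are assumed perfect."""
    def reachable(up):
        # BFS from s over edges known to be up
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for (a, b) in up:
                if a == u and b not in seen:
                    seen.add(b)
                    stack.append(b)
        return t in seen

    def factor(undecided, up):
        if not undecided:                 # leaf of the recursion tree
            return 1.0 if reachable(up) else 0.0
        (e, p), rest = undecided[0], undecided[1:]
        return p * factor(rest, up + [e]) + (1 - p) * factor(rest, up)

    return factor(edges, [])

# s -> a -> t path (0.9 each) in parallel with a direct s -> t edge (0.8):
r = st_reliability([(("s", "a"), 0.9), (("a", "t"), 0.9), (("s", "t"), 0.8)], "s", "t")
print(round(r, 3))  # → 0.962
```

    Each factoring step doubles the recursion tree, which is why leaf counts (and worst-case complexity) are exponential in the number of pivoted elements.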

  2. Reliability models applicable to space telescope solar array assembly system

    Patil, S. A.


    A complex system may consist of a number of subsystems with several components in series, parallel, or a combination of both series and parallel. In order to predict how well the system will perform, it is necessary to know the reliabilities of the subsystems and the reliability of the whole system. The objective of the present study is to develop mathematical models of reliability which are applicable to complex systems. The models are determined by assuming k failures out of n components in a subsystem. By taking k = 1 and k = n, these models reduce to series and parallel models; hence, the models can be specialized to parallel, series, and combination systems. The models are developed by assuming the failure rates of the components to be functions of time and, as such, can be applied to processes with or without aging effects. The reliability models are further specialized to the Space Telescope Solar Array (STSA) System. The STSA consists of 20 identical solar panel assemblies (SPA's). The reliabilities of the SPA's are determined by the reliabilities of solar cell strings, interconnects, and diodes. The estimates of the reliability of the system for one to five years are calculated by using the reliability estimates of solar cells and interconnects given in ESA documents. Aging effects in relation to breaks in interconnects are discussed.
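    For identical components with constant reliability r, the "k failures out of n" model described above reduces to a closed-form binomial sum; a minimal sketch (our own formulation, ignoring the paper's time-dependent failure rates):

```python
from math import comb

def subsystem_reliability(n, k, r):
    """Reliability of a subsystem of n identical components (each with
    reliability r) that fails once k components have failed, i.e. it
    survives at most k - 1 failures."""
    return sum(comb(n, j) * (1 - r) ** j * r ** (n - j) for j in range(k))

r = 0.9
print(subsystem_reliability(3, 1, r))  # series model: equals r**3
print(subsystem_reliability(3, 3, r))  # parallel model: equals 1 - (1 - r)**3
```

    Setting k = 1 recovers the series model and k = n the parallel model, matching the special cases noted in the abstract.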

  3. Reliability impact of solar electric generation upon electric utility systems

    Day, J. T.; Hobbs, W. J.


    The introduction of solar electric systems into an electric utility grid brings new considerations in the assessment of the utility's power supply reliability. This paper summarizes a methodology for estimating the reliability impact of solar electric technologies upon electric utilities for value assessment and planning purposes. Utility expansion and operating impacts are considered. Sample results from photovoltaic analysis show that solar electric plants can increase the reliable load-carrying capability of a utility system. However, the load-carrying capability of the incremental power tends to decrease, particularly at significant capacity penetration levels. Other factors influencing reliability impact are identified.

  4. Results of a Demonstration Assessment of Passive System Reliability Utilizing the Reliability Method for Passive Systems (RMPS)

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia; Grelle, Austin


    Advanced small modular reactor designs include many advantageous design features such as passively driven safety systems that are arguably more reliable and cost effective relative to conventional active systems. Despite their attractiveness, a reliability assessment of passive systems can be difficult using conventional reliability methods due to the nature of passive systems. Simple deviations in boundary conditions can induce functional failures in a passive system, and intermediate or unexpected operating modes can also occur. As part of an ongoing project, Argonne National Laboratory is investigating various methodologies to address passive system reliability. The Reliability Method for Passive Systems (RMPS), a systematic approach for examining reliability, is one technique chosen for this analysis. This methodology is combined with the Risk-Informed Safety Margin Characterization (RISMC) approach to assess the reliability of a passive system and the impact of its associated uncertainties. For this demonstration problem, an integrated plant model of an advanced small modular pool-type sodium fast reactor with a passive reactor cavity cooling system is subjected to a station blackout using RELAP5-3D. This paper discusses important aspects of the reliability assessment, including deployment of the methodology, the uncertainty identification and quantification process, and identification of key risk metrics.

  5. Computer systems a programmer's perspective

    Bryant, Randal E


    Computer systems: A Programmer’s Perspective explains the underlying elements common among all computer systems and how they affect general application performance. Written from the programmer’s perspective, this book strives to teach readers how understanding basic elements of computer systems and executing real practice can lead them to create better programs. Spanning across computer science themes such as hardware architecture, the operating system, and systems software, the Third Edition serves as a comprehensive introduction to programming. This book strives to create programmers who understand all elements of computer systems and will be able to engage in any application of the field--from fixing faulty software, to writing more capable programs, to avoiding common flaws. It lays the groundwork for readers to delve into more intensive topics such as computer architecture, embedded systems, and cybersecurity. This book focuses on systems that execute an x86-64 machine code, and recommends th...

  6. Reliability of a computer and Internet survey (Computer User Profile) used by adults with and without traumatic brain injury (TBI).

    Kilov, Andrea M; Togher, Leanne; Power, Emma


    To determine test-retest reliability of the 'Computer User Profile' (CUP) in people with and without TBI. The CUP was administered on two occasions to people with and without TBI. The CUP investigated the nature and frequency of participants' computer and Internet use. Intra-class correlation coefficients and kappa coefficients were conducted to measure reliability of individual CUP items. Descriptive statistics were used to summarize content of responses. Sixteen adults with TBI and 40 adults without TBI were included in the study. All participants were reliable in reporting demographic information, frequency of social communication and leisure activities and computer/Internet habits and usage. Adults with TBI were reliable in 77% of their responses to survey items. Adults without TBI were reliable in 88% of their responses to survey items. The CUP was practical and valuable in capturing information about social, leisure, communication and computer/Internet habits of people with and without TBI. Adults without TBI scored more items with satisfactory reliability overall in their surveys. Future studies may include larger samples and could also include an exploration of how people with/without TBI use other digital communication technologies. This may provide further information on determining technology readiness for people with TBI in therapy programmes.
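    The kappa coefficients mentioned above measure chance-corrected test-retest agreement on categorical items; a minimal sketch of Cohen's kappa, with made-up responses:

```python
def cohens_kappa(pairs):
    """Cohen's kappa for two administrations of a categorical item:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from the two marginal distributions."""
    n = len(pairs)
    p_o = sum(a == b for a, b in pairs) / n
    cats = {c for pair in pairs for c in pair}
    p_e = sum((sum(a == c for a, _ in pairs) / n) *
              (sum(b == c for _, b in pairs) / n) for c in cats)
    return (p_o - p_e) / (1 - p_e)

# Four participants answering the same yes/no item twice:
print(cohens_kappa([("y", "y"), ("y", "n"), ("n", "n"), ("n", "n")]))  # → 0.5
```

    This sketch omits the degenerate case p_e = 1 (all answers identical on both occasions), which the study's statistical software would handle separately.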

  7. Derivation of Reliability Index Vector Formula for Series System and Its Application

    KANG Hai-gui; ZHANG Jing; SUN Ying-wei; GUO Wei


    In this study, a reliability index vector formula is proposed for a series system with two failure modes, in terms of the concepts of the reliability index vector and equivalent failure modes. Firstly, the reliability index vector is introduced to determine the correlation coefficient between two failure modes; then, the reliability index vector of a series system can be obtained. Several numerical cases and an analysis of an offshore platform are presented, and the results show that the scheme provided here has better computational accuracy and a simpler calculation process for series-system reliability calculations compared with other methods. This scheme is also more convenient for engineering applications.

  8. Central nervous system and computation.

    Guidolin, Diego; Albertin, Giovanna; Guescini, Michele; Fuxe, Kjell; Agnati, Luigi F


    Computational systems are useful in neuroscience in many ways. For instance, they may be used to construct maps of brain structure and activation, or to describe brain processes mathematically. Furthermore, they inspired a powerful theory of brain function, in which the brain is viewed as a system characterized by intrinsic computational activities or as a "computational information processor." Although many neuroscientists believe that neural systems really perform computations, some are more cautious about computationalism or reject it. Thus, does the brain really compute? Answering this question requires getting clear on a definition of computation that is able to draw a line between physical systems that compute and systems that do not, so that we can discern on which side of the line the brain (or parts of it) could fall. In order to shed some light on the role of computational processes in brain function, available neurobiological data will be summarized from the standpoint of a recently proposed taxonomy of notions of computation, with the aim of identifying which brain processes can be considered computational. The emerging picture shows the brain as a very peculiar system, in which genuine computational features act in concert with noncomputational dynamical processes, leading to continuous self-organization and remodeling under the action of external stimuli from the environment and from the rest of the organism.

  9. Model for personal computer system selection.

    Blide, L


    Successful computer software and hardware selection is best accomplished by following an organized approach such as the one described in this article. The first step is to decide what you want to be able to do with the computer. Secondly, select software that is user friendly, well documented, bug free, and that does what you want done. Next, you select the computer, printer and other needed equipment from the group of machines on which the software will run. Key factors here are reliability and compatibility with other microcomputers in your facility. Lastly, you select a reliable vendor who will provide good, dependable service in a reasonable time. The ability to correctly select computer software and hardware is a key skill needed by medical record professionals today and in the future. Professionals can make quality computer decisions by selecting software and systems that are compatible with other computers in their facility, allow for future networking, ease of use, and adaptability for expansion as new applications are identified. The key to success is to not only provide for your present needs, but to be prepared for future rapid expansion and change in your computer usage as technology and your skills grow.

  10. PV system field experience and reliability

    Durand, Steven; Rosenthal, Andrew; Thomas, Mike


    Hybrid power systems consisting of battery inverters coupled with diesel, propane, or gasoline engine-driven electrical generators, and photovoltaic arrays are being used in many remote locations. The potential cost advantages of hybrid systems over simple engine-driven generator systems are causing hybrid systems to be considered for numerous applications including single-family residential, communications, and village power. This paper discusses the various design constraints of such systems and presents one technique for reducing hybrid system losses. The Southwest Technology Development Institute under contract to the National Renewable Energy Laboratory and Sandia National Laboratories has been installing data acquisition systems (DAS) on a number of small and large hybrid PV systems. These systems range from small residential systems (1 kW PV - 7 kW generator), to medium sized systems (10 kW PV - 20 kW generator), to larger systems (100 kW PV - 200 kW generator). Even larger systems are being installed with hundreds of kilowatts of PV modules, multiple wind machines, and larger diesel generators.

  11. Reliability analysis and initial requirements for FC systems and stacks

    Åström, K.; Fontell, E.; Virtanen, S.

    In the year 2000 Wärtsilä Corporation started an R&D program to develop SOFC systems for CHP applications. The program aims to bring to the market highly efficient, clean and cost competitive fuel cell systems with rated power output in the range of 50-250 kW for distributed generation and marine applications. In the program Wärtsilä focuses on system integration and development. System reliability and availability are key issues determining the competitiveness of the SOFC technology. In Wärtsilä, methods have been implemented for analysing the system in respect to reliability and safety as well as for defining reliability requirements for system components. A fault tree representation is used as the basis for reliability prediction analysis. A dynamic simulation technique has been developed to allow for non-static properties in the fault tree logic modelling. Special emphasis has been placed on reliability analysis of the fuel cell stacks in the system. A method for assessing reliability and critical failure predictability requirements for fuel cell stacks in a system consisting of several stacks has been developed. The method is based on a qualitative model of the stack configuration where each stack can be in a functional, partially failed or critically failed state, each of the states having different failure rates and effects on the system behaviour. The main purpose of the method is to understand the effect of stack reliability, critical failure predictability and operating strategy on the system reliability and availability. An example configuration, consisting of 5 × 5 stacks (series of 5 sets of 5 parallel stacks) is analysed in respect to stack reliability requirements as a function of predictability of critical failures and Weibull shape factor of failure rate distributions.
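    Under the simplifying assumption that each stack is either fully up or fully down (the paper's model additionally distinguishes partial and critical failure states, and a real group may need more than one working stack for rated power), the 5 × 5 example configuration is a series arrangement of parallel groups:

```python
def series_of_parallel(r_stack, n_parallel=5, n_series=5):
    """Reliability of n_series groups connected in series, each of
    n_parallel identical stacks in parallel; a group works if at least
    one of its stacks works (a deliberate simplification of the paper's
    multi-state stack model)."""
    group = 1.0 - (1.0 - r_stack) ** n_parallel   # parallel group
    return group ** n_series                      # groups in series

print(series_of_parallel(0.9))  # five 5-stack groups, stack reliability 0.9
```

    The function name and the stack reliability of 0.9 are illustrative assumptions, not values from the paper.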

  12. A Quantitative Risk Analysis Framework for Evaluating and Monitoring Operational Reliability of Cloud Computing

    Islam, Muhammad Faysal


    Cloud computing offers the advantage of on-demand, reliable and cost efficient computing solutions without the capital investment and management resources to build and maintain in-house data centers and network infrastructures. Scalability of cloud solutions enable consumers to upgrade or downsize their services as needed. In a cloud environment,…

  14. Assessment on reliability of water quality in water distribution systems

    伍悦滨; 田海; 王龙岩


    Water leaving the treatment works is usually of high quality, but its properties change during the transportation stage. With increasing awareness of the quality of service provided within the water industry today, assessing the reliability of the water quality in a distribution system has become of major significance for decisions on system operation based on water quality in distribution networks. Using together a water age model, a chlorine decay model and a model of acceptable maximum water age, the reliability of the water quality in a distribution system can be assessed. First, the nodal water age values in a complex distribution system are calculated by the water age model. Then, the acceptable maximum water age value in the distribution system is obtained based on the chlorine decay model. The nodes at which the water age values are below the maximum value are regarded as reliable nodes. Finally, the reliability index, a percentile weighted by the nodal demands, reflects the reliability of the water quality in the distribution system. The approach has been applied in a real water distribution network. The contour plot based on the water age values determines a surface of the reliability of the water quality. At any time, this surface is used to locate high-water-age but poor-reliability areas, which identify parts of the network that may be of poor water quality. As a result, the water age contour provides a valuable aid for straight insight into the water quality in the distribution system.
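    The demand-weighted reliability index described above can be sketched as the fraction of total demand served at nodes whose water age is within the acceptable maximum; node names and values here are illustrative, and the nodal ages and maximum age would come from the water age and chlorine decay models:

```python
def water_quality_reliability(age, demand, max_age):
    """Demand-weighted reliability index for a distribution network:
    share of total nodal demand delivered at nodes whose water age is
    within the acceptable maximum."""
    total = sum(demand.values())
    served = sum(q for node, q in demand.items() if age[node] <= max_age)
    return served / total

age = {"n1": 6.0, "n2": 18.0, "n3": 30.0}      # hours, from a water age model
demand = {"n1": 40.0, "n2": 40.0, "n3": 20.0}  # m^3/h nodal demands
print(water_quality_reliability(age, demand, 24.0))  # → 0.8
```

    Here node n3 exceeds the 24-hour limit, so only 80 of the 100 units of demand are served reliably.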



Recently, considerable emphasis has been placed on reliability-based optimization models for water distribution systems. However, considerable computational effort is needed to determine the reliability-based optimal design even of mid-sized networks, let alone large ones. In this paper, a new methodology is presented for the reliability analysis of water distribution systems. The methodology consists of two procedures. In the first, the optimal design is constrained only by the pressure heads at demand nodes and is solved in GRG2. Because the reliability constraints are removed from the optimization problem, a large number of simulations need not be conducted, so the computing time is greatly decreased. The second procedure is a linear optimal search, in which the optimal results obtained by GRG2 are adjusted to satisfy the reliability constraints. The result is a set of commercial pipe diameters that satisfies both the pressure-head and the nodal reliability constraints. The computational burden is thus significantly decreased, making reliability-based optimization more practical.

  16. Ultra-Reliable Communication in 5G Wireless Systems

    Popovski, Petar


—Wireless 5G systems will not only be “4G, but faster”. One of the novel features discussed in relation to 5G is Ultra-Reliable Communication (URC), an operation mode not present in today’s wireless systems. URC refers to the provision of a certain level of communication service almost 100 % of the time....... Example URC applications include reliable cloud connectivity, critical connections for industrial automation and reliable wireless coordination among vehicles. This paper puts forward a systematic view on URC in 5G wireless systems. It starts by analyzing the fundamental mechanisms that constitute......-term URC (URC-S). The second dimension is represented by the type of reliability impairment that can affect the communication reliability in a given scenario. The main objective of this paper is to create the context for defining and solving the new engineering problems posed by URC in 5G....

  17. Reliability Estimations of Control Systems Effected by Several Interference Sources

    Deng Bei-xing; Jiang Ming-hu; Li Xing


In order to establish the sufficient and necessary condition under which arbitrarily reliable systems cannot be constructed from function elements subject to interference sources, it is very important to expand the set of interference sources with the above property. In this paper, models of two types of interference sources are presented: an interference source possessing real input vectors, and a constant reliable interference source. We study the reliability of systems affected by these interference sources, and a lower bound on the reliability is presented. The results show that arbitrarily reliable systems can in fact be constructed from elements affected by the above interference sources.

  18. Ubiquitous Computing Systems

    Bardram, Jakob Eyvind; Friday, Adrian


    First introduced two decades ago, the term ubiquitous computing is now part of the common vernacular. Ubicomp, as it is commonly called, has grown not just quickly but broadly so as to encompass a wealth of concepts and technology that serves any number of purposes across all of human endeavor......, an original ubicomp pioneer, Ubiquitous Computing Fundamentals brings together eleven ubiquitous computing trailblazers who each report on his or her area of expertise. Starting with a historical introduction, the book moves on to summarize a number of self-contained topics. Taking a decidedly human...... perspective, the book includes discussion on how to observe people in their natural environments and evaluate the critical points where ubiquitous computing technologies can improve their lives. Among a range of topics this book examines: How to build an infrastructure that supports ubiquitous computing...

  19. Seismic reliability analysis of large electric power systems

    何军; 李杰


Based on De Morgan's laws and Boolean simplification, a recursive decomposition method is introduced in this paper to identify the main exclusive safe paths and failed paths of a network. The reliability or the reliability bound of a network can then be conveniently expressed as the sum of the joint probabilities of these paths. Under the multivariate normal distribution assumption, a conditioned reliability index method is developed to evaluate the joint probabilities of the various exclusive safe and failed paths and, finally, the seismic reliability or the reliability bound of an electric power system. Examples given in the paper show that the method is very simple and provides accurate results in seismic reliability analysis.

  20. Reliability of Industrial Computer Management Platform

    李春霞; 唐怀斌; 贺孝珍; 刘兴莉; 隆萍


The characteristic quantities of industrial computer reliability are discussed. Technologies, methods and a management system that ensure the achievement and continuous growth of industrial computer reliability are put forward, covering product design, development, production, management and other aspects, and views on establishing an enterprise reliability management platform are presented.

  1. First Assessment of Reliability Data for the LHC Accelerator and Detector Cryogenic System Components

    Perinic, G; Alonso-Canella, I; Balle, C; Barth, K; Bel, J F; Benda, V; Bremer, J; Brodzinski, K; Casas-Cubillos, J; Cuccuru, G; Cugnet, M; Delikaris, D; Delruelle, N; Dufay-Chanat, L; Fabre, C; Ferlin, G; Fluder, C; Gavard, E; Girardot, R; Haug, F; Herblin, L; Junker, S; Klabi, T; Knoops, S; Lamboy, J P; Legrand, D; Metselaar, J; Park, A; Perin, A; Pezzetti, M; Penacoba-Fernandez, G; Pirotte, O; Rogez, E; Suraci, A; Stewart, L; Tavian, L J; Tovar-Gonzalez, A; Van Weelderen, R; Vauthier, N; Vullierme, B; Wagner, U


The Large Hadron Collider (LHC) cryogenic system comprises eight independent refrigeration and distribution systems that supply the eight 3.3 km long accelerator sectors with cryogenic refrigeration power, as well as four refrigeration systems for the needs of the ATLAS and CMS detectors. In order to ensure the highest possible reliability of the installations, it is important to apply a reliability-centred approach to maintenance. Even though large-scale cryogenic refrigeration has existed since the mid-20th century, very little third-party reliability data is available today. CERN started to collect data with its computer-aided maintenance management system (CAMMS) in 2009, when the accelerator went into normal operation. This paper presents reliability observations from the operation and maintenance side, as well as statistical data collected by means of the CAMMS system.

  2. Evaluation of nodal reliability risk in a deregulated power system with photovoltaic power penetration

    Zhao, Qian; Wang, Peng; Goel, Lalit


    and customer reliability requirements are correlated with energy and reserve prices. Therefore a new method should be developed to evaluate the impacts of PV power on customer reliability and system reserve deployment in the new environment. In this study, a method based on the pseudo-sequential Monte Carlo......Owing to the intermittent characteristic of solar radiation, power system reliability may be affected with high photovoltaic (PV) power penetration. To reduce large variation of PV power, additional system balancing reserve would be needed. In deregulated power systems, deployment of reserves...... simulation technique has been proposed to evaluate the reserve deployment and customers' nodal reliability with high PV power penetration. The proposed method can effectively model the chronological aspects and stochastic characteristics of PV power and system operation with high computation efficiency...

  3. Reliability Growth Analysis of Satellite Systems


    obtained. In addition, the Cumulative Intensity Function (CIF) of a family of satellite systems was analyzed to assess its similarity to that of a... Figures 7a through 7i display the real CIF for a variety of GOES missions. These cumulative intensity functions have shapes similar to the

  4. Reliable High Performance Processing System (RHPPS) Project

    National Aeronautics and Space Administration — NASA's exploration, science, and space operations systems are critically dependent on the hardware technologies used in their implementation. Specifically, the...

  5. Reliability of Structural Systems with Correlated Elements

    Thoft-Christensen, Palle; Sørensen, John Dalsgaard


Calculation of the probability of failure of a system with correlated members is usually a difficult and time-consuming numerical problem. However, for some types of systems with equally correlated elements this calculation can be performed in a simple way. This has suggested two new methods based...

  6. Securely Data Forwarding and Maintaining Reliability of Data in Cloud Computing

    Sonali A.Wanjari


Full Text Available The cloud works as a set of online storage servers and provides long-term storage services over the Internet. Since it acts as a third party in which users store their data, data confidentiality, robustness and functionality are required, and encryption and encoding methods are used to meet these requirements. A proxy re-encryption scheme is integrated with a decentralized erasure code to formulate a secure distributed storage system. The distributed storage system not only supports secure, robust data storage and retrieval, but also lets a user forward his data to another user without first retrieving it. A backup concept on the same server allows users to recover failed data in the storage server and likewise forward data to another user without retrieving it. This is an attempt to provide a lightweight approach that protects data access in distributed storage servers. The key properties of confidentiality (for security), robustness (for healthy data), reliability (for flexible data) and availability are achieved for data stored in the cloud, addressing the problem of securely forwarding data and maintaining its reliability in cloud computing using different methodologies and technologies.
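The erasure-coding idea underlying this storage scheme can be illustrated with the simplest possible code: one XOR parity share. This is a didactic stand-in only, not the paper's proxy re-encryption or decentralized erasure code; the data and share layout are invented.

```python
# Toy illustration of erasure-coded storage: split the data into two
# equal shares plus an XOR parity share, so that any single lost share
# can be rebuilt from the two survivors. Purely didactic; real systems
# use stronger codes (e.g. Reed-Solomon) over many servers.

def make_shares(data: bytes):
    half = len(data) // 2
    a, b = data[:half], data[half:half * 2]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return a, b, parity

def recover_a(b: bytes, parity: bytes) -> bytes:
    """Rebuild the lost share 'a' from the surviving share and parity."""
    return bytes(x ^ y for x, y in zip(b, parity))

a, b, parity = make_shares(b"clouddata!")
assert recover_a(b, parity) == a  # share "a" rebuilt from the survivors
```

The same XOR trick recovers share `b` from `a` and the parity, which is why one failed storage server does not lose data.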

  7. Engineering systems reliability, safety, and maintenance an integrated approach

    Dhillon, B S


    Today, engineering systems are an important element of the world economy and each year billions of dollars are spent to develop, manufacture, operate, and maintain various types of engineering systems around the globe. Many of these systems are highly sophisticated and contain millions of parts. For example, a Boeing jumbo 747 is made up of approximately 4.5 million parts including fasteners. Needless to say, reliability, safety, and maintenance of systems such as this have become more important than ever before.  Global competition and other factors are forcing manufacturers to produce highly reliable, safe, and maintainable engineering products. Therefore, there is a definite need for the reliability, safety, and maintenance professionals to work closely during design and other phases. Engineering Systems Reliability, Safety, and Maintenance: An Integrated Approach eliminates the need to consult many different and diverse sources in the hunt for the information required to design better engineering syste...

  8. LeoTask: a fast, flexible and reliable framework for computational research

    Zhang, Changwang; Zhou, Shi; Chain, Benjamin M


    LeoTask is a Java library for computation-intensive and time-consuming research tasks. It automatically executes tasks in parallel on multiple CPU cores on a computing facility. It uses a configuration file to enable automatic exploration of parameter space and flexible aggregation of results, and therefore allows researchers to focus on programming the key logic of a computing task. It also supports reliable recovery from interruptions, dynamic and cloneable networks, and integration with th...

  9. Reliability analysis of ship structure system with multi-defects


This paper analyzes the influence of multiple defects, including initial distortions, welding residual stresses, cracks and local dents, on the ultimate strength of the plate element, and works out expressions for the reliability calculation and sensitivity analysis of the plate element. Reliability analysis is then performed for a system of plate elements with multiple defects. The failure mechanism, failure paths and the approach to calculating the global reliability index are also worked out. After plate elements with multiple defects fail, the formula for the reverse node forces acting on the residual structure is deduced, as are the sensitivity expressions of the system reliability index. This ensures accuracy and rationality in the reliability analysis, and makes it convenient to find the weak plate elements that affect the reliability of the structure system. Finally, to validate the proposed approach, a numerical example of a ship cabin is used to compare the reliability and sensitivity analysis of a structure system with multiple defects against those of a defect-free system. The approach has implications for structural design and for rational maintenance and renewal strategies.

  10. A Passive System Reliability Analysis for a Station Blackout

    Brunett, Acacia; Bucknor, Matthew; Grabaskas, David; Sofu, Tanju; Grelle, Austin


The latest iterations of advanced reactor designs have included increased reliance on passive safety systems to maintain plant integrity during unplanned sequences. While these systems are advantageous in reducing the reliance on human intervention and availability of power, the phenomenological foundations on which these systems are built require a novel approach to a reliability assessment. Passive systems possess the unique ability to fail functionally without failing physically, a result of their explicit dependency on existing boundary conditions that drive their operating mode and capacity. Argonne National Laboratory is performing ongoing analyses that demonstrate various methodologies for the characterization of passive system reliability within a probabilistic framework. Two reliability analysis techniques are utilized in this work. The first approach, the Reliability Method for Passive Systems, provides a mechanistic technique employing deterministic models and conventional static event trees. The second approach, a simulation-based technique, utilizes discrete dynamic event trees to treat time-dependent phenomena during scenario evolution. For this demonstration analysis, both reliability assessment techniques are used to analyze an extended station blackout in a pool-type sodium fast reactor (SFR) coupled with a reactor cavity cooling system (RCCS). This work demonstrates the entire process of a passive system reliability analysis, including identification of important parameters and failure metrics, treatment of uncertainties and analysis of results.

  11. Diverse Redundant Systems for Reliable Space Life Support

    Jones, Harry W.


Reliable life support systems are required for deep space missions. The probability of a fatal life support failure should be less than one in a thousand in a multi-year mission. It is far too expensive to develop a single system with such high reliability. Using three redundant units would require only that each have a failure probability of one in ten over the mission. Since system development cost is inversely proportional to the required failure probability, this would cut cost by a factor of one hundred. Using replaceable subsystems instead of full systems would further cut cost. Using full sets of replaceable components improves reliability more than using complete systems as spares, since a set of components can repair many different failures instead of just one. Replaceable components would require more tools, space, and planning than full systems or replaceable subsystems. However, identical system redundancy cannot be relied on in practice: common cause failures can disable all the identical redundant systems, and typical levels of common cause failure will defeat redundancy greater than two. Diverse redundant systems are therefore required for reliable space life support. Three, four, or five diverse redundant systems could be needed for sufficient reliability. One system with lower level repair could be substituted for two diverse systems to save cost.
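The redundancy arithmetic in this abstract, and the common-cause caveat, can be checked in a few lines. The beta-factor model below is a standard simplification, not the author's method, and the numbers are illustrative assumptions.

```python
# Back-of-the-envelope check of the redundancy argument: with independent
# units each failing with probability 0.1 over the mission, three redundant
# units fail together with probability 0.1**3 = 0.001 (one in a thousand).
# The beta-factor model then shows how common-cause failures defeat this.

def redundant_failure_prob(p_unit, n_units):
    """All n independent units must fail for the system to fail."""
    return p_unit ** n_units

def with_common_cause(p_unit, n_units, beta):
    """Beta-factor model: a fraction beta of unit failures hit all units."""
    independent = ((1.0 - beta) * p_unit) ** n_units
    return beta * p_unit + independent

print(round(redundant_failure_prob(0.1, 3), 6))   # 0.001
print(round(with_common_cause(0.1, 3, 0.05), 6))  # common cause dominates
```

Even a modest 5% common-cause fraction raises the three-unit failure probability by roughly a factor of six, which is the quantitative content of the claim that identical redundancy greater than two buys little.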

  12. System reliability effects in wind turbine blades

    Dimitrov, Nikolay Krasimirov; Friis-Hansen, Peter; Berggreen, Christian


    Laminated composite sandwich panels have a layered structure, where individual layers have randomly varying stiffness and strength properties. The presence of multiple failure modes and load redistribution following partial failures are the reason for laminated composites to exhibit system behavior...

  13. Capability-based computer systems

    Levy, Henry M


    Capability-Based Computer Systems focuses on computer programs and their capabilities. The text first elaborates capability- and object-based system concepts, including capability-based systems, object-based approach, and summary. The book then describes early descriptor architectures and explains the Burroughs B5000, Rice University Computer, and Basic Language Machine. The text also focuses on early capability architectures. Dennis and Van Horn's Supervisor; CAL-TSS System; MIT PDP-1 Timesharing System; and Chicago Magic Number Machine are discussed. The book then describes Plessey System 25

  14. New computing systems and their impact on computational mechanics

    Noor, Ahmed K.


    Recent advances in computer technology that are likely to impact computational mechanics are reviewed. The technical needs for computational mechanics technology are outlined. The major features of new and projected computing systems, including supersystems, parallel processing machines, special-purpose computing hardware, and small systems are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed, and a novel partitioning strategy is outlined for maximizing the degree of parallelism on multiprocessor computers with a shared memory.

  15. Nonlinear system identification in offshore structural reliability

    Spanos, P.D. [Rice Univ., Houston, TX (United States); Lu, R. [Hudson Engineering Corporation, Houston, TX (United States)


Nonlinear forces acting on offshore structures are examined from a system identification perspective. The nonlinearities are induced by ocean waves and may become significant in many situations. They are not necessarily in the form of Morison's equation. Various wave force models are examined. The force function is either decomposed into a set of base functions or expanded in terms of the wave and structural kinematics. The resulting nonlinear system is decomposed into a number of parallel no-memory nonlinear systems, each followed by a finite-memory linear system. A conditioning procedure is applied to decouple these linear sub-systems; a frequency domain technique involving autospectra and cross-spectra is employed to identify the linear transfer functions. The structural properties and the force transfer parameters are determined with the aid of the coherence functions. The method is verified using simulated data. It provides a versatile and noniterative approach for dealing with nonlinear interaction problems encountered in offshore structural analysis and design.

  16. Reliability models of belt drive systems under slipping failure mode

    Peng Gao


Full Text Available Conventional reliability assessment and reliability-based optimal design of belt drives are based on the stress–strength interference model. However, the stress–strength interference model is essentially static, and the sensitivity of belt drive reliability to design parameters needs further investigation. In this article, time-dependent factors that contribute to the dynamic characteristics of reliability are pointed out. Moreover, dynamic reliability models and failure rate models of belt drive systems under the slipping failure mode are developed. Furthermore, dynamic sensitivity models of belt drive reliability based on the proposed dynamic reliability models are proposed. In addition, numerical examples are given to illustrate the proposed models and to analyze the influences of design parameters on the dynamic characteristics of reliability, failure rate, and sensitivity functions. The results show that the statistical properties of the design parameters influence the reliability and failure rate of the belt drive differently for different parameter values and operational durations.
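The static stress–strength interference model this abstract takes as its starting point has a closed form for independent normal stress and strength: reliability is the standard normal CDF of the safety margin. The sketch below uses invented numbers and is the textbook static model, not the paper's dynamic extension.

```python
import math

# Static stress-strength interference (SSI) model for independent normal
# stress and strength: R = Phi((mu_strength - mu_stress) / sqrt(sd_S^2 + sd_s^2)).
# All numeric values are illustrative placeholders.

def ssi_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    z = (mu_strength - mu_stress) / math.hypot(sd_strength, sd_stress)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # Phi(z) via erf

# Strength ~ N(500, 40^2), stress ~ N(350, 30^2): margin index z = 3.0
print(round(ssi_reliability(500.0, 40.0, 350.0, 30.0), 4))  # 0.9987
```

The paper's point is that this number is frozen in time; the dynamic models make the stress process, and hence `z`, a function of operating duration.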

  17. Optimal bounded control for maximizing reliability of Duhem hysteretic systems

    Ming XU; Xiaoling JIN; Yong WANG; Zhilong HUANG


The optimal bounded control of stochastically excited systems with Duhem hysteretic components for maximizing system reliability is investigated. The Duhem hysteretic force is transformed into energy-dependent damping and stiffness by the energy dissipation balance technique, and the controlled system is transformed into an equivalent non-hysteretic system. Stochastic averaging is then implemented to obtain the Itô stochastic equation associated with the total energy of the vibrating system, appropriate for evaluating system responses. Dynamical programming equations for maximizing system reliability are formulated by the dynamical programming principle. The optimal bounded control is derived from the maximization condition in the dynamical programming equation. Finally, the conditional reliability function and the mean time to first-passage failure of the optimally controlled Duhem systems are numerically solved from the Kolmogorov equations. The proposed procedure is illustrated with a representative example.

  18. A Method for Analyzing System Reliability of Existing Jacket Platforms

    HE Yong; GONG Shun-feng; JIN Wei-liang


Owing to the ageing of existing structures worldwide and the lack of codes for the continued safety management of structures during their lifetime, it is necessary to develop a tool to evaluate their system reliability over a time interval. In this paper, a method is proposed to analyze the system reliability of existing jacket platforms. The influences of dents, cracks and corrosion are considered. The mechanical response of existing jacket platforms to extreme loads is analyzed using nonlinear mechanical analysis, with the nonlinear pile-soil-structure interaction taken into consideration. Using the finite element method and Monte Carlo simulation, the system reliability of an existing jacket platform can be obtained. The method is illustrated through application to the three BZ28-1 jacket platforms, which have operated for sixteen years. The advantages of the proposed method for analyzing the system reliability of existing jacket platforms are also highlighted.
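The Monte Carlo step in such an analysis reduces to sampling uncertain resistances and loads and counting survivals. The sketch below is a deliberately trivial stand-in: the distributions, the corrosion term and the single limit-state check are invented placeholders, not the paper's FEM-based model or the BZ28-1 data.

```python
import random

# Toy Monte Carlo reliability estimate: sample a member capacity, a
# corrosion-induced strength loss and an extreme load, and estimate
# reliability as the fraction of trials in which capacity exceeds load.
# A real analysis would evaluate a nonlinear FEM model per sample.

def monte_carlo_reliability(n_trials, seed=1):
    rng = random.Random(seed)  # fixed seed for reproducibility
    survived = 0
    for _ in range(n_trials):
        capacity = rng.gauss(100.0, 10.0)   # member resistance
        corrosion = rng.uniform(0.0, 0.2)   # strength loss fraction
        load = rng.gauss(60.0, 15.0)        # extreme environmental load
        if capacity * (1.0 - corrosion) > load:
            survived += 1
    return survived / n_trials

print(monte_carlo_reliability(100_000))
```

With these placeholder distributions the estimate settles near 0.95; the standard error shrinks as the square root of the number of trials, which is why high-reliability targets make direct Monte Carlo expensive.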

  19. Bayesian approach in the power electric systems study of reliability ...

    During the applications change to all fields of engineering, the discipline has, over the years, ... Characterization of the failure rate as a random variable .... the reliability performance is defined as the conditional probability that the system ...

  20. Quantifiable and Reliable Structural Health Management Systems Project

    National Aeronautics and Space Administration — Major concerns for implementing a practical built-in structural health monitoring system are prediction accuracy and data reliability. It is proposed to develop...

  1. Allocating SMART Reliability and Maintainability Goals to NASA Ground Systems

    Gillespie, Amanda; Monaghan, Mark


    This paper will describe the methodology used to allocate Reliability and Maintainability (R&M) goals to Ground Systems Development and Operations (GSDO) subsystems currently being designed or upgraded.

  2. Lifeline system network reliability calculation based on GIS and FTA

    TANG Ai-ping; OU Jin-ping; LU Qin-nian; ZHANG Ke-xu


Lifelines, such as pipeline, transportation, communication, electric transmission and medical rescue systems, are complicated networks that are usually distributed spatially over large geological and geographic units. The quantification of their reliability under an earthquake should be highly regarded, because the performance of these systems during a destructive earthquake is vital for estimating the direct and indirect economic losses from lifeline failures, and is also relevant to laying out a rescue plan. The research in this paper aims to develop a new earthquake reliability calculation methodology for lifeline systems, based on fault tree analysis (FTA) and a geographic information system (GIS). The interactions existing in a lifeline system are considered. The lifeline systems are idealized as equivalent networks, consisting of nodes and links, and are described by network analysis in GIS. First, nodes are divided into two types, simple and complicated; the reliability of a complicated node is calculated by FTA, with interaction regarded as one factor affecting node performance, while the reliability of simple nodes and links is evaluated by code. Then, the reliability of the entire network is assessed based on GIS and FTA. Finally, an illustration is given to demonstrate the methodology.
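The FTA evaluation of a complicated node amounts to propagating basic-event probabilities through AND/OR gates under an independence assumption. The tree below (a pump station failing on loss of power or of both redundant pumps) is an invented example, not one from the paper.

```python
# Minimal fault tree evaluation for a "complicated node": a tree is either
# a basic event with a failure probability, or an AND/OR gate over
# subtrees. Basic events are assumed independent.

def failure_prob(tree):
    kind = tree[0]
    if kind == "event":
        return tree[1]
    probs = [failure_prob(child) for child in tree[1:]]
    if kind == "and":   # gate fails only if every input fails
        p = 1.0
        for q in probs:
            p *= q
        return p
    if kind == "or":    # gate fails if any input fails
        p = 1.0
        for q in probs:
            p *= (1.0 - q)
        return 1.0 - p
    raise ValueError(f"unknown gate: {kind}")

# node fails if power fails OR both redundant pumps fail
node = ("or", ("event", 0.01),
              ("and", ("event", 0.05), ("event", 0.05)))
print(round(failure_prob(node), 6))  # 0.012475
```

The node reliability is one minus this top-event probability; in the paper's methodology such per-node values then feed the GIS network analysis.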

  3. Interaction Entropy: A New Paradigm for Highly Efficient and Reliable Computation of Protein-Ligand Binding Free Energy.

    Duan, Lili; Liu, Xiao; Zhang, John Z H


    Efficient and reliable calculation of protein-ligand binding free energy is a grand challenge in computational biology and is of critical importance in drug design and many other molecular recognition problems. The main challenge lies in the calculation of entropic contribution to protein-ligand binding or interaction systems. In this report, we present a new interaction entropy method which is theoretically rigorous, computationally efficient, and numerically reliable for calculating entropic contribution to free energy in protein-ligand binding and other interaction processes. Drastically different from the widely employed but extremely expensive normal mode method for calculating entropy change in protein-ligand binding, the new method calculates the entropic component (interaction entropy or -TΔS) of the binding free energy directly from molecular dynamics simulation without any extra computational cost. Extensive study of over a dozen randomly selected protein-ligand binding systems demonstrated that this interaction entropy method is both computationally efficient and numerically reliable and is vastly superior to the standard normal mode approach. This interaction entropy paradigm introduces a novel and intuitive conceptual understanding of the entropic effect in protein-ligand binding and other general interaction systems as well as a practical method for highly efficient calculation of this effect.
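The interaction entropy term this abstract describes is an exponential average of the interaction-energy fluctuation over MD frames: -TΔS = kT ln⟨exp(ΔE/kT)⟩ with ΔE the deviation of the protein-ligand interaction energy from its mean. The sketch below implements that formula directly; the energy samples and the kT value at 298 K are illustrative assumptions, not data from the paper.

```python
import math

# Interaction entropy from a trajectory of interaction energies:
#   -T*dS = kT * ln < exp((E - <E>) / kT) >
# averaged over MD frames. By Jensen's inequality the result is >= 0,
# i.e. the entropic term always penalizes binding.

KT = 0.5922  # kT in kcal/mol at ~298 K (assumed constant)

def interaction_entropy_term(energies, kt=KT):
    mean_e = sum(energies) / len(energies)
    boltz = [math.exp((e - mean_e) / kt) for e in energies]
    return kt * math.log(sum(boltz) / len(boltz))  # -T*dS in kcal/mol

samples = [-45.2, -44.8, -46.1, -45.0, -44.5]  # invented kcal/mol per frame
print(round(interaction_entropy_term(samples), 3))
```

Because the term is computed from the same frames as the enthalpy average, it adds no simulation cost, which is the efficiency claim made in the abstract.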

  4. Software Technology for Adaptable, Reliable Systems (STARS) Technical Program Plan,


    Software Technology for Adaptable, Reliable Systems (STARS) Technical Program Plan, 6 August 1986. This document is the top-level technical program plan for the STARS program. It describes the objectives of the program, the technical approach to achieve

  5. Interpretive Reliability of Six Computer-Based Test Interpretation Programs for the Minnesota Multiphasic Personality Inventory-2.

    Deskovitz, Mark A; Weed, Nathan C; McLaughlan, Joseph K; Williams, John E


    The reliability of six Minnesota Multiphasic Personality Inventory-Second edition (MMPI-2) computer-based test interpretation (CBTI) programs was evaluated across a set of 20 commonly appearing MMPI-2 profile codetypes in clinical settings. Evaluation of CBTI reliability comprised examination of (a) interrater reliability, the degree to which raters arrive at similar inferences based on the same CBTI profile and (b) interprogram reliability, the level of agreement across different CBTI systems. Profile inferences drawn by four raters were operationalized using q-sort methodology. Results revealed no significant differences overall with regard to interrater and interprogram reliability. Some specific CBTI/profile combinations (e.g., the CBTI by Automated Assessment Associates on a within normal limits profile) and specific profiles (e.g., the 4/9 profile displayed greater interprogram reliability than the 2/4 profile) were interpreted with variable consensus (α range = .21-.95). In practice, users should consider that certain MMPI-2 profiles are interpreted more or less consensually and that some CBTIs show variable reliability depending on the profile.

  6. Reliability Modeling of Microelectromechanical Systems Using Neural Networks

    Perera, J. Sebastian


Microelectromechanical systems (MEMS) are a broad and rapidly expanding field that is currently receiving a great deal of attention because of the potential to significantly improve the ability to sense, analyze, and control a variety of processes, such as heating and ventilation systems, automobiles, medicine, aeronautical flight, military surveillance, weather forecasting, and space exploration. MEMS are very small and are a blend of electrical and mechanical components, with electrical and mechanical systems on one chip. This research establishes reliability estimation and prediction for MEMS devices at the conceptual design phase using neural networks. At the conceptual design phase, before devices are built and tested, traditional methods of quantifying reliability are inadequate because the device is not in existence and cannot be tested to establish the reliability distributions. A novel approach using neural networks is created to predict the overall reliability of a MEMS device based on its components and each component's attributes. The methodology begins with collecting attribute data (fabrication process, physical specifications, operating environment, property characteristics, packaging, etc.) and reliability data for many types of microengines. The data are partitioned into training data (the majority) and validation data (the remainder). A neural network is applied to the training data (both attribute and reliability); the attributes become the system inputs and the reliability data (cycles to failure), the system output. After the neural network is trained with sufficient data, the validation data are used to verify that the neural network provides accurate reliability estimates. The reliability of a new proposed MEMS device can then be estimated by using the appropriate trained neural networks developed in this work.

  7. Operator adaptation to changes in system reliability under adaptable automation.

    Chavaillaz, Alain; Sauer, Juergen


    This experiment examined how operators coped with a change in system reliability between training and testing. Forty participants were trained for 3 h on a complex process control simulation modelling six levels of automation (LOA). In training, participants either experienced a high- (100%) or low-reliability system (50%). The impact of training experience on operator behaviour was examined during a 2.5 h testing session, in which participants either experienced a high- (100%) or low-reliability system (60%). The results showed that most operators did not often switch between LOA. Most chose an LOA that relieved them of most tasks but maintained their decision authority. Training experience did not have a strong impact on the outcome measures (e.g. performance, complacency). Low system reliability led to decreased performance and self-confidence. Furthermore, complacency was observed under high system reliability. Overall, the findings suggest benefits of adaptable automation because it accommodates different operator preferences for LOA. Practitioner Summary: The present research shows that operators can adapt to changes in system reliability between training and testing sessions. Furthermore, it provides evidence that each operator has his/her preferred automation level. Since this preference varies strongly between operators, adaptable automation seems to be suitable to accommodate these large differences.

  8. Final Report for the Virtual Reliability Realization System LDRD



    Current approaches to reliability are not adequate to keep pace with the need for faster, better and cheaper products and systems. This is especially true in high-consequence-of-failure applications. The original proposal for the LDRD was to look at this challenge and see if there was a new paradigm that could make reliability predictions, along with a quantitative estimate of the risk in that prediction, in a way that was faster, better and cheaper. Such an approach would be based on the underlying science models that are the backbone of reliability predictions. The new paradigm would be implemented in two software tools: the Virtual Reliability Realization System (VRRS) and the Reliability Expert System (REX). The three-year LDRD was funded at a reduced level for the first year ($120K vs. $250K) and not renewed. Because of the reduced funding, we concentrated on the initial development of the expert system. We developed an interactive semiconductor calculation tool needed for reliability analyses. We were also able to generate a basic functional system using Microsoft Site Server Commerce Edition and Microsoft SQL Server. The base system has the capability to store Office documents from multiple authors, and has the ability to track and charge for usage. The full outline of the knowledge model has been incorporated, as well as examples of various types of content.

  9. Assuring long-term reliability of concentrator PV systems

    McConnell, R.; Garboushian, V.; Brown, J.; Crawford, C.; Darban, K.; Dutra, D.; Geer, S.; Ghassemian, V.; Gordon, R.; Kinsey, G.; Stone, K.; Turner, G.


    Concentrator PV (CPV) systems have attracted significant interest because these systems incorporate the world's highest efficiency solar cells and they are targeting the lowest cost production of solar electricity for the world's utility markets. Because these systems are just entering solar markets, manufacturers and customers need to assure their reliability for many years of operation. There are three general approaches for assuring CPV reliability: 1) field testing and development over many years leading to improved product designs, 2) testing to internationally accepted qualification standards (especially for new products) and 3) extended reliability tests to identify critical weaknesses in a new component or design. Amonix has been a pioneer in all three of these approaches. Amonix has an internal library of field failure data spanning over 15 years that serves as the basis for its seven generations of CPV systems. An Amonix product served as the test CPV module for the development of the world's first qualification standard completed in March 2001. Amonix staff has served on international standards development committees, such as the International Electrotechnical Commission (IEC), in support of developing CPV standards needed in today's rapidly expanding solar markets. Recently Amonix employed extended reliability test procedures to assure reliability of multijunction solar cell operation in its seventh generation high concentration PV system. This paper will discuss how these three approaches have all contributed to assuring reliability of the Amonix systems.

  10. Reliability Analysis of Structural Timber Systems

    Sørensen, John Dalsgaard; Hoffmeyer, P.


    The characteristics of the load-bearing capacity are estimated in the form of a characteristic value and a coefficient of variation. These two values are of primary importance for codes of practice based on the partial safety factor format, since the partial safety factor is closely related to the coefficient of variation. In the paper a stochastic model is described for the strength of a single piece of timber, taking into account the stochastic variation of the strength and stiffness with length. Also, stochastic models for different types of loads are formulated. First, simple representative systems with different ... the above stochastic models, statistical characteristics (distribution function, 5% quantile and coefficient of variation) are determined. Generally, the results show that taking the system effects into account, the characteristic load-bearing capacity can be increased and the partial safety factor decreased ...

  11. Computer Security Systems Enable Access.

    Riggen, Gary


    A good security system enables access and protects information from damage or tampering, but the most important aspects of a security system aren't technical. A security procedures manual addresses the human element of computer security. (MLW)

  12. Data driven CAN node reliability assessment for manufacturing system

    Zhang, Leiming; Yuan, Yong; Lei, Yong


    The reliability of the Controller Area Network(CAN) is critical to the performance and safety of the system. However, direct bus-off time assessment tools are lacking in practice due to inaccessibility of the node information and the complexity of the node interactions upon errors. In order to measure the mean time to bus-off(MTTB) of all the nodes, a novel data driven node bus-off time assessment method for CAN network is proposed by directly using network error information. First, the corresponding network error event sequence for each node is constructed using multiple-layer network error information. Then, the generalized zero inflated Poisson process(GZIP) model is established for each node based on the error event sequence. Finally, the stochastic model is constructed to predict the MTTB of the node. The accelerated case studies with different error injection rates are conducted on a laboratory network to demonstrate the proposed method, where the network errors are generated by a computer controlled error injection system. Experiment results show that the MTTB of nodes predicted by the proposed method agree well with observations in the case studies. The proposed data driven node time to bus-off assessment method for CAN networks can successfully predict the MTTB of nodes by directly using network error event data.
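The quantity being predicted can be illustrated with a much simpler Monte Carlo sketch (not the paper's GZIP model): under the standard CAN fault-confinement rules, a node's transmit error counter (TEC) rises by 8 on a transmit error, falls by 1 on a success, and the node goes bus-off once the TEC exceeds 255. The error probabilities below play the role of the injected error rates.

```python
import random

random.seed(1)

def frames_to_busoff(p_err):
    # simulate one node until bus-off, counting transmitted frames
    tec, frames = 0, 0
    while tec <= 255:
        frames += 1
        if random.random() < p_err:
            tec += 8                 # transmit error: TEC += 8
        else:
            tec = max(0, tec - 1)    # successful transmission: TEC -= 1
    return frames

def mean_ttb(p_err, runs=200):
    # crude mean-time-to-bus-off estimate, in frames
    return sum(frames_to_busoff(p_err) for _ in range(runs)) / runs

low_rate, high_rate = mean_ttb(0.20), mean_ttb(0.40)
print(low_rate, high_rate)   # a higher error rate drives the node to bus-off sooner
```

The point of the paper's model is to obtain such MTTB estimates from observed network error events rather than from a known injection rate.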

  13. Distribution System Reliability Analysis for Smart Grid Applications

    Aljohani, Tawfiq Masad

    Reliability of power systems is a key aspect in modern power system planning, design, and operation. The ascendance of the smart grid concept has provided high hopes of developing an intelligent network that is capable of being a self-healing grid, offering the ability to overcome the interruption problems that face the utility and cost it tens of millions of dollars in repairs and losses. To address reliability concerns, the power utilities and interested parties have spent an extensive amount of time and effort analyzing and studying the reliability of the generation and transmission sectors of the power grid. Only recently has attention shifted to improving the reliability of the distribution network, the connection joint between the power providers and the consumers, where most electricity problems occur. In this work, we examine the effect of smart grid applications on improving the reliability of power distribution networks. The test system used in this thesis is the IEEE 34-node test feeder, released in 2003 by the Distribution System Analysis Subcommittee of the IEEE Power Engineering Society. The objective is to analyze the feeder for the optimal placement of automatic switching devices and to quantify their proper installation based on the performance of the distribution system. The measures are the changes in the system reliability indices, including SAIDI, SAIFI, and EUE. A further goal is to design and simulate the effect of installing Distributed Generators (DGs) on the utility's distribution system and to measure the potential improvement in its reliability. The software used in this work is DISREL, an intelligent power distribution software package developed by General Reliability Co.
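The indices named above have simple definitions that can be computed directly. The outage records below are made-up values for illustration only (the thesis itself obtains the indices from DISREL runs on the IEEE 34-node feeder):

```python
# Each interruption record: (customers affected, duration in hours,
# unserved energy in kWh) - hypothetical data.
interruptions = [
    (120, 2.0, 300.0),
    (45, 0.5, 40.0),
    (200, 1.5, 500.0),
]
total_customers = 1000

# SAIFI: average number of interruptions experienced per served customer
saifi = sum(n for n, d, e in interruptions) / total_customers
# SAIDI: average interruption duration per served customer (hours)
saidi = sum(n * d for n, d, e in interruptions) / total_customers
# EUE: expected unserved energy, here simply the summed unserved kWh
eue = sum(e for n, d, e in interruptions)

print(saifi, saidi, eue)   # 0.365, 0.5625, 840.0
```

Placing switches or DGs so that fewer customers see each fault, or see it for less time, lowers SAIFI and SAIDI respectively, which is exactly the improvement the thesis quantifies.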

  14. A computational method for reliable gait event detection and abnormality detection for feedback in rehabilitation.

    Senanayake, Chathuri; Senanayake, S M N Arosha


    In this paper, a gait event detection algorithm is presented that uses computational intelligence (fuzzy logic) to identify seven gait phases in walking gait. Two inertial measurement units and four force-sensitive resistors were used to obtain knee angle and foot pressure patterns, respectively. Fuzzy logic is used to address the complexity in distinguishing gait phases based on discrete events. A novel application of the seven-dimensional vector analysis method to estimate the amount of abnormality detected was also investigated based on the two gait parameters. Experiments were carried out to validate the application of the two proposed algorithms to provide accurate feedback in rehabilitation. The algorithm responses were tested for two cases, normal and abnormal gait. The large amount of data required for reliable gait-phase detection necessitates the use of computer methods to store and manage the data. Therefore, a database management system and an interactive graphical user interface were developed for use of the overall system in a clinical environment.

  15. The reliable solution and computation time of variable parameters Logistic model

    Pengfei, Wang


    The reliable computation time (RCT, marked as Tc) when applying a double-precision computation of a variable-parameters logistic map (VPLM) is studied. First, using the method proposed, the reliable solutions for the logistic map are obtained. Second, for a time-dependent, non-stationary-parameters VPLM, 10000 samples of reliable experiments are constructed, and the mean Tc is then computed. The results indicate that for each different initial value, the Tcs of the VPLM are generally different. However, the mean Tc tends to a constant value once the sample number is large enough. The maximum, minimum and probability distribution function of Tc are also obtained, which can help us to identify the robustness of applying nonlinear time series theory to forecasting with the VPLM output. In addition, the Tc of fixed-parameter experiments of the logistic map was obtained, and the results suggest that this Tc matches the value predicted by the theoretical formula.
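The idea behind Tc can be sketched by iterating the logistic map x' = r·x·(1 − x) in double precision alongside a much higher-precision reference, and recording the first step at which the two trajectories disagree beyond a tolerance. Parameter values here are illustrative, not the paper's:

```python
from decimal import Decimal, getcontext

getcontext().prec = 100           # 100 significant digits serve as "truth"

def reliable_steps(x0=0.1, r=4.0, tol=1e-6, max_steps=2000):
    xd = x0                        # double-precision trajectory
    xq = Decimal(str(x0))          # high-precision reference trajectory
    rq = Decimal(str(r))
    for n in range(1, max_steps + 1):
        xd = r * xd * (1.0 - xd)
        xq = rq * xq * (1 - xq)
        if abs(Decimal(str(xd)) - xq) > Decimal(str(tol)):
            return n               # trajectories have separated: Tc = n
    return max_steps

tc = reliable_steps()
print(tc)
```

For the fully chaotic r = 4 case, rounding errors roughly double per step, so double precision remains reliable for only some tens of iterations; repeating this over many initial values gives the Tc distribution studied in the paper.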

  16. A Newly Developed Method for Computing Reliability Measures in a Water Supply Network

    Jacek Malinowski


    A reliability model of a water supply network is examined. Its main features are: (1) a topology that can be decomposed by the so-called state factorization into a (relatively) small number of derivative networks, each having a series-parallel structure; (2) binary-state components (either operative or failed) with given flow capacities; (3) a multi-state character of the whole network and its sub-networks, where a network state is defined as the maximal flow between a source (or sources) and a sink (or sinks); (4) integer values for all capacities (component, network, and sub-network). As the network operates, its state changes due to component failures, repairs, and replacements. A newly developed method of computing the inter-state transition intensities is presented, based on state factorization and series-parallel aggregation. The analysis of these intensities shows that the failure-repair process of the considered system is an asymptotically homogeneous Markov process. It is also demonstrated how certain reliability parameters useful for network maintenance planning can be determined on the basis of the asymptotic intensities. For better understanding of the presented method, an illustrative example is given.
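The series-parallel aggregation step rests on two elementary rules for binary-state components: in a series arrangement every element must work, in a parallel arrangement at least one must. A minimal sketch with illustrative availabilities (this ignores the flow-capacity and multi-state aspects of the paper's model):

```python
def series(avails):
    # all elements must be operative
    p = 1.0
    for a in avails:
        p *= a
    return p

def parallel(avails):
    # at least one element must be operative
    q = 1.0
    for a in avails:
        q *= (1.0 - a)
    return 1.0 - q

# two parallel pumps, in series with a valve and a pipe section
pumps = parallel([0.9, 0.9])           # 0.99
system = series([pumps, 0.95, 0.99])
print(round(system, 6))                # 0.931095
```

Repeatedly collapsing such sub-structures reduces each derivative network to a single equivalent component, which is what makes the factorization approach tractable.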

  17. SOFC Systems with Improved Reliability and Endurance

    Ghezel-Ayagh, Hossein [Fuelcell Energy, Incorporated, Danbury, CT (United States)


    The overall goal of this U.S. Department of Energy (DOE) sponsored project was the development of Solid Oxide Fuel Cell (SOFC) technology suitable for ultra-efficient central power generation systems utilizing coal and natural gas fuels and featuring greater than 90% carbon dioxide capture. The specific technical objective of this project was to demonstrate, via analyses and testing, progress towards adequate stack life (≥ 4 years) and stack performance stability (degradation rate ≤ 0.2% per 1000 hours) in a low-cost SOFC stack design. This final technical report summarizes the progress made during the project period of 27 months. Significant progress was made in the areas of cell and stack technology development, stack module development, sub-scale module tests, and Proof-of-Concept Module unit design, fabrication and testing. The work focused on cell and stack materials and designs, balance-of-plant improvements, and performance evaluation covering operating conditions and fuel compositions anticipated for commercially-deployed systems. In support of performance evaluation under commercial conditions, this work included the design, fabrication, siting, commissioning, and operation of a ≥ 50 kWe proof-of-concept module (PCM) power plant, based upon SOFC cell and stack technology developed to date by FuelCell Energy, Inc. (FCE) under the Office of Fossil Energy’s Solid Oxide Fuel Cells program. The PCM system was operated for at least 1000 hours on natural gas fuel at FCE’s facility. The factory cost of the SOFC stack was estimated to be at or below the DOE’s high-volume production cost target (2011 $).

  18. Seismic reliability assessment of electric power systems

    Singhal, A. [Stanford Univ., CA (United States); Bouabid, J. [Risk Management Solutions, Menlo Park, CA (United States)


    This paper presents a methodology for the seismic risk assessment of electric power systems. In evaluating damage and loss of functionality to the electric power components, fragility curves and restoration functions are used. These vulnerability parameters are extracted from the GIS-based regional loss estimation methodology being developed for the US. Observed damage in electric power components during the Northridge earthquake is used to benchmark the methodology. The damage predicted using these vulnerability parameters is found to be in good agreement with the damage observed during the earthquake.

  19. Efficient Structural System Reliability Updating with Subspace-Based Damage Detection Information

    Döhler, Michael; Thöns, Sebastian

    Damage detection systems and algorithms (DDS and DDA) provide information on the structural system integrity, in contrast to e.g. local information from inspections or non-destructive testing techniques. However, the potential of utilizing DDS information for structural integrity assessment and prognosis is hardly exploited nor treated in the scientific literature up to now. In order to utilize the information provided by DDS for the structural performance, usually high computational efforts for the pre-determination of DDS reliability are required. In this paper, an approach for DDS performance modelling is introduced, building upon the non-destructive testing reliability, which applies to structural systems and DDS and contains a strategy to overcome the high computational efforts for the pre-determination of the DDS reliability. This approach takes basis in the subspace-based damage detection method...

  20. Designing incentive market mechanisms for improving restructured power system reliabilities

    Ding, Yi; Østergaard, Jacob; Wu, Qiuwei


    In a restructured power system, the monopoly generation utility is replaced by different electricity producers. There exists extreme price volatility caused by random failures of generation and/or transmission systems. In these cases, producers' profits can be much higher than those in the normal state. The reliability management of producers usually cannot be directly controlled by the system operators in a restructured power system. Producers may have no motivation to improve their reliabilities, which can result in serious system unreliability issues in the new environment. Incentive market mechanisms for improving restructured power system reliabilities have been designed in this paper. In the proposed incentive mechanisms, a penalty will be imposed on a producer if the failures of its generator(s) result in the variation of electricity prices. Incentive market mechanisms can motivate...

  1. Reliability of redundant ductile structures with uncertain system failure criteria

    Baidurya Bhattacharya; Qiang Lu; Jinquan Zhong


    Current reliability-based approaches to structural design are typically element-based: they commonly include uncertainties in the structural resistance, applied loads and geometric parameters, and in some cases in the idealized structural model. Nevertheless, the true measure of safety is the structural system reliability, which must consider multiple failure paths, load sharing and load redistribution after member failures, and is beyond the domain of element reliability analysis. Identification of system failure is often subjective, and a crisp definition of system failure arises naturally only in a few idealized instances. We analyse the multi-girder steel highway bridge as a k-out-of-n active parallel system. System failure is defined as gross inelastic deformation of the bridge deck; the subjectivity in the failure criterion is accounted for by generalizing it as a random variable. This randomness arises from a non-unique relation between the number of failed girders and the maximum deflection, and from randomness in the definition of the failure deflection. We show how uncertain failure criteria and structural systems analyses can be decoupled. Randomness in the transverse location of trucks is considered and elastic-perfectly-plastic material response is assumed. The role of the system factor modifying the element-reliability-based design equation to achieve a target system reliability is also demonstrated.
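With the simplifying assumption of independent, identical girders (the paper itself also treats load redistribution and a random failure criterion), a k-out-of-n active parallel system reduces to a binomial sum. A short sketch with illustrative numbers:

```python
from math import comb

def k_out_of_n(k, n, r):
    # system survives if at least k of the n components survive;
    # r is a single component's survival probability
    return sum(comb(n, j) * r**j * (1 - r)**(n - j) for j in range(k, n + 1))

# e.g. the deck survives if at least 3 of 5 girders remain intact
print(round(k_out_of_n(3, 5, 0.95), 6))
```

Note how the system reliability (about 0.9988 here) exceeds the single-girder value of 0.95: this redundancy effect is precisely what element-based design checks fail to capture.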

  2. Reliable Software Development for Machine Protection Systems

    Anderson, D; Dragu, M; Fuchsberger, K; Garnier, JC; Gorzawski, AA; Koza, M; Krol, K; Misiowiec, K; Stamos, K; Zerlauth, M


    The controls software for the Large Hadron Collider (LHC) at CERN, with more than 150 million lines of code, is among the largest known code bases in the world. Industry has been applying Agile software engineering techniques for more than two decades now, and the advantages of these techniques can no longer be ignored when managing the code base for large projects within the accelerator community. Furthermore, CERN is a particular environment due to its high personnel turnover and manpower limitations, where applying Agile processes can improve both codebase management and code quality. This paper presents the successful application of the Agile software development process Scrum for machine protection systems at CERN, the quality standards and infrastructure introduced together with the Agile process, as well as the challenges encountered in adapting it to the CERN environment.

  3. Reliable regulation in decentralised control systems

    Locatelli, Arturo; Schiavoni, Nicola


    This article addresses the design of decentralised regulators which supply the control systems with signal tracking and disturbance rejection. This property has to be attained, to the maximum possible extent, even when instrumentation faults occur, thus causing the opening of some feedback loops. The problem is tackled for LTI asymptotically stable plants, subject to perturbations, under the assumption that the Laplace transforms of the exogenous signals have multiple poles on the imaginary axis. The proposed regulator is composed of an LTI nominal controller supervised by a reconfiguration block. Once the actions of the reconfiguration block have been settled, the synthesis of the nominal controller is reformulated as a suitable regulation problem. A constructive sufficient condition for its solvability is established. This condition turns out to be also necessary if the exogenous signals are polynomial in time.

  4. Energy-Efficient Reliability-Aware Scheduling Algorithm on Heterogeneous Systems

    Xiaoyong Tang


    The amount of energy needed to operate high-performance computing systems has been increasing rapidly for some years, and energy consumption has attracted a great deal of attention. Moreover, high energy consumption induces failures and reduces system reliability. However, there has been considerably less work on the simultaneous management of system performance, reliability, and energy consumption on heterogeneous systems. In this paper, we first build the precedence-constrained parallel application and energy consumption models. Then, we deduce the relation between reliability and processor frequency and approximate its parameters by the least-squares curve-fitting method. Thirdly, we establish a task execution reliability model and formulate this reliability- and energy-aware scheduling problem as a linear program. Lastly, we propose a heuristic Reliability-Energy Aware Scheduling (REAS) algorithm to solve this problem, which achieves a good tradeoff among system performance, reliability, and energy consumption with low complexity. Our extensive simulation study clearly demonstrates the tradeoff performance of the proposed heuristic algorithm.
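The reliability/frequency relation referred to above can be illustrated with a commonly assumed DVFS transient-fault model (a sketch, not necessarily the paper's fitted form): the fault rate grows exponentially as the normalised frequency f is scaled down, lambda(f) = lambda0 · 10^(d·(1 − f)/(1 − f_min)), and a task of w cycles then completes fault-free with probability exp(−lambda(f)·w/f). All constants below are invented for illustration.

```python
import math

def fault_rate(f, lam0=1e-9, d=3.0, f_min=0.4):
    # transient-fault rate at normalised frequency f (illustrative constants)
    return lam0 * 10 ** (d * (1 - f) / (1 - f_min))

def task_reliability(w_cycles, f):
    # task runs w_cycles/f time units at fault rate fault_rate(f)
    return math.exp(-fault_rate(f) * w_cycles / f)

r_fast = task_reliability(1e5, 1.0)   # full speed
r_slow = task_reliability(1e5, 0.5)   # scaled down: slower AND more fault-prone
print(r_fast, r_slow)
```

This captures the tradeoff the REAS algorithm negotiates: lowering frequency saves energy but lengthens execution and raises the fault rate, so reliability drops on both counts.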

  5. Development of a Functional Platform for System Reliability Monitoring of Nuclear Power Plants

    Yang, Ming; Zhang, Zhijian; Yoshikawa, Hidekazu [Harbin Engineering University, Harbin (China)


    This paper presents MFM builder, a platform based on Multilevel Flow Modeling (MFM), which provides a graphical interface for modeling the functions of complex artificial systems such as nuclear power plants, with emphasis on the designed purposes of systems. Several algorithms based on MFM have been developed for dynamic system reliability analysis, fault diagnosis and quantitative software reliability analysis. A Reliability Monitoring System (RMS) for a PWR nuclear power plant was developed by integrating the above algorithms. Experiments connecting RMS with a full-scale PWR simulator showed that it took 16 seconds, in one computer run, for RMS to calculate the reliability changes over time of the safety-related systems for the given system configurations over 31 days. The proposed reliability monitoring system can be used not only offline as a reliability analysis tool to assist plant maintenance staff in maintenance planning, but also online as an operator support system to assist the operators in the Main Control Room (MCR) in various tasks such as configuration management, fault diagnosis and operational decision making.

  6. Energy efficient distributed computing systems

    Lee, Young-Choon


    The energy consumption issue in distributed computing systems raises various monetary, environmental and system performance concerns. Electricity consumption in the US doubled from 2000 to 2005.  From a financial and environmental standpoint, reducing the consumption of electricity is important, yet these reforms must not lead to performance degradation of the computing systems.  These contradicting constraints create a suite of complex problems that need to be resolved in order to lead to 'greener' distributed computing systems.  This book brings together a group of outsta

  7. MCTSSA Software Reliability Handbook, Volume II: Data Collection Demonstration and Software Reliability Modeling for a Multi-Function Distributed System

    Schneidewind, Norman F.


    The purpose of this handbook is threefold. Specifically, it: Serves as a reference guide for implementing standard software reliability practices at Marine Corps Tactical Systems Support Activity and aids in applying the software reliability model; Serves as a tool for managing the software reliability program; and Serves as a training aid. U.S. Marine Corps Tactical Systems Support Activity, Camp Pendleton, CA. RLACH

  8. 76 FR 7187 - Priorities for Addressing Risks to the Reliability of the Bulk-Power System; Reliability...


    ... Energy Regulatory Commission Priorities for Addressing Risks to the Reliability of the Bulk- Power System; Reliability Technical Conference Panel February 2, 2011. As announced in the Notice of Technical Conference..., to discuss policy issues related to reliability of the Bulk-Power System, including priorities...

  9. Dynamical Systems Some Computational Problems

    Guckenheimer, J; Guckenheimer, John; Worfolk, Patrick


    We present several topics involving the computation of dynamical systems. The emphasis is on work in progress and the presentation is informal -- there are many technical details which are not fully discussed. The topics are chosen to demonstrate the various interactions between numerical computation and mathematical theory in the area of dynamical systems. We present an algorithm for the computation of stable manifolds of equilibrium points, describe the computation of Hopf bifurcations for equilibria in parametrized families of vector fields, survey the results of studies of codimension two global bifurcations, discuss a numerical analysis of the Hodgkin and Huxley equations, and describe some of the effects of symmetry on local bifurcation.

  10. Incorporating Cyber Layer Failures in Composite Power System Reliability Evaluations

    Yuqi Han


    This paper proposes a novel approach to analyzing the impacts of cyber layer failures (i.e., protection failures and monitoring failures) on the reliability evaluation of composite power systems. The reliability and availability of the cyber layer and its protection and monitoring functions with various topologies are derived based on a reliability block diagram method. The availability of the physical layer components is modified via a multi-state Markov chain model, in which the component protection and monitoring strategies, as well as the cyber layer topology, are simultaneously considered. Reliability indices of composite power systems are calculated through non-sequential Monte-Carlo simulation. Case studies demonstrate that operational reliability degrades when cyber layer functions fail. Moreover, protection function failures have a more significant impact on the degraded reliability than monitoring function failures do, and the reliability indices are especially sensitive to changes in cyber layer function availability in the range from 0.95 to 1.

  11. Evaluation of anthropometric accuracy and reliability using different three-dimensional scanning systems

    Fourie, Zacharias; Damstra, Janalt; Gerrits, Peter O.; Ren, Yijin


    The aim of this study was to evaluate the accuracy and reliability of standard anthropometric linear measurements made with three different three-dimensional scanning systems, namely laser surface scanning (Minolta Vivid 900), cone beam computed tomography (CBCT), and 3D stereo-photogrammetry (Di3D system).

  12. Optimal redundancy allocation for reliability systems with imperfect switching

    Lun Ran; Jinlin Li; Xujie Jia; Hongrui Chu


    The problem of stochastically allocating redundant components to increase the system lifetime is an important topic in reliability. An optimal redundancy allocation is proposed which maximizes the expected lifetime of a reliability system with subsystems consisting of components in parallel. The constraints are minimizing the total resources and the sizes of subsystems. In this system, each switching is independent of the others and works with probability p. Two optimization problems are studied, by an incremental algorithm and by a dynamic programming technique, respectively. The incremental algorithm obtains an approximate optimal solution, and the dynamic programming method generates the optimal solution.
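A hedged sketch of the dynamic-programming idea (the numbers and the static-reliability objective are illustrative; the paper optimizes expected lifetime): a series system of parallel subsystems, where each redundant component is reached through an imperfect switch that works with probability p, and a fixed component budget is allocated across subsystems.

```python
from functools import lru_cache

def subsystem_rel(r, n, p):
    # at least one of n parallel components must work AND be switched in
    return 1.0 - (1.0 - p * r) ** n

def best_allocation(comp_rel, total, p):
    m = len(comp_rel)

    @lru_cache(maxsize=None)
    def solve(i, c):
        # best reliability using subsystems i..m-1 with c components left
        if i == m:
            return (1.0, ()) if c == 0 else (0.0, None)
        best = (0.0, None)
        for n in range(1, c - (m - i - 1) + 1):   # leave >= 1 for each remaining subsystem
            rest, alloc = solve(i + 1, c - n)
            if alloc is None:
                continue
            val = subsystem_rel(comp_rel[i], n, p) * rest
            if val > best[0]:
                best = (val, (n,) + alloc)
        return best

    return solve(0, total)

rel, alloc = best_allocation([0.9, 0.8, 0.95], total=7, p=0.98)
print(alloc, rel)   # weaker subsystems tend to receive more redundancy
```

The optimal-substructure property (the best use of the remaining budget is independent of earlier choices) is what justifies the memoized recursion.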

  13. Reliability modeling and analysis of smart power systems

    Karki, Rajesh; Verma, Ajit Kumar


    The volume presents the research work in understanding, modeling and quantifying the risks associated with different ways of implementing smart grid technology in power systems in order to plan and operate a modern power system with an acceptable level of reliability. Power systems throughout the world are undergoing significant changes creating new challenges to system planning and operation in order to provide reliable and efficient use of electrical energy. The appropriate use of smart grid technology is an important drive in mitigating these problems and requires considerable research acti


    MA Xiao-ning; L(U) Zhen-zhou; YUE Zhu-feng


    An advanced reliability growth model, i.e. an exponential model, is presented to estimate the model parameters for multiple systems that were synchronously tested, synchronously censored, and synchronously improved. In the presented method, the data produced during the reliability growth process are taken into consideration sufficiently, including the failure numbers, safety numbers and failure times at each censored time. If the multiple systems were synchronously improved many times, and the reliability growth of each system fits the AMSAA (Army Material Systems Analysis Activity) model, the failure time of each system can reasonably be considered exponentially distributed between two adjoining censored times. The nonparametric method is employed to obtain the reliability at each censored time of the synchronous multi-systems. The point estimates of the model parameters, a and b, are given by the least-squares method, and a confidence interval for the parameter b is given as well. An engineering illustration is used to compare the result of the presented method with those of the available models. The result shows that the presented exponential growth model fits the AMSAA-BISE (Army Material Systems Analysis Activity-Beijing Institute of Structure and Environment) model rather well, and the two models are suitable for estimating the reliability growth of synchronously developed multiple systems.
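The least-squares step can be sketched for the AMSAA model itself: the expected cumulative failure count is E[N(t)] = a·t^b, so log N is linear in log t and (a, b) follow from an ordinary least-squares line. The censoring-time/failure-count data below are made up for illustration.

```python
import math

times  = [100.0, 250.0, 500.0, 1000.0]   # censoring times (hours, hypothetical)
counts = [8, 15, 24, 37]                 # cumulative failures observed

xs = [math.log(t) for t in times]
ys = [math.log(n) for n in counts]
k = len(xs)
xbar, ybar = sum(xs) / k, sum(ys) / k

# slope of the log-log regression line estimates b; intercept gives log a
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = math.exp(ybar - b * xbar)

print(a, b)   # b < 1 indicates reliability growth (decreasing failure intensity)
```

In the AMSAA framework the failure intensity is a·b·t^(b−1), so an estimated b below 1 is the signature of a system that is being improved as testing proceeds.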

  15. Software metrics: Software quality metrics for distributed systems. [reliability engineering

    Post, J. V.


    Software quality metrics were extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.

  16. Bayesian method for system reliability assessment of overlapping pass/fail data

    Zhipeng Hao; Shengkui Zeng; Jianbin Guo


    For high-reliability and long-life systems, system pass/fail data are often rare. By integrating lower-level data, such as data drawn from subsystem or component pass/fail testing, Bayesian analysis can improve the precision of the system reliability assessment. If the multi-level pass/fail data are overlapping, one challenging problem for the Bayesian analysis is to develop a likelihood function. Since the computational burden of the existing methods makes them infeasible for multi-component systems, this paper proposes an improved Bayesian approach for system reliability assessment in light of overlapping data. This approach includes three steps: firstly, searching for feasible paths based on the binary decision diagram; then, screening feasible points based on space partition and constraint decomposition; and finally, simplifying the likelihood function. An example of a satellite rolling control system demonstrates the feasibility and the efficiency of the proposed approach.
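The basic Bayesian machinery can be sketched for the simpler non-overlapping case (the paper's contribution is the harder overlapping-data likelihood via binary decision diagrams): component-level pass/fail tests give Beta posteriors, and the series-system reliability posterior is sampled by Monte Carlo. The test counts are made up.

```python
import random

random.seed(2)

# (passes, fails) per component, hypothetical test results
tests = [(48, 2), (95, 5), (29, 1)]

def sample_system_rel():
    rel = 1.0
    for s, f in tests:
        # Beta(1 + s, 1 + f) posterior from a uniform Beta(1, 1) prior
        rel *= random.betavariate(1 + s, 1 + f)
    return rel          # series system: product of component reliabilities

draws = [sample_system_rel() for _ in range(5000)]
mean_rel = sum(draws) / len(draws)
print(mean_rel)
```

Quantiles of `draws` would give a credible interval for the system reliability; with overlapping data the components' posteriors are no longer independent, which is why a joint likelihood must be constructed instead.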

  17. Precision of lumbar intervertebral measurements: does a computer-assisted technique improve reliability?

    Pearson, Adam M; Spratt, Kevin F; Genuario, James; McGough, William; Kosman, Katherine; Lurie, Jon; Sengupta, Dilip K


    Comparison of intra- and interobserver reliability of digitized manual and computer-assisted intervertebral motion measurements and classification of "instability." To determine whether computer-assisted measurement of lumbar intervertebral motion on flexion-extension radiographs improves reliability compared with digitized manual measurements. Many studies have questioned the reliability of manual intervertebral measurements, although few have compared the reliability of computer-assisted and manual measurements on lumbar flexion-extension radiographs. Intervertebral rotation, anterior-posterior (AP) translation, and change in anterior and posterior disc height were measured with a digitized manual technique by three physicians and by three other observers using computer-assisted quantitative motion analysis (QMA) software. Each observer measured 30 sets of digital flexion-extension radiographs (L1-S1) twice. Shrout-Fleiss intraclass correlation coefficients for intra- and interobserver reliabilities were computed. The stability of each level was also classified (instability defined as >4 mm AP translation or 10° rotation), and the intra- and interobserver reliabilities of the two methods were compared using adjusted percent agreement (APA). Intraobserver reliability intraclass correlation coefficients were substantially higher for the QMA technique than for the digitized manual technique across all measurements: rotation 0.997 versus 0.870, AP translation 0.959 versus 0.557, change in anterior disc height 0.962 versus 0.770, and change in posterior disc height 0.951 versus 0.283. The same pattern was observed for interobserver reliability (rotation 0.962 vs. 0.693, AP translation 0.862 vs. 0.151, change in anterior disc height 0.862 vs. 0.373, and change in posterior disc height 0.730 vs. 0.300). The QMA technique was also more reliable for the classification of "instability." Intraobserver APAs ranged from 87% to 97% for QMA versus 60% to 73% for digitized manual

  18. Computational Systems Chemical Biology

    Oprea, Tudor I.; May, Elebeoba E.; Leitão, Andrei; Tropsha, Alexander


    There is a critical need for improving the level of chemistry awareness in systems biology. The data and information related to modulation of genes and proteins by small molecules continue to accumulate at the same time as simulation tools in systems biology and whole body physiologically-based pharmacokinetics (PBPK) continue to evolve. We called this emerging area at the interface between chemical biology and systems biology systems chemical biology, SCB (Oprea et al., 2007).

  19. Reliable Global Navigation System using Flower Constellation

    Daniele Mortari


    Full Text Available For many space missions using satellite constellations, symmetry of the satellite distribution usually plays a key role. Symmetry may be considered in space and/or in time distribution. Examples of required symmetry in space distribution are found in Earth observation missions (either local or global) as well as in navigation systems. It is intuitive that, to optimally observe the Earth, a satellite constellation should be synchronized with the Earth's rotation rate. If a satellite constellation must be designed to constitute a communication network between Earth and Jupiter, then the orbital period of the constellation satellites should be synchronized with both Earth's and Jupiter's periods of revolution around the Sun. Another example is the design of satellite constellations to optimally observe specific Earth sites or regions. Again, this satellite constellation should be synchronized with Earth's rotational period and (since the time gap between two subsequent observations of the site should be constant) should also exhibit time symmetry in the satellite distribution. Obtaining this result makes it possible to design operational constellations for observing targets (sites, borders, regions) with persistence or assigned revisit times, while minimizing the number of satellites required. Constellations of satellites for continuous global or zonal Earth coverage have been well studied over the last twenty years, are well known and have been well documented [1], [2], [7], [8], [11], [13]. A symmetrical, inclined constellation, such as a Walker constellation [1], [2], provides excellent global coverage for remote sensing missions; however, applications where target revisit time or persistent observation are important lead to required variations of traditional designs [7], [8]. Also, few results are available that address other figures of merit, such as continuous regional coverage and the systematic use of eccentric orbit constellations to optimize "hang time" over regions of

  20. An Overview of the Reliability and Availability Data System (RADS)

    T. E. Wierman; K. J. Kvarfordt; S. A. Eide; D. M. Rasmuson


    The Reliability and Availability Data System (RADS) is a database and analysis code, developed by the Idaho National Engineering and Environmental Laboratory (INEEL) for the U.S. Nuclear Regulatory Commission (USNRC). The code is designed to estimate industry and plant-specific reliability and availability parameters for selected components in risk-important systems and initiating events for use in risk-informed applications. The RADS tool contains data and information based on actual operating experience from U.S. commercial nuclear power plants. The data contained in RADS are kept up to date by loading the most current quarter's Equipment Performance and Information Exchange (EPIX) data and by yearly loads of initiating event data from licensee event reports (LERs). The reliability parameters estimated by RADS are (1) probability of failure on demand, (2) failure rate during operation (used to calculate failure-to-run probability), and (3) time trends in reliability parameters.
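The first two parameter estimates can be sketched as follows, using the Jeffreys-prior posterior means that are a common convention in NRC parameter estimation. The failure counts, demand counts, run hours, and mission time below are illustrative assumptions, not EPIX data.

```python
import math

# (1) Probability of failure on demand: Beta-Binomial with Jeffreys
# prior Beta(0.5, 0.5); posterior mean = (f + 0.5) / (d + 1).
failures_on_demand, demands = 2, 480
p_demand = (failures_on_demand + 0.5) / (demands + 1)

# (2) Failure rate during operation: Gamma-Poisson with Jeffreys prior;
# posterior mean = (n + 0.5) / T.
failures_running, run_hours = 1, 12500.0
lam = (failures_running + 0.5) / run_hours

# Failure-to-run probability over an assumed 24 h mission time.
mission_time = 24.0
p_fail_to_run = 1.0 - math.exp(-lam * mission_time)
print(f"P(fail on demand) = {p_demand:.2e}, "
      f"P(fail to run, 24 h) = {p_fail_to_run:.2e}")
```

The Jeffreys posterior mean keeps the estimate usable even with zero observed failures, which matters for the rare-event components these databases cover.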

  1. Grey System Judgment on Reliability of Mechanical Equipment


    The Grey system theory was applied to the reliability analysis of mechanical equipment; it is a new theory and method in the reliability engineering of mechanical equipment. Through the Grey forecast of reliability parameters and the reliability forecast of parts and systems, decisions were made on the real operative state of the equipment in real time. It replaced the old method, which required mathematical and physical statistics over a large base of test data to obtain a pre-check, and it was applied to a practical problem. Because it applies data from the practical operating state in real time, it can come much closer to the real condition of the equipment; it was used to guide the procedure and yielded considerable economic and social benefits.
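Grey forecasting of a reliability parameter is typically done with the GM(1,1) model; a minimal sketch follows. The monthly degradation-indicator series is invented for illustration.

```python
import math

# Assumed monthly degradation indicator for a machine part.
x0 = [2.87, 3.28, 3.34, 3.62, 3.87, 4.03]

# 1-AGO (accumulated generating operation) and background values z1.
x1 = []
s = 0.0
for v in x0:
    s += v
    x1.append(s)
z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, len(x1))]

# Least-squares fit of the grey equation x0[k] = -a*z1[k] + b
# (2x2 normal equations solved directly).
n = len(z1)
szz = sum(z * z for z in z1)
sz = sum(z1)
sy = sum(x0[1:])
szy = sum(z * y for z, y in zip(z1, x0[1:]))
det = n * szz - sz * sz
a = -(n * szy - sz * sy) / det
b = (szz * sy - sz * szy) / det

def predict(k):
    """Reconstructed x0 at index k (k >= 1) from the GM(1,1) response."""
    x1_hat = (x0[0] - b / a) * math.exp(-a * k) + b / a
    x1_prev = (x0[0] - b / a) * math.exp(-a * (k - 1)) + b / a
    return x1_hat - x1_prev

next_value = predict(len(x0))  # one-step-ahead forecast
print(f"a = {a:.4f}, b = {b:.4f}, next forecast: {next_value:.3f}")
```

A small data requirement (here only six points) is exactly what makes the Grey approach attractive against classical statistics, which needs a large test-data base.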

  2. Dynamic evidential reasoning algorithm for systems reliability prediction

    Hu, Chang-Hua; Si, Xiao-Sheng; Yang, Jian-Bo


    In this article, the dynamic evidential reasoning (DER) algorithm is applied to forecast reliability in turbocharger engine systems, and a reliability prediction model is developed. The focus of this study is to examine the feasibility and validity of the DER algorithm in systems reliability prediction by comparing it with some existing approaches. To build an effective DER forecasting model, the parameters of the prediction model must be set carefully. To solve this problem, a generic nonlinear optimisation model is investigated to search for the optimal parameters of the forecasting model, and the optimal parameters are then adopted to construct the DER forecasting model. Finally, a numerical example is provided to demonstrate the detailed implementation procedure and the validity of the proposed approach in the area of reliability prediction.

  3. Hybridity in Embedded Computing Systems

    虞慧群; 孙永强


    An embedded system is a system in which a computer is used as a component of a larger device. In this paper, we study hybridity in embedded systems and present an interval-based temporal logic to express and reason about the hybrid properties of such systems.

  4. System Reliability Assessment of Existing Jacket Platforms in Malaysian Waters

    V.J. Kurian


    Full Text Available Reliability of offshore platforms has become a very important issue in the Malaysian oil and gas industry, as the majority of the jacket platforms in Malaysian waters have now exceeded their design life. Reliability of a jacket platform can be assessed through the reliability index and the probability of failure. The conventional metocean consideration uses the 100-year return period wave height associated with the 100-year return period current velocity and wind speed. However, a recent study shows that for Malaysian waters the proposed metocean consideration should be the 100-year return period wave height associated with the 10-year return period current velocity and wind speed. Hence, this research investigated the effect of the different metocean considerations on the system-based reliability of jacket platforms in Malaysian waters. Prior to that, the effect of the different metocean considerations on the pushover analysis was also studied. In addition, the significance of pile-soil interaction (PSI), wave direction and platform geometry was analyzed in a sensitivity study. Pushover analysis was performed on three jacket platforms representing the three water regions in Malaysia to obtain the Reserve Strength Ratio (RSR) as an indicator of the reliability of the jackets. Utilizing the sensitivity study parameters mentioned above, seven different case studies were undertaken to study their significance on RSR. The RSR values of each case study were compared and incorporated as the resistance model of the reliability analysis. In addition, a platform-specific response model of each jacket was generated using the response surface technique, which was later incorporated into the limit state function for reliability analysis. Reliability analysis using the First Order Reliability Method (FORM) was conducted in MATLAB to obtain the reliability index and probability of failure. Results from the reliability analysis were compared to analyze the effect of the different metocean considerations.
In this study, an updated
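For a linear limit state with independent normal variables, the FORM computation reduces to a closed form. A minimal sketch follows, with assumed resistance and load statistics rather than values from this study.

```python
import math

def phi_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Illustrative limit state g = R - S with independent normal resistance R
# (e.g., an RSR-based capacity) and load effect S; all values assumed.
mu_r, sigma_r = 2.3, 0.25    # normalized resistance
mu_s, sigma_s = 1.0, 0.20    # normalized extreme load effect

# For a linear g with normal variables, FORM is exact:
beta = (mu_r - mu_s) / math.sqrt(sigma_r**2 + sigma_s**2)
pf = phi_cdf(-beta)
print(f"reliability index beta = {beta:.2f}, Pf = {pf:.2e}")
```

For the nonlinear, response-surface limit states used in the study, FORM instead iterates to the most probable failure point (e.g., the Hasofer-Lind-Rackwitz-Fiessler algorithm), but the beta-to-Pf mapping is the same.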

  5. Tera-Op Reliable Intelligently Adaptive Processing System (TRIPS)


    AFRL-IF-WP-TR-2004-1514: Tera-Op Reliable Intelligently Adaptive Processing System (TRIPS). Stephen W. Keckler, Doug Burger, Michael Dahlin... Report dated 03/31/2004; contract number F33615-01-C-1892. The project's influence is expected to extend beyond its scope and to increase with the fabrication of the prototype in phase 2.

  6. Reliability analysis of large, complex systems using ASSIST

    Johnson, Sally C.


    The SURE reliability analysis program is discussed as well as the ASSIST model generation program. It is found that semi-Markov modeling using model reduction strategies with the ASSIST program can be used to accurately solve problems at least as complex as other reliability analysis tools can solve. Moreover, semi-Markov analysis provides the flexibility needed for modeling realistic fault-tolerant systems.

  7. General reliability and safety methodology and its application to wind energy conversion systems

    Edesess, M.; McConnell, R. D.


    In conventional system reliability calculations, each component may be in the Operable state or the Under Repair state. These calculations derive system unavailability, or the probability of the system's being down for repairs. By introducing a third component state between Operable and Under Repair - namely, Defective, But Defect Undetected - the methods developed in this report enable system safety projections to be made in addition to availability projections. Also provided is a mechanism for computing the effect of inspection schedules on both safety and availability. A Reliability and Safety Program (RASP) is detailed which performs these computations and also calculates costs for system inspections and repairs. RASP is applied to a simplified wind energy conversion system example.
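The effect of the third component state can be illustrated with a simple renewal-cycle sketch, in which the steady-state probabilities are proportional to the mean sojourn times in each state. The rates below are assumptions; the actual RASP tool additionally models inspection schedules and their costs.

```python
# Three-state component cycle: Operable -> Defective, But Defect
# Undetected (defect occurs) -> Under Repair (defect found at
# inspection) -> Operable (repair completed).  Rates per hour, assumed.
lam_defect = 1.0 / 2000.0    # defect occurrence rate
mu_detect = 1.0 / 168.0      # detection rate (roughly weekly inspection)
mu_repair = 1.0 / 24.0       # repair completion rate

# Steady-state probabilities of a single cycle are proportional to the
# mean sojourn times in each state.
t_o, t_d, t_r = 1.0 / lam_defect, 1.0 / mu_detect, 1.0 / mu_repair
total = t_o + t_d + t_r
p_operable, p_defective, p_repair = t_o / total, t_d / total, t_r / total

availability = p_operable                  # truly functional fraction
apparent_avail = p_operable + p_defective  # "looks up" but may be unsafe
print(f"availability = {availability:.4f}, "
      f"apparently up (incl. undetected defect) = {apparent_avail:.4f}")
```

The gap between the two printed numbers is the safety-relevant quantity: more frequent inspection (a larger detection rate) shrinks the undetected-defect window at the cost of more inspection downtime, which is the trade-off such a program evaluates.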

  8. Computer algebra in systems biology

    Laubenbacher, Reinhard


    Systems biology focuses on the study of entire biological systems rather than on their individual components. With the emergence of high-throughput data generation technologies for molecular biology and the development of advanced mathematical modeling techniques, this field promises to provide important new insights. At the same time, with the availability of increasingly powerful computers, computer algebra has developed into a useful tool for many applications. This article illustrates the use of computer algebra in systems biology by way of a well-known gene regulatory network, the Lac Operon in the bacterium E. coli.
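As a flavor of the approach, here is a toy Boolean sketch of lac operon regulation (a Boolean network is a polynomial dynamical system over GF(2), the setting where computer algebra applies). The update rules are a simplification invented for illustration, not the exact model analyzed in the article.

```python
from itertools import product

# State = (M, B, A): lac mRNA, beta-galactosidase, allolactose.
# Inputs: external lactose Le and glucose Ge.
def step(state, Le, Ge):
    M, B, A = state
    M_next = A and not Ge          # transcription induced by allolactose,
                                   # repressed by glucose
    B_next = M                     # enzyme translated from mRNA
    A_next = Le and (B or not Ge)  # inducer requires lactose uptake
    return (M_next, B_next, A_next)

def fixed_points(Le, Ge):
    """Steady states: computer algebra finds these by solving f(x) = x
    over GF(2); here we simply enumerate the 8 states."""
    return [s for s in product([False, True], repeat=3) if step(s, Le, Ge) == s]

# With lactose and no glucose the operon settles ON; with glucose, OFF.
print("lactose, no glucose:", fixed_points(Le=True, Ge=False))
print("glucose present:   ", fixed_points(Le=True, Ge=True))
```

Exhaustive enumeration works for three genes; the point of the algebraic (Gröbner-basis) machinery is that solving f(x) = x symbolically scales to networks far too large to enumerate.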

  9. A Scalable and Reliable Message Transport Service for the ATLAS Trigger and Data Acquisition System

    Kazarov, A; The ATLAS collaboration; Kolos, S; Lehmann Miotto, G; Soloviev, I


    The ATLAS Trigger and Data Acquisition (TDAQ) system is a large distributed computing system composed of several thousand interconnected computers and tens of thousands of applications. During a run, TDAQ applications produce many control and information messages at variable rates, addressed to TDAQ operators or to other applications. Reliable, fast and accurate delivery of these messages is important for the functioning of the whole TDAQ system. The Message Transport Service (MTS) provides facilities for the reliable transport, filtering and routing of messages, based on a publish-subscribe-notify communication pattern with content-based message filtering. During the ongoing LHC shutdown, the MTS was re-implemented, taking into account important requirements such as reliability, scalability and performance, the handling of slow subscribers, and simplicity of design and implementation. MTS uses CORBA middleware, a common layer of the TDAQ infrastructure, and provides sending/subscribing APIs i...

  10. Selected Methods For Increases Reliability The Of Electronic Systems Security

    Paś Jacek


    Full Text Available The article presents issues related to different methods of increasing the reliability of electronic security systems (ESS), for example a fire alarm system (SSP). Reliability of the SSP, in the descriptive sense, is its capacity to perform its preset function (e.g. protection of a fire airport, a port, a logistics base, etc.) at a certain time and under certain conditions, e.g. environmental, despite possible non-compliance of a specific subset of elements of this system. An analysis of the available literature on ESS-SSP shows no studies on methods of increasing reliability (several works treat similar topics, but with respect to burglary and robbery (intrusion) systems). The analysis is based on the set of all paths in the system that determine the suitability of the SSP for the fire-event scenario mentioned (devices critical to security).

  11. Reliability analysis of two unit parallel repairable industrial system

    Mohit Kumar Kakkar


    Full Text Available The aim of this work is to present a reliability and profit analysis of a two-dissimilar-unit parallel system, under the assumptions that the operative unit cannot fail after post-repair inspection and replacement and that there is only one repair facility. Failure and repair times of each unit are assumed to be uncorrelated. Using the regenerative point technique, various reliability characteristics are obtained which are useful to system designers and industrial managers. The graphical behaviour of the mean time to system failure (MTSF) and the profit function has also been studied. In this paper, some important measures of the reliability characteristics of a two-non-identical-unit standby system model with repair, inspection and post-repair are obtained using the regenerative point technique.

  12. Discrete event simulation versus conventional system reliability analysis approaches

    Kozine, Igor


    Discrete Event Simulation (DES) environments are rapidly developing and appear to be promising tools for building reliability and risk analysis models of safety-critical systems and human operators. If properly developed, they are an alternative to the conventional human reliability analysis models...... and systems analysis methods such as fault and event trees and Bayesian networks. As one part, the paper describes briefly the author’s experience in applying DES models to the analysis of safety-critical systems in different domains. The other part of the paper is devoted to comparing conventional approaches...

  13. Embedded mechatronic systems 1 analysis of failures, predictive reliability

    El Hami, Abdelkhalak


    In operation, embedded mechatronic systems are stressed by loads from different causes: climate (temperature, humidity), vibration, electrical and electromagnetic. The failure mechanisms that these stresses induce in components should be identified and modeled for better control. AUDACE is a collaborative project of the cluster Mov'eo that addresses issues specific to the reliability of embedded mechatronic systems. AUDACE means analyzing the causes of failure of components of onboard mechatronic systems. The goal of the project is to optimize the design of mechatronic devices for reliability. The projec

  14. Students "Hacking" School Computer Systems

    Stover, Del


    This article deals with students hacking school computer systems. School districts are getting tough with students "hacking" into school computers to change grades, poke through files, or just pit their high-tech skills against district security. Dozens of students have been prosecuted recently under state laws on identity theft and unauthorized…



    J. Gogoi


    Full Text Available This paper deals with the stress vs. strength problem incorporating multi-component systems, viz. standby redundancy. The models developed have been illustrated assuming that all the components in the system, for both stress and strength, are independent and follow different probability distributions, viz. Exponential, Gamma and Lindley. Four different conditions for stress and strength have been considered for this investigation. Under these assumptions the reliabilities of the system have been obtained with the help of the particular forms of density functions of the n-standby system when all stress-strengths are random variables. The expressions for the marginal reliabilities R(1), R(2), R(3), etc. have been derived based on the stress-strength models. The corresponding system reliabilities Rn have then been computed numerically and presented in tabular form for different stress-strength distributions with different values of their parameters. Here we consider n = 3 for estimating the system reliability R3.

  17. Reliability assessment for components of large scale photovoltaic systems

    Ahadi, Amir; Ghadimi, Noradin; Mirabbasi, Davar


    Photovoltaic (PV) systems have significantly shifted from independent power generation systems to large-scale grid-connected generation systems in recent years. The power output of PV systems is affected by the reliability of the various components in the system. This study proposes an analytical approach to evaluate the reliability of large-scale, grid-connected PV systems. The fault tree method with an exponential probability distribution function is used to analyze the components of large-scale PV systems. The system is considered in its various sequential and parallel fault combinations in order to find all realistic ways in which the top or undesired events can occur. Additionally, the method can identify areas on which planned maintenance should focus. By monitoring the critical components of a PV system, it is possible not only to improve the reliability of the system, but also to optimize the maintenance costs; the latter is achieved by informing the operators about the status of the system's components. This approach can be used to ensure secure operation of the system through its flexibility in monitoring system applications. The implementation demonstrates that the proposed method is effective and efficient and can conveniently incorporate further system maintenance plans and diagnostic strategies.
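The gate arithmetic of such a fault tree with exponential component lifetimes can be sketched as follows. The tree shape (inverter OR both parallel strings) and the failure rates are assumptions for illustration, not from the study.

```python
import math

# Assumed top event "no power output": inverter fails OR both parallel
# PV strings fail.  Component failure rates per hour, assumed values.
lam = {"inverter": 4e-6, "string_a": 1e-6, "string_b": 1e-6}
t = 8760.0  # one year of operation, in hours

def unrel(name):
    """Component unreliability F(t) = 1 - exp(-lambda * t)."""
    return 1.0 - math.exp(-lam[name] * t)

# AND gate: product of input probabilities (independence assumed).
strings_both_fail = unrel("string_a") * unrel("string_b")
# OR gate: 1 minus the product of the complements.
top = 1.0 - (1.0 - unrel("inverter")) * (1.0 - strings_both_fail)

print(f"P(top event within 1 year) = {top:.4e}")
```

The decomposition also shows where maintenance effort pays off: here the single inverter dominates the top-event probability, since the redundant strings enter only through an AND gate.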

  18. Reliable multicast for the Grid: a case study in experimental computer science.

    Nekovee, Maziar; Barcellos, Marinho P; Daw, Michael


    In its simplest form, multicast communication is the process of sending data packets from a source to multiple destinations in the same logical multicast group. IP multicast allows the efficient transport of data through wide-area networks, and its potentially great value for the Grid has been highlighted recently by a number of research groups. In this paper, we focus on the use of IP multicast in Grid applications, which require high-throughput reliable multicast. These include Grid-enabled computational steering and collaborative visualization applications, and wide-area distributed computing. We describe the results of our extensive evaluation studies of state-of-the-art reliable-multicast protocols, which were performed on the UK's high-speed academic networks. Based on these studies, we examine the ability of current reliable multicast technology to meet the Grid's requirements and discuss future directions.

  19. The Validity and Reliability Studies of the Computer Anxiety Scale on Educational Administrators (CAS-EA)

    Agaoglu, Esmahan; Ceyhan, Esra; Ceyhan, Aykut; Simsek, Yucel


    This study aims at investigating the validity and reliability studies of the "Computer Anxiety Scale" (Ceyhan & Gurcan Namlu, 2000) on educational administrators. The data gathered from 143 educational administrators of state schools located in Eskisehir show that the scale consists of 2 factors. The first of these factors, affective anxiety…

  20. Recent Advances in System Reliability Signatures, Multi-state Systems and Statistical Inference

    Frenkel, Ilia


    Recent Advances in System Reliability discusses developments in modern reliability theory such as signatures, multi-state systems and statistical inference. It describes the latest achievements in these fields, and covers the application of these achievements to reliability engineering practice. The chapters cover a wide range of new theoretical subjects and have been written by leading experts in reliability theory and its applications.  The topics include: concepts and different definitions of signatures (D-spectra),  their  properties and applications  to  reliability of coherent systems and network-type structures; Lz-transform of Markov stochastic process and its application to multi-state system reliability analysis; methods for cost-reliability and cost-availability analysis of multi-state systems; optimal replacement and protection strategy; and statistical inference. Recent Advances in System Reliability presents many examples to illustrate the theoretical results. Real world multi-state systems...

  1. Windfarm generation assessment for reliability analysis of power systems

    Negra, N.B.; Holmstrøm, O.; Bak-Jensen, B.;


    Due to the fast development of wind generation in the past ten years, increasing interest has been paid to techniques for assessing different aspects of power systems with a large amount of installed wind generation. One of these aspects concerns power system reliability. Windfarm modelling plays...

  2. Improving Reliability and Durability of Efficient and Clean Energy Systems

    Singh, Prabhakar [Univ. of Connecticut, Storrs, CT (United States)


    The overall objective of the research program was to develop an in-depth understanding of the degradation processes in advanced electrochemical energy conversion systems. A further objective of the research program was to transfer the technology to participating industries for implementation in the manufacturing of cost-effective and reliable integrated systems.

  3. Reliability and validity of emergency department triage systems

    van der Wulp, I.


    The reliability and validity of triage systems are important because they can affect patient safety. In this thesis, these aspects of two emergency department (ED) triage systems were studied, as well as methodological aspects of these types of studies. The consistency, reproducibility, and criterion vali

  4. On the reliability of a renewable multiple cold standby system

    Vanderperre E. J.


    Full Text Available We present a general reliability analysis of a renewable multiple cold standby system attended by a single repairman. Our analysis is based on a refined methodology of queuing theory. The particular case of deterministic failures provides an explicit exact result for the survival function of the duplex system.

  5. Operational reliability evaluation of restructured power systems with wind power penetration utilizing reliability network equivalent and time-sequential simulation approaches

    Ding, Yi; Cheng, Lin; Zhang, Yonghong


    ...... with high wind power penetration. The proposed technique is based on the combination of the reliability network equivalent and time-sequential simulation approaches. The operational reliability network equivalents are developed to represent reliability models of wind farms, conventional generation and reserve providers, fast reserve providers and the transmission network in restructured power systems. A contingency management schema for real-time operation considering its coupling with the day-ahead market is proposed. The time-sequential Monte Carlo simulation is used to model the chronological characteristics of the corresponding reliability network equivalents. A simplified method is also developed in the simulation procedures for improving the computational efficiency. The proposed technique can be used to evaluate customers’ reliabilities considering high penetration of wind power during the power......

  6. Computation of Structural System Reliability of High-Rise Pier and Long-Span Bridge Based on Support Vector Machine

    张士福; 康海贵; 郑元勋; 李玉刚


    To cope with the problems that, in solving the structural system reliability of high-rise pier, long-span bridges, the failure modes are complicated and the limit state equations cannot be expressed explicitly, a method for computing the system reliability based on the support vector machine (SVM) classification technique is proposed. The proposed method generates the sample bank using the Latin hypercube sampling method and constructs a precise SVM classifier function (rather than a response surface function of the performance function itself) by repeated screening; the numerical simulation and computation of the system failure probability are then carried out using the Monte Carlo method. Taking the Fengshi River Bridge on the Jiyuan-Shaoyuan Expressway as an example, the practicability of the method for computing the system reliability of high-rise pier, long-span bridges is verified, and the results of the computation show that the system reliability of the bridge meets the design requirements.
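The sampling side of the approach can be sketched as follows: a Latin hypercube design for the training set, then a Monte Carlo loop for the failure probability. In this sketch an explicit limit state g(x) stands in for the trained SVM classifier, and all numerical values are assumptions.

```python
import math
import random
from statistics import NormalDist

random.seed(1)
inv = NormalDist().inv_cdf  # inverse standard normal CDF

def latin_hypercube(n, dims):
    """n stratified samples in [0, 1)^dims, one per stratum per dimension."""
    cols = []
    for _ in range(dims):
        col = [(i + random.random()) / n for i in range(n)]
        random.shuffle(col)
        cols.append(col)
    return list(zip(*cols))

def g(u1, u2):
    """Assumed limit state in standard-normal space; failure when g < 0."""
    return 3.0 - (inv(u1) + inv(u2)) / math.sqrt(2.0)

# 1) LHS training set with labels, as would be fed to the SVM classifier
#    builder (the classifier training itself is omitted here).
train = latin_hypercube(100, 2)
labels = [g(u1, u2) < 0 for u1, u2 in train]

# 2) Plain Monte Carlo for Pf, calling g directly in this sketch (the
#    paper would call the cheap SVM surrogate instead).
N = 200_000
fails = sum(g(random.random(), random.random()) < 0 for _ in range(N))
pf = fails / N
print(f"estimated Pf = {pf:.2e}  (exact for this g: Phi(-3) = 1.35e-3)")
```

The point of the SVM surrogate is step 2: once the classifier separates safe from failed samples, each Monte Carlo evaluation avoids a full finite-element run of the bridge model.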

  7. Reliability and maintainability analysis of electrical system of drum shearers

    SEYED Hadi Hoseinie; MOHAMMAD Ataei; REZA Khalokakaie; UDAY Kumar


    The reliability and maintainability of the electrical system of the drum shearer at the Parvade.l Coal Mine in central Iran were analyzed. The maintenance and failure data were collected during 19 months of shearer operation. According to trend and serial correlation tests, the data were independent and identically distributed (iid), and therefore statistical techniques were used for modeling. The data analysis shows that the time between failures (TBF) and time to repair (TTR) data obey the lognormal and three-parameter Weibull distributions, respectively. Reliability-based preventive maintenance time intervals for the electrical system of the drum shearer were calculated from the reliability plot. The reliability-based maintenance intervals for the 90%, 80%, 70% and 50% reliability levels are 9.91, 17.96, 27.56 and 56.1 h, respectively. The calculations also show that the time to repair (TTR) of this system varies in the range 0.17-4 h, with a mean time to repair (MTTR) of 1.002 h. There is an 80% chance that a repair of the electrical system of the shearer at the Parvade.l mine will be accomplished within 1.45 h.
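The reliability-based interval computation from a fitted three-parameter Weibull can be sketched as follows. The shape, scale, and location values below are assumed for illustration; they are not the mine's fitted parameters.

```python
import math

# 3-parameter Weibull TBF model: R(t) = exp(-((t - gamma)/eta)^beta)
# for t >= gamma.  The PM interval keeping reliability at level R is
# t_R = gamma + eta * (-ln R)^(1/beta).
beta_, eta, gamma = 1.3, 45.0, 2.0   # shape, scale (h), location (h) - assumed

def pm_interval(R):
    """Operating time after which reliability drops to level R."""
    return gamma + eta * (-math.log(R)) ** (1.0 / beta_)

for R in (0.9, 0.8, 0.7, 0.5):
    print(f"R = {R:.0%}: maintain every {pm_interval(R):.1f} h")
```

Lower target reliability levels yield longer intervals, matching the pattern of the study's 9.91 h (90%) through 56.1 h (50%) results.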

  8. Aviation Fuel System Reliability and Fail-Safety Analysis. Promising Alternative Ways for Improving the Fuel System Reliability

    I. S. Shumilov


    Full Text Available The paper deals with design requirements for an aviation fuel system (AFS): basic AFS design requirements, reliability, and design precautions to avoid AFS failure. It compares the reliability and fail-safety of the AFS and the aircraft hydraulic system (AHS), considers promising alternative ways to raise the reliability of fuel systems, and elaborates recommendations to improve the reliability of pipeline system components and pipeline systems in general, based on the selection of design solutions. It is extremely advisable to design the AFS and AHS in accordance with Aviation Regulations АП25 and the Accident Prevention Guidelines of ICAO (International Civil Aviation Organization), which will reduce the risk of emergency situations and in some cases even avoid heavy disasters. AFS and AHS designs should be based on uniform principles to ensure the highest reliability and safety. Currently, however, this principle is not sufficiently observed, and the AFS loses in reliability and fail-safety as compared with the AHS. For the examined failures (single failures and their combinations), the guidelines for ensuring AFS efficiency should be the same as those adopted in Regulations АП25 for the AHS. This will significantly increase the reliability and fail-safety of fuel systems and of aircraft flights in general, despite a slight increase in AFS mass. The proposed improvements, through the use of redundancy of fuel system components, will greatly raise the reliability of the fuel system of a passenger aircraft, which will then withstand up to 2 failures without serious consequences for the flight; its reliability and fail-safety design will be similar to those of the AHS, although the above improvement measures will lead to a slightly increased total mass of the fuel system. It is advisable to set a second pump on the engine in parallel with the first one. It will run in case the first one fails for some reason. The second pump, like the first pump, can be driven from the

  9. Robot computer problem solving system

    Becker, J. D.; Merriam, E. W.


    The conceptual, experimental, and practical aspects of the development of a robot computer problem solving system were investigated. The distinctive characteristics of the approach taken were formulated in relation to various studies of cognition and robotics. Vehicle and eye control systems were structured, and the information to be generated by the visual system was defined.

  10. Power Electronics and Reliability in Renewable Energy Systems

    Blaabjerg, Frede; Ma, Ke; Zhou, Dao


    Power electronics is needed in almost all kinds of renewable energy systems. It is used both for controlling the renewable source and for interfacing to the load, which can be grid-connected or working in stand-alone mode. More and more effort is put into making renewable energy systems...... better in terms of reliability in order to ensure a high availability of the power sources; in this context, knowledge of the mission profile of a certain application is crucial for the reliability evaluation/design of power electronics. In this paper an overview of the power electronic circuits behind...... the most common converter configurations for wind turbine and photovoltaic systems is given. Next, different aspects of improving system reliability are mapped. Further on, examples are given of how to control the chip temperature in different power electronic configurations as well as operation modes for wind power......

  11. A General Approach to Study the Reliability of Complex Systems

    G. M. Repici


    Full Text Available In recent years new complex systems have been developed in the automotive field to increase safety and comfort. These systems integrate hardware and software to guarantee the best results in vehicle handling and make products competitive on the market. However, the increase in technical detail and the utilization and integration of these complicated systems require a high level of dynamic control system reliability. In order to improve this fundamental characteristic, methods can be adopted from those used in the aeronautical field to deal with reliability, and these can be integrated into one simplified method for application in the automotive field. Firstly, as a case study, we decided to analyse the VDC (Vehicle Dynamics Control) system by defining a possible approach to reliability techniques. A VDC Fault Tree Analysis represents the first step in this activity: FTA enables us to recognize the critical components in all possible working conditions of a car, including cranking during 'key-on'-'key-off' phases, which is particularly critical for the electrical on-board system (because of voltage reduction). By associating FA (Functional Analysis) and FTA results with a good FFA (Functional Failure Analysis), it is possible to define the best architecture for the general system to achieve the aim of a high-reliability structure. The paper will show some preliminary results from the application of this methodology, taken from various typical handling conditions from well established test procedures for vehicles.

  12. Power system reliability memento; Memento de la surete du systeme electrique



    The reliability memento of the French power system (national power transmission grid) is an educational document whose purpose is to point out the role of each party as regards power system operating reliability. This memento was first published in 1999. Extensive changes have taken place since then. The new 2002 edition shows that system operating reliability is as important a subject as ever: 1 - foreword; 2 - system reliability: the basics; 3 - equipment measures taken in order to guarantee the reliability of the system; 4 - organisational and human measures taken to guarantee the reliability of the system; appendix 1 - system operation: basic concepts; appendix 2 - guiding principles governing the reliability of the power system; appendix 3 - international associations of transmission system operators; appendix 4 - description of major incidents.

  13. Windfarm Generation Assessment for Reliability Analysis of Power Systems

    Barberis Negra, Nicola; Bak-Jensen, Birgitte; Holmstrøm, O.


    Due to the fast development of wind generation in the past ten years, increasing interest has been paid to techniques for assessing different aspects of power systems with a large amount of installed wind generation. One of these aspects concerns power system reliability. Windfarm modelling plays...... in a reliability model and the generation of a windfarm is evaluated by means of sequential Monte Carlo simulation. Results are used to analyse how each of the mentioned factors influences the assessment, and why and when they should be included in the model....

  14. Windfarm Generation Assessment for Reliability Analysis of Power Systems

    Negra, Nicola Barberis; Holmstrøm, Ole; Bak-Jensen, Birgitte


    Due to the fast development of wind generation in the past ten years, increasing interest has been paid to techniques for assessing different aspects of power systems with a large amount of installed wind generation. One of these aspects concerns power system reliability. Windfarm modelling plays...... in a reliability model and the generation of a windfarm is evaluated by means of sequential Monte Carlo simulation. Results are used to analyse how each of the mentioned factors influences the assessment, and why and when they should be included in the model....


    A. Bessarab


    All over the world, the safety of vehicles in service is of major importance. For the motor vehicles of the Republic of Belarus this problem is also topical. Maintaining high reliability of the brake systems of vehicles in operation is one way of solving the problem of increasing traffic safety. The analysis of the reliability of the brake systems of MAZ buses was carried out based on the results of the state maintenance inspection in 2010 and the analysis of premature returns from routes of movement of buses M...

  16. Operating systems [of computers]

    Denning, P. J.; Brown, R. L.


    A computer operating system creates a hierarchy of levels of abstraction, so that at a given level all details concerning lower levels can be ignored. This hierarchical structure separates functions according to their complexity, characteristic time scale, and level of abstraction. The lowest levels include the system's hardware; concepts associated explicitly with the coordination of multiple tasks appear at intermediate levels, which conduct 'primitive processes'. The software semaphore is the mechanism controlling primitive processes that must be synchronized. At higher levels lie, in rising order, access to the secondary storage devices of a particular machine, a 'virtual memory' scheme for managing the main and secondary memories, communication between processes by way of a mechanism called a 'pipe', access to external input and output devices, and a hierarchy of directories cataloguing the hardware and software objects to which access must be controlled.

  17. Analysis of the Reliability of the "Alternator- Alternator Belt" System

    Ivan Mavrin


    Full Text Available Before starting and also during the exploitation of various systems, it is very important to know how the system and its parts will behave during operation regarding breakdowns, i.e. failures. It is possible to predict the service behaviour of a system by determining the functions of reliability, as well as the frequency and intensity of failures. The paper considers the theoretical basics of the functions of reliability, frequency and intensity of failures for two main approaches. One includes 6 equal intervals and the other 13 unequal intervals for the concrete case taken from practice. The reliability of the "alternator - alternator belt" system installed in buses has been analysed, according to the empirical data on failures. The empirical data on failures provide empirical functions of reliability and of frequency and intensity of failures, which are presented in tables and graphically. The first analysis, performed by dividing the mean time between failures into 6 equal time intervals, has given forms of the empirical functions of failure frequency and intensity that approximately correspond to the typical functions. By dividing the failure phase into 13 unequal intervals with two failures in each interval, these functions indicate explicit transitions from the early failure interval into the random failure interval, i.e. into the ageing interval. The functions thus obtained are more accurate and represent a better solution for the given case. In order to estimate the reliability of these systems with greater accuracy, a greater number of failures needs to be analysed.
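The interval-based empirical estimates described in this record can be sketched in Python; the TBF values and interval count below are hypothetical illustration data, not the bus failure data from the paper:

```python
def empirical_functions(tbf, n_intervals):
    """Bin time-between-failures (TBF) data into equal intervals and
    estimate, per interval, the failure frequency f(t), the failure
    intensity lambda(t), and the reliability R(t) at the interval end."""
    n = len(tbf)
    lo, hi = min(tbf), max(tbf)
    width = (hi - lo) / n_intervals
    results = []
    survivors = n  # units still operating at the start of each interval
    for i in range(n_intervals):
        a = lo + i * width
        b = hi if i == n_intervals - 1 else a + width
        # count failures in [a, b); the last interval also includes b == hi
        failures = sum(1 for t in tbf
                       if a <= t < b or (i == n_intervals - 1 and t == hi))
        f = failures / (n * width)                                # frequency
        lam = failures / (survivors * width) if survivors else 0.0  # intensity
        survivors -= failures
        r = survivors / n                                          # reliability
        results.append((a, b, f, lam, r))
    return results
```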

  18. Task analysis and computer aid development for human reliability analysis in nuclear power plants

    Yoon, W. C.; Kim, H.; Park, H. S.; Choi, H. H.; Moon, J. M.; Heo, J. Y.; Ham, D. H.; Lee, K. K.; Han, B. T. [Korea Advanced Institute of Science and Technology, Taejeon (Korea)


    The importance of human reliability analysis (HRA), which predicts the possibility of error occurrence in quantitative and qualitative manners, has gradually increased owing to the effects of human errors on system safety. HRA needs a task analysis as a prerequisite step, but extant task analysis techniques have the problem that the collection of information about the situation in which the human error occurs depends entirely on the HRA analyzers. This problem makes the results of the task analysis inconsistent and unreliable. To remedy this problem, KAERI developed the structural information analysis (SIA) method, which helps to analyze a task's structure and situations systematically. In this study, the SIA method was evaluated by HRA experts, and a prototype computerized supporting system named CASIA (Computer Aid for SIA) was developed for the purpose of supporting the performance of HRA using the SIA method. Additionally, through applying the SIA method to emergency operating procedures, we derived generic task types used in emergencies and accumulated the analysis results in the database of the CASIA. The CASIA is expected to help HRA analyzers perform the analysis more easily and consistently. If more analyses are performed and more data are accumulated in the CASIA's database, HRA analyzers can freely share and smoothly spread their analysis experiences, and thereby the quality of HRA analysis will be improved. 35 refs., 38 figs., 25 tabs. (Author)

  19. Reliability modeling of hydraulic system of drum shearer machine

    SEYED HADI Hoseinie; MOHAMMAD Ataie; REZA Khalookakaei; UDAY Kumar


    The hydraulic system plays an important role in supplying power and transmitting it to the other working parts of a coal shearer machine. In this paper, the reliability of the hydraulic system of a drum shearer was analyzed. A case study was done in the Tabas Coal Mine in Iran for failure data collection. The results of the statistical analysis show that the time between failures (TBF) data of this system followed the 3-parameter Weibull distribution. There is about a 54% chance that the hydraulic system of the drum shearer will not fail for the first 50 h of operation. The developed model shows that the reliability of the hydraulic system reduces to a zero value after approximately 1 650 hours of operation. The failure rate of this system decreases as time increases. Therefore, corrective maintenance (run-to-failure) was selected as the best maintenance strategy for it.
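The 3-parameter Weibull model named in this record can be sketched as follows; the shape, scale and location values used in the test are illustrative, not the fitted Tabas Coal Mine parameters (a shape parameter below 1 gives the decreasing failure rate the record describes):

```python
import math

def weibull_reliability(t, beta, eta, gamma):
    """Reliability for a 3-parameter Weibull TBF model:
    R(t) = exp(-((t - gamma) / eta) ** beta) for t > gamma, else 1.
    beta: shape, eta: scale, gamma: location (failure-free time)."""
    if t <= gamma:
        return 1.0
    return math.exp(-((t - gamma) / eta) ** beta)

def weibull_hazard(t, beta, eta, gamma):
    """Failure rate h(t) = (beta/eta) * ((t - gamma)/eta) ** (beta - 1);
    decreasing in t whenever beta < 1."""
    if t <= gamma:
        return 0.0
    return (beta / eta) * ((t - gamma) / eta) ** (beta - 1)
```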

  20. Interactive computer-enhanced remote viewing system

    Tourtellott, J.A.; Wagner, J.F. [Mechanical Technology Incorporated, Latham, NY (United States)


    Remediation activities such as decontamination and decommissioning (D&D) typically involve materials and activities hazardous to humans. Robots are an attractive way to conduct such remediation, but for efficiency they need a good three-dimensional (3-D) computer model of the task space where they are to function. This model can be created from engineering plans and architectural drawings and from empirical data gathered by various sensors at the site. The model is used to plan robotic tasks and verify that selected paths are clear of obstacles. This report describes the development of an Interactive Computer-Enhanced Remote Viewing System (ICERVS), a software system to provide a reliable geometric description of a robotic task space, and enable robotic remediation to be conducted more effectively and more economically.

  1. Computer System Design System-on-Chip

    Flynn, Michael J


    The next generation of computer system designers will be less concerned about details of processors and memories, and more concerned about the elements of a system tailored to particular applications. These designers will have a fundamental knowledge of processors and other elements in the system, but the success of their design will depend on the skills in making system-level tradeoffs that optimize the cost, performance and other attributes to meet application requirements. This book provides a new treatment of computer system design, particularly for System-on-Chip (SOC), which addresses th

  2. Reliability optimization of a redundant system with failure dependencies

    Yu Haiyang [Institute Charles Delaunay (ICD, FRE CNRS 2848), Troyes University of Technology, Rue Marie Curie, BP 2060, 10010 Troyes (France)]. E-mail:; Chu Chengbin [Institute Charles Delaunay (ICD, FRE CNRS 2848), Troyes University of Technology, Rue Marie Curie, BP 2060, 10010 Troyes (France); Management School, Hefei University of Technology, 193 Tunxi Road, Hefei (China); Chatelet, Eric [Institute Charles Delaunay (ICD, FRE CNRS 2848), Troyes University of Technology, Rue Marie Curie, BP 2060, 10010 Troyes (France); Yalaoui, Farouk [Institute Charles Delaunay (ICD, FRE CNRS 2848), Troyes University of Technology, Rue Marie Curie, BP 2060, 10010 Troyes (France)


    In a multi-component system, the failure of one component can reduce the system reliability in two aspects: loss of the reliability contribution of this failed component, and the reconfiguration of the system, e.g., the redistribution of the system loading. The system reconfiguration can be triggered by the component failures as well as by adding redundancies. Hence, dependency is essential for the design of a multi-component system. In this paper, we study the design of a redundant system with the consideration of a specific kind of failure dependency, i.e., the redundant dependency. The dependence function is introduced to quantify the redundant dependency. With the dependence function, the redundant dependencies are further classified as independence, weak, linear, and strong dependencies. In addition, this classification is useful in that it facilitates the optimization resolution of the system design. Finally, an example is presented to illustrate the concept of redundant dependency and its application in system design. This paper thus conveys the significance of failure dependencies in the reliability optimization of systems.

  3. A PC program to optimize system configuration for desired reliability at minimum cost

    Hills, Steven W.; Siahpush, Ali S.


    High reliability is desired in all engineered systems. One way to improve system reliability is to use redundant components. When redundant components are used, the problem becomes one of allocating them to achieve the best reliability without exceeding other design constraints such as cost, weight, or volume. Systems with few components can be optimized by simply examining every possible combination, but the number of combinations for most systems is prohibitive. A computerized iteration of the process is possible, but anything short of a supercomputer requires too much time to be practical. Many researchers have derived mathematical formulations for calculating the optimum configuration directly. However, most of the derivations are based on continuous functions, whereas the real system is composed of discrete entities. Therefore, these techniques are approximations of the true optimum solution. This paper describes a computer program that will determine the optimum configuration of a system with multiple redundancy of both standard and optional components. The algorithm is a pair-wise comparative progression technique, which can derive the true optimum by calculating only a small fraction of the total number of combinations. A designer can quickly analyze a system with this program on a personal computer.
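The allocation problem itself can be illustrated with a much simpler greedy heuristic (not the record's pair-wise comparative progression technique) for a series system with parallel redundancy at each stage; the component reliabilities, costs, and budget below are hypothetical:

```python
import math

def allocate_redundancy(r, c, budget):
    """Greedy sketch: repeatedly add one redundant component to the stage
    with the largest log-reliability gain per unit cost, until no further
    addition fits the budget.  Series system of parallel stages:
    R = prod_i (1 - (1 - r[i]) ** n[i])."""
    n = [1] * len(r)               # start with one component per stage
    cost = sum(c)                  # cost of the baseline configuration
    while True:
        best, best_gain = None, 0.0
        for i in range(len(r)):
            if cost + c[i] > budget:
                continue
            cur = 1 - (1 - r[i]) ** n[i]
            new = 1 - (1 - r[i]) ** (n[i] + 1)
            gain = (math.log(new) - math.log(cur)) / c[i]
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:           # no affordable improvement left
            break
        n[best] += 1
        cost += c[best]
    R = 1.0
    for ri, ni in zip(r, n):
        R *= 1 - (1 - ri) ** ni
    return n, R, cost
```

Unlike the paper's method, a greedy pass is not guaranteed to reach the true discrete optimum, but it shows the structure of the trade-off.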

  4. Effective Measurement of Reliability of Repairable USAF Systems


    Hansen presented a course, Concepts and Models for Repairable Systems Reliability, at the 2009 Centro de Investigacion en Mathematicas (CIMAT). The...defines MTBF in Technical Order 00-2-2, Maintenance Documentation, Mean Time Between Failure (Inherent). Inherent refers to a Type 1 failure or...The USAF uses maintenance data to document the system failures. There is no method within that data system to define specific failure modes

  5. Availability, reliability and downtime of systems with repairable components

    Kiureghian, Armen Der; Ditlevsen, Ove Dalager; Song, J.


    Closed-form expressions are derived for the steady-state availability, mean rate of failure, mean duration of downtime and lower bound reliability of a general system with randomly and independently failing repairable components. Component failures are assumed to be homogeneous Poisson events in ......, or reducing the mean duration of system downtime. Example applications to an electrical substation system demonstrate the use of the formulas developed in the paper....
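The kind of closed-form steady-state expressions this record refers to can be illustrated with the standard textbook formulas for independent repairable components (a sketch under exponential failure/repair assumptions, not the paper's derivation):

```python
def availability(mttf, mttr):
    """Steady-state availability of one repairable component:
    A = MTTF / (MTTF + MTTR)."""
    return mttf / (mttf + mttr)

def series_availability(components):
    """Series system is up only when every component is up."""
    A = 1.0
    for mttf, mttr in components:
        A *= availability(mttf, mttr)
    return A

def parallel_availability(components):
    """Parallel (redundant) system is down only when all are down."""
    U = 1.0
    for mttf, mttr in components:
        U *= 1 - availability(mttf, mttr)
    return 1 - U
```

For example, two components with MTTF = 99 h and MTTR = 1 h give a series availability of about 0.9801 but a parallel availability of about 0.9999, which is the motivation for the redundancy choices discussed in the paper.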

  6. Power system reliability enhancement by using PowerformerTM

    Rahmat-Allah Hooshmand


    Full Text Available The high-voltage generator PowerformerTM is a new generation of AC generator. The most significant advantages of the PowerformerTM are its direct connection to the high-voltage grid, higher availability, greater reactive power margin, short-term overloading capacity, and removal of the power transformer from the structure of the power plant. In this paper, the effect of installing these generators on power system reliability is investigated. The size of the effect depends on the type and location of the power plant, the location of the PowerformerTM, the size of the load and the network topology. For this purpose, in the 6-bus IEEE RBTS system, the conventional generators are replaced by the new PowerformerTM and then the reliability indices are evaluated. The simulation results show that reliability indices such as the expected duration of load curtailment (EDLC) and the expected energy not served (EENS) are improved.
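In their simplest textbook form, the indices named in this record reduce to sums over load-curtailment events; the sketch below uses hypothetical outage data, not the RBTS results:

```python
def eens(curtailments):
    """Expected Energy Not Served (MWh/yr) from a list of
    (curtailed_load_MW, duration_h, frequency_per_yr) tuples."""
    return sum(load * dur * freq for load, dur, freq in curtailments)

def edlc(curtailments):
    """Expected Duration of Load Curtailment (h/yr) from the same tuples."""
    return sum(dur * freq for _, dur, freq in curtailments)

# Hypothetical outage events: (MW curtailed, hours, occurrences per year)
events = [(10, 2, 3), (5, 4, 1)]
```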

  7. The Development of a Demonstration Passive System Reliability Assessment

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia


    In this paper, the details of the development of a demonstration problem to assess the reliability of a passive safety system are presented. An advanced small modular reactor (advSMR) design, which is a pool-type sodium fast reactor (SFR) coupled with a passive reactor cavity cooling system (RCCS) is described. The RELAP5-3D models of the advSMR and RCCS that will be used to simulate a long-term station blackout (SBO) accident scenario are presented. Proposed benchmarking techniques for both the reactor and the RCCS are discussed, which includes utilization of experimental results from the Natural convection Shutdown heat removal Test Facility (NSTF) at the Argonne National Laboratory. Details of how mechanistic methods, specifically the Reliability Method for Passive Systems (RMPS) approach, will be utilized to determine passive system reliability are presented. The results of this mechanistic analysis will ultimately be compared to results from dynamic methods in future work. This work is part of an ongoing project at Argonne to demonstrate methodologies for assessing passive system reliability.



    The optimum design method based on reliability is presented for stochastic structure systems (i.e., systems in which the sectional area, length, elastic modulus and strength of the structural members are random variables) under random loads. The sensitivity expressions of the system reliability index and the safety margins are presented for stochastic structure systems, and the optimum vector method is given. First, the expressions of the reliability index of the safety margins are deduced with the improved first-order second-moment and the stochastic finite element method, and then the expressions of the system failure probability are obtained by the probabilistic network evaluation technique (PNET) method. After derivation, the expressions of the sensitivity analysis for the system reliability are obtained. Moreover, the optimum design with the optimum vector algorithm is undertaken. In the optimum iterative procedure, the gradient step and the optimum vector step are adopted in the calculation. Finally, a numerical example is provided to illustrate that the method is computationally efficient, converges stably and fits application in engineering.

  9. Assuring Quality and Reliability in Complex Avionics Systems hardware & Software

    V. Haridas


    Full Text Available It is conventional wisdom in defence systems that electronic brains are where much of the present and future weapons system capability is developed. Electronic hardware advances, particularly in microprocessors, allow highly complex and sophisticated software to provide a high degree of system autonomy and customisation to the mission at hand. Since modern military systems are so dependent on the proper functioning of electronics, the quality and reliability of electronic hardware and software have a profound impact on defensive capability and readiness. At the hardware level, due to the advances in microelectronics, the functional capabilities of today's systems have increased. The advances in the hardware field have an impact on software also. Nowadays, it is possible to incorporate more and more system functions through software, rather than going for a pure hardware solution. On the other hand, the complexity of systems is increasing, the working energy levels of the systems are decreasing, and the areas of reliability and quality assurance are becoming wider and wider. This paper covers major failure modes in microelectronic devices. The various techniques used to improve component and system reliability are described. The recent trends in expanding the scope of traditional quality assurance techniques are also discussed, considering both hardware and software.

  10. Modelling Reliability-adaptive Multi-system Operation

    Uwe K. Rakowsky


    This contribution discusses the concept of Reliability-Adaptive Systems (RAS) for multi-system operation. A fleet of independently operating systems and a single maintenance unit are considered. The objective in this paper is to increase overall performance, or workload respectively, by avoiding delay due to busy maintenance units. This is achieved by concerted and coordinated derating of individual system performance, which increases reliability. Quantification is carried out by way of a convolution-based approach. The approach is tailored to fleets of ships, aeroplanes, spacecraft, and vehicles (trains, trams, buses, cars, trucks, etc.). Finally, the effectiveness of derating is validated using different criteria. The RAS concept makes sense if the average system output loss due to the lowered performance level (yielding longer time to failure) is smaller than the average loss due to waiting for maintenance in the non-adaptive case.

  11. NERF - A Computer Program for the Numerical Evaluation of Reliability Functions - Reliability Modelling, Numerical Methods and Program Documentation,


    ...designed to evaluate the reliability functions that result from the application of reliability analysis to the fatigue of aircraft structures, in particular...The application of reliability analysis to the fatigue

  12. Automated Energy Distribution and Reliability System Status Report

    Buche, D. L.; Perry, S.


    This report describes Northern Indiana Public Service Co. project efforts to develop an automated energy distribution and reliability system. The purpose of this project was to implement a database-driven GIS solution that would manage all of the company's gas, electric, and landbase objects.

  13. Reliability analysis of flood defence systems in the Netherlands

    Lassing, B.L.; Vrouwenvelder, A.C.W.M.; Waarts, P.H.


    In recent years an advanced program for reliability analysis of dike systems has been under development in the Netherlands. This paper describes the global data requirements for application and the set-up of the models in the Netherlands. The analysis generates an estimate of the probability of sys

  14. Importance Sampling Simulations of Markovian Reliability Systems using Cross Entropy

    Ridder, Ad


    This paper reports simulation experiments, applying the cross entropy method such as the importance sampling algorithm for efficient estimation of rare event probabilities in Markovian reliability systems. The method is compared to various failure biasing schemes that have been proved to give estimato
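A minimal importance-sampling example in the same spirit: a toy tail-probability estimator using an exponentially tilted sampling density, with the tilting rate chosen near 1/gamma in cross-entropy style. This is an illustration of the idea only, not the paper's Markovian scheme:

```python
import math
import random

def is_tail_prob(gamma, v, n, seed=1):
    """Importance-sampling estimate of p = P(X > gamma) for X ~ Exp(1).
    Samples are drawn from the heavier-tailed density Exp(v), v < 1, and
    reweighted by the likelihood ratio w(x) = exp(-(1 - v) * x) / v, so
    the rare event {X > gamma} is hit often while the estimate stays
    unbiased."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(v)          # draw from the tilted density
        if x > gamma:
            total += math.exp(-(1 - v) * x) / v
    return total / n
```

Crude Monte Carlo would need on the order of 1/p samples to see the event at all; with the tilt, a few thousand samples already give a low-variance estimate of p = exp(-gamma).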

  16. Reliability-Based Inspection Planning for Structural Systems

    Sørensen, John Dalsgaard


    A general model for reliability-based optimal inspection and repair strategies for structural systems is described. The total expected costs in the design lifetime is minimized with the number of inspections, the inspection times and efforts as decision variables. The equivalence of this model wi...

  17. Automated Energy Distribution and Reliability System (AEDR): Final Report

    Buche, D. L.


    This report describes Northern Indiana Public Service Co. project efforts to develop an automated energy distribution and reliability system. The purpose of this project was to implement a database-driven GIS solution that would manage all of the company's gas, electric, and landbase objects.

  18. Computational Intelligence for Engineering Systems

    Madureira, A; Vale, Zita


    "Computational Intelligence for Engineering Systems" provides an overview and original analysis of new developments and advances in several areas of computational intelligence. Computational Intelligence have become the road-map for engineers to develop and analyze novel techniques to solve problems in basic sciences (such as physics, chemistry and biology) and engineering, environmental, life and social sciences. The contributions are written by international experts, who provide up-to-date aspects of the topics discussed and present recent, original insights into their own experien

  19. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    Wan, Lipeng [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wang, Feiyi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Oral, H. Sarp [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cao, Qing [Univ. of Tennessee, Knoxville, TN (United States)


    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.
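The component-level metrics the report argues are insufficient can be illustrated by the classic closed-form MTTDL approximation for a single-parity disk array (a textbook sketch under exponential, independent-failure assumptions; the disk figures below are hypothetical):

```python
def mttdl_single_parity(n, mttf, mttr):
    """Classic approximation for an n-disk array tolerating one failure
    (RAID-5-like): MTTDL ~ MTTF**2 / (n * (n - 1) * MTTR).
    Data is lost when a second disk fails during the repair window of
    the first; assumes independent, exponentially distributed failures."""
    return mttf ** 2 / (n * (n - 1) * mttr)

# Hypothetical array: 10 disks, 1e6 h MTTF per disk, 10 h rebuild time.
mttdl = mttdl_single_parity(10, 1e6, 10)
```

It is precisely because such formulas ignore correlated failures, propagation, and repair-queue effects that the report turns to end-to-end simulation.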

  20. Social sensing building reliable systems on unreliable data

    Wang, Dong; Kaplan, Lance


    Increasingly, human beings are sensors engaging directly with the mobile Internet. Individuals can now share real-time experiences at an unprecedented scale. Social Sensing: Building Reliable Systems on Unreliable Data looks at recent advances in the emerging field of social sensing, emphasizing the key problem faced by application designers: how to extract reliable information from data collected from largely unknown and possibly unreliable sources. The book explains how a myriad of societal applications can be derived from this massive amount of data collected and shared by average individu

  1. Computers in Information Sciences: On-Line Systems.


  2. A Correlated Model for Evaluating Performance and Energy of Cloud System Given System Reliability

    Hongli Zhang


    Full Text Available The serious issue of energy consumption for high performance computing systems has attracted much attention. Performance and energy-saving have become important measures of a computing system. In the cloud computing environment, the systems usually allocate various resources (such as CPU, memory, storage, etc.) on multiple virtual machines (VMs) for executing tasks. Therefore, the problem of resource allocation for running VMs has a significant influence on both system performance and energy consumption. For different processor utilizations assigned to the VM, there exists a tradeoff between energy consumption and task completion time when a given task is executed by the VMs. Moreover, the hardware failure, software failure and restoration characteristics also have obvious influences on overall performance and energy. In this paper, a correlated model is built to analyze both performance and energy in the VM execution environment given the reliability restriction, and an optimization model is presented to derive the most effective solution of processor utilization for the VM. Then, the tradeoff between energy-saving and task completion time is studied and balanced when the VMs execute given tasks. Numerical examples are illustrated to build the performance-energy correlated model and evaluate the expected values of task completion time and consumed energy.

  3. Safety and reliability of Radio Frequency Identification Devices in Magnetic Resonance Imaging and Computed Tomography

    Fretz Christian


    Full Text Available Abstract Background Radio Frequency Identification (RFID) devices are becoming more and more essential for patient safety in hospitals. The purpose of this study was to determine patient safety, data reliability and signal loss for on-skin RFID devices worn during magnetic resonance imaging (MRI) and computed tomography (CT) scanning. Methods Sixty RFID tags of the type I-Code SLI, 13.56 MHz, ISO 18000-3.1 were tested: thirty of type 1, an RFID tag with a 76 × 45 mm aluminum-etched antenna, and thirty of type 2, a tag with a 31 × 14 mm copper-etched antenna. The signal loss, material movement and heat tests were performed in a 1.5 T and a 3 T MR system. For data integrity, the tags were additionally tested during CT scanning. Standardized function tests were performed with all transponders before and after all imaging studies. Results There was no memory loss or data alteration in the RFID tags after MRI and CT scanning. Concerning heating (a maximum of 3.6°C) and device movement (below 1 N/kg), no relevant influence was found. Concerning signal loss (artifacts of 2 - 4 mm), the interpretability of MR images was impaired when superficial structures such as skin, subcutaneous tissues or tendons were assessed. Conclusions Patients wearing RFID wristbands are safe in 1.5 T and 3 T MR scanners using normal operation mode for the RF field. The findings are specific to the RFID tags that underwent testing.

  4. Reliability of emergency ac power systems at nuclear power plants

    Battle, R E; Campbell, D J


    Reliability of emergency onsite ac power systems at nuclear power plants has been questioned within the Nuclear Regulatory Commission (NRC) because of the number of diesel generator failures reported by nuclear plant licensees and the reactor core damage that could result from diesel failure during an emergency. This report contains the results of a reliability analysis of the onsite ac power system, and it uses the results of a separate analysis of offsite power systems to calculate the expected frequency of station blackout. Included is a design and operating experience review. Eighteen plants representative of typical onsite ac power systems and ten generic designs were selected to be modeled by fault trees. Operating experience data were collected from the NRC files and from nuclear plant licensee responses to a questionnaire sent out for this project.
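Fault-tree quantification of the kind used in this report can be sketched with independent-event gate formulas; the probabilities below are hypothetical, and common-cause failures (important for real diesel generator pairs) are deliberately ignored in this sketch:

```python
def or_gate(probs):
    """Top-event probability for an OR gate over independent basic events:
    1 - product(1 - p_i)."""
    p = 1.0
    for q in probs:
        p *= 1 - q
    return 1 - p

def and_gate(probs):
    """Top-event probability for an AND gate over independent basic events:
    product(p_i)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

# Station blackout sketch: loss of offsite power AND failure of both
# diesels, where each diesel fails to start OR fails to run.
# All numbers are hypothetical per-demand probabilities.
diesel = or_gate([0.01, 0.02])               # fail-to-start OR fail-to-run
blackout = and_gate([0.05, diesel, diesel])  # offsite loss AND two diesels
```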

  5. Power Aware Reliable Virtual Machine Coordinator Election Algorithm in Service Oriented Systems



    Full Text Available Service-oriented systems such as cloud computing are spreading widely, even into people's daily lives, owing to their substantial advantages for enterprises and clients. However, these computing paradigms face challenges in many aspects, such as power usage, availability, reliability and especially security. Hence the existence of a central controller is crucial for coordinating the Virtual Machines (VMs) placed on physical resources. In this paper an algorithm is proposed to elect this controller among the VMs; the algorithm is able to tolerate multiple faults in the system and reduces power usage as well. Moreover, it exchanges dramatically fewer messages than other relevant algorithms proposed in the literature.
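The record does not reproduce the proposed election algorithm itself. As illustrative background only, coordinator election among VMs can be sketched with the classical highest-ID-alive (bully-style) rule; all identifiers here are hypothetical:

```python
# Minimal coordinator election sketch: the coordinator is the highest-ID
# VM that is still alive. This is the classical bully-style rule, shown
# only as background; the paper's power-aware algorithm additionally
# weighs energy usage and exchanges far fewer messages.

def elect_coordinator(vm_ids, alive):
    """Return the highest-ID live VM, or None if none are alive."""
    live = [v for v in vm_ids if alive.get(v, False)]
    return max(live) if live else None

vms = [1, 2, 3, 4, 5]
status = {1: True, 2: True, 3: False, 4: True, 5: False}  # VMs 3 and 5 failed
print(elect_coordinator(vms, status))  # -> 4
```

The paper's contribution lies in tolerating multiple faults while reducing message and power overhead relative to such a baseline.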

  6. Reliability Assessment of IGBT Modules Modeled as Systems with Correlated Components

    Kostandyan, Erik; Sørensen, John Dalsgaard


    configuration. The estimated system reliability by the proposed method is a conservative estimate. Application of the suggested method could be extended to the reliability estimation of systems composed of welding joints, bolts, bearings, etc. The reliability model incorporates the correlation between...


    G. Sankaraiah


    Full Text Available

    ENGLISH ABSTRACT: The reliability of a system is generally treated as a function of cost; but in many real-life situations reliability will depend on a variety of factors. It is therefore interesting to probe the hidden impact of constraints other than cost – such as weight, volume, and space. This paper attempts to study the impact of multiple constraints on system reliability. For the purposes of analysis, an integrated redundant reliability system is considered, modelled, and solved by applying a Lagrangian multiplier, which gives a real-valued solution for the number of components and for the reliability at each stage and of the system. The problem is further studied using a heuristic algorithm and an integer programming method, and is validated by sensitivity analysis to present an integer solution.

    AFRIKAANSE OPSOMMING: Die betroubaarheid van ‘n sisteem word normaalweg as ‘n funksie van koste beskou, alhoewel dit in baie gevalle afhang van ‘n verskeidenheid faktore. Dit is dus interessant om die verskuilde impak van randvoorwaardes soos massa, volume en ruimte te ondersoek. Hierdie artikel poog om die impak van meervoudige randvoorwaardes op sisteem-betroubaarheid te bestudeer. Vir die ontleding, word ‘n geïntegreerde betroubaarheid-sisteem met oortolligheid beskou, gemodelleer en opgelos aan die hand van ‘n Lagrange-vermenigvuldiger. Die problem word verder bestudeer deur gebruik te maak van ‘n heuristiese algoritme en heeltalprogrammering asook gevalideer by wyse van ‘n sensitiwiteitsanalise sodat ‘n heeltaloplossing voorgehou kan word.
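The allocation problem described in the abstract can be illustrated on a toy example. The paper solves it with a Lagrangian multiplier, a heuristic algorithm, and integer programming; the brute-force integer search below (all stage reliabilities, costs, weights and volumes invented) merely shows what a multi-constraint redundancy allocation computes:

```python
from itertools import product

# Illustrative 3-stage series system: stage j has component reliability r[j]
# and per-component cost, weight and volume. A brute-force integer search
# (not the paper's Lagrangian/heuristic methods) finds the redundancy levels
# n[j] maximizing system reliability under all three constraints.
r      = [0.80, 0.85, 0.90]
cost   = [4.0, 5.0, 6.0];  C_max = 45.0
weight = [3.0, 4.0, 2.0];  W_max = 30.0
volume = [2.0, 3.0, 4.0];  V_max = 28.0

def sys_reliability(n):
    R = 1.0
    for rj, nj in zip(r, n):
        R *= 1.0 - (1.0 - rj) ** nj   # parallel redundancy at each stage
    return R

best = None
for n in product(range(1, 6), repeat=3):
    if (sum(c * k for c, k in zip(cost, n)) <= C_max
            and sum(w * k for w, k in zip(weight, n)) <= W_max
            and sum(v * k for v, k in zip(volume, n)) <= V_max):
        cand = (sys_reliability(n), n)
        if best is None or cand > best:
            best = cand

print(best)  # (system reliability, (n1, n2, n3))
```

With these numbers the weight and volume constraints, not cost alone, shape the optimum, which is the abstract's point about hidden constraints.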

  8. Reliability assessment of power distribution systems using disjoint path-set algorithm

    Bourezg, Abdrabbi; Meglouli, H.


    Finding the reliability expression of different substation configurations can help design a distribution system with the best overall reliability. This paper presents a computerized, implemented algorithm based on the Disjoint Sum of Products (DSOP) approach. The algorithm was synthesized and applied for the first time to the determination of the reliability expression of a substation, in order to determine reliability indices and costs of different substation arrangements. It deals with the implementation and synthesis of a newly designed DSOP algorithm, implemented in C/C++, incorporating parallel problem-solving capability and overcoming the main disadvantage of Monte Carlo simulation, namely the lengthy computational time needed to achieve satisfactory statistical convergence of reliability index values. The major highlight of this research is that the running time of the DSOP solution generated for different substation arrangements using the proposed method is significantly lower than that of Monte Carlo simulation or of any other method used for the reliability evaluation of substations in the existing literature, such as meta-heuristic and soft-computing algorithms. The implementation makes it possible to simulate the RBD of different substation configurations in C/C++ by mapping their path-set Boolean expressions to the probabilistic domain, yielding a simplest sum of disjoint products that is in one-to-one correspondence with the reliability expression. This software tool is capable of handling and modeling large, repairable systems. Additionally, through its intuitive interface it can easily be used for industrial and commercial power systems. With the simple Boolean expression for a configuration's RBD as input, users can define a power system as an RBD and, through a fast and efficient built-in simulation engine, obtain the required reliability expressions and indices. Two case studies
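What a DSOP expression evaluates to can be checked on a toy system. The sketch below computes exact reliability from minimal path sets by exhaustive state enumeration (feasible only for small systems, unlike DSOP), using the classic five-component bridge network rather than the paper's substation cases:

```python
from itertools import product

# The DSOP algorithm rewrites the union of minimal path sets as a sum of
# disjoint products whose probabilities simply add up. For small systems the
# same reliability can be verified by exhaustive state enumeration, done
# here for the classic five-component bridge network.
paths = [{1, 4}, {2, 5}, {1, 3, 5}, {2, 3, 4}]  # minimal path sets
p = {c: 0.9 for c in range(1, 6)}               # component reliabilities

def system_reliability(paths, p):
    comps = sorted(p)
    total = 0.0
    for state in product([0, 1], repeat=len(comps)):
        up = {c for c, s in zip(comps, state) if s}
        if any(path <= up for path in paths):    # system works if any path is up
            prob = 1.0
            for c, s in zip(comps, state):
                prob *= p[c] if s else 1.0 - p[c]
            total += prob
    return total

print(round(system_reliability(paths, p), 5))  # -> 0.97848
```

The result matches the known closed form for the bridge, 2p^2 + 2p^3 - 5p^4 + 2p^5 at p = 0.9, which is exactly what a correct sum of disjoint products would also yield.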

  9. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.


    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  10. Improvement of level-1 PSA computer code package - Modeling and analysis for dynamic reliability of nuclear power plants

    Lee, Chang Hoon; Baek, Sang Yeup; Shin, In Sup; Moon, Shin Myung; Moon, Jae Phil; Koo, Hoon Young; Kim, Ju Shin [Seoul National University, Seoul (Korea, Republic of); Hong, Jung Sik [Seoul National Polytechnology University, Seoul (Korea, Republic of); Lim, Tae Jin [Soongsil University, Seoul (Korea, Republic of)


    The objective of this project is to develop a methodology for the dynamic reliability analysis of NPPs. The first year's research focused on developing a procedure for analyzing failure data of running components and a simulator for estimating the reliability of series-parallel structures. The second year's research concentrated on estimating the lifetime distribution and the PM effect of a component from its failure data in various cases, and the lifetime distribution of a system with a particular structure. Computer codes for performing these jobs were also developed. The objectives of the third year's research are to develop models for analyzing special failure types (CCFs, standby redundant structures) that were not considered in the first two years, and to complete a methodology of dynamic reliability analysis for nuclear power plants. The analysis of the failure data of components, and the related research supporting the simulator, must precede this in order to provide proper input to the simulator. This research is thus divided into three major parts. 1. Analysis of the time-dependent life distribution and the PM effect. 2. Development of a simulator for system reliability analysis. 3. Related research supporting the simulator: accelerated simulation, an analytic approach using PH-type distributions, and analysis of dynamic repair effects. 154 refs., 5 tabs., 87 figs. (author)

  11. Application of Support Vector Machine to Reliability Analysis of Engine Systems

    Zhang Xinfeng


    Full Text Available Reliability analysis plays a very important role in assessing the performance and making maintenance plans for engine systems. This research presents a comparative study of the predictive performance of support vector machines (SVM), least-squares support vector machines (LSSVM) and neural network time series models for forecasting failures and reliability in engine systems. Further, the reliability indexes of engine systems are computed by the Weibull probability paper method programmed in Matlab. The results show that the probability distribution of the forecasting outcomes is consistent with the distribution of the actual data, both following the Weibull distribution, and that the predictions by SVM and LSSVM provide accurate estimates of the characteristic life. SVM and LSSVM are therefore both viable choices for engine system reliability analysis. Moreover, the predictive precision of the LSSVM-based method is higher than that of SVM. For small samples the LSSVM prediction is preferable, because its computation cost is lower and its precision is more satisfactory.
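The Weibull probability paper step can be sketched as median rank regression: plot ln(-ln(1 - F)) against ln(t) and fit a line. The sample below is synthetic (shape 2, scale 100), not engine data, and the code is a minimal illustration rather than the authors' Matlab program:

```python
import math

# Weibull probability-paper fitting via median rank regression: the slope
# of the fitted line is the shape parameter beta and the intercept gives
# the scale parameter eta.
def weibull_fit(times):
    times = sorted(times)
    n = len(times)
    xs, ys = [], []
    for i, t in enumerate(times, start=1):
        F = (i - 0.3) / (n + 0.4)          # Bernard's median rank
        xs.append(math.log(t))
        ys.append(math.log(-math.log(1.0 - F)))
    mx, my = sum(xs) / n, sum(ys) / n
    beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)
    eta = math.exp(mx - my / beta)         # from intercept = -beta * ln(eta)
    return beta, eta

# Synthetic sample: invert the Weibull CDF at the median ranks themselves,
# so the regression should recover beta and eta almost exactly.
n, beta0, eta0 = 10, 2.0, 100.0
data = [eta0 * (-math.log(1 - (i - 0.3) / (n + 0.4))) ** (1 / beta0)
        for i in range(1, n + 1)]
print(weibull_fit(data))  # ~ (2.0, 100.0)
```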

  12. Reliability of Semiautomated Computational Methods for Estimating Tibiofemoral Contact Stress in the Multicenter Osteoarthritis Study

    Donald D. Anderson


    Full Text Available Recent findings suggest that contact stress is a potent predictor of subsequent symptomatic osteoarthritis development in the knee. However, much larger numbers of knees (likely on the order of hundreds, if not thousands) need to be reliably analyzed to achieve the statistical power necessary to clarify this relationship. This study assessed the reliability of new semiautomated computational methods for estimating contact stress in knees from large population-based cohorts. Ten knees of subjects from the Multicenter Osteoarthritis Study were included. Bone surfaces were manually segmented from sequential 1.0 Tesla magnetic resonance imaging slices by three individuals on two nonconsecutive days. Four individuals then registered the resulting bone surfaces to corresponding bone edges on weight-bearing radiographs, using a semiautomated algorithm. Discrete element analysis methods were used to estimate contact stress distributions for each knee. Segmentation and registration reliabilities (day-to-day and inter-rater) for peak and mean medial and lateral tibiofemoral contact stress were assessed with Shrout-Fleiss intraclass correlation coefficients (ICCs). The segmentation and registration steps of the modeling approach were found to have excellent day-to-day (ICC 0.93–0.99) and good inter-rater reliability (0.84–0.97). This approach for estimating compartment-specific tibiofemoral contact stress appears to be sufficiently reliable for use in large population-based cohorts.
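A Shrout-Fleiss intraclass correlation of the kind used in the study can be computed from scratch; the sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single measures) on an invented subjects-by-raters matrix:

```python
# Shrout-Fleiss ICC(2,1) computed from the two-way ANOVA mean squares:
# MSR (subjects), MSC (raters) and MSE (residual). The data are made up;
# the study used such ICCs to grade day-to-day and inter-rater reliability
# of contact stress estimates.
def icc_2_1(data):
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

scores = [[9, 10], [6, 7], [8, 8], [7, 9], [10, 10]]  # 5 knees, 2 raters
print(round(icc_2_1(scores), 3))
```

Note that ICC(2,1) penalizes a constant offset between raters (absolute agreement), unlike a plain correlation, which is why it suits inter-rater studies like this one.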

  13. Reliability study of complex physical systems using SysML

    David, Pierre, E-mail: pierre.david@ensi-bourges.f [Institut PRISME - ENSIB, 88 Boulevard Lahitolle, 18020 Bourges Cedex (France); Idasiak, Vincent, E-mail: vincent.idasiak@ensi-bourges.f [Institut PRISME - ENSIB, 88 Boulevard Lahitolle, 18020 Bourges Cedex (France); Kratz, Frederic, E-mail: frederic.kratz@ensi-bourges.f [Institut PRISME - ENSIB, 88 Boulevard Lahitolle, 18020 Bourges Cedex (France)


    The development of safety-critical systems becomes ever harder as the complexity of these systems grows continuously. Moreover, this kind of process involves the use of powerful design methods and precise reliability techniques that rely on dissimilar models and construction policies. In this article we propose a method to unify and enhance this process by linking the functional design phase, using SysML, with commonly used reliability techniques such as FMEA and with dysfunctional model construction in AltaRica Data Flow. We present how SysML models can be analyzed automatically in order to produce an FMEA, and expose a parallel between SysML models and AltaRica Data Flow ones. The approach is structured around a database of dysfunctional behaviors that supports the studies and is updated with the obtained results. We exemplify the approach by analyzing a tank level-control system.


    A. Bessarab


    Full Text Available Throughout the world, the safety of vehicles in service is of major importance, and this problem is also topical for the motor vehicles of the Republic of Belarus. Maintaining the high reliability of car brake systems in operation is one way to improve traffic safety. The reliability of the brake systems of MAZ buses is analyzed on the basis of the results of the state maintenance service in 2010 and of an analysis of premature returns from service routes of MAZ-103 and MAZ-104 buses at one of the motor transport enterprises of the city of Minsk. The principal causes of changes in the structural parameters of the pneumatic brake system of the buses, the brake mechanism and the ABS elements are considered.


    A.C. Rooney


    Full Text Available

    ENGLISH ABSTRACT: This paper proposes a reliability management process for the development of complex electromechanical systems. Specific emphasis is placed on the development of these systems in an environment of limited development resources, and where small production quantities are envisaged.
    The results of this research provide a management strategy for reliability engineering activities, within a systems engineering environment, where concurrent engineering techniques are used to reduce development cycles and costs.

    AFRIKAANSE OPSOMMING: Hierdie artikel stel 'n proses, vir die bestuur van die betroubaarheid gedurende die ontwikkeling van komplekse elektromeganiese stelsels voor. Die omgewing van beperkte ontwikkelingshulpbronne en klein produksie hoeveelhede word beklemtoon.
    Die resultate van hierdie navorsing stel 'n bestuurstrategie, vir betroubaarheidsbestuur in 'n stelselsingenieurswese omgewing waar gelyktydige ingenieurswese tegnieke gebruik word om die ontwikkelingsiklus en -kostes te beperk, voor.

  16. Computer control system for ⁶⁰Co industrial DR nondestructive testing system

    Chen Hai Jun


    The author presents the application of a ⁶⁰Co industrial DR nondestructive testing system, including the control of the step motor, electrical protection, and the computer monitor program. The computer control system has good performance, high reliability and low cost.

  17. Offshore compression system design for low cost and high reliability

    Castro, Carlos J. Rocha de O.; Carrijo Neto, Antonio Dias; Cordeiro, Alexandre Franca [Chemtech Engineering Services and Software Ltd., Rio de Janeiro, RJ (Brazil). Special Projects Div.]


    In offshore oil fields, the oil streams coming from the wells usually contain significant amounts of gas. This gas is separated at low pressure and has to be compressed to the export pipeline pressure, usually high, in order to reduce the required diameter of the pipelines. In the past these gases were flared, but nowadays there is increasing pressure to improve the energy efficiency of oil rigs and to use this gaseous fraction. The most expensive equipment in this kind of plant is the compression and power generation systems, the second being a strong function of the first because the compressors are the most power-consuming equipment. For this reason, the optimization of the compression system in terms of efficiency and cost is decisive for plant profit. The availability of the plant also has a strong influence on profit, especially in gas fields, where the products have a relatively low aggregate value compared to oil. The third design variable of the compression system is therefore reliability: the higher the reliability, the larger the plant production. The main way to improve the reliability of a compression system is to use multiple compression trains in parallel, in a 2x50% or 3x50% configuration with one train in stand-by. Such configurations are possible and have advantages and disadvantages, but their main side effect is increased cost. This is common offshore practice, but it does not always significantly improve plant availability, depending on the upstream process system. A series arrangement, together with a critical evaluation of the overall system, can in some cases provide a cheaper system with equal or better performance. This paper shows a case study of a procedure to evaluate a compression system design that improves reliability without an extreme cost increase, balancing the number of equipment items, the series or parallel arrangement, and the driver selection.
Two case studies will be
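The availability trade-off among such 2x50% and 3x50% arrangements reduces to a k-out-of-n calculation; the sketch below uses an assumed single-train availability of 0.95, not field data:

```python
from math import comb

# Availability of redundant compression trains: with n identical trains of
# availability a each, and k trains required to carry the load, system
# availability is the binomial k-out-of-n formula.
def k_of_n_availability(k, n, a):
    return sum(comb(n, j) * a**j * (1 - a)**(n - j) for j in range(k, n + 1))

a = 0.95  # assumed single-train availability (illustrative)
print(k_of_n_availability(2, 2, a))  # 2x50%: both trains needed
print(k_of_n_availability(2, 3, a))  # 3x50%: one train in stand-by
print(k_of_n_availability(1, 2, a))  # 2x100%: full spare train
```

The numbers make the abstract's point concrete: going from 2x50% (0.9025) to 3x50% (0.9928) buys most of the availability gain, at the cost of a third train.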

  18. Bringing the CMS distributed computing system into scalable operations

    Belforte, S; Fisk, I; Flix, J; Hernández, J M; Kress, T; Letts, J; Magini, N; Miccio, V; Sciabà, A


    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure an...

  19. Aging and computational systems biology.

    Mooney, Kathleen M; Morgan, Amy E; Mc Auley, Mark T


    Aging research is undergoing a paradigm shift, which has led to new and innovative methods of exploring this complex phenomenon. The systems biology approach endeavors to understand biological systems in a holistic manner, by taking account of intrinsic interactions, while also attempting to account for the impact of external inputs, such as diet. A key technique employed in systems biology is computational modeling, which involves mathematically describing and simulating the dynamics of biological systems. Although a large number of computational models have been developed in recent years, these models have focused on various discrete components of the aging process, and to date no model has succeeded in completely representing the full scope of aging. Combining existing models or developing new models may help to address this need and in so doing could help achieve an improved understanding of the intrinsic mechanisms which underpin aging.

  20. Computational Systems for Multidisciplinary Applications

    Soni, Bharat; Haupt, Tomasz; Koomullil, Roy; Luke, Edward; Thompson, David


    In this paper, we briefly describe our efforts to develop complex simulation systems. We focus first on four key infrastructure items: enterprise computational services, simulation synthesis, geometry modeling and mesh generation, and a fluid flow solver for arbitrary meshes. We conclude by presenting three diverse applications developed using these technologies.

  1. Design for Verification: Using Design Patterns to Build Reliable Systems

    Mehlitz, Peter C.; Penix, John; Koga, Dennis (Technical Monitor)


    Components so far have been mainly used in commercial software development to reduce time to market. While some effort has been spent on formal aspects of components, most of this was done in the context of programming language or operating system framework integration. As a consequence, increased reliability of composed systems is mainly regarded as a side effect of a more rigid testing of pre-fabricated components. In contrast to this, Design for Verification (D4V) puts the focus on component specific property guarantees, which are used to design systems with high reliability requirements. D4V components are domain specific design pattern instances with well-defined property guarantees and usage rules, which are suitable for automatic verification. The guaranteed properties are explicitly used to select components according to key system requirements. The D4V hypothesis is that the same general architecture and design principles leading to good modularity, extensibility and complexity/functionality ratio can be adapted to overcome some of the limitations of conventional reliability assurance measures, such as too large a state space or too many execution paths.

  2. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization.

    Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan


    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.

  3. Reliability Analysis of Penetration Systems Using Nondeterministic Methods



    Device penetration into media such as metal and soil is an application of some engineering interest. Often, these devices contain internal components and it is of paramount importance that all significant components survive the severe environment that accompanies the penetration event. In addition, the system must be robust to perturbations in its operating environment, some of which exhibit behavior which can only be quantified to within some level of uncertainty. In the analysis discussed herein, methods to address the reliability of internal components for a specific application system are discussed. The shock response spectrum (SRS) is utilized in conjunction with the Advanced Mean Value (AMV) and Response Surface methods to make probabilistic statements regarding the predicted reliability of internal components. Monte Carlo simulation methods are also explored.
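The probabilistic statements described in the abstract can be illustrated with a toy stress-strength Monte Carlo run; all distribution parameters below are invented, and the normal-normal case is chosen because it has a closed-form answer to check against:

```python
import random
from statistics import NormalDist

# Toy stress-strength reliability problem: a component survives if its
# capacity exceeds the shock load. With both normally distributed, the
# exact answer is known, making the Monte Carlo estimate easy to verify.
random.seed(42)
mu_cap, sd_cap = 1000.0, 80.0   # component capacity (illustrative units)
mu_load, sd_load = 700.0, 90.0  # penetration shock load (illustrative)

n = 200_000
survived = sum(
    random.gauss(mu_cap, sd_cap) > random.gauss(mu_load, sd_load)
    for _ in range(n)
)
mc_rel = survived / n

# Closed form: P(C > L) = Phi((mu_C - mu_L) / sqrt(sd_C^2 + sd_L^2))
exact = NormalDist().cdf((mu_cap - mu_load) / (sd_cap**2 + sd_load**2) ** 0.5)
print(mc_rel, exact)
```

Methods like AMV and response surfaces, cited in the abstract, aim at such probabilities with far fewer function evaluations than brute-force sampling.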


    Grigorash O. V.


    Full Text Available The present level of technical development requires the creation of highly effective and, in particular, reliable uninterruptible power supply systems. We review modern requirements and design features of uninterruptible power supply systems, which should be built on a modular principle. With a modular approach, the synthesis problem addresses three issues: developing the structure of the system subject to the consumers' requirements on power quality and allowable outage time; determining the required level of redundancy of the major functional units (blocks, elements) to ensure the required reliability of the system; and ensuring the most effective interconnection of modules, including electromagnetic compatibility and the rational use of the system during normal and emergency operation. We propose new structural solutions for the main functional units of uninterruptible power supply systems in modular design. To reduce electromagnetic interference and improve the efficiency of uninterruptible power supply systems, transformers with a rotating magnetic field should be used in the design of the static converters. In addition, promising directions are the use of renewable energy sources and the use of direct frequency converters as stabilizers of the voltage and frequency of the current

  5. An immunological basis for high-reliability systems control.

    Somayaji, Anil B. (Carleton University, Ottawa, ON, Canada); Amai, Wendy A.; Walther, Eleanor A.


    This report describes the successful extension of artificial immune systems from the domain of computer security to the domain of real-time control systems for robotic vehicles. A biologically inspired computer immune system was added to the control system of two different mobile robots. As an additional layer in a multi-layered approach, the immune system is complementary to traditional error detection and error handling techniques. This can be thought of as biologically inspired defense in depth. We demonstrated that an immune system can be added with very little application developer effort, resulting in little to no performance impact. The methods described here are extensible to any system that processes a sequence of data through a software interface.

  6. Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial

    Kevin A. Hallgren


    Full Text Available Many research designs require the assessment of inter-rater reliability (IRR) to demonstrate consistency among observational ratings provided by multiple coders. However, many studies use incorrect statistical procedures, fail to fully report the information necessary to interpret their results, or do not address how IRR affects the power of their subsequent analyses for hypothesis testing. This paper provides an overview of methodological issues related to the assessment of IRR with a focus on study design, selection of appropriate statistics, and the computation, interpretation, and reporting of some commonly used IRR statistics. Computational examples include SPSS and R syntax for computing Cohen's kappa and intra-class correlations to assess IRR.
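Cohen's kappa, one of the statistics the tutorial covers (there with SPSS and R syntax), can be sketched in a few lines; the ratings below are invented for illustration:

```python
from collections import Counter

# Cohen's kappa for two raters' nominal codes: chance-corrected agreement,
# kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
# p_e is the agreement expected by chance from the marginal distributions.
def cohens_kappa(r1, r2):
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n       # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] / n * c2[k] / n for k in c1)        # chance agreement
    return (po - pe) / (1 - pe)

rater1 = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
rater2 = ["yes", "no",  "no", "no", "yes", "no", "yes", "yes"]
print(cohens_kappa(rater1, rater2))  # -> 0.5
```

Here 6 of 8 codes agree (p_o = 0.75) but both raters split 50/50 (p_e = 0.5), so kappa = 0.5, illustrating why raw percent agreement overstates reliability.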

  7. The modernisation of the hoisting machine control systems in terms of the problems related to the operation reliability

    Pytel, B. [Knurow Colliery, Knurow (Poland)


    After outlining the importance of hoisting reliability, the paper presents methods for improving reliability by implementing redundancy techniques. The study covers software redundancy as well as static, dynamic and hybrid redundancy. On the basis of the presented material, the paper indicates some of the issues that should be taken into consideration at the initial stage of negotiations between the companies offering these systems and the potential users of computer control systems for hoisting machines. 5 refs., 6 figs.

  8. Reliability of System Identification Techniques to Assess Standing Balance in Healthy Elderly.

    Jantsje H Pasma

    Full Text Available System identification techniques have the potential to assess the contribution of the underlying systems involved in standing balance by applying well-known disturbances. We investigated the reliability of standing balance parameters obtained with multivariate closed-loop system identification techniques. In twelve healthy elderly, balance tests were performed twice a day on three days. Body sway was measured during two minutes of standing with eyes closed, and the Balance test Room (BalRoom) was used to apply four disturbances simultaneously: two sensory disturbances, to the proprioceptive and the visual system, and two mechanical disturbances, applied at the leg and trunk segments. Using system identification techniques, sensitivity functions of the sensory disturbances and the neuromuscular controller were estimated. Based on generalizability theory (G theory), systematic errors and sources of variability were assessed using linear mixed models, and reliability was assessed by computing indexes of dependability (ID), standard error of measurement (SEM) and minimal detectable change (MDC). A systematic error was found between the first and second trials in the sensitivity functions. No systematic error was found in the neuromuscular controller or body sway. The reliability of 15 of 25 parameters and of body sway was moderate to excellent when the results of two trials on three days were averaged. To reach an excellent reliability on one day in 7 out of 25 parameters, it was predicted that at least seven trials must be averaged. This study shows that system identification techniques are a promising method to assess the underlying systems involved in standing balance in elderly. However, most of the parameters do not appear to be reliable unless a large number of trials are collected across multiple days. To reach an excellent reliability in one third of the parameters, a training session for participants is needed and at least seven trials of two

  9. Reliability of System Identification Techniques to Assess Standing Balance in Healthy Elderly.

    Pasma, Jantsje H; Engelhart, Denise; Maier, Andrea B; Aarts, Ronald G K M; van Gerven, Joop M A; Arendzen, J Hans; Schouten, Alfred C; Meskers, Carel G M; van der Kooij, Herman


    System identification techniques have the potential to assess the contribution of the underlying systems involved in standing balance by applying well-known disturbances. We investigated the reliability of standing balance parameters obtained with multivariate closed-loop system identification techniques. In twelve healthy elderly, balance tests were performed twice a day on three days. Body sway was measured during two minutes of standing with eyes closed, and the Balance test Room (BalRoom) was used to apply four disturbances simultaneously: two sensory disturbances, to the proprioceptive and the visual system, and two mechanical disturbances applied at the leg and trunk segments. Using system identification techniques, sensitivity functions of the sensory disturbances and the neuromuscular controller were estimated. Based on the generalizability theory (G theory), systematic errors and sources of variability were assessed using linear mixed models, and reliability was assessed by computing indexes of dependability (ID), standard error of measurement (SEM) and minimal detectable change (MDC). A systematic error was found between the first and second trials in the sensitivity functions. No systematic error was found in the neuromuscular controller or body sway. The reliability of 15 of 25 parameters and of body sway was moderate to excellent when the results of two trials on three days were averaged. To reach an excellent reliability on one day in 7 out of 25 parameters, it was predicted that at least seven trials must be averaged. This study shows that system identification techniques are a promising method to assess the underlying systems involved in standing balance in elderly. However, most of the parameters do not appear to be reliable unless a large number of trials are collected across multiple days.
To reach an excellent reliability in one third of the parameters, a training session for participants is needed and at least seven trials of two minutes must be
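
The SEM and MDC indices used in the abstract above follow standard formulas; the sketch below (illustrative numbers, not the study's data) assumes the ICC and the between-subject standard deviation are already known.

```python
import math

def sem(sd_between: float, icc: float) -> float:
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
    return sd_between * math.sqrt(1.0 - icc)

def mdc95(sem_value: float) -> float:
    """Minimal detectable change at 95% confidence: MDC = 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2.0) * sem_value

# Example with hypothetical values: between-subject SD of 4.0 units, ICC of 0.75
s = sem(4.0, 0.75)          # 4.0 * sqrt(0.25) = 2.0
print(round(s, 3), round(mdc95(s), 3))
```

A change smaller than the MDC cannot be distinguished from measurement error at the chosen confidence level.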

  10. Advanced computer technology - An aspect of the Terminal Configured Vehicle program [air transportation capacity, productivity, all-weather reliability and noise reduction improvements]

    Berkstresser, B. K.


    NASA is conducting a Terminal Configured Vehicle program to provide improvements in the air transportation system such as increased system capacity and productivity, increased all-weather reliability, and reduced noise. A typical jet transport has been equipped with highly flexible digital display and automatic control equipment to study operational techniques for conventional takeoff and landing aircraft. The present airborne computer capability of this aircraft employs a multiple computer simple redundancy concept. The next step is to proceed from this concept to a reconfigurable computer system which can degrade gracefully in the event of a failure, adjust critical computations to remaining capacity, and reorder itself, in the case of transients, to the highest order of redundancy and reliability.

  11. Computational Aeroacoustic Analysis System Development

    Hadid, A.; Lin, W.; Ascoli, E.; Barson, S.; Sindir, M.


    Many industrial and commercial products operate in a dynamic flow environment and the aerodynamically generated noise has become a very important factor in the design of these products. In light of the importance in characterizing this dynamic environment, Rocketdyne has initiated a multiyear effort to develop an advanced general-purpose Computational Aeroacoustic Analysis System (CAAS) to address these issues. This system will provide a high fidelity predictive capability for aeroacoustic design and analysis. The numerical platform is able to provide high temporal and spatial accuracy that is required for aeroacoustic calculations through the development of a high order spectral element numerical algorithm. The analysis system is integrated with well-established CAE tools, such as a graphical user interface (GUI) through PATRAN, to provide cost-effective access to all of the necessary tools. These include preprocessing (geometry import, grid generation and boundary condition specification), code set up (problem specification, user parameter definition, etc.), and postprocessing. The purpose of the present paper is to assess the feasibility of such a system and to demonstrate the efficiency and accuracy of the numerical algorithm through numerical examples. Computations of vortex shedding noise were carried out in the context of a two-dimensional low Mach number turbulent flow past a square cylinder. The computational aeroacoustic approach that is used in CAAS relies on coupling a base flow solver to the acoustic solver throughout a computational cycle. The unsteady fluid motion, which is responsible for both the generation and propagation of acoustic waves, is calculated using a high order flow solver. The results of the flow field are then passed to the acoustic solver through an interpolator to map the field values into the acoustic grid. 
The acoustic field, which is governed by the linearized Euler equations, is then calculated using the flow results computed by the flow solver.

  12. Reliability Assessment of Distribution System Based on Discrete-event System

    丁屹峰; 程浩忠; 陈春霖; 江峰青; 房龄峰


    Discrete-event system simulation technology is used to analyze distribution system reliability in this paper. A simulation model, including entity state models, system state models, state transition models and a reliability criterion model, is established. The simulation applies the principle of the simulator clock to determine the sequence of random event occurrences dynamically. The results show that this method is feasible.
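
A next-event time-advance simulation of the kind described above can be sketched as follows; this minimal single-component model (exponential failure and repair times, illustrative parameters) is an assumption for illustration, not the paper's multi-model distribution-system simulator.

```python
import random

def simulate_availability(mtbf, mttr, horizon, seed=1):
    """Event-driven simulation of one repairable component.
    The simulator clock jumps directly to the next random event
    (failure or repair completion), mirroring the next-event
    time-advance principle."""
    rng = random.Random(seed)
    t, up_time, state_up = 0.0, 0.0, True
    while t < horizon:
        # Time to next event: exponential with mean MTBF (up) or MTTR (down)
        dt = rng.expovariate(1.0 / (mtbf if state_up else mttr))
        dt = min(dt, horizon - t)
        if state_up:
            up_time += dt
        t += dt
        state_up = not state_up
    return up_time / horizon

# Analytic steady-state availability is MTBF / (MTBF + MTTR) = 100/110 ≈ 0.909
print(simulate_availability(100.0, 10.0, 1_000_000.0))
```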

  13. Implementation of Reliable Open Source IRIS Recognition System

    Dhananjay Ikhar


    Reliable automatic recognition of persons has long been an attractive goal. As in all pattern recognition problems, the key issue is the relation between inter-class and intra-class variability: objects can be reliably classified only if the variability among different instances of a given class is less than the variability between different classes. The objective of this paper is to implement an open-source iris recognition system in order to verify the claimed performance of the technology. The development tool used will be MATLAB, and emphasis will be only on the software for performing recognition, not on hardware for capturing an eye image. A rapid application development approach will be employed in order to produce results quickly. MATLAB provides an excellent environment for this, with its image processing toolbox. To test the system, a database of 756 grayscale eye images, courtesy of the Chinese Academy of Sciences Institute of Automation (CASIA), is used. The system is composed of a number of sub-systems, which correspond to the stages of iris recognition: image acquisition, segmentation, normalization and feature encoding. The input to the system is an eye image, and the output is an iris template, which provides a mathematical representation of the iris region. In conclusion, the objectives in designing the recognition system are: to study different biometrics and their features; to study different recognition systems and their steps; to select a simple and efficient recognition algorithm for implementation; to select a fast and efficient processing tool; and to apply the implemented algorithm to different databases and determine the performance factors.

  14. The reliable solution and computation time of variable parameters logistic model

    Wang, Pengfei; Pan, Xinnong


    The study investigates the reliable computation time (RCT, termed Tc) of a double-precision computation of a variable-parameters logistic map (VPLM). First, using the proposed method, we obtain reliable solutions for the logistic map. Second, we construct 10,000 samples of reliable experiments from a time-dependent, non-stationary-parameters VPLM and then calculate the mean Tc. The results indicate that, for each different initial value, the Tc values of the VPLM are generally different. However, the mean Tc tends to a constant value when the sample number is large enough. The maximum, minimum, and probability distribution functions of Tc are also obtained, which can help us to identify the robustness of applying nonlinear time series theory to forecasting with the VPLM output. In addition, the Tc of fixed-parameter experiments with the logistic map is obtained, and the results suggest that this Tc matches the value predicted by the theoretical formula.
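
One way to estimate a reliable computation time in the spirit of the abstract above is to compare a double-precision iteration against a higher-precision reference; the sketch below (fixed-parameter logistic map with r = 4 and an illustrative tolerance) is an assumed stand-in for the paper's method, not a reproduction of it.

```python
from decimal import Decimal, getcontext

def reliable_steps(x0=0.2, r=4.0, tol=1e-6, max_steps=200):
    """Iterate the logistic map x -> r*x*(1-x) in double precision and in
    50-digit decimal arithmetic; return the first step at which the two
    trajectories differ by more than tol (a proxy for the RCT, Tc)."""
    getcontext().prec = 50
    xf = x0                      # double-precision trajectory
    xd = Decimal(repr(x0))       # high-precision reference trajectory
    rd = Decimal(repr(r))
    for n in range(max_steps):
        if abs(xf - float(xd)) > tol:
            return n
        xf = r * xf * (1.0 - xf)
        xd = rd * xd * (Decimal(1) - xd)
    return max_steps

print(reliable_steps())
```

For the chaotic regime r = 4, rounding errors of order 1e-16 roughly double per iteration, so the double-precision result stops being reliable after a few dozen steps.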

  15. A highly reliable, autonomous data communication subsystem for an advanced information processing system

    Nagle, Gail; Masotto, Thomas; Alger, Linda


    The need to meet the stringent performance and reliability requirements of advanced avionics systems has frequently led to implementations which are tailored to a specific application and are therefore difficult to modify or extend. Furthermore, many integrated flight critical systems are input/output intensive. By using a design methodology which customizes the input/output mechanism for each new application, the cost of implementing new systems becomes prohibitively expensive. One solution to this dilemma is to design computer systems and input/output subsystems which are general purpose, but which can be easily configured to support the needs of a specific application. The Advanced Information Processing System (AIPS), currently under development, has these characteristics. The design and implementation of the prototype I/O communication system for AIPS is described. AIPS addresses reliability issues related to data communications by the use of reconfigurable I/O networks. When a fault or damage event occurs, communication is restored to functioning parts of the network and the failed or damaged components are isolated. Performance issues are addressed by using a parallelized computer architecture which decouples Input/Output (I/O) redundancy management and I/O processing from the computational stream of an application. The autonomous nature of the system derives from the highly automated and independent manner in which I/O transactions are conducted for the application as well as from the fact that the hardware redundancy management is entirely transparent to the application.

  16. Computational models of complex systems

    Dabbaghian, Vahid


    Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...

  17. Redundant computing for exascale systems.

    Stearley, Jon R.; Riesen, Rolf E.; Laros, James H., III; Ferreira, Kurt Brian; Pedretti, Kevin Thomas Tauke; Oldfield, Ron A.; Brightwell, Ronald Brian


    Exascale systems will have hundreds of thousands of compute nodes and millions of components, which increases the likelihood of faults. Today, applications use checkpoint/restart to recover from these faults. Even under ideal conditions, applications running on more than 50,000 nodes will spend more than half of their total running time saving checkpoints, restarting, and redoing work that was lost. Redundant computing is a method that allows an application to continue working even when failures occur. Instead of each failure causing an application interrupt, multiple failures can be absorbed by the application until redundancy is exhausted. In this paper we present a method to analyze the benefits of redundant computing, present simulation results of the cost, and compare it to other proposed methods for fault resilience.
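
The checkpoint/restart overhead discussed above is often analyzed with Young's first-order approximation for the optimal checkpoint interval; the sketch below uses that classic formula with illustrative numbers, not the paper's own simulation model.

```python
import math

def optimal_interval(checkpoint_cost, mtbf):
    """Young's approximation: tau_opt = sqrt(2 * delta * MTBF),
    where delta is the time to write one checkpoint."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf)

def overhead_fraction(tau, checkpoint_cost, mtbf):
    """First-order fraction of time lost to writing checkpoints plus the
    expected rework after a failure (on average tau/2 of work is redone)."""
    return checkpoint_cost / tau + tau / (2.0 * mtbf)

delta, mtbf = 600.0, 86_400.0          # 10-min checkpoint, 1-day system MTBF
tau = optimal_interval(delta, mtbf)    # optimal interval in seconds
print(tau, overhead_fraction(tau, delta, mtbf))
```

As the node count grows, the system MTBF shrinks and this overhead fraction climbs toward the "more than half the running time" regime cited above, which motivates redundancy as an alternative.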

  18. Chest computed tomography-based scoring of thoracic sarcoidosis: Inter-rater reliability of CT abnormalities

    Heuvel, D.A.V. den; Es, H.W. van; Heesewijk, J.P. van; Spee, M. [St. Antonius Hospital Nieuwegein, Department of Radiology, Nieuwegein (Netherlands); Jong, P.A. de [University Medical Center Utrecht, Department of Radiology, Utrecht (Netherlands); Zanen, P.; Grutters, J.C. [University Medical Center Utrecht, Division Heart and Lungs, Utrecht (Netherlands); St. Antonius Hospital Nieuwegein, Center of Interstitial Lung Diseases, Department of Pulmonology, Nieuwegein (Netherlands)


    To determine inter-rater reliability of sarcoidosis-related computed tomography (CT) findings that can be used for scoring of thoracic sarcoidosis. CT images of 51 patients with sarcoidosis were scored by five chest radiologists for various abnormal CT findings (22 in total) encountered in thoracic sarcoidosis. Using intra-class correlation coefficient (ICC) analysis, inter-rater reliability was analysed and reported according to the Guidelines for Reporting Reliability and Agreement Studies (GRRAS) criteria. A pre-specified sub-analysis was performed to investigate the effect of training. Scoring was trained in a distinct set of 15 scans in which all abnormal CT findings were represented. Median age of the 51 patients (36 men, 70 %) was 43 years (range 26 - 64 years). All radiographic stages were present in this group. ICC ranged from 0.91 for honeycombing to 0.11 for nodular margin (sharp versus ill-defined). The ICC was above 0.60 in 13 of the 22 abnormal findings. Sub-analysis for the best-trained observers demonstrated an ICC improvement for all abnormal findings and values above 0.60 for 16 of the 22 abnormalities. In our cohort, reliability between raters was acceptable for 16 thoracic sarcoidosis-related abnormal CT findings. (orig.)
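
An inter-rater ICC of the kind reported above can be computed as a two-way random-effects, absolute-agreement, single-measures ICC(2,1); the sketch below uses toy ratings, not the study's scoring data.

```python
def icc2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    `data` is a list of rows (subjects), each a list of ratings (one per rater)."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    # Classic two-way ANOVA decomposition of the total sum of squares
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

print(icc2_1([[1, 1], [2, 2], [3, 3]]))   # perfect agreement -> 1.0
print(icc2_1([[1, 2], [3, 4], [5, 6]]))   # constant one-point rater offset
```

The second example shows why absolute-agreement ICC penalizes a systematic offset between raters even though their rank ordering is identical.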

  19. Reliability of a method to conduct upper airway analysis in cone-beam computed tomography

    Karen Regina Siqueira de Souza


    The aim of this study was to assess the reliability of a method to measure the following upper airway dimensions: total volume (TV), the nasopharyngeal narrowest area (NNA), and the oropharyngeal narrowest area (ONA). The sample consisted of 60 cone-beam computed tomography (CBCT) scans, evaluated twice by two observers using the Dolphin 3D software (Dolphin Imaging & Management Solutions, Chatsworth, California, USA), which afforded image reconstruction and measurement of the aforementioned dimensions. The data were submitted to reliability tests using the intraclass correlation coefficient (ICC) and the Bland & Altman agreement tests, with their respective confidence intervals (CI) set at 95%. Excellent intra- and interobserver reliability values were found for all variables assessed (TV, NNA and ONA), with ICC values ranging from 0.88 to 0.99. The data demonstrated agreement between the two assessments of each observer and between the first evaluations of both observers, thus confirming the reliability of this methodology. The results suggest that this methodology can be used in further studies to investigate upper airway dimensions (TV, NNA, and ONA), thereby contributing to the diagnosis of upper airway obstructions.

  20. Modeling service time reliability in urban ferry system

    Chen, Yifan; Luo, Sida; Zhang, Mengke; Shen, Hanxia; Xin, Feifei; Luo, Yujie


    The urban ferry system can carry a large number of travelers, which may alleviate the pressure on road traffic. As an indicator of its service quality, service time reliability (STR) plays an essential part in attracting travelers to the ferry system. A wide array of studies have been conducted to analyze the STR of land transportation. However, the STR of ferry systems has received little attention in the transportation literature. In this study, a model was established to obtain the STR in urban ferry systems. First, the probability density function (PDF) of the service time provided by ferry systems was constructed. Considering the deficiency of the queuing theory, this PDF was determined by Bayes’ theorem. Then, to validate the function, the results of the proposed model were compared with those of the Monte Carlo simulation. With the PDF, the reliability could be determined mathematically by integration. Results showed how the factors including the frequency, capacity, time schedule and ferry waiting time affected the STR under different degrees of congestion in ferry systems. Based on these results, some strategies for improving the STR were proposed. These findings are of great significance to increasing the share of ferries among various urban transport modes.
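
As an illustration of obtaining an STR by integrating a service-time distribution, the sketch below assumes an exponential waiting time plus a fixed crossing time (an assumed toy model, not the paper's Bayes-derived PDF) and checks the closed-form result against Monte Carlo simulation.

```python
import math
import random

def str_analytic(mean_wait, crossing, threshold):
    """P(wait + crossing <= threshold) when wait ~ Exp(1/mean_wait)
    and the crossing time is deterministic; this is the integral of the
    assumed service-time PDF up to the threshold."""
    if threshold <= crossing:
        return 0.0
    return 1.0 - math.exp(-(threshold - crossing) / mean_wait)

def str_monte_carlo(mean_wait, crossing, threshold, trials=200_000, seed=7):
    """Simulation check of the same probability."""
    rng = random.Random(seed)
    ok = sum(rng.expovariate(1.0 / mean_wait) + crossing <= threshold
             for _ in range(trials))
    return ok / trials

# Hypothetical numbers: 8-min mean wait, 12-min crossing, 25-min target
print(str_analytic(8.0, 12.0, 25.0), str_monte_carlo(8.0, 12.0, 25.0))
```

Raising the frequency (lower mean wait) or relaxing the service-time target moves this probability, which is the qualitative behavior the paper's factor analysis examines.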

  1. Computer-aided system design

    Walker, Carrie K.


    A technique has been developed for combining features of a systems architecture design and assessment tool and a software development tool. This technique reduces simulation development time and expands simulation detail. The Architecture Design and Assessment System (ADAS), developed at the Research Triangle Institute, is a set of computer-assisted engineering tools for the design and analysis of computer systems. The ADAS system is based on directed graph concepts and supports the synthesis and analysis of software algorithms mapped to candidate hardware implementations. Greater simulation detail is provided by the ADAS functional simulator. With the functional simulator, programs written in either Ada or C can be used to provide a detailed description of graph nodes. A Computer-Aided Software Engineering tool developed at the Charles Stark Draper Laboratory (CSDL CASE) automatically generates Ada or C code from engineering block diagram specifications designed with an interactive graphical interface. A technique to use the tools together has been developed, which further automates the design process.

  2. Reduced Expanding Load Method for Simulation-Based Structural System Reliability Analysis

    远方; 宋丽娜; 方江生


    The current situation and difficulties of structural system reliability analysis are reviewed. Then, on the basis of the Monte Carlo method and computer simulation, a new analysis method, the reduced expanding load method (RELM), is presented, which can be used to solve structural reliability problems effectively and conveniently. In this method, the uncertainties of loads, structural material properties and dimensions can be fully considered. If the statistical parameters of the stochastic variables are known, the probability of failure can be estimated rather accurately by this method. In contrast with traditional approaches, the RELM method gives a much better understanding of structural failure frequency, and its reliability index β is more meaningful. To illustrate this new idea, a specific example is given.
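
A crude Monte Carlo estimate of a failure probability and the corresponding reliability index β, the general technique RELM builds on (not RELM itself), can be sketched for a simple R − S limit state with independent normal variables:

```python
import random
from statistics import NormalDist

def mc_failure_probability(mu_r, sd_r, mu_s, sd_s, trials=200_000, seed=3):
    """Crude Monte Carlo estimate of P(R - S < 0) for normal resistance R
    and load effect S (independent)."""
    rng = random.Random(seed)
    fails = sum(rng.gauss(mu_r, sd_r) - rng.gauss(mu_s, sd_s) < 0.0
                for _ in range(trials))
    return fails / trials

# Hypothetical resistance and load-effect statistics
mu_r, sd_r, mu_s, sd_s = 100.0, 10.0, 60.0, 15.0
pf = mc_failure_probability(mu_r, sd_r, mu_s, sd_s)
beta = -NormalDist().inv_cdf(pf)   # reliability index from the estimated pf
# Exact answer here: beta = (100 - 60) / sqrt(10**2 + 15**2) ~= 2.22
print(pf, beta)
```

Plain Monte Carlo needs very many samples for small failure probabilities, which is exactly the inefficiency that variance-reduction schemes such as RELM aim to mitigate.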

  3. Reliability in automotive and mechanical engineering determination of component and system reliability

    Bertsche, Bernd


    In the present contemporary climate of global competition in every branch of engineering and manufacture, it has been shown from extensive customer surveys that, above every other attribute, reliability stands as the most desired feature in a finished product. To survive this relentless fight for survival, any organisation which neglects the plea of attaining excellence in reliability will do so at a serious cost. Reliability in Automotive and Mechanical Engineering draws together a wide spectrum of diverse and relevant applications and analyses on reliability engineering. This is distilled into this attractive and well documented volume, and practising engineers are challenged with the formidable task of simultaneously improving reliability and reducing the costs and down-time due to maintenance. The volume brings together eleven chapters to highlight the importance of the interrelated reliability and maintenance disciplines. They represent the development trends and progress resulting in making this book ess...

  4. An operating system for future aerospace vehicle computer systems

    Foudriat, E. C.; Berman, W. J.; Will, R. W.; Bynum, W. L.


    The requirements for future aerospace vehicle computer operating systems are examined in this paper. The computer architecture is assumed to be distributed, with a local area network connecting the nodes. Each node is assumed to provide a specific functionality. The network provides for communication so that the overall tasks of the vehicle are accomplished. The O/S structure is based upon the concept of objects. The mechanisms for integrating node-unique objects with node-common objects, in order to implement both the autonomy of and the cooperation between nodes, are developed. The requirements for time-critical performance and for reliability and recovery are discussed. Time-critical performance impacts all parts of the distributed operating system: its structure, the functional design of its objects, the language structure, etc. Throughout the paper the tradeoffs - concurrency, language structure, object recovery, binding, file structure, communication protocol, programmer freedom, etc. - are considered to arrive at a feasible, maximum performance design. Reliability of the network system is considered. A parallel multipath bus structure is proposed for the control of delivery time for time-critical messages. The architecture also supports immediate recovery for the time-critical message system after a communication failure.

  5. Bitwise identical compiling setup: prospective for reproducibility and reliability of earth system modeling

    R. Li


    Reproducibility and reliability are fundamental principles of scientific research. A compiling setup that includes a specific compiler version and specific compiler flags provides essential technical support for Earth system modeling. With the fast development of computer software and hardware, compiling setups have to be updated frequently, which challenges the reproducibility and reliability of Earth system modeling. The existing results of a simulation using an original compiling setup may be irreproducible with a newer compiling setup, because trivial round-off errors introduced by the change of compiling setup can potentially trigger significant changes in simulation results. Regarding reliability, a compiler with millions of lines of code may have bugs that are easily overlooked due to the uncertainties or unknowns in Earth system modeling. To address these challenges, this study shows that different compiling setups can achieve exactly the same (bitwise identical) results in Earth system modeling, and that a set of bitwise identical compiling setups of a model can be used across different compiler versions and different compiler flags. As a result, the original results can be more easily reproduced; for example, the original results obtained with an older compiler version can be reproduced exactly with a newer compiler version. Moreover, this study shows that new test cases can be generated based on the differences of bitwise identical compiling setups between different models, which can help detect software bugs or risks in the codes of models and compilers and finally improve the reliability of Earth system modeling.

  6. Information systems and computing technology

    Zhang, Lei


    Invited papers: Incorporating the multi-cross-sectional temporal effect in Geographically Weighted Logit Regression (K. Wu, B. Liu, B. Huang & Z. Lei); One-shot learning human action recognition using key poses (W.H. Zou, S.G. Li, Z. Lei & N. Dai); Band grouping pansharpening for WorldView-2 satellite images (X. Li); Research on a GIS-based haze trajectory data analysis system (Y. Wang, J. Chen, J. Shu & X. Wang). Regular papers: A warning model of systemic financial risks (W. Xu & Q. Wang); Research on smart mobile phone user experience with grounded theory (J.P. Wan & Y.H. Zhu); The software reliability analysis based on

  7. Design for Reliability of Power Electronics in Renewable Energy Systems

    Ma, Ke; Yang, Yongheng; Wang, Huai


    Power electronics is the enabling technology for maximizing the power captured from renewable electrical generation, e.g., wind and solar technology, and also for an efficient integration into the grid. Therefore, it is important that the power electronics are reliable and do not have too many failures during operation, which otherwise will increase the costs of operation, maintenance and reputation. Typically, power electronics in renewable electrical generation has to be designed for 20–30 years of operation, and in order to do that, it is crucial to know the mission profile of the power electronics technology as well as how the power electronics technology is loaded in terms of temperature and other stressors relevant to reliability. Hence, this chapter will show the basics of power electronics technology for renewable energy systems, describe the mission profile of the technology...

  8. Computer controlled vent and pressurization system

    Cieslewicz, E. J.


    The Centaur space launch vehicle airborne computer, which was primarily used to perform guidance, navigation, and sequencing tasks, was further used to monitor and control inflight pressurization and venting of the cryogenic propellant tanks. Computer software flexibility also provided a failure detection and correction capability necessary to adopt and operate redundant hardware techniques and enhance the overall vehicle reliability.

  9. Fault-tolerant clock synchronization validation methodology. [in computer systems

    Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.


    A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.

  10. Reliable and reproducible classification system for scoliotic radiograph using image processing techniques.

    Anitha, H; Prabhu, G K; Karunakar, A K


    Scoliosis classification is useful for guiding treatment and testing clinical outcomes. State-of-the-art classification procedures are inherently unreliable and non-reproducible due to technical and human judgmental errors. In the current diagnostic system, each examiner will have a diagrammatic summary of the classification procedure, the number of scoliosis curves, the apex level, etc. It is very difficult to define the required anatomical parameters in noisy radiographs. The classification task demands an automatic image understanding system. The proposed automated classification procedure extracts the anatomical features using image processing and applies classification procedures based on computer-assisted algorithms. The reliability and reproducibility of the proposed computerized image understanding system are compared with the manual and computer-assisted systems using Kappa values.
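
Agreement between classification systems is commonly quantified with Cohen's kappa, as referenced above; a minimal two-rater sketch (hypothetical curve-type labels, not the study's data) is:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected by chance from each rater's
    marginal label frequencies."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1.0 - p_e)

a = ["1A", "1A", "2B", "2B"]   # hypothetical curve-type labels, rater A
b = ["1A", "2B", "2B", "2B"]   # hypothetical curve-type labels, rater B
print(cohens_kappa(a, b))
```

Unlike raw percent agreement, kappa discounts the agreement two raters would reach by chance alone.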

  11. Hierarchical nanoreinforced composites for highly reliable large wind turbines: Computational modelling and optimization

    Mishnaevsky, Leon


    ... with modified, hybrid or nanomodified structures. In this project, we seek to explore the potential of hybrid (carbon/glass), nanoreinforced and hierarchical composites (with secondary CNT, graphene or nanoclay reinforcement) as future materials for highly reliable large wind turbines. Using 3D multiscale computational models of the composites, we study the effect of hybrid structure and of nanomodifications on the strength, lifetime and service properties of the materials (see Figure 1). As a result, a series of recommendations toward the improvement of composites for structural applications under long term...

  12. System reliability in design and maintenance of fixed offshore structures

    Dalane, J.I.


    Offshore structures are usually redundant, and several components must fail before structural collapse occurs. This work deals with system reliability for fatigue and overload of typical steel truss and frame structures such as jackets and jack-ups. The application of system reliability in design and maintenance is addressed. A formulation for a failure sequence of fatigue and overload is presented. The basic failure event for fatigue is defined in terms of time to section failure. For overload, realistic failure criteria are applied. After an overload failure, the member replacement technique is used to account for the post-collapse behavior. The branch and bound algorithm is used to identify important failure sequences, and system failure occurs if any of these collapse sequences occurs. First-order reliability methods are used to calculate the failure probabilities efficiently. In this work, the importance of uncertainties in load and ultimate strength for a jacket and a jack-up structure is discussed. A joint environmental model for wave and current loading is applied. Ultimate capacity is calculated by nonlinear pushover analysis, and the uncertainties are identified by Monte Carlo simulation. Fatigue may be an important failure mode for offshore structures, and in-service inspections are frequently performed. Underwater inspections are, however, very expensive, and it is therefore important that the inspections are performed in such a way that they significantly increase our knowledge about the safety of the structure. In this work, an inspection importance factor is defined. This importance factor can be used to identify the most important members in the system from an inspection point of view. Inspections for unexpected damage are also discussed. 122 refs., 72 figs., 50 tabs.

  13. A Reliable Identification System for Red Palm Weevil

    Saleh Mufleh Al-Saqer


    Problem statement: The Red Palm Weevil (RPW) is a widely found pest among palm trees and is known to cause significant losses every year to palm growers. Existing identification techniques for RPW comprise using traps with pheromones to detect these pests. However, these traditional methods are labor-intensive, expensive to implement and unreliable for early detection of RPW infestation. Early detection of these pests would provide the best opportunity to eradicate them and minimize the potential losses of palm trees. Approach: In this study, a reliable identification system is developed to identify RPW by using only a small number of image descriptors in combination with neural network models. The neural networks were developed by using between three and nine image descriptors as inputs, and a large database of insect images was used for training. Three different training ratios ranging from 25-75% were used, and the network was trained by two different algorithms. Further, several scenarios were formulated to test the efficacy and reliability of the newly developed identification system. Results: The results indicate that the identification system developed in this study is capable of 100% recognition of RPW and 93% recognition of other insects in the database by taking as input only three easily calculable image descriptors. Further, the average training time for these networks was 13 sec, and the testing time for a single image was only 0.015 sec. Conclusion: The new system developed in this study provides reliable identification of RPW and was found to be up to 14 times faster in training and three times faster in testing of insect images.

  14. Computer Networks A Systems Approach

    Peterson, Larry L


    This best-selling and classic book teaches you the key principles of computer networks with examples drawn from the real world of network and protocol design. Using the Internet as the primary example, the authors explain various protocols and networking technologies. Their systems-oriented approach encourages you to think about how individual network components fit into a larger, complex system of interactions. Whatever your perspective, whether it be that of an application developer, network administrator, or a designer of network equipment or protocols, you will come away with a "big picture" view.

  15. Reliability of automotive and mechanical engineering. Determination of component and system reliability

    Bertsche, Bernd [Stuttgart Univ. (Germany). Inst. fuer Maschinenelemente


    In the present contemporary climate of global competition in every branch of engineering and manufacture, it has been shown from extensive customer surveys that, above every other attribute, reliability stands as the most desired feature in a finished product. To survive this relentless fight for survival, any organisation which neglects the plea of attaining excellence in reliability will do so at a serious cost. Reliability in Automotive and Mechanical Engineering draws together a wide spectrum of diverse and relevant applications and analyses on reliability engineering. This is distilled into this attractive and well documented volume, and practising engineers are challenged with the formidable task of simultaneously improving reliability and reducing the costs and down-time due to maintenance. The volume brings together eleven chapters to highlight the importance of the interrelated reliability and maintenance disciplines. They represent the development trends and progress resulting in making this book essential basic material for all research academics and planners and maintenance executives who have the responsibility to implement the findings and maintenance audits into a cohesive reliability policy. Although the book is centred on automotive engineering, the examples and overall treatise can nevertheless be applied to a wide range of professional practices. The book will be a valuable source of information for those concerned with improved manufacturing performance and the formidable task of optimising reliability. (orig.)

  16. Structural Reliability Methods for Wind Power Converter System Component Reliability Assessment

    Kostandyan, Erik; Sørensen, John Dalsgaard


    ... is defined by the threshold model. The attention is focused on crack propagation in solder joints of electrical components due to the temperature loadings. Structural Reliability approaches are used to incorporate model, physical and statistical uncertainties. Reliability estimation by means of structural...

  17. Electrical utility generating system reliability analysis code, SYSREL. Social cost studies program

    Hub, K.; Conley, L.; Buehring, W.; Rowland, B.; Stephenson, M.


    The system reliability code, SYSREL, is a system planning tool that can be used to assess the reliability and economic performance of alternative expansion patterns of electric utility generation systems. Given input information such as capacity, forced-outage rate, number of weeks of annual scheduled maintenance, and economic data for individual units along with the expected load characteristics, the code produces estimates of the mean time between system failures, required reserve capacity to meet a specified system-failure-frequency criterion, expected energy generation from each unit, and system energy cost. The categories of calculations performed by the code are maintenance scheduling, reliability, capacity requirement, energy production allocation, and energy cost. The code is designed to examine alternative generating units and system expansion patterns based on the constraints and general economic conditions imposed by the investigator. The computer running time to execute a study is short and many system alternatives can be examined at a relatively low cost. The report contains a technical description of the code, list of input data requirements, program listing, sample execution, and parameter studies. (auth)
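One calculation this kind of code performs, estimating the probability that generation cannot meet load, can be sketched as a capacity-outage probability table built by convolution over the units' forced-outage rates. The unit data and load level below are invented for illustration; SYSREL's actual algorithms (maintenance scheduling, reserve sizing, energy cost) are considerably more elaborate.

```python
def outage_table(units):
    """units: list of (capacity_mw, forced_outage_rate).
    Returns {outage_mw: probability} built by convolving unit states."""
    table = {0: 1.0}
    for cap, forr in units:
        new = {}
        for out, p in table.items():
            new[out] = new.get(out, 0.0) + p * (1 - forr)        # unit available
            new[out + cap] = new.get(out + cap, 0.0) + p * forr  # unit forced out
        table = new
    return table

def loss_of_load_probability(units, load_mw):
    """Probability that remaining capacity falls below the load."""
    total = sum(cap for cap, _ in units)
    return sum(p for out, p in outage_table(units).items()
               if total - out < load_mw)

# Hypothetical three-unit system serving a 350 MW load
units = [(200, 0.05), (200, 0.05), (100, 0.02)]
print(loss_of_load_probability(units, 350))
```

With these invented numbers the system fails exactly when at least one 200 MW unit is on forced outage, so the result equals 1 - 0.95².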

  18. The Reliability of Wireless Sensor Network on Pipeline Monitoring System

    Hafizh Prihtiadi


    The wireless sensor network (WSN) is an attractive technology that combines embedded systems and communication networks, making monitoring more efficient and effective. Currently, WSNs have been developed for various monitoring applications. In this research, a wireless mesh network for a pipeline monitoring system was designed and developed. Sensor nodes were placed at each branch in the pipe system. Several router failures were simulated and the response of each node in the network was evaluated. Three different scenarios were examined to test the data transmission performance. The results proved that the wireless mesh network was reliable and robust: the system is able to perform link reconfiguration, automatic routing and safe data transmission from the first node to the end node.
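The link-reconfiguration property the study tests can be illustrated with a toy mesh (the topology below is invented, not the paper's pipeline layout): after a router fails, a breadth-first search still finds an alternative route from source to sink.

```python
from collections import deque

# Invented mesh topology: two redundant router paths between src and sink
mesh = {
    "src": {"r1", "r2"},
    "r1": {"src", "r2", "r3"},
    "r2": {"src", "r1", "r4"},
    "r3": {"r1", "sink"},
    "r4": {"r2", "sink"},
    "sink": {"r3", "r4"},
}

def route(graph, start, goal, failed=frozenset()):
    """Breadth-first route search that avoids failed nodes."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no surviving route

print(route(mesh, "src", "sink"))                 # normal operation
print(route(mesh, "src", "sink", failed={"r1"}))  # r1 down: reroute via r2-r4
```

Losing one router still leaves a route; losing both r1 and r2 disconnects the source, which is the kind of boundary the paper's failure scenarios probe.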

  19. A Model of Ship Auxiliary System for Reliable Ship Propulsion

    Dragan Martinović


    The main purpose of a vessel is to transport goods and passengers at minimum cost. Analysis of relevant global databases on ship machinery failures shows that the most frequent failures occur precisely in the generator-driving diesel engines. Any failure in the electrical system can leave the ship without propulsion, even if the main engine is working properly. In that case the consequences could be devastating: higher running expenses, damage to the ship, oil spill or substantial marine pollution. These are the reasons why solutions that prevent the ship from being unable to manoeuvre during her exploitation should be implemented. Therefore, it is necessary to define a propulsion restoration model which does not depend on the primary electrical energy supply. The paper provides a model of the marine auxiliary system for more reliable propulsion, covering starting, reversing and stopping of the propulsion engine. The proposed reliable propulsion model, based on the use of a shaft generator and an excitation engine, enables the restoration of propulsion following total failure of the primary electrical energy production system, and self-propelled ship navigation. A ship is an important factor in the technology of transport, and the implementation of this model increases safety, reduces downtime, and significantly decreases hazards of pollution damage. KEYWORDS: reliable propulsion, failure, ship auxiliary system, control, propulsion restoration

  20. Reliable iterative methods for solving ill-conditioned algebraic systems

    Padiy, Alexander


    The finite element method is one of the most popular techniques for numerical solution of partial differential equations. The rapid performance increase of modern computer systems makes it possible to tackle increasingly more difficult finite-element models arising in engineering practice. However,
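The kind of reliable iterative method the thesis concerns can be sketched (this is a generic illustration, not code from the thesis) as a Jacobi-preconditioned conjugate gradient applied to a small, badly scaled symmetric positive definite system; the matrix and tolerances below are assumptions.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-8, max_iter=500):
    """Preconditioned conjugate gradient with a diagonal (Jacobi) preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r          # apply the preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Ill-conditioned SPD test system: diagonal entries spread over six decades
d = np.array([1.0, 1e2, 1e4, 1e6])
A = np.diag(d) + 0.1 * np.ones((4, 4))
b = np.ones(4)
x = pcg(A, b, 1.0 / np.diag(A))
print(np.allclose(A @ x, b, atol=1e-6))
```

Diagonal scaling is the simplest of the preconditioning ideas used for ill-conditioned finite-element systems; practical codes use far stronger preconditioners, but the solver skeleton is the same.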

  1. The Application of the Distribution Supervisory system of Computer in Substation of 500 KV

    Qi,Xinbo; Chang,Wenping


    This paper puts forward a new kind of computer supervisory system: the distributed computer supervisory system. The system is arranged as a scattered, stratified control structure, is open to further development, and offers powerful interoperation. Three years of practical experience with the system at Huajia (Henan) indicate that it is reliable, safe, real-time and economical.

  2. Reliability-Based Inspection Planning for Structural Systems

    Sørensen, John Dalsgaard


    A general model for reliability-based optimal inspection and repair strategies for structural systems is described. The total expected costs in the design lifetime are minimized, with the number of inspections, the inspection times and efforts as decision variables. The equivalence of this model ... with a preposterior analysis from statistical decision theory is discussed. It is described how information obtained by an inspection can be used in a repair decision. Stochastic models for inspection, measurement and repair actions are presented. The general model is applied for inspection and repair planning ...
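The cost trade-off at the heart of such planning can be shown with a deliberately crude toy (all costs, rates and the failure model below are invented, far simpler than the paper's stochastic formulation): pick the number of equally spaced inspections that minimizes inspection cost plus expected failure cost over the lifetime.

```python
import math

C_INSP = 2.0    # cost per inspection (invented units)
C_FAIL = 500.0  # cost of an undetected failure
LIFE = 30.0     # design lifetime, years
RATE = 0.02     # failure rate per year

def expected_cost(n_inspections):
    # Crude model: shorter intervals catch damage earlier, so the
    # failure probability scales with the exposure between inspections.
    interval = LIFE / (n_inspections + 1)
    p_fail = 1 - math.exp(-RATE * interval)
    return C_INSP * n_inspections + C_FAIL * p_fail

best = min(range(0, 31), key=expected_cost)
print(best, round(expected_cost(best), 2))
```

More inspections reduce expected failure cost with diminishing returns, so the total cost curve is U-shaped and an interior optimum exists, which is exactly what makes the number of inspections a meaningful decision variable.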

  3. Measurement of transplanted pancreatic volume using computed tomography: reliability by intra- and inter-observer variability

    Lundqvist, Eva; Segelsjoe, Monica; Magnusson, Anders [Uppsala Univ., Dept. of Radiology, Oncology and Radiation Science, Section of Radiology, Uppsala (Sweden)]; Andersson, Anna; Biglarnia, Ali-Reza [Dept. of Surgical Sciences, Section of Transplantation Surgery, Uppsala Univ. Hospital, Uppsala (Sweden)]


    Background Unlike other solid organ transplants, pancreas allografts can undergo a substantial decrease in baseline volume after transplantation. This phenomenon has not been well characterized, as there are insufficient data on reliable and reproducible volume assessments. We hypothesized that characterization of pancreatic volume by means of computed tomography (CT) could be a useful method for clinical follow-up in pancreas transplant patients. Purpose To evaluate the feasibility and reliability of pancreatic volume assessment using CT scan in transplanted patients. Material and Methods CT examinations were performed on 21 consecutive patients undergoing pancreas transplantation. Volume measurements were carried out by two observers tracing the pancreatic contours in all slices. The observers performed the measurements twice for each patient. Differences in volume measurement were used to evaluate intra- and inter-observer variability. Results The intra-observer variability for the pancreatic volume measurements of Observers 1 and 2 was found to be in almost perfect agreement, with an intraclass correlation coefficient (ICC) of 0.90 (0.77-0.96) and 0.99 (0.98-1.0), respectively. Regarding inter-observer validity, the ICCs for the first and second measurements were 0.90 (range, 0.77-0.96) and 0.95 (range, 0.85-0.98), respectively. Conclusion CT volumetry is a reliable and reproducible method for measurement of transplanted pancreatic volume.
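The agreement statistic the study reports can be computed as below; this is a generic two-way random-effects, absolute-agreement, single-measure ICC(2,1) sketch with invented volume readings, not the study's data or its statistical software.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    scores: n subjects x k raters (or repeated measurements)."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-rater means
    ss_total = ((scores - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()  # between raters
    ss_err = ss_total - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical repeated pancreas-volume measurements (mL), 5 patients x 2 readings
vols = np.array([[112., 110.], [95., 97.], [130., 128.], [88., 90.], [105., 104.]])
print(round(icc_2_1(vols), 3))
```

When between-subject variation dwarfs the measurement noise, as in this made-up example, the ICC approaches 1, the "almost perfect agreement" range the abstract cites.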

  4. Methods for reliability evaluation of trust and reputation systems

    Janiszewski, Marek B.


    Trust and reputation systems are a systematic approach to building security on the basis of observations of nodes' behaviour. Exchange of nodes' opinions about other nodes is very useful for identifying nodes which act selfishly or maliciously. The idea behind trust and reputation systems gains significance because conventional security measures (based on cryptography) are often not sufficient. Trust and reputation systems can be used in various types of networks, such as WSN, MANET and P2P, and also in e-commerce applications. Trust and reputation systems bring not only benefits but can also be a threat themselves. Many attacks aimed at trust and reputation systems exist, but such attacks still have not gained enough attention from research teams. Moreover, the joint effects of many of the known attacks have been identified as a very interesting field of research. The lack of an acknowledged methodology for the evaluation of trust and reputation systems is a serious problem. This paper aims at presenting various approaches to evaluating such systems. This work also contains a description of a generalization of many trust and reputation systems which can be used to evaluate the reliability of such systems in the context of preventing various attacks.

  5. Improving risk assessment by defining consistent and reliable system scenarios

    B. Mazzorana


    During the entire procedure of risk assessment for hydrologic hazards, the selection of consistent and reliable scenarios, constructed in a strictly systematic way, is fundamental for the quality and reproducibility of the results. However, subjective assumptions on relevant impact variables, such as sediment transport intensity on the system loading side and weak point response mechanisms, repeatedly cause biases in the results, and consequently affect transparency and required quality standards. Furthermore, the system response of mitigation measures to extreme event loadings represents another key variable in hazard assessment, as does integral risk management including intervention planning. Formative Scenario Analysis, as a supplement to conventional risk assessment methods, is a technique to construct well-defined sets of assumptions to gain insight into a specific case and the potential system behaviour. Two case studies, carried out (1) to analyse sediment transport dynamics in a torrent section equipped with control measures, and (2) to identify hazards induced by woody debris transport at hydraulic weak points, demonstrate the applicability of the Formative Scenario Analysis technique. It is argued that during scenario planning in general, and with respect to integral risk management in particular, Formative Scenario Analysis allows for the development of reliable and reproducible scenarios in order to design more specifically an application framework for the sustainable assessment of natural hazards impact. The overall aim is to optimise the hazard mapping and zoning procedure by methodologically integrating quantitative and qualitative knowledge.


    Amanuel Ayde Ergado


    In the computer domain, professionals are limited in number while the number of institutions looking for computer professionals is high. The aim of this study is to develop a self-learning expert system which provides troubleshooting information about problems occurring in computer systems, enabling information and communication technology technicians and computer users to solve problems effectively and efficiently and to utilize computers and computer-related resources. Domain know...

  7. Reliability analysis of repairable systems using system dynamics modeling and simulation

    Srinivasa Rao, M.; Naikan, V. N. A.


    The study and analysis of repairable standby systems is an important topic in reliability. Analytical techniques become very complicated and unrealistic, especially for modern complex systems. There have been attempts in the literature to evolve more realistic techniques using a simulation approach for the reliability analysis of systems. This paper proposes a hybrid approach, called the Markov system dynamics (MSD) approach, which combines the Markov approach with system dynamics simulation for reliability analysis and for studying the dynamic behavior of systems. This approach has the advantages of both the Markov and system dynamics methodologies. The proposed framework is illustrated for a standby system with repair. The results of the simulation, when compared with those obtained by traditional Markov analysis, clearly validate the MSD approach as an alternative approach for reliability analysis.
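The Markov half of such an analysis can be illustrated with a toy continuous-time Markov chain for a one-unit-operating, one-standby system with repair; the failure and repair rates below are invented, and the paper's system dynamics layer is not modeled here.

```python
import numpy as np

lam, mu = 0.01, 0.5  # assumed failure and repair rates (per hour)

# States: 0 = both units good, 1 = one failed (under repair), 2 = both failed
# Generator matrix of the birth-death chain
Q = np.array([
    [-lam,         lam,  0.0],
    [  mu, -(mu + lam),  lam],
    [ 0.0,          mu,  -mu],
])

# Steady state: solve pi Q = 0 subject to sum(pi) = 1
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]  # system is down only when both units have failed
print(round(availability, 6))
```

For a birth-death chain like this the steady state is also available in closed form (pi_k proportional to (lam/mu)^k), which makes the numerical solution easy to cross-check.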


    才庆祥; 彭世济; 张达贤


    Subject to various stochastic factors, surface mining engineering reliability is difficult to evaluate using general reliability mathematics. The concept of reliability measurement is introduced, and the authors have combined a system simulation method with CAD techniques to develop an interactive colour character-graphic design system for evaluating and solving mining engineering reliability in surface mines under given constraints.

  9. Reliability and Maintainability Data for Liquid Metal Cooling Systems

    Cadwallader, Lee Charles [Idaho National Laboratory]


    One of the coolants of interest for future fusion breeding blankets is lead-lithium. As a liquid metal it offers the advantages of high temperature operation for good station efficiency, low pressure, and moderate flow rate. This coolant is also under examination for use in test blanket modules to be used in the ITER international project. To perform reliability, availability, maintainability and inspectability (RAMI) assessment as well as probabilistic safety assessment (PSA) of lead-lithium cooling systems, component failure rate data are needed to quantify the system models. RAMI assessment also requires repair time data and inspection time data. This paper presents a new survey of the data sets that are available at present to support RAMI and PSA quantification. Recommendations are given for the best data values to use when quantifying system models.

  10. One approach for evaluating the Distributed Computing Design System (DCDS)

    Ellis, J. T.


    The Distributed Computer Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.

  11. Improving human reliability through better nuclear power plant system design. Progress report

    Golay, M.W.


    The project on "Development of a Theory of the Dependence of Human Reliability upon System Designs as a Means of Improving Nuclear Power Plant Performance" has been undertaken in order to address the important problem of human error in advanced nuclear power plant designs. Most of the creativity in formulating such concepts has focused upon improving the mechanical reliability of safety-related plant systems. However, the lack of a mature theory has retarded similar progress in reducing the likely frequencies of human errors. The main design mechanism used to address this class of concerns has been to reduce or eliminate the human role in plant operations and accident response. The plan of work being pursued in this project is to perform a set of experiments involving human subjects who are required to operate, diagnose and respond to changes in computer-simulated systems relevant to those encountered in nuclear power plants. In the tests the systems are made to differ in complexity in a systematic manner. The computer program used to present the problems to be solved also records the response of the operator as it unfolds. Ultimately this computer is also to be used in compiling the results of the project. The work of this project is focused upon nuclear power plant applications. However, the pervasiveness of human error in using all sorts of electromechanical machines gives it a much greater potential importance. Because of this we are attempting to pursue our work in a fashion permitting broad generalizations.

  12. Health and safety management system audit reliability pilot project.

    Dyjack, D T; Redinger, C F; Ridge, R S


    This pilot study assessed occupational health and safety (OHS) management system audit finding reliability using a modified test-retest method. Two industrial hygienists with similar training and education conducted four, 1-day management system audits in four dissimilar organizational environments. The researchers examined four auditable sections (employee participation, training, controls, and communications) contained in a publicly available OHS management system assessment instrument. At each site, 102 auditable clauses were evaluated using a progressive 6-point scale. The team examined both the consistency of and agreement between the scores of the two auditors. Consistency was evaluated by calculating the Pearson r correlations for the two auditors' scores at each site and for each section within each site. Pearson correlations comparing overall scores for each site were all very low, ranging from 0.206 to 0.543. Training and communication system assessments correlated the highest, whereas employee participation and control system scores correlated the least. To measure agreement, t-tests were first calculated to determine whether the differences were statistically significant. Aggregate mean scores for two of the four sites were significantly different. Of the 16 total sections evaluated (i.e., 4 sections per site), seven scores were significantly different. Finally, the agreement of the scores between the two auditors for the four sites was evaluated by calculating two types of intraclass correlation coefficients, all of which failed to meet the minimum requirement for agreement. These findings suggest that opportunities for improving the reliability of the instrument and the audit process exist. Future research should include governmental and commercial OHS program assessments and related environmental management systems and their attendant audit protocols.

  13. Improving the reliability of stator insulation system in rotating machines

    Gupta, G.K.; Sedding, H.G.; Culbert, I.M. [Ontario Hydro, Toronto, ON (Canada)]


    Reliable performance of rotating machines, especially generators and primary heat transport pump motors, is critical to the efficient operation of nuclear stations. A significant number of premature machine failures have been attributed to stator insulation problems. Ontario Hydro has attempted to assure the long-term reliability of the insulation system in critical rotating machines through proper specifications and quality assurance tests for new machines, and through periodic on-line and off-line diagnostic tests on machines in service. The experience gained over the last twenty years is presented in this paper. Functional specifications have been developed for the insulation system in critical rotating machines based on engineering considerations and our past experience. These specifications include insulation stress, insulation resistance and polarization index, partial discharge levels, dissipation factor and tip-up, and AC and DC hipot tests. Voltage endurance tests are specified for the groundwall insulation system of full-size production coils and bars. For machines with multi-turn coils, turn insulation strength for fast-fronted surges is specified and verified through tests on all coils in the factory and on samples of finished coils in the laboratory. Periodic on-line and off-line diagnostic tests are performed to assess the condition of the stator insulation system in machines in service. Partial discharges are measured on-line using several techniques to detect any excessive degradation of the insulation system in critical machines. Novel sensors have been developed and installed in several machines to facilitate measurements of partial discharges on operating machines. Several off-line tests are performed either to confirm problems indicated by the on-line tests or to assess the insulation system in machines which cannot be easily tested on-line. Experience with these tests, including their capabilities and limitations, is presented. (author)

  14. Power system reliability impacts of wind generation and operational reserve requirements

    E. Gil


    Due to its variability, wind generation integration presents a significant challenge to power system operators, who must maintain adequate reliability levels while ensuring least-cost operation. This paper explores the trade-off between the benefits associated with higher wind penetration and the additional operational reserve requirements that it imposes. This trade-off is valued in terms of its effect on power system reliability, measured as an amount of unserved energy. The paper also focuses on how changing the Value of Lost Load (VoLL) can be used to attain different reliability targets, and how wind power penetration and the diversity of the wind energy resource impact quality of supply (in terms of instances of unserved energy). The evaluation of different penetrations of wind power generation, different wind speed profiles, wind resource diversity, and different operational reserve requirements is conducted on the Chilean Northern Interconnected System (SING) using statistical modeling of wind speed time series and computer simulation through a 24-hour-ahead unit commitment algorithm and a Monte Carlo simulation scheme. Results for the SING suggest that while wind generation can significantly reduce generation costs, it can also imply higher security costs to reach acceptable reliability levels.
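The unserved-energy metric can be sketched with a toy Monte Carlo loop (every number below is invented, and the paper's unit commitment and wind time-series models are far richer): sample thermal outages and a variable wind infeed, and average the resulting shortfall.

```python
import random

random.seed(1)

thermal = [(300, 0.04), (200, 0.05), (150, 0.06)]  # (MW, forced-outage rate)
wind_capacity = 120                                 # MW, assumed
load = 550                                          # MW, assumed single hour

def sample_unserved():
    """One Monte Carlo draw of unserved power (MW) for the hour."""
    avail = sum(cap for cap, forr in thermal if random.random() > forr)
    wind = wind_capacity * random.betavariate(2, 5)  # crude wind-output model
    return max(0.0, load - avail - wind)

n = 100_000
eue = sum(sample_unserved() for _ in range(n)) / n  # expected unserved energy, MWh
print(round(eue, 3))
```

In a full study this inner draw would be replaced by a 24-hour unit commitment solution per sample, but the reliability estimate is still an average of shortfalls over many sampled system states.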

  15. Improvement of the reliability graph with general gates to analyze the reliability of dynamic systems that have various operation modes

    Shin, Seung Ki [Div. of Research Reactor System Design, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); No, Young Gyu; Seong, Poong Hyun [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)


    The safety of nuclear power plants is analyzed by probabilistic risk assessment, and fault tree analysis, together with event tree analysis, is the most widely used method for such an assessment. One well-known disadvantage of the fault tree is that drawing a fault tree for a complex system is a very cumbersome task. Thus, several graphical modeling methods have been proposed for the convenient and intuitive modeling of complex systems. In this paper, the reliability graph with general gates (RGGG) method, one of the intuitive graphical modeling methods based on Bayesian networks, is improved for the reliability analysis of dynamic systems whose operation modes vary with time. A reliability matrix is proposed and it is explained how to utilize the reliability matrix in the RGGG for various cases of operation mode changes. The proposed RGGG with a reliability matrix provides convenient and intuitive modeling of various operation modes of complex systems, and can also be utilized with dynamic nodes that analyze the failure sequences of subcomponents. The combinatorial use of a reliability matrix with dynamic nodes is illustrated through an application to a shutdown cooling system in a nuclear power plant.
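Loosely in the spirit of gate-based reliability models such as the RGGG (this sketch is not the RGGG algorithm, and all component reliabilities are invented), per-mode component reliabilities can be tabulated and then combined through series/parallel gates, with the table playing the role of the paper's reliability matrix:

```python
def and_gate(*r):
    """All inputs must work (series structure)."""
    p = 1.0
    for x in r:
        p *= x
    return p

def or_gate(*r):
    """Any input suffices (parallel structure, independent failures)."""
    q = 1.0
    for x in r:
        q *= (1.0 - x)
    return 1.0 - q

# Component reliabilities per operation mode (the "reliability matrix" analogue)
modes = {
    "normal":   {"pump_a": 0.98, "pump_b": 0.98, "controller": 0.995},
    "shutdown": {"pump_a": 0.95, "pump_b": 0.95, "controller": 0.99},
}

for mode, r in modes.items():
    # Redundant pumps in parallel, in series with the controller
    system = and_gate(or_gate(r["pump_a"], r["pump_b"]), r["controller"])
    print(mode, round(system, 5))
```

Evaluating the same structure against a different column of the table is what lets one model follow the system through its operation modes instead of drawing a separate fault tree per mode.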

  16. Computers as components principles of embedded computing system design

    Wolf, Marilyn


    Computers as Components: Principles of Embedded Computing System Design, 3e, presents essential knowledge on embedded systems technology and techniques. Updated for today's embedded systems design methods, this edition features new examples including digital signal processing, multimedia, and cyber-physical systems. Author Marilyn Wolf covers the latest processors from Texas Instruments, ARM, and Microchip Technology plus software, operating systems, networks, consumer devices, and more. Like the previous editions, this textbook uses real processors to demonstrate both technology and techniques.

  17. Automated Computer Access Request System

    Snook, Bryan E.


    The Automated Computer Access Request (AutoCAR) system is a Web-based account provisioning application that replaces the time-consuming paper-based computer-access request process at Johnson Space Center (JSC). AutoCAR combines rules-based and role-based functionality in one application to provide a centralized system that is easily and widely accessible. The system features a work-flow engine that facilitates request routing, a user registration directory containing contact information and user metadata, an access request submission and tracking process, and a system administrator account management component. This provides full, end-to-end disposition approval chain accountability from the moment a request is submitted. By blending both rules-based and role-based functionality, AutoCAR has the flexibility to route requests based on a user's nationality, JSC affiliation status, and other export-control requirements, while ensuring a user's request is addressed by either a primary or backup approver. All user accounts that are tracked in AutoCAR are recorded and mapped to the native operating system schema on the target platform where the user accounts reside. This allows for future extensibility for supporting creation, deletion, and account management directly on the target platforms by way of AutoCAR. The system's directory-based lookup and day-to-day change analysis of directory information determines personnel moves, deletions, and additions, and automatically notifies a user via e-mail to revalidate his/her account access as a result of such changes. AutoCAR is a Microsoft classic active server page (ASP) application hosted on a Microsoft Internet Information Server (IIS).

  18. Research on computer systems benchmarking

    Smith, Alan Jay (Principal Investigator)


    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction: a new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance; the performance impact of optimization was examined in the context of our methodology for CPU performance characterization, based on the abstract machine model. Benchmark programs are analyzed in another paper: a machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments are summarized more specifically in this report, as well as those smaller in magnitude supported by this grant.

  19. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Quality Assurance Manual

    C. L. Smith; R. Nims; K. J. Kvarfordt; C. Wharton


    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment using a personal computer running the Microsoft Windows operating system. SAPHIRE is primarily funded by the U.S. Nuclear Regulatory Commission (NRC). The role of the INL in this project is that of software developer and tester. This development takes place using formal software development procedures and is subject to quality assurance (QA) processes. The purpose of this document is to describe how the SAPHIRE software QA is performed for Versions 6 and 7, what constitutes its parts, and the limitations of those processes.

  20. A New Approach to Provide Reliable Data Systems Without Using Space-Qualified Electronic Components

    Häbel, W.

    This paper describes the present situation and the expected trends with regard to the availability of electronic components, their quality levels, technology trends and sensitivity to the space environment. Many recognized vendors have already discontinued their MIL production lines, and state-of-the-art components will in many cases not be offered at this quality level because of the shrinking market. It therefore becomes obvious that new methods need to be considered for how to build reliable data systems for space applications without high-rel parts. One of the most promising approaches, described in this paper, is the identification, masking and suppression of faults by developing fault-tolerant computer systems.
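The fault-masking idea behind building reliable systems from non-space-qualified parts can be illustrated by the classic triple modular redundancy (TMR) scheme, a generic example rather than the paper's specific architecture; the module reliability below is an assumption.

```python
def majority(a, b, c):
    """Bitwise 2-of-3 majority vote over three module outputs."""
    return (a & b) | (a & c) | (b & c)

def tmr_reliability(r):
    """Probability that at least 2 of 3 identical modules work,
    assuming independent failures and a perfect voter."""
    return r**3 + 3 * r**2 * (1 - r)

r = 0.95  # assumed reliability of one commercial-grade module
print(round(tmr_reliability(r), 6))  # exceeds r for any r > 0.5
```

A single faulty module is outvoted, so component-level faults are masked at the system level; the scheme only helps when each module is better than a coin flip, and a real design must also protect the voter itself.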

  1. Real-time reliability prediction for dynamic systems with both deteriorating and unreliable components

    XU ZhengGuo; JI YinDong; ZHOU DongHua


    As an important technology for predictive maintenance, failure prognosis has attracted more and more attention in recent years. Real-time reliability prediction is one effective solution to failure prognosis. Considering a dynamic system that is composed of normal, deteriorating and unreliable components, this paper proposes an integrated approach to perform real-time reliability prediction for such a class of systems. For a deteriorating component, the degradation is modeled by a time-varying fault process which is a linear or approximately linear function of time. The behavior of an unreliable component is described by a random variable which has two possible values corresponding to the operating and malfunction conditions of this component. The whole proposed approach contains three algorithms. A modified interacting multiple model particle filter is adopted to estimate the dynamic system's state variables and the unmeasurable time-varying fault. An exponential smoothing algorithm named Holt's method is used to predict the fault process. In the end, the system's reliability is predicted in real time by use of the Monte Carlo strategy. The proposed approach can effectively predict the impending failure of a dynamic system, which is verified by computer simulations based on a three-vessel water tank system.
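The Holt's-method step the abstract mentions, double exponential smoothing of a drifting fault signal, can be sketched as follows; the degradation trace and smoothing gains are invented, and the paper's particle filter and Monte Carlo stages are not reproduced here.

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=3):
    """Holt's linear (double) exponential smoothing with h-step-ahead forecasts."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)  # smooth the level
        trend = beta * (level - prev_level) + (1 - beta) * trend  # smooth the trend
    return [level + (h + 1) * trend for h in range(horizon)]

# A roughly linear degradation trace: forecasts should keep climbing
fault = [0.0, 0.11, 0.19, 0.32, 0.41, 0.52]
f = holt_forecast(fault)
print([round(v, 3) for v in f])
```

Because the method carries an explicit trend term, the forecasts extrapolate the drift rather than flattening out, which is what makes it suitable for projecting a linearly growing fault toward a failure threshold.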

  2. Computer vision in control systems

    Jain, Lakhmi


    Volume 1: This book is focused on the recent advances in computer vision methodologies and technical solutions using conventional and intelligent paradigms. The contributions include: Morphological Image Analysis for Computer Vision Applications; Methods for Detecting Structural Changes in Computer Vision Systems; Hierarchical Adaptive KL-based Transform: Algorithms and Applications; Automatic Estimation of Parameters of Image Projective Transforms Based on Object-invariant Cores; A Way of Energy Analysis for Image and Video Sequence Processing; Optimal Measurement of Visual Motion Across Spatial and Temporal Scales; Scene Analysis Using Morphological Mathematics and Fuzzy Logic; Digital Video Stabilization in Static and Dynamic Scenes; Implementation of Hadamard Matrices for Image Processing; A Generalized Criterion ...

  3. Reliability of hydroelectric generation components, systems and units; Confiabilidad de componentes, sistemas y unidades de generacion hidroelectrica

    Sanchez Sanchez, Ramon; Torres Toledano, Gerardo; Franco Nava, Jose Manuel [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)


    This article presents a methodology for the calculation of the reliability of components, systems and hydroelectric generating units, as well as the scope of a computational system for the evaluation of such reliability. In the case of the reliability calculation of components and systems, the computer program is not limited to hydro stations and can be used in other types of systems.

  4. Study of turboprop systems reliability and maintenance costs


    The overall reliability and maintenance costs (R&MCs) of past and current turboprop systems were examined. Maintenance cost drivers were found to be scheduled overhaul (40%), lack of modularity, particularly in the propeller and reduction gearbox, and lack of inherent durability (reliability) of some parts. Comparisons were made between the 501-D13/54H60 turboprop system and the widely used JT8D turbofan. It was found that the total maintenance cost per flight hour of the turboprop was 75% higher than that of the JT8D turbofan. Part of this difference was due to propeller and gearbox costs being higher than those of the fan and reverser, but most of the difference was in the engine core, where the older technology turboprop core maintenance costs were nearly 70% higher than for the turbofan. The estimated maintenance costs of both the advanced turboprop and the advanced turbofan were less than those of the JT8D. The conclusion was that an advanced turboprop and an advanced turbofan, using similar cores, will have very competitive maintenance costs per flight hour.

  5. Adaptive and Reliable Control Algorithm for Hybrid System Architecture

    Osama Abdel Hakeem Abdel Sattar


    A stand-alone system is an autonomous system that supplies electricity without being connected to the electric grid. Hybrid systems combine renewable energy sources that are never depleted, such as solar photovoltaic (PV), wind and hydroelectric power, with other sources of energy, such as diesel. If these hybrid systems are optimally designed, they can be more cost effective and reliable than single-source systems. However, the design of hybrid systems is complex because of the uncertain renewable energy supplies, the load demands and the non-linear characteristics of some components, so the design problem cannot be solved easily by classical optimisation methods. The use of heuristic techniques, such as genetic algorithms, can give better results than classical methods. This paper presents a hybrid system control algorithm and dispatch strategy design in which wind is the primary energy resource, complemented by photovoltaic cells. The design is dimensioned for a maximum load of 2000 kW, supplied by 1500 kW of wind capacity, 500 kW of solar capacity and 2000 kW of diesel capacity. The main task of the proposed algorithm is to take full advantage of the wind and solar energy when available and to minimize diesel fuel consumption.
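    The priority dispatch the abstract describes can be sketched as follows: serve the load from wind first, then PV, and run the diesel generator only for the residual. This is a hedged illustration of the general idea, not the paper's algorithm; the capacities follow the figures quoted in the abstract.

```python
def dispatch(load_kw, wind_kw, pv_kw, diesel_cap_kw=2000.0):
    """Priority dispatch: use available wind first, then PV,
    and run the diesel generator only for the residual load."""
    wind_used = min(load_kw, wind_kw)
    pv_used = min(load_kw - wind_used, pv_kw)
    diesel_used = min(load_kw - wind_used - pv_used, diesel_cap_kw)
    unmet = load_kw - wind_used - pv_used - diesel_used
    return {"wind": wind_used, "pv": pv_used, "diesel": diesel_used, "unmet": unmet}

# 2000 kW peak load, with 1200 kW of wind and 300 kW of PV momentarily available:
print(dispatch(2000, wind_kw=1200, pv_kw=300))
```

    A real controller would add battery storage, minimum diesel loading and ramp limits, which is where the heuristic optimisation discussed above comes in.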

  6. When does a physical system compute?

    Horsman, Clare; Stepney, Susan; Wagner, Rob C; Kendon, Viv


    Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not, leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a 'computational entity', and its critical role in defining when computing is taking place in physical systems.

  7. The role of reliability graph models in assuring dependable operation of complex hardware/software systems

    Patterson-Hine, F. A.; Davis, Gloria J.; Pedar, A.


    The complexity of computer systems currently being designed for critical applications in the scientific, commercial, and military arenas requires the development of new techniques for utilizing models of system behavior in order to assure 'ultra-dependability'. The complexity of these systems, such as Space Station Freedom and the Air Traffic Control System, stems from their highly integrated designs containing both hardware and software as critical components. Reliability graph models, such as fault trees and digraphs, are used frequently to model hardware systems. Their applicability for software systems has also been demonstrated for software safety analysis and the analysis of software fault tolerance. This paper discusses further uses of graph models in the design and implementation of fault management systems for safety critical applications.
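    The reliability graph models mentioned above combine component failure probabilities through logic gates. A minimal fault-tree sketch (component names and probabilities are hypothetical; basic events are assumed independent):

```python
def failure_prob(node, basic):
    """Evaluate a small fault tree assuming independent basic events.
    `node` is either a basic-event name or ("AND"/"OR", [children])."""
    if isinstance(node, str):
        return basic[node]
    gate, children = node
    probs = [failure_prob(c, basic) for c in children]
    if gate == "AND":                      # all children must fail
        p = 1.0
        for q in probs:
            p *= q
        return p
    # OR gate: the subsystem fails unless every child survives
    p_ok = 1.0
    for q in probs:
        p_ok *= 1.0 - q
    return 1.0 - p_ok

# Hypothetical top event: a software fault OR both redundant CPUs failing.
tree = ("OR", ["sw", ("AND", ["cpu_a", "cpu_b"])])
basic = {"sw": 1e-4, "cpu_a": 1e-3, "cpu_b": 1e-3}
print(failure_prob(tree, basic))   # ~1.01e-4
```

    Even this toy tree shows the point made in the abstract: with hardware duplicated, the software branch dominates the top-event probability.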

  2. '95 computer system operation project

    Kim, Young Taek; Lee, Hae Cho; Park, Soo Jin; Kim, Hee Kyung; Lee, Ho Yeun; Lee, Sung Kyu; Choi, Mi Kyung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)


    This report describes the overall project works related to the operation of mainframe computers, the management of nuclear computer codes and the nuclear computer code conversion project. The results of the project are as follows: 1. the operation and maintenance of the three mainframe computers and other utilities; 2. the management of the nuclear computer codes; 3. the completion of the computer code conversion project. 26 tabs., 5 figs., 17 refs. (Author)

  9. Reliability Considerations of ULP Scaled CMOS in Spacecraft Systems

    White, Mark; MacNeal, Kristen; Cooper, Mark


    NASA, the aerospace community, and other high reliability (hi-rel) users of advanced microelectronic products face many challenges as technology continues to scale into the deep sub-micron region. Decreasing the feature size of CMOS devices not only allows more components to be placed on a single chip, but it increases performance by allowing faster switching (or clock) speeds with reduced power compared to larger scaled devices. Higher performance, and lower operating and stand-by power characteristics of Ultra-Low Power (ULP) microelectronics are not only desirable, but also necessary to meet low power consumption design goals of critical spacecraft systems. The integration of these components in such systems, however, must be balanced with the overall risk tolerance of the project.

  10. A particle swarm model for estimating reliability and scheduling system maintenance

    Puzis, Rami; Shirtz, Dov; Elovici, Yuval


    Modifying data and information system components may introduce new errors and deteriorate the reliability of the system. Reliability can be efficiently regained with reliability centred maintenance, which requires reliability estimation for maintenance scheduling. A variant of the particle swarm model is used to estimate reliability of systems implemented according to the model view controller paradigm. Simulations based on data collected from an online system of a large financial institute are used to compare three component-level maintenance policies. Results show that appropriately scheduled component-level maintenance greatly reduces the cost of upholding an acceptable level of reliability by reducing the need for system-wide maintenance.

  11. Computing abstractions of nonlinear systems

    Reißig, Gunther


    We present an efficient algorithm for computing discrete abstractions of arbitrary memory span for nonlinear discrete-time and sampled systems, in which, apart from possibly numerically integrating ordinary differential equations, the only nontrivial operation to be performed repeatedly is to distinguish empty from non-empty convex polyhedra. We also provide sufficient conditions for the convexity of attainable sets, which is an important requirement for the correctness of the method we propose. It turns out that this requirement can be met under rather mild conditions, which essentially reduce to sufficient smoothness in the case of sampled systems. Practicability of our approach in the design of discrete controllers for continuous plants is demonstrated by an example.

  12. Hydronic distribution system computer model

    Andrews, J.W.; Strasser, J.J.


    A computer model of a hot-water boiler and its associated hydronic thermal distribution loop has been developed at Brookhaven National Laboratory (BNL). It is intended to be incorporated as a submodel in a comprehensive model of residential-scale thermal distribution systems developed at Lawrence Berkeley National Laboratory (LBL). This will give the combined model the capability of modeling forced-air and hydronic distribution systems in the same house using the same supporting software. This report describes the development of the BNL hydronics model, initial results and internal consistency checks, and its intended relationship to the LBL model. A method of interacting with the LBL model that does not require physical integration of the two codes is described. This will provide capability now, with reduced up-front cost, as long as the number of runs required is not large.

  13. Computer systems and software engineering

    Mckay, Charles W.


    The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.

  14. Trusted computing for embedded systems

    Soudris, Dimitrios; Anagnostopoulos, Iraklis


    This book describes the state-of-the-art in trusted computing for embedded systems. It shows how a variety of security and trusted computing problems are addressed currently and what solutions are expected to emerge in the coming years. The discussion focuses on attacks aimed at hardware and software for embedded systems, and the authors describe specific solutions to create security features. Case studies are used to present new techniques designed as industrial security solutions. Coverage includes development of tamper resistant hardware and firmware mechanisms for lightweight embedded devices, as well as those serving as security anchors for embedded platforms required by applications such as smart power grids, smart networked and home appliances, environmental and infrastructure sensor networks, etc. The book enables readers to address a variety of security threats to embedded hardware and software, and describes the design of secure wireless sensor networks, to address secure authen...

  15. A reliable and valid questionnaire was developed to measure computer vision syndrome at the workplace.

    Seguí, María del Mar; Cabrero-García, Julio; Crespo, Ana; Verdú, José; Ronda, Elena


    The aim of this study was to design and validate a questionnaire to measure visual symptoms related to exposure to computers in the workplace. Our computer vision syndrome questionnaire (CVS-Q) was based on a literature review and validated through discussion with experts and performance of a pretest, pilot test, and retest. Content validity was evaluated by occupational health, optometry, and ophthalmology experts. Rasch analysis was used in the psychometric evaluation of the questionnaire. Criterion validity was determined by calculating the sensitivity and specificity, receiver operator characteristic curve, and cutoff point. Test-retest repeatability was tested using the intraclass correlation coefficient (ICC) and concordance by Cohen's kappa (κ). The CVS-Q was developed with wide consensus among experts and was well accepted by the target group. It assesses the frequency and intensity of 16 symptoms using a single rating scale (symptom severity) that fits the Rasch rating scale model well. The questionnaire has sensitivity and specificity over 70% and achieved good test-retest repeatability both for the scores obtained [ICC = 0.802; 95% confidence interval (CI): 0.673, 0.884] and CVS classification (κ = 0.612; 95% CI: 0.384, 0.839).The CVS-Q has acceptable psychometric properties, making it a valid and reliable tool to control the visual health of computer workers, and can potentially be used in clinical trials and outcome research.
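    The test-retest concordance statistic reported above, Cohen's kappa, corrects raw agreement for chance. A minimal sketch (the test/retest classifications below are hypothetical, not the study's data):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two
    categorical ratings of the same subjects."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    p_chance = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical test/retest CVS classifications (1 = syndrome, 0 = none):
test_ratings = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
retest_ratings = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
print(cohens_kappa(test_ratings, retest_ratings))
```

    Here 80% raw agreement with balanced marginals reduces to κ = 0.6, in the same "good agreement" range as the study's κ = 0.612.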

  16. Aircraft systems design methodology and dispatch reliability prediction

    Bineid, Mansour


    Aircraft dispatch reliability was the main subject of this research, in the wider context of aircraft reliability. The factors affecting dispatch reliability, aircraft delays and their causes, and aircraft delay costs and magnitudes were examined. Delay cost elements and aircraft delay scenarios were also studied. The research concluded that aircraft dispatch reliability is affected by technical and non-technical factors, and that the former are under the designer's control. It showed that ...

  18. Contaminant monitoring of hydraulic systems. The need for reliable data

    Day, M.J. [Pall Europe Ltd., Portsmouth (United Kingdom)]; Rinkinen, J. [Tampere University of Technology, Tampere (Finland)]


    The need for both reliable operation of hydraulic and lubrication systems and long component lives has focused users on the benefits of controlling the contamination in the hydraulic fluid. Maximum operating (target) levels are being implemented as part of a condition based maintenance regime. If these are exceeded, maintenance effort is directed to correcting the rise in contamination level, making optimum use of resources because maintenance effort is only applied when it is necessary to do so. Fundamental to this aspect of condition based monitoring is the provision of accurate and reliable data in the shortest possible time. This way, corrective actions can be implemented immediately, minimising the damage to components. On-line monitoring devices are a way of achieving this and are seeing increased use, but some are affected by the condition of the fluid. Hence, there is a potential for giving incorrect data, which will waste time and effort by initiating unnecessary corrective actions. A more disturbing aspect is the effect on the user of continual errors. The most likely effect would be a loss of confidence in the technique or even complete rejection of it, and hence the potential benefits will be lost. This presentation explains how contaminant monitoring techniques are applied to ensure that the potential benefits of operating with clean fluids are realised. It examines the sources of error and shows how the user can interrogate the data and satisfy himself of its authenticity. (orig.) 14 refs.

  20. A Reliable Wireless Control System for Tomato Hydroponics.

    Ibayashi, Hirofumi; Kaneda, Yukimasa; Imahara, Jungo; Oishi, Naoki; Kuroda, Masahiro; Mineno, Hiroshi


    Agricultural systems using advanced information and communication (ICT) technology can produce high-quality crops in a stable environment while decreasing the need for manual labor. The system collects a wide variety of environmental data and provides the precise cultivation control needed to produce high value-added crops; however, there are the problems of packet transmission errors in wireless sensor networks or system failure due to having the equipment in a hot and humid environment. In this paper, we propose a reliable wireless control system for hydroponic tomato cultivation using the 400 MHz wireless band and the IEEE 802.15.6 standard. The 400 MHz band, which is lower than the 2.4 GHz band, has good obstacle diffraction, and zero-data-loss communication is realized using the guaranteed time-slot method supported by the IEEE 802.15.6 standard. In addition, this system has fault tolerance and a self-healing function to recover from faults such as packet transmission failures due to deterioration of the wireless communication quality. In our basic experiments, the 400 MHz band wireless communication was not affected by the plants' growth, and the packet error rate was less than that of the 2.4 GHz band. In summary, we achieved a real-time hydroponic liquid supply control with no data loss by applying a 400 MHz band WSN to hydroponic tomato cultivation.
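    The self-healing behaviour described above can be illustrated with a toy retransmission loop: a packet that is lost in one guaranteed time slot is re-sent in the next, so control data is lost only if every retry fails. This is a hedged sketch of the general mechanism under an assumed per-slot loss rate, not the authors' protocol implementation.

```python
import random

def send_with_recovery(packet, slot_attempts=5, loss_rate=0.3, rng=random.random):
    """Self-healing delivery sketch: retransmit in successive guaranteed
    time slots until the packet gets through, or give up."""
    for attempt in range(1, slot_attempts + 1):
        delivered = rng() > loss_rate   # simulated lossy wireless link
        if delivered:
            return attempt              # number of slots consumed
    return None                         # persistent fault: raise an alarm

random.seed(1)
attempts = [send_with_recovery({"valve": "open"}) for _ in range(1000)]
success = sum(a is not None for a in attempts) / len(attempts)
print(success)   # with 5 retries, delivery fails only if all 5 slots drop: ~1 - 0.3**5
```

    Even a 30% per-slot loss rate yields better than 99.7% delivery after five slots, which is why slot-based retransmission can approach the zero-data-loss operation reported in the experiments.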

  1. Risk Based Reliability Centered Maintenance of DOD Fire Protection Systems


    Legible fragments from the report's reference list: Reliability Analysis of Underground Fire Water Piping at the Paducah Gaseous Diffusion Plant, January 1990; Appendix C; paper No. 7B, 1982; IEEE-Std-500-1984; INPO 83-034, Nuclear Plant Reliability Data Annual Report, October 1983; Nonelectronic Parts Reliability Data

  2. Using Expert Systems For Computational Tasks

    Duke, Eugene L.; Regenie, Victoria A.; Brazee, Marylouise; Brumbaugh, Randal W.


    A transformation technique enables inefficient expert systems to run in real time. The paper suggests the use of a knowledge compiler to transform the knowledge base and inference mechanism of an expert-system computer program into a conventional computer program. The main benefits are faster execution and reduced processing demands. In avionic systems, the transformation reduces the need for special-purpose computers.
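    The compilation idea above can be sketched in miniature: a declarative rule base is emitted as a fixed if-chain in a conventional function, so no run-time inference engine is needed. The rules, variable names and the Python target are all illustrative assumptions, not the paper's knowledge compiler.

```python
# Hypothetical avionics-style rule base: (condition expression, action name).
RULES = [
    ("altitude < 500 and gear_down is False", "WARN_GEAR"),
    ("airspeed > 380", "WARN_OVERSPEED"),
]

def compile_rules(rules):
    """'Knowledge compiler' sketch: emit a conventional function whose
    body is a fixed if-chain, then exec it once at build time."""
    lines = ["def evaluate(altitude, airspeed, gear_down):",
             "    actions = []"]
    for cond, action in rules:
        lines.append(f"    if {cond}:")
        lines.append(f"        actions.append({action!r})")
    lines.append("    return actions")
    namespace = {}
    exec("\n".join(lines), namespace)
    return namespace["evaluate"]

evaluate = compile_rules(RULES)
print(evaluate(altitude=400, airspeed=390, gear_down=False))
```

    The compiled function runs in constant, predictable time, which is the property that matters for the real-time avionics use case.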

  3. Software For Monitoring VAX Computer Systems

    Farkas, Les; Don, Ken; Lavery, David; Baron, Amy


    VAX Continuous Monitoring System (VAXCMS) computer program developed at NASA Headquarters to aid system managers in monitoring performances of VAX computer systems through generation of graphic images summarizing trends in performance metrics over time. VAXCMS written in DCL and VAX FORTRAN for use with DEC VAX-series computers running VMS 5.1 or later.

  4. Computer Aided Control System Design (CACSD)

    Stoner, Frank T.


    The design of modern aerospace systems relies on the efficient utilization of computational resources and the availability of computational tools to provide accurate system modeling. This research focuses on the development of a computer aided control system design application which provides a full range of stability analysis and control design capabilities for aerospace vehicles.

  5. An investigation of the reliability of Rapid Upper Limb Assessment (RULA) as a method of assessment of children's computing posture.

    Dockrell, Sara; O'Grady, Eleanor; Bennett, Kathleen; Mullarkey, Clare; Mc Connell, Rachel; Ruddy, Rachel; Twomey, Seamus; Flannery, Colleen


    Rapid Upper Limb Assessment (RULA) is a quick observation method of posture analysis. RULA has been used to assess children's computer-related posture, but the reliability of RULA on a paediatric population has not been established. The purpose of this study was to investigate the inter-rater and intra-rater reliability of the use of RULA with children. Video recordings of 24 school children were independently viewed by six trained raters who assessed their postures using RULA, on two separate occasions. RULA demonstrated higher intra-rater reliability than inter-rater reliability although both were moderate to good. RULA was more reliable when used for assessing the older children (8-12 years) than with the younger children (4-7 years). RULA may prove useful as part of an ergonomic assessment, but its level of reliability warrants caution for its sole use when assessing children, and in particular, younger children.

  6. Strength and Reliability of Wood for the Components of Low-cost Wind Turbines: Computational and Experimental Analysis and Applications

    Mishnaevsky, Leon; Freere, Peter; Sharma, Ranjan


    This paper reports the latest results of the comprehensive program of experimental and computational analysis of the strength and reliability of wooden parts of low cost wind turbines. The possibilities of predicting the strength and reliability of different types of wood are studied in a series of experiments and computational investigations. Low cost testing machines have been designed and employed for the systematic analysis of different sorts of Nepali wood to be used for the wind turbine construction. At the same time, computational micromechanical models of the deformation and strength of wood are developed.

  7. Reliability of structural systems subjected to extreme forcing events

    Joo, Han-Kyul; Sapsis, Themistoklis P


    We characterize the complex, heavy-tailed probability distribution functions (pdf) describing the response and its local extrema for structural systems subjected to random forcing that includes extreme events. Our approach is based on a recent probabilistic decomposition-synthesis technique, in which we decouple rare-event regimes from the background fluctuations. The result of the analysis has the form of a semi-analytical approximation formula for the pdf of the response (displacement, velocity, and acceleration) and the pdf of the local extrema. For special limiting cases (lightly damped or heavily damped systems) our analysis provides fully analytical approximations. We also demonstrate how the method can be applied to high dimensional structural systems through a two-degrees-of-freedom structural system undergoing rare events due to intermittent forcing. The derived formulas can be evaluated with very small computational cost and are shown to accurately capture the complicated heavy-tailed and asymmet...

  8. Intellectual control system simulation of carriage streams reliability and ecological safety

    Марина Володимирівна Хара


    A simulation of a control system for the reliability and ecological safety of carriage streams is offered in the article. It is based on dividing industrial transport complexes into two constituents that differ in the way contaminations are formed and exhausted: a subsystem of stationary sources (loading, unloading and repair) and a subsystem of mobile sources (carriage streams). The aim of the article is to propose a model of an intellectual system controlling the reliability and ecological safety of carriage streams, in order to form an effective control system in an industrial transport system. As a solution, the article offers a structure for the ecological safety control of an industrial transport complex with the following constituents: the controlled object; a sensor-based system; a system of ecological monitoring; an expert-information system; and a mathematical model of an intellectual resource control system consisting of three parts: an intellectual transformer (a consulting model including databases), the controlled object (the carriage park) and the managing device of the system (computing, transforming and executive devices).

  9. Criteria of Human-computer Interface Design for Computer Assisted Surgery Systems

    ZHANG Jian-guo; LIN Yan-ping; WANG Cheng-tao; LIU Zhi-hong; YANG Qing-ming


    In recent years, computer assisted surgery (CAS) systems have become more and more common in clinical practice, but few specific design criteria have been proposed for human-computer interfaces (HCI) in CAS systems. This paper gives universal criteria for HCI design in CAS systems through a demonstration application: total knee replacement (TKR) with a non-image-based navigation system. A typical computer assisted process can be divided into four phases: the preoperative planning phase, the intraoperative registration phase, the intraoperative navigation phase and, finally, the postoperative assessment phase. The interface design for each of the four phases is described in the demonstration application. The criteria summarized in this paper can help software developers achieve reliable and effective interfaces for new CAS systems more easily.

  10. Improving Wind Turbine Drivetrain Reliability Using a Combined Experimental, Computational, and Analytical Approach

    Guo, Y.; van Dam, J.; Bergua, R.; Jove, J.; Campbell, J.


    Nontorque loads induced by the wind turbine rotor overhang weight and aerodynamic forces can greatly affect drivetrain loads and responses. If not addressed properly, these loads can result in a decrease in gearbox component life. This work uses analytical modeling, computational modeling, and experimental data to evaluate a unique drivetrain design that minimizes the effects of nontorque loads on gearbox reliability: the Pure Torque(R) drivetrain developed by Alstom. The drivetrain has a hub-support configuration that transmits nontorque loads directly into the tower rather than through the gearbox as in other design approaches. An analytical model of Alstom's Pure Torque drivetrain provides insight into the relationships among turbine component weights, aerodynamic forces, and the resulting drivetrain loads. Main shaft bending loads are orders of magnitude lower than the rated torque and are hardly affected by wind conditions and turbine operations.

  11. Optimal Bi-Objective Redundancy Allocation for Systems Reliability and Risk Management.

    Govindan, Kannan; Jafarian, Ahmad; Azbari, Mostafa E; Choi, Tsan-Ming


    In the big data era, systems reliability is critical to effective systems risk management. In this paper, a novel multiobjective approach, hybridizing the well-known NSGA-II algorithm with an adaptive population-based simulated annealing (APBSA) method, is developed to solve systems reliability optimization problems. In the first step, a coevolutionary strategy is used to construct a strong algorithm. Since the proposed algorithm is very sensitive to parameter values, the response surface method is employed to estimate appropriate parameters for it. Moreover, to examine the performance of the proposed approach, several test problems are generated, and the proposed hybrid algorithm and other commonly known approaches (i.e., MOGA, NRGA, and NSGA-II) are compared with respect to four performance measures: 1) mean ideal distance; 2) diversification metric; 3) percentage of domination; and 4) data envelopment analysis. The computational studies have shown that the proposed algorithm is an effective approach for systems reliability and risk management.
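    Bi-objective redundancy allocation compares candidate designs by Pareto dominance: maximize system reliability while minimizing cost, and keep the non-dominated set. A minimal sketch with hypothetical candidate allocations (not the paper's test problems):

```python
def dominates(a, b):
    """a dominates b if a is no worse in both objectives (reliability:
    higher is better, cost: lower is better) and strictly better in one."""
    rel_a, cost_a = a
    rel_b, cost_b = b
    return (rel_a >= rel_b and cost_a <= cost_b) and (rel_a > rel_b or cost_a < cost_b)

def pareto_front(designs):
    """Return the non-dominated (reliability, cost) designs."""
    return [d for d in designs if not any(dominates(o, d) for o in designs)]

# Hypothetical candidate redundancy allocations: (system reliability, cost).
candidates = [(0.90, 10), (0.95, 14), (0.95, 16), (0.99, 25), (0.88, 12)]
print(pareto_front(candidates))   # [(0.90, 10), (0.95, 14), (0.99, 25)]
```

    Algorithms such as NSGA-II build on exactly this dominance relation, adding selection pressure and diversity maintenance over large candidate populations.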

  12. Reliability of Substation Protection System Based on IEC61850

    XIONG Xiaofu; YU Jun; LIU Xiaofang; SHEN Zhijan


    Although the new technology of protection and automation systems for substations based on the IEC61850 standard has developed rapidly in China, reliability measures that depend on this technology need further research. Taking advantage of convenient information sharing, two new schemes, the shared backup protection unit (SBPU) and signal backup (SB), are proposed to solve the failure problems of protective devices and current/voltage transducers respectively, and the working principles of the two schemes are described. Furthermore, the key technologies for realizing the two schemes are proposed: on-line diagnosis of protective device failures and on-line status diagnosis of optical or electronic current/voltage transducers.

  13. Tradeoffs for reliable quantum information storage in 2D systems

    Bravyi, Sergey; Terhal, Barbara


    We ask whether there are fundamental limits on storing quantum information reliably in a bounded volume of space. To investigate this question, we study quantum error correcting codes specified by geometrically local commuting constraints on a 2D lattice of finite-dimensional quantum particles. For these 2D systems, we derive a tradeoff between the number of encoded qubits k, the distance of the code d, and the number of particles n. It is shown that kd^2 = O(n), where the coefficient in O(n) depends only on the locality of the constraints and the dimension of the Hilbert spaces describing individual particles. We show that the analogous tradeoff for classical information storage is k√d = O(n).
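    One concrete consequence of the quantum tradeoff (a worked note, not part of the abstract itself):

```latex
% With k = 1 encoded qubit, the tradeoff k d^{2} \le c\,n gives
\[
  d \le \sqrt{c\,n},
\]
% so the distance of a 2D local commuting code grows at most as
% \sqrt{n}. A distance-d surface code encodes k = 1 qubit in
% n = \Theta(d^{2}) physical qubits, and therefore saturates the
% bound up to the constant c.
```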

  14. Budget-Optimal Task Allocation for Reliable Crowdsourcing Systems

    Karger, David R; Shah, Devavrat


    Crowdsourcing systems, in which numerous tasks are electronically distributed to numerous "information piece-workers", have emerged as an effective paradigm for human-powered solving of large scale problems in domains such as image classification, data entry, optical character recognition, recommendation, and proofreading. Because these low-paid workers can be unreliable, nearly all crowdsourcers must devise schemes to increase confidence in their answers, typically by assigning each task multiple times and combining the answers in some way such as majority voting. In this paper, we consider a general model of such crowdsourcing tasks and pose the problem of minimizing the total price (i.e., number of task assignments) that must be paid to achieve a target overall reliability. We give a new algorithm for deciding which tasks to assign to which workers and for inferring correct answers from the workers' answers. We show that our algorithm, inspired by belief propagation and low-rank matrix approximation, signi...
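The baseline redundancy scheme described here, assigning each binary task to several workers and taking a majority vote, can be quantified directly. A sketch of that baseline (not the authors' belief-propagation algorithm), assuming workers answer correctly with independent probability p:

```python
from math import comb

def majority_vote_accuracy(p, r):
    """Probability that a majority of r independent workers (each correct
    with probability p) returns the right binary answer; r assumed odd."""
    return sum(comb(r, k) * p**k * (1 - p)**(r - k)
               for k in range(r // 2 + 1, r + 1))

# More redundant assignments raise reliability, at linear cost in price.
for r in (1, 3, 5):
    print(r, round(majority_vote_accuracy(0.7, r), 4))
```

The total price grows linearly in r while the residual error decays, which is exactly the budget-versus-reliability tradeoff the paper's algorithm optimizes more cleverly than plain voting.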

  15. 76 FR 64082 - Mandatory Reliability Standards for the Bulk-Power System; Notice of Staff Meeting


    ... Energy Regulatory Commission Mandatory Reliability Standards for the Bulk-Power System; Notice of Staff...\\ Mandatory Reliability Standards for the Bulk-Power System, Order No. 693, FERC Stats. & Regs. ¶ 31,242... Discussion on the reliability issues relating to ``Single Point of Failure on Protection Systems,'' on...

  16. Impact of new computing systems on finite element computations

    Noor, A. K.; Storaasli, O. O.; Fulton, R. E.


    Recent advances in computer technology that are likely to impact finite element computations are reviewed. The characteristics of supersystems, highly parallel systems, and small systems (minicomputers and microcomputers) are summarized. The interrelation of numerical algorithms and software with parallel architectures is discussed. A scenario for the future hardware/software environment and finite element systems is presented. A number of research areas that have high potential for improving the effectiveness of finite element analysis in the new environment are identified.

  17. Transient Faults in Computer Systems

    Masson, Gerald M.


    A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far primarily considered software implementations. The error detection technique developed has the distinct advantage of having provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The foundation for an effective, low-overhead, software-based certification trail approach to real-time error detection resulting from transient fault phenomena was developed.

  18. Modelling a reliability system governed by discrete phase-type distributions

    Ruiz-Castro, Juan Eloy; Perez-Ocon, Rafael; Fernandez-Villodre, Gemma (Departamento de Estadistica e Investigacion Operativa, Universidad de Granada, 18071 Granada, Spain)


    We present an n-unit system with one unit online and the others in cold standby, attended by a single repairman. When the online unit fails, it goes to repair, and a standby unit instantaneously becomes the online one. The operational and repair times follow discrete phase-type distributions. Given that any discrete distribution defined on the positive integers is a discrete phase-type distribution, the system can be considered a general one. A model with an unlimited number of units is considered for approximating a system with a great number of units. We show that the process that governs the system is a quasi-birth-and-death process. For this system, performance reliability measures, the up and down periods, and the involved costs are calculated in matrix-algorithmic form. We show that the discrete case is not a trivial case of the continuous one. The results given in this paper have been implemented computationally with Matlab.
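The matrix-algorithmic form rests on standard discrete phase-type identities: for a representation (alpha, T), the survival function is P(L > t) = alpha T^t 1. A minimal sketch in pure Python, which reduces to the geometric distribution in the one-phase case (the truncated-sum mean below is a simplification of the usual matrix-inverse formula):

```python
def mat_vec(m, v):
    """Multiply a square matrix (list of rows) by a column vector."""
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def ph_survival(alpha, T, t):
    """P(lifetime > t) = alpha * T^t * 1 for a discrete phase-type (alpha, T)."""
    v = [1.0] * len(alpha)          # start from the all-ones vector
    for _ in range(t):
        v = mat_vec(T, v)           # apply T a total of t times
    return sum(a * x for a, x in zip(alpha, v))

def ph_mean(alpha, T, tol=1e-12):
    """Mean lifetime as the (truncated) sum of survival probabilities."""
    total, t = 0.0, 0
    while (s := ph_survival(alpha, T, t)) > tol:
        total += s
        t += 1
    return total

# One phase with self-transition probability 0.5 is the geometric case:
print(ph_survival([1.0], [[0.5]], 3))   # 0.5**3
print(ph_mean([1.0], [[0.5]]))
```

With more phases, the same two functions express Erlang-like and general discrete lifetimes, which is what makes the phase-type family "general" in the sense the abstract describes.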

  19. Bayesian Zero-Failure (BAZE) reliability demonstration testing procedure for components of nuclear reactor safety systems

    Waller, R.A.


    A Bayesian Zero-Failure (BAZE) reliability demonstration testing procedure is presented. The method is developed for an exponential failure-time model and a gamma prior distribution on the failure rate. A simple graphical approach using percentiles is used to fit the prior distribution. The procedure is given in an easily applied step-by-step form that does not require a computer for its implementation. The BAZE approach is used to obtain sample test plans for selected components of nuclear reactor safety systems.
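The zero-failure logic is easy to reproduce for the special case of a gamma prior with shape 1, where the posterior CDF has a closed form (the paper's graphical percentile fit of a general gamma prior is not reproduced here): observing zero failures over total test time T updates a Gamma(1, b) prior on the exponential failure rate to Gamma(1, b + T).

```python
from math import exp, log

def zero_failure_test_time(lam_star, confidence, prior_rate_b=0.0):
    """Total zero-failure test time T such that the posterior probability
    P(lambda <= lam_star) reaches `confidence`, for a Gamma(shape=1,
    rate=prior_rate_b) prior; the posterior is Gamma(1, prior_rate_b + T).
    prior_rate_b=0 is the (improper) limiting case of a flat prior."""
    return -log(1.0 - confidence) / lam_star - prior_rate_b

def posterior_confidence(lam_star, T, prior_rate_b=0.0):
    """CDF of the Gamma(1, prior_rate_b + T) posterior at lam_star."""
    return 1.0 - exp(-lam_star * (prior_rate_b + T))

# Demonstrate lambda <= 1e-3 per hour with 90% posterior probability:
T = zero_failure_test_time(1e-3, 0.90)
print(T, posterior_confidence(1e-3, T))
```

An informative prior (larger b) directly reduces the required test time, which is the practical appeal of the Bayesian demonstration plan.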

  20. Computer vision for driver assistance systems

    Handmann, Uwe; Kalinke, Thomas; Tzomakas, Christos; Werner, Martin; von Seelen, Werner


    Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and growing social acceptance. Especially in the field of driver assistance systems, scientific progress has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear-view mirror of a car. The approach consists of a sequential and a parallel branch of sensor and information processing. Three main tasks, namely initial segmentation (object detection), object tracking and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach is the integrative coupling of different algorithms providing partly redundant information.

  1. Integrated Computer System of Management in Logistics

    Chwesiuk, Krzysztof


    This paper aims at presenting a concept of an integrated computer system of management in logistics, particularly in supply and distribution chains. Consequently, the paper includes the basic idea of the concept of computer-based management in logistics and components of the system, such as CAM and CIM systems in production processes, and management systems for storage, materials flow, and for managing transport, forwarding and logistics companies. The platform which integrates computer-aided management systems is that of electronic data interchange.

  2. Conflict Resolution in Computer Systems

    G. P. Mojarov


    A conflict situation in computer systems (CS) is the phenomenon arising when processes have multi-access to shared resources and none of the involved processes can proceed because each is waiting for resources locked by other processes which, in turn, are in a similar position. Such a conflict situation is also called a deadlock, and it has a quite clear impact on the CS state. Finding practical algorithms to resolve deadlocks is of significant applied importance for ensuring the information security of the computing process, and the present article is therefore aimed at solving this relevant problem. The gravity of the situation depends on the types of processes in a deadlock, the types of resources used, the number of processes, and many other factors. A disadvantage of the deadlock-prevention method used in many modern operating systems, based on preliminary planning of the resources required by a process, is obvious: waiting time can be overly long. The prevention method based on interrupting a process and deallocating its resources is very specific and of little effect when there is a set of polytypic resources requested dynamically. The drawback of another method, preventing deadlock by ordering resources, consists in the restriction of possible sequences of resource requests. A different way of fighting deadlocks is avoidance, which presupposes predicting deadlocks before they appear. There are known methods [1,4,5] to define and prevent conditions under which deadlocks may occur. These use preliminary information on what resources a running process can request. Before allocating a free resource to a process, a test for a state "safety" condition is performed. The state is "safe" if deadlocks cannot occur in the future as a result of allocating the resource to the process. Otherwise the state is considered "hazardous", and resource allocation is postponed. The obvious...

  3. Moment Method Based on Fuzzy Reliability Sensitivity Analysis for a Degradable Structural System

    Song Jun; Lu Zhenzhou


    For a degradable structural system with a fuzzy failure region, a moment method based on a fuzzy reliability sensitivity algorithm is presented. According to the values of the performance function, the integration region for calculating the fuzzy failure probability is first split into a series of subregions in which the membership function values of the performance function within the fuzzy failure region can be approximated by a set of constants. The fuzzy failure probability is then transformed into a sum of products of the random failure probabilities and the approximate membership-function constants in the subregions. Furthermore, the fuzzy reliability sensitivity analysis is transformed into a series of random reliability sensitivity analyses, and the random reliability sensitivity can be obtained by the constructed moment method. The primary advantages of the presented method are higher efficiency for implicit performance functions of low and medium dimensionality and wide applicability to multiple failure modes and nonnormal basic random variables. The limitation is that the required computational effort grows exponentially with the dimensionality of the basic random variables; hence, it is not suitable for high-dimensionality problems. Compared with the available methods, the presented one is quite competitive when the dimensionality is lower than 10. Examples are presented to verify the advantages and indicate the limitations.
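The sum-of-products transformation can be illustrated with a toy case: discretize the membership function into bands over which it is approximately constant, and weight each band's random failure probability accordingly. A sketch assuming the performance function g is normally distributed (the band edges and membership constants below are invented for illustration):

```python
from math import erf, sqrt

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of N(mu, sigma^2) via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def fuzzy_failure_probability(band_edges, memberships, mu, sigma):
    """Sum over subregions: membership constant times the probability that
    the performance function g ~ N(mu, sigma^2) falls in that band."""
    total = 0.0
    for (lo, hi), m in zip(zip(band_edges, band_edges[1:]), memberships):
        total += m * (normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma))
    return total

# g < -1 is fully failed (membership 1); -1 <= g < 0 partially failed (0.5):
pf = fuzzy_failure_probability([-50.0, -1.0, 0.0], [1.0, 0.5], mu=1.0, sigma=1.0)
print(pf)
```

With all memberships set to 1 this reduces to the ordinary (crisp) failure probability, which is the consistency check the decomposition must pass.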

  4. Running a Reliable Messaging Infrastructure for CERN's Control System

    Ehm, F


    The current middleware for CERN's Controls System is based on two implementations: the CORBA-based Controls MiddleWare (CMW) and the Java Message Service (JMS). The JMS service is realized using the open-source messaging product ActiveMQ and has become an increasingly vital part of beam operations, as data need to be transported reliably for various areas such as the beam protection system, post-mortem analysis, beam commissioning and the alarm system. The current JMS service is made up of 18 brokers running either in clusters or as single nodes. The main service is deployed as a two-node cluster providing failover and load-balancing capabilities for high availability. Non-critical applications running on virtual machines or desktop machines read data via a third broker to decouple their load from the operational main cluster. This scenario was introduced last year, and the statistics showed an uptime of 99.998% and an average data serving rate of 1.6 GByte per minute, represented by around 150 messages per second. Depl...

  5. Digital optical computers at the optoelectronic computing systems center

    Jordan, Harry F.


    The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general purpose computing architecture as the focus, we are developing design techniques, tools and architecture for operation at the speed of light limit. Experimental work is being done with the somewhat low speed components currently available but with architectures which will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general purpose, stored program computer are being applied to other systems such as optimally controlled optical communication networks.

  6. Fisher Pierce products for improving distribution system reliability



    The challenges facing the electric power utility in the 1990s have changed significantly from those of even 10 years ago. The proliferation of automation and the personal computer has heightened the requirements and demands put on the electric distribution system. Today's customers, fighting to compete in a world market, demand quality, uninterrupted power service. Privatization and the concept of unregulated competition require utilities to streamline, to minimize system support costs and optimize power delivery efficiency. Fisher Pierce, serving the electric utility industry for over 50 years, offers a line of products to assist utilities in meeting these challenges. The Fisher Pierce family of products provides tools for the electric utility to exceed customer service demands. A full line of fault-indicating devices is offered to expedite system power restoration, both locally and in conjunction with SCADA systems. Fisher Pierce is the largest supplier of roadway lighting controls, manufacturing on a 6-million-dollar automated line to assure the highest quality in the world. The distribution system capacitor control line offers intelligent local or radio-linked switching control to maintain system voltage and VAr levels for quality and cost-efficient power delivery under varying customer loads. Additional products, designed to authenticate revenue metering calibration and verify on-site metering service wiring, help optimize the profitability of the utility, assuring continuous system service improvements for its customers.

  7. The Remote Computer Control (RCC) system

    Holmes, W.


    A system to remotely control job flow on a host computer from any touchtone telephone is briefly described. Using this system, a computer programmer can submit jobs to a host computer from any touchtone telephone. In addition, the system can be instructed by the user to call back when a job is finished. Because of this system, every touchtone telephone becomes a conversant computer peripheral. This system, known as the Remote Computer Control (RCC) system, utilizes touchtone input, touchtone output, voice input, and voice output. The RCC system is microprocessor-based and currently uses the INTEL 80/30 microcomputer. Using the RCC system, a user can submit, cancel, and check the status of jobs on a host computer. The RCC system peripherals consist of a CRT for operator control, a printer for logging all activity, mass storage for the storage of user parameters, and a PROM card for program storage.

  8. Development of human reliability analysis methodology and its computer code during low power/shutdown operation

    Chung, Chang Hyun; You, Young Woo; Huh, Chang Wook; Kim, Ju Yeul; Kim, Do Hyung; Kim, Yoon Ik; Yang, Hui Chang [Seoul National University, Seoul (Korea, Republic of); Jae, Moo Sung [Hansung University, Seoul (Korea, Republic of)


    The objective of this study is to develop an appropriate procedure for evaluating human error in LP/S (low power/shutdown) operation, together with a computer code that calculates human error probabilities (HEPs) within this framework. The applicability of typical HRA methodologies to LP/S is assessed, and a new HRA procedure, SEPLOT (Systematic Evaluation Procedure for LP/S Operation Tasks), which reflects the characteristics of LP/S, is developed by selecting and categorizing human actions through a review of existing studies. This procedure is applied to evaluate the LOOP (loss of off-site power) sequence, and the HEPs obtained using SEPLOT are used in the quantitative evaluation of the core uncovery frequency. In this evaluation, one of the dynamic reliability computer codes, DYLAM-3, which has advantages over the ET/FT approach, is used. The SEPLOT procedure developed in this study can provide the basis and framework for human error evaluation techniques, and it makes it possible to assess the dynamic aspects of accidents leading to core uncovery by applying the HEPs obtained using SEPLOT as input data to the DYLAM-3 code. Eventually, it is expected that the results of this study will contribute to improved safety in LP/S operation and reduced uncertainties in risk. 57 refs., 17 tabs., 33 figs. (author)

  9. Reliability of phantom pain relief in neurorehabilitation using a multimodal virtual reality system.

    Sano, Yuko; Ichinose, Akimichi; Wake, Naoki; Osumi, Michihiro; Sumitani, Masahiko; Kumagaya, Shin-Ichiro; Kuniyoshi, Yasuo


    The objective of this study is to demonstrate the reliability of relief from phantom limb pain in neurorehabilitation using a multimodal virtual reality system. We have developed a virtual reality rehabilitation system with multimodal sensory feedback and applied it to six patients with brachial plexus avulsion or arm amputation. In an experiment, patients executed a reaching task using a virtual phantom limb displayed in a three-dimensional computer graphics environment and manipulated by their real intact limb. The intensity of the phantom limb pain was evaluated with the short-form McGill pain questionnaire. The experiments were conducted twice on different days, at more than four-week intervals, for each patient. The reliability of our task's ability to relieve pain was demonstrated by the test-retest method, which checks the degree of relative similarity between the pain reduction rates in the two experiments using Fisher's intraclass correlation coefficient (ICC). The ICC was 0.737, indicating sufficient reproducibility of our task. The average of the reduction rates across participants was 50.2%, significantly different from 0. These results indicate that the multimodal virtual reality system reduces phantom limb pain with sufficient reliability.
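The test-retest statistic used here has a standard one-way ANOVA form; a minimal sketch of ICC(1,1) for n participants each measured in two sessions (a common formulation, not necessarily the exact variant the authors computed):

```python
def icc_1_1(pairs):
    """One-way random-effects ICC(1,1) for n subjects, k=2 sessions each."""
    n, k = len(pairs), 2
    grand = sum(x for pair in pairs for x in pair) / (n * k)
    subject_means = [sum(pair) / k for pair in pairs]
    # Between-subject and within-subject mean squares from one-way ANOVA:
    msb = k * sum((m - grand) ** 2 for m in subject_means) / (n - 1)
    msw = sum((x - m) ** 2
              for pair, m in zip(pairs, subject_means)
              for x in pair) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfectly reproducible pain-reduction rates across the two sessions:
print(icc_1_1([(0.5, 0.5), (0.3, 0.3), (0.7, 0.7)]))  # 1.0
```

Values near 1 indicate that session-to-session variation is small relative to between-subject variation, which is the sense in which the reported 0.737 supports reproducibility.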

  10. System Reliability Assessment of Existing Jacket Platforms in Malaysian Waters

    V.J. Kurian; M.C. Voon; M.M.A. Wahab; M.S. Liew


    Reliability of offshore platforms has become a very important issue in the Malaysian oil and gas industry, as the majority of the jacket platforms in Malaysian waters to date have exceeded their design life. The reliability of a jacket platform can be assessed through its reliability index and probability of failure. Conventional metocean practice uses the 100-year return period wave height associated with the 100-year return period current velocity and wind speed. However, a recent study shows that for Mala...

  11. Implementation of Computational Electromagnetic on Distributed Systems


    The new generation of technology could now raise the bar for distributed computing. It has become a trend to solve computational electromagnetics problems on distributed systems with parallel computing techniques. In this paper, we analyze the parallel characteristics of distributed systems and the possibility of setting up a tightly coupled distributed system using the LAN in our lab. An analysis of the performance of different computational methods, such as FEM, MoM, FDTD and the finite difference method, is given. Our work on setting up a distributed system and the performance of the test bed are also included. Finally, we discuss the implementation of one of our computational electromagnetics codes.

  12. Statistics and Analysis on Reliability of HVDC Transmission Systems of SGCC



    The reliability level of HVDC power transmission systems has become an important factor affecting the entire power grid. The author analyzes the reliability of the HVDC power transmission systems owned by SGCC since 2003 with respect to forced outage times, forced energy unavailability, scheduled energy unavailability and energy utilization efficiency. The results show that the reliability level of the HVDC power transmission systems owned by SGCC is improving. By analyzing different reliability indices of HVDC power transmission systems, the maximum asset benefit of the power grid can be achieved through building a scientific and reasonable reliability evaluation system.

  13. Development of Probabilistic Reliability Models of Photovoltaic System Topologies for System Adequacy Evaluation

    Ahmad Alferidi


    The contribution of solar power in electric power systems has been increasing rapidly due to its environmentally friendly nature. Photovoltaic (PV) systems contain solar cell panels, power electronic converters, high-power switching and often transformers. These components collectively play an important role in shaping the reliability of PV systems. Moreover, the power output of PV systems is variable, so it cannot be controlled as easily as conventional generation due to the unpredictable nature of weather conditions. Therefore, solar power has a different influence on generating system reliability compared to conventional power sources. Recently, different PV system designs have been constructed to maximize the output power of PV systems. These designs are commonly adopted based on the scale of a PV system. Large-scale grid-connected PV systems are generally connected in a centralized or a string structure. Central and string PV schemes differ in how the inverter is connected to the PV arrays. Micro-inverter systems are recognized as a third PV system topology. It is therefore important to evaluate the reliability contribution of PV systems under these topologies. This work utilizes a probabilistic technique to develop a power output model for a PV generation system. A reliability model is then developed for a PV-integrated power system in order to assess the reliability and energy contribution of the solar system to meeting overall system demand. The developed model is applied to a small isolated power unit to evaluate system adequacy and the capacity level of a PV system considering the three topologies.
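The reliability contrast between central and micro-inverter topologies can be seen in a toy calculation: a central inverter is a single point of failure for the whole array, whereas a total outage of a micro-inverter system requires every unit to fail simultaneously. A hedged sketch with invented availability figures, ignoring the paper's weather-driven output model:

```python
def central_total_outage(inverter_avail):
    """Probability the whole array produces nothing: the one inverter is down."""
    return 1.0 - inverter_avail

def micro_total_outage(inverter_avail, n_panels):
    """All n independent micro-inverters must be down at the same time."""
    return (1.0 - inverter_avail) ** n_panels

A, n = 0.98, 20          # assumed inverter availability and panel count
print(central_total_outage(A))
print(micro_total_outage(A, n))
```

Expected energy output can still be similar across topologies; the difference shows up in the distribution of available capacity, which is why the paper builds a full probabilistic capacity model rather than comparing single numbers.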

  14. 48 CFR 1852.246-70 - Mission Critical Space System Personnel Reliability Program.


    ... System Personnel Reliability Program. 1852.246-70 Section 1852.246-70 Federal Acquisition Regulations... Reliability Program. As prescribed in 1846.370(a), insert the following clause: Mission Critical Space System Personnel Reliability Program (MAR 1997) (a) In implementation of the Mission Critical Space...

  15. 75 FR 52528 - Mandatory Reliability Standards for the Bulk-Power System; Notice of Technical Conference


    ... Energy Regulatory Commission Mandatory Reliability Standards for the Bulk-Power System; Notice of... ¶ 61,053 (2007). \\2\\ Mandatory Reliability Standards for the Bulk Power System, 130 FERC ¶ 61,218... a frequency response requirement.'' \\3\\ \\1\\ Mandatory Reliability Standards for the...

  16. Cybersecurity of embedded computers systems

    Carlioz, Jean


    Several articles have recently raised the issue of the computer security of commercial flights, evoking "the connected aircraft, a hackers' target", "Wi-Fi on planes, an open door for hackers?" or "Can you hack the computer of an Airbus or a Boeing?". The feared scenario consists in a takeover of operational aircraft software that intentionally causes an accident. Moreover, several computer security experts have lately announced they had detected flaws in embedded syste...

  17. Applied computation and security systems

    Saeed, Khalid; Choudhury, Sankhayan; Chaki, Nabendu


    This book contains the extended versions of the works presented and discussed at the First International Doctoral Symposium on Applied Computation and Security Systems (ACSS 2014), held during April 18-20, 2014 in Kolkata, India. The symposium was jointly organized by the AGH University of Science & Technology, Cracow, Poland and the University of Calcutta, India. Volume I of this double-volume book contains fourteen high-quality book chapters in three parts. Part 1, on Pattern Recognition, presents four chapters. Part 2, on Imaging and Healthcare Applications, contains four more book chapters. Part 3 of this volume, on Wireless Sensor Networking, includes as many as six chapters. Volume II of the book has three parts presenting a total of eleven chapters. Part 4 consists of five excellent chapters on Software Engineering, ranging from cloud service design to transactional memory. Part 5 in Volume II is on Cryptography with two book...

  18. Numerical methods for reliability and safety assessment multiscale and multiphysics systems

    Hami, Abdelkhalak


    This book offers unique insight on structural safety and reliability by combining computational methods that address multiphysics problems, involving multiple equations describing different physical phenomena, and multiscale problems, involving discrete sub-problems that together describe important aspects of a system at multiple scales. The book examines a range of engineering domains and problems using dynamic analysis, nonlinear methods, error estimation, finite element analysis, and other computational techniques. This book also: introduces novel numerical methods; illustrates new practical applications; examines recent engineering applications; presents up-to-date theoretical results; and offers perspective relevant to a wide audience, including teaching faculty, graduate students, researchers, and practicing engineers.


    Lee, Hsien-Hsin S


    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting and introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) system-wide, unified introspection techniques for autonomic systems; (2) secure information-flow microarchitecture; (3) memory-centric security architecture; (4) authentication control and its implication for security; (5) digital rights management; (6) microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  20. Reliability analysis of onboard laser ranging systems for control systems by movement of spacecraft

    E. I. Starovoitov


    The purpose of this paper is to study and find ways to improve the reliability of onboard laser ranging systems (LRS) used to control spacecraft rendezvous and descent. An onboard LRS can be implemented with or without an optical-mechanical scanner. The paper analyses the key factors which influence the reliability of both kinds of LRS. The reliability of an LRS is largely determined by the reliability of the laser source and its radiation mode. Solid-state diode-pumped lasers are primarily used as the radiation source. Their reliability is affected by the radiation mode, which is defined by the requirements on the measurement errors of range and velocity. The basic assumption is that the resource of solid-state lasers is determined by the number of pulses of the pumping diodes. The paper investigates the influence of the radiation mode of a solid-state laser on the reliability function when measuring the closing velocity during rendezvous with a passive spacecraft using a differential method. With measurement errors of 10 m for range and 0.6 m/s for velocity, a reliability function of 0.99 has been achieved. Reducing the velocity measurement error to 0.5 m/s either results in a reliability function below 0.99, or requires reducing the initial range measurement error to 3.5...5 m for the reliability function to reach 0.995 or more. For the scanner-based LRS, the maximum pulse repetition frequency as a function of range has been obtained, and this dependence has been used to define the reliability function. The paper investigates the influence of moving parts on the reliability of a scanning LRS with a sealed or unsealed optomechanical unit. As a result, it has been found that excluding moving parts is justified when manufacturing a sealed optomechanical LRS unit is impossible; in that case, the reliability function increases from 0.99 to 0.9999. When sealing the optomechanical unit, the same increase in reliability is achieved through
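The stated assumption, that a solid-state laser's resource is set by the number of pump-diode pulses, suggests a simple illustrative model: the cumulative pulse count grows with the repetition frequency, and reliability decays with the fraction of the rated pulse resource consumed. A toy sketch with invented parameter values (not the paper's actual reliability model):

```python
from math import exp

def pulses_consumed(pulse_rate_hz, operating_time_s):
    """Cumulative pump-diode pulse count over the operating interval."""
    return pulse_rate_hz * operating_time_s

def reliability(pulse_rate_hz, operating_time_s, rated_pulses, k=1.0):
    """Toy exponential survival model in cumulative pump pulses."""
    return exp(-k * pulses_consumed(pulse_rate_hz, operating_time_s) / rated_pulses)

# A lower repetition frequency (tolerated by looser measurement-error
# requirements) raises the reliability function for the same mission time:
print(reliability(100.0, 3600.0, rated_pulses=1e9))
print(reliability(50.0, 3600.0, rated_pulses=1e9))
```

This captures the qualitative coupling the abstract describes: tightening the velocity-error requirement forces a higher pulse rate, which consumes the laser resource faster and lowers the reliability function.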