WorldWideScience

Sample records for model providing reliable

  1. Bring Your Own Device - Providing Reliable Model of Data Access

    Directory of Open Access Journals (Sweden)

    Stąpór Paweł

    2016-10-01

Full Text Available The article presents a model of Bring Your Own Device (BYOD) as a network model which provides the user reliable access to network resources. BYOD is a dynamically developing model which can be applied in many areas. A research network was launched in order to carry out tests, in which the Work Folders service was used as the service of the BYOD model. This service allows the user to synchronize files between the device and the server. Access to the network is provided through wireless communication using the 802.11n standard. The obtained results are presented and analyzed in this article.

  2. Reliability constrained decision model for energy service provider incorporating demand response programs

    International Nuclear Information System (INIS)

    Mahboubi-Moghaddam, Esmaeil; Nayeripour, Majid; Aghaei, Jamshid

    2016-01-01

Highlights: • The operation of Energy Service Providers (ESPs) in electricity markets is modeled. • Demand response as the cost-effective solution is used for energy service provider. • The market price uncertainty is modeled using the robust optimization technique. • The reliability of the distribution network is embedded into the framework. • The simulation results demonstrate the benefits of robust framework for ESPs. - Abstract: Demand response (DR) programs are becoming a critical concept for the efficiency of current electric power industries. Therefore, their various capabilities and barriers have to be investigated. In this paper, an effective decision model is presented for the strategic behavior of energy service providers (ESPs) to demonstrate how to participate in the day-ahead electricity market and how to allocate demand in the smart distribution network. Since market price affects DR and vice versa, a new two-step sequential framework is proposed, in which the unit commitment (UC) problem is solved to forecast the expected locational marginal prices (LMPs), and subsequently the DR program is applied to optimize the total cost of providing energy for the distribution network customers. This total cost includes the cost of purchased power from the market and distributed generation (DG) units, incentive cost paid to the customers, and compensation cost of power interruptions. To obtain the compensation cost, the reliability evaluation of the distribution network is embedded into the framework using some innovative constraints. Furthermore, to consider the unexpected behaviors of the other market participants, the LMPs are modeled as uncertain parameters using the robust optimization technique, which is more practical compared to the conventional stochastic approach. The simulation results demonstrate the significant benefits of the presented framework for the strategic performance of ESPs.
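The total cost described above can be illustrated with a toy additive form (a sketch only; the function name, arguments, and the simple additive structure are assumptions for illustration, not the paper's exact formulation):

```python
def esp_total_cost(purchased_mwh, lmp, dg_mwh, dg_cost,
                   dr_mwh, incentive, eens_mwh, voll):
    """Illustrative cost terms for an ESP decision model: energy bought in
    the market at the LMP, energy from distributed generation, incentives
    paid to demand-response customers, and compensation for interruptions
    (expected energy not served, EENS, priced at a value of lost load)."""
    market_cost = purchased_mwh * lmp
    dg_total = dg_mwh * dg_cost
    incentive_cost = dr_mwh * incentive
    compensation = eens_mwh * voll
    return market_cost + dg_total + incentive_cost + compensation
```

Under a robust treatment, `lmp` would range over an uncertainty interval and the worst-case total cost would be minimized.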

  3. A stochastic simulation model for reliable PV system sizing providing for solar radiation fluctuations

    International Nuclear Information System (INIS)

    Kaplani, E.; Kaplanis, S.

    2012-01-01

Highlights: ► Solar radiation data for European cities follow the Extreme Value or Weibull distribution. ► Simulation model for the sizing of SAPV systems based on energy balance and stochastic analysis. ► Simulation of PV Generator-Loads-Battery Storage System performance for all months. ► Minimum peak power and battery capacity required for reliable SAPV sizing for various European cities. ► Peak power and battery capacity reduced by more than 30% for operation at a 95% success rate. -- Abstract: The large fluctuations observed in the daily solar radiation profiles strongly affect the reliability of PV system sizing. Increasing the reliability of the PV system requires a higher installed peak power (P_m) and a larger battery storage capacity (C_L). This leads to increased costs and makes PV technology less competitive. This research paper presents a new stochastic simulation model for stand-alone PV systems, developed to determine the minimum installed P_m and C_L for the PV system to be energy independent. The stochastic simulation model makes use of knowledge acquired from an in-depth statistical analysis of the solar radiation data for the site, and simulates the energy delivered, the excess energy burnt, the load profiles and the state of charge of the battery system for the month the sizing is applied, and the PV system performance for the entire year. The simulation model provides the user with values for the autonomy factor d, simulating PV performance in order to determine the minimum P_m and C_L depending on the requirements of the application, i.e. operation with critical or non-critical loads. The model makes use of NASA's Surface meteorology and Solar Energy database for the years 1990–2004 for various cities in Europe with different climates. The results obtained with this new methodology indicate a substantial reduction in installed peak power and battery capacity, both for critical and non-critical operation, when compared to …
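The daily energy-balance core of such a simulation can be sketched as follows (a minimal illustration; the function name, the per-day resolution, and the numbers in the example are assumptions, not the authors' implementation):

```python
def simulate_soc(pv_energy, load, capacity, soc0=1.0):
    """Walk the battery state of charge through daily energy balances.

    pv_energy, load: daily energies [kWh]; capacity: battery capacity [kWh].
    Returns (soc_trace, failure_days): normalized SOC after each day and the
    number of days on which the load could not be fully served.
    """
    soc = soc0 * capacity
    trace, failures = [], 0
    for e_pv, e_load in zip(pv_energy, load):
        soc += e_pv - e_load          # net daily energy balance
        if soc > capacity:            # excess energy is burnt/curtailed
            soc = capacity
        if soc < 0.0:                 # loss of load: battery exhausted
            failures += 1
            soc = 0.0
        trace.append(soc / capacity)
    return trace, failures
```

Running this over many stochastically generated radiation profiles, and shrinking P_m and C_L until the failure count just meets the target success rate, is the essence of such a sizing loop.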

  4. Will British weather provide reliable electricity?

    International Nuclear Information System (INIS)

    Oswald, James; Raine, Mike; Ashraf-Ball, Hezlin

    2008-01-01

    There has been much academic debate on the ability of wind to provide a reliable electricity supply. The model presented here calculates the hourly power delivery of 25 GW of wind turbines distributed across Britain's grid, and assesses power delivery volatility and the implications for individual generators on the system. Met Office hourly wind speed data are used to determine power output and are calibrated using Ofgem's published wind output records. There are two main results. First, the model suggests that power swings of 70% within 12 h are to be expected in winter, and will require individual generators to go on or off line frequently, thereby reducing the utilisation and reliability of large centralised plants. These reductions will lead to increases in the cost of electricity and reductions in potential carbon savings. Secondly, it is shown that electricity demand in Britain can reach its annual peak with a simultaneous demise of wind power in Britain and neighbouring countries to very low levels. This significantly undermines the case for connecting the UK transmission grid to neighbouring grids. Recommendations are made for improving 'cost of wind' calculations. The authors are grateful for the sponsorship provided by The Renewable Energy Foundation

  5. Proposed reliability cost model

    Science.gov (United States)

    Delionback, L. M.

    1973-01-01

The research investigations involved in the study include cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements include the need for understanding and communication between the technical disciplines on one hand and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reinforce, the reliability assurance for a system. Subcategories are further subdivided, as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach depends on the use of a series of subsystem-oriented CERs and, where possible, CTRs in devising a suitable cost-effective policy.

  6. Travel time reliability modeling.

    Science.gov (United States)

    2011-07-01

    This report includes three papers as follows: : 1. Guo F., Rakha H., and Park S. (2010), "A Multi-state Travel Time Reliability Model," : Transportation Research Record: Journal of the Transportation Research Board, n 2188, : pp. 46-54. : 2. Park S.,...

  7. Reliability and Model Fit

    Science.gov (United States)

    Stanley, Leanne M.; Edwards, Michael C.

    2016-01-01

    The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…

  8. Supply chain reliability modelling

    Directory of Open Access Journals (Sweden)

    Eugen Zaitsev

    2012-03-01

Full Text Available Background: Today it is virtually impossible to operate alone on the international level in the logistics business. This promotes the establishment and development of new integrated business entities - logistic operators. However, such cooperation within a supply chain also creates many problems related to supply chain reliability as well as to the optimization of supply planning. The aim of this paper was to develop and formulate a mathematical model and algorithms to find the optimum plan of supplies by using an economic criterion and a model for evaluating the probability of non-failure operation of the supply chain. Methods: The mathematical model and algorithms were developed and formulated using the economic criterion and the probability model of non-failure operation of the supply chain. Results and conclusions: The problem of ensuring failure-free performance of the goods supply channel analyzed in the paper is characteristic of distributed network systems that make active use of business process outsourcing technologies. The complex planning problem occurring in such systems, which requires taking into account the consumer's requirements for failure-free performance in terms of supply volumes and correctness, can be reduced to a relatively simple linear programming problem through logical analysis of the structures. The sequence of operations which should be taken into account during supply planning with regard to the supplier's functional reliability was presented.
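A minimal sketch of such a plan selection (hypothetical supplier data; the serial-reliability product and the brute-force search below stand in for the paper's linear-programming reduction and are assumptions for illustration):

```python
from itertools import combinations

def plan_cost(combo, demand):
    # Fill demand from the cheapest suppliers in the subset first.
    cost, left = 0.0, demand
    for name, cap, unit_cost, rel in sorted(combo, key=lambda s: s[2]):
        take = min(cap, left)
        cost += take * unit_cost
        left -= take
        if left <= 0:
            break
    return cost

def best_plan(suppliers, demand, r_min):
    """suppliers: list of (name, capacity, unit_cost, reliability).
    Returns (cost, names, reliability) of the cheapest supplier subset that
    covers demand and whose serial reliability (product) meets r_min."""
    best = None
    for k in range(1, len(suppliers) + 1):
        for combo in combinations(suppliers, k):
            if sum(s[1] for s in combo) < demand:
                continue                 # cannot cover demand
            rel = 1.0
            for s in combo:
                rel *= s[3]              # serial chain: all links must work
            if rel < r_min:
                continue                 # reliability constraint violated
            cost = plan_cost(combo, demand)
            if best is None or cost < best[0]:
                best = (cost, sorted(s[0] for s in combo), rel)
    return best
```

The economic criterion (minimum cost) is optimized subject to the non-failure-probability constraint, mirroring the structure of the planning problem described in the abstract.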

  9. Power transformer reliability modelling

    NARCIS (Netherlands)

    Schijndel, van A.

    2010-01-01

    Problem description Electrical power grids serve to transport and distribute electrical power with high reliability and availability at acceptable costs and risks. These grids play a crucial though preferably invisible role in supplying sufficient power in a convenient form. Today’s society has

  10. Reliability analysis and operator modelling

    International Nuclear Information System (INIS)

    Hollnagel, Erik

    1996-01-01

    The paper considers the state of operator modelling in reliability analysis. Operator models are needed in reliability analysis because operators are needed in process control systems. HRA methods must therefore be able to account both for human performance variability and for the dynamics of the interaction. A selected set of first generation HRA approaches is briefly described in terms of the operator model they use, their classification principle, and the actual method they propose. In addition, two examples of second generation methods are also considered. It is concluded that first generation HRA methods generally have very simplistic operator models, either referring to the time-reliability relationship or to elementary information processing concepts. It is argued that second generation HRA methods must recognise that cognition is embedded in a context, and be able to account for that in the way human reliability is analysed and assessed

  11. Reliability Overhaul Model

    Science.gov (United States)

    1989-08-01

Random variables from the conditional exponential distribution are generated using the inverse transform method: (1) generate U ~ U(0,1); (2) set s = -λ ln(U). Random variables from the conditional Weibull distribution are likewise generated using the inverse transform method, by inverting the conditional survival function R(s|x) = exp{-[((x+s-γ)/η)^β - ((x-γ)/η)^β]}. Normally distributed random variables are generated using a standard normal transformation together with the inverse transform method. (Appendix: distributions supported by the model.)
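The inverse-transform draws described here can be written out as follows (a hedged sketch: λ is taken as the exponential scale, and β, η, γ as Weibull shape, scale, and location, which is an assumption about the report's notation):

```python
import math
import random

def cond_exponential(scale, u=None):
    # Inverse transform for the exponential: s = -scale * ln(U).  By the
    # memoryless property, the conditional (residual-life) draw is identical.
    if u is None:
        u = random.random()
    return -scale * math.log(u)

def cond_weibull(x, beta, eta, gamma=0.0, u=None):
    """Residual life s of a Weibull(beta, eta, gamma) unit that has survived
    to age x, obtained by inverting the conditional survival function
    R(s|x) = exp(-(((x+s-gamma)/eta)**beta - ((x-gamma)/eta)**beta))."""
    if u is None:
        u = random.random()
    a = ((x - gamma) / eta) ** beta
    return gamma + eta * (a - math.log(u)) ** (1.0 / beta) - x
```

Setting beta = 1 reduces the Weibull draw to the memoryless exponential case, which is a convenient sanity check.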

  12. PROVIDING RELIABILITY OF HUMAN RESOURCES IN PRODUCTION MANAGEMENT PROCESS

    Directory of Open Access Journals (Sweden)

    Anna MAZUR

    2014-07-01

Full Text Available People are the most valuable asset of an organization, and the results of a company mostly depend on them. The human factor can also be a weak link in the company and a cause of high risk for many of its processes. The reliability of the human factor in the manufacturing process depends on many factors. The authors include aspects of human error, safety culture, knowledge, communication skills, teamwork and the role of leadership in the developed model of the reliability of human resources in the management of the production process. Based on a case study and the results of research and observation, the authors present risk areas defined in a specific manufacturing process and the results of an evaluation of the reliability of human resources in that process.

  13. Reliability Modeling of Wind Turbines

    DEFF Research Database (Denmark)

    Kostandyan, Erik

Cost reductions for offshore wind turbines are a substantial requirement in order to make offshore wind energy more competitive compared to other energy supply methods. During the 20–25 years of wind turbines' useful life, Operation & Maintenance costs are typically estimated to be a quarter to one third of the total cost of energy. Reduction of Operation & Maintenance costs will result in significant cost savings and cheaper electricity production. Operation & Maintenance processes mainly involve actions related to replacements or repair. Identifying the right times when … for Operation & Maintenance planning. Concentrating efforts on development of such models, this research is focused on reliability modeling of Wind Turbine critical subsystems (especially the power converter system). For reliability assessment of these components, structural reliability methods are applied…

  14. Proposed Reliability/Cost Model

    Science.gov (United States)

    Delionback, L. M.

    1982-01-01

    New technique estimates cost of improvement in reliability for complex system. Model format/approach is dependent upon use of subsystem cost-estimating relationships (CER's) in devising cost-effective policy. Proposed methodology should have application in broad range of engineering management decisions.

  15. Reliability in the Rasch Model

    Czech Academy of Sciences Publication Activity Database

    Martinková, Patrícia; Zvára, K.

    2007-01-01

    Roč. 43, č. 3 (2007), s. 315-326 ISSN 0023-5954 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : Cronbach's alpha * Rasch model * reliability Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.552, year: 2007 http://dml.cz/handle/10338.dmlcz/135776

  16. Multinomial-exponential reliability function: a software reliability model

    International Nuclear Information System (INIS)

    Saiz de Bustamante, Amalio; Saiz de Bustamante, Barbara

    2003-01-01

The multinomial-exponential reliability function (MERF) was developed during a detailed study of the software failure/correction processes. Later on, MERF was approximated by a much simpler exponential reliability function (EARF), which keeps most of MERF's mathematical properties, so that the two functions together make up a single reliability model. The reliability model MERF/EARF considers the software failure process as a non-homogeneous Poisson process (NHPP), and the repair (correction) process as a multinomial distribution. The model supposes that both processes are statistically independent. The paper discusses the model's theoretical basis, its mathematical properties and its application to software reliability. Model applications to inspection and maintenance of physical systems are also foreseen. The paper includes a complete numerical example of the model's application to a software reliability analysis.
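As a concrete NHPP illustration (using the well-known Goel-Okumoto mean value function as a stand-in, since MERF's own functional form is not reproduced in the abstract):

```python
import math

def mvf(t, a, b):
    # Goel-Okumoto NHPP mean value function: expected cumulative number of
    # failures by time t (a = total expected faults, b = detection rate).
    return a * (1.0 - math.exp(-b * t))

def nhpp_reliability(s, t, a, b):
    # For any NHPP, P(no failure in (t, t+s]) = exp(-(m(t+s) - m(t))).
    return math.exp(-(mvf(t + s, a, b) - mvf(t, a, b)))
```

Because the failure intensity decreases as faults are corrected, the reliability over a fixed horizon improves as testing proceeds.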

  17. High Availability Applications for NOMADS at the NOAA Web Operations Center Aimed at Providing Reliable Real Time Access to Operational Model Data

    Science.gov (United States)

    Alpert, J. C.; Rutledge, G.; Wang, J.; Freeman, P.; Kang, C. Y.

    2009-05-01

The NOAA Operational Modeling Archive Distribution System (NOMADS) is now delivering high availability services as part of NOAA's official real time data dissemination at its Web Operations Center (WOC). The WOC is a web service used by all organizational units in NOAA and acts as a data repository where public information can be posted to a secure and scalable content server. A goal is to foster collaborations among the research and education communities, value added retailers, and public access for science and development efforts aimed at advancing modeling and GEO-related tasks. The services used to access the operational model data output are the Open-source Project for a Network Data Access Protocol (OPeNDAP), implemented with the Grid Analysis and Display System (GrADS) Data Server (GDS), and applications for slicing, dicing and area sub-setting the large matrix of real time model data holdings. This approach ensures an efficient use of computer resources because users transmit/receive only the data necessary for their tasks, including metadata. Data sets served in this way with a high availability server offer vast possibilities for the creation of new products for value added retailers and the scientific community. New applications to access data and observations for verification of gridded model output, and progress toward integration with access to conventional and non-conventional observations, will be discussed. We will demonstrate how users can use NOMADS services to extract area subsets, either by repackaging GRIB2 files or by selecting values by ensemble component, (forecast) time, vertical level, horizontal location, and variable: virtually a 6-dimensional analysis service across the internet.

  18. Nonspecialist Raters Can Provide Reliable Assessments of Procedural Skills

    DEFF Research Database (Denmark)

    Mahmood, Oria; Dagnæs, Julia; Bube, Sarah

    2018-01-01

… was chosen as it is a simple procedural skill that is crucial to master in a resident urology program. RESULTS: The internal consistency of assessments was high: Cronbach's α = 0.93 and 0.95 for nonspecialist and specialist raters, respectively (p …). The interrater reliability was significant (p …), with a Pearson's correlation of 0.77 for the nonspecialists and 0.75 for the specialists. The test-retest reliability showed the biggest difference between the 2 groups: 0.59 and 0.38 for the nonspecialist raters and the specialist raters, respectively (p …).

  19. Development of reliable pavement models.

    Science.gov (United States)

    2011-05-01

    The current report proposes a framework for estimating the reliability of a given pavement structure as analyzed by : the Mechanistic-Empirical Pavement Design Guide (MEPDG). The methodology proposes using a previously fit : response surface, in plac...

  20. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.


  2. Reliability models for Space Station power system

    Science.gov (United States)

    Singh, C.; Patton, A. D.; Kim, Y.; Wagner, H.

    1987-01-01

This paper presents a methodology for the reliability evaluation of the Space Station power system. The two options considered are the photovoltaic system and the solar dynamic system. Reliability models for both of these options are described, along with the methodology for calculating the reliability indices.

  3. Measurement-based reliability/performability models

    Science.gov (United States)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.

  4. Reliability and continuous regeneration model

    Directory of Open Access Journals (Sweden)

    Anna Pavlisková

    2006-06-01

Full Text Available The failure-free function of an object is very important in service. This leads to interest in the determination of an object's reliability and failure intensity. The reliability of an element is defined by the theory of probability. The element durability T is a continuous random variable with the probability density f. The failure intensity λ(t) is a very important reliability characteristic of the element. Often it is an increasing function, which corresponds to the ageing of the element. We disposed of data about belt conveyor failures recorded during a period of 90 months. The given data set behaves according to the normal distribution. Using mathematical analysis and mathematical statistics, we found the failure intensity function λ(t). The function λ(t) increases almost linearly.
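For a normal time-to-failure distribution, the failure intensity λ(t) = f(t)/R(t) can be computed directly; a short sketch (the parameter values in the test are purely hypothetical, not the paper's belt-conveyor estimates):

```python
import math

def normal_hazard(t, mu, sigma):
    """Failure intensity lambda(t) = f(t) / R(t) for a normally distributed
    time to failure; it increases with t (ageing) and becomes almost linear
    well past the mean, consistent with the behaviour described above."""
    z = (t - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))
    survival = 0.5 * math.erfc(z / math.sqrt(2.0))  # R(t) = P(T > t)
    return pdf / survival
```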

  5. Bayesian methodology for reliability model acceptance

    International Nuclear Information System (INIS)

    Zhang Ruoxue; Mahadevan, Sankaran

    2003-01-01

    This paper develops a methodology to assess the reliability computation model validity using the concept of Bayesian hypothesis testing, by comparing the model prediction and experimental observation, when there is only one computational model available to evaluate system behavior. Time-independent and time-dependent problems are investigated, with consideration of both cases: with and without statistical uncertainty in the model. The case of time-independent failure probability prediction with no statistical uncertainty is a straightforward application of Bayesian hypothesis testing. However, for the life prediction (time-dependent reliability) problem, a new methodology is developed in this paper to make the same Bayesian hypothesis testing concept applicable. With the existence of statistical uncertainty in the model, in addition to the application of a predictor estimator of the Bayes factor, the uncertainty in the Bayes factor is explicitly quantified through treating it as a random variable and calculating the probability that it exceeds a specified value. The developed method provides a rational criterion to decision-makers for the acceptance or rejection of the computational model
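The core accept/reject comparison can be illustrated with Gaussian likelihoods (a deliberately simplified sketch; the paper's predictor estimator and its treatment of the Bayes factor as a random variable are not reproduced here, and all parameter names are assumptions):

```python
import math

def normal_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def bayes_factor(observation, model_prediction, obs_sigma, alt_mu, alt_sigma):
    """Bayes factor B = p(observation | model valid) / p(observation | alternative).
    B > 1 lends support to accepting the computational model."""
    return (normal_pdf(observation, model_prediction, obs_sigma)
            / normal_pdf(observation, alt_mu, alt_sigma))
```

An observation close to the model prediction drives B above 1; an observation far from it drives B toward 0.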

  6. Reliability

    OpenAIRE

    Condon, David; Revelle, William

    2017-01-01

    Separating the signal in a test from the irrelevant noise is a challenge for all measurement. Low test reliability limits test validity, attenuates important relationships, and can lead to regression artifacts. Multiple approaches to the assessment and improvement of reliability are discussed. The advantages and disadvantages of several different approaches to reliability are considered. Practical advice on how to assess reliability using open source software is provided.

  7. Reliability Modeling of Double Beam Bridge Crane

    Science.gov (United States)

    Han, Zhu; Tong, Yifei; Luan, Jiahui; Xiangdong, Li

    2018-05-01

This paper briefly describes the structure of the double beam bridge crane and defines its basic parameters. According to the structure and system division of the double beam bridge crane, the reliability architecture of the double beam bridge crane system is proposed, and the reliability mathematical model is constructed.

  8. Reliability Modeling of Wind Turbines

    DEFF Research Database (Denmark)

    Kostandyan, Erik

Cost reductions for offshore wind turbines are a substantial requirement in order to make offshore wind energy more competitive compared to other energy supply methods. During the 20–25 years of wind turbines' useful life, Operation & Maintenance costs are typically estimated to be a quarter to one third of the total cost of energy. … and uncertainties are quantified. Further, estimation of annual failure probability for structural components taking into account possible faults in electrical or mechanical systems is considered. For a representative structural failure mode, a probabilistic model is developed that incorporates grid loss failures…

  9. Reliability modeling of an engineered barrier system

    International Nuclear Information System (INIS)

    Ananda, M.M.A.; Singh, A.K.; Flueck, J.A.

    1993-01-01

    The Weibull distribution is widely used in reliability literature as a distribution of time to failure, as it allows for both increasing failure rate (IFR) and decreasing failure rate (DFR) models. It has also been used to develop models for an engineered barrier system (EBS), which is known to be one of the key components in a deep geological repository for high level radioactive waste (HLW). The EBS failure time can more realistically be modelled by an IFR distribution, since the failure rate for the EBS is not expected to decrease with time. In this paper, we use an IFR distribution to develop a reliability model for the EBS
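The IFR/DFR distinction mentioned above is governed by the Weibull shape parameter; a one-line check (the parameter values used below are arbitrary):

```python
def weibull_hazard(t, beta, eta):
    # h(t) = (beta/eta) * (t/eta)**(beta - 1): increasing for beta > 1 (IFR),
    # decreasing for beta < 1 (DFR), constant for beta == 1 (exponential case).
    return (beta / eta) * (t / eta) ** (beta - 1.0)
```

An EBS model would pick beta > 1 so that the failure rate does not decrease with time.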


  11. Towards a reliable animal model of migraine

    DEFF Research Database (Denmark)

    Olesen, Jes; Jansen-Olesen, Inger

    2012-01-01

    The pharmaceutical industry shows a decreasing interest in the development of drugs for migraine. One of the reasons for this could be the lack of reliable animal models for studying the effect of acute and prophylactic migraine drugs. The infusion of glyceryl trinitrate (GTN) is the best validated...... and most studied human migraine model. Several attempts have been made to transfer this model to animals. The different variants of this model are discussed as well as other recent models....

  12. Space Vehicle Reliability Modeling in DIORAMA

    Energy Technology Data Exchange (ETDEWEB)

    Tornga, Shawn Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-12

    When modeling system performance of space based detection systems it is important to consider spacecraft reliability. As space vehicles age the components become prone to failure for a variety of reasons such as radiation damage. Additionally, some vehicles may lose the ability to maneuver once they exhaust fuel supplies. Typically failure is divided into two categories: engineering mistakes and technology surprise. This document will report on a method of simulating space vehicle reliability in the DIORAMA framework.

  13. Plant and control system reliability and risk model

    International Nuclear Information System (INIS)

    Niemelae, I.M.

    1986-01-01

A new reliability modelling technique for control systems and plants is demonstrated. It is based on modified Boolean algebra and has been automated in an efficient computer code called RELVEC. The code is useful for getting an overall view of the reliability parameters or for an in-depth reliability analysis, which is essential in risk analysis, where the model must be capable of answering specific questions such as: 'What is the probability that this temperature limiter gives a false alarm?' or 'What is the probability that the air pressure in this subsystem drops below the lower limit?'. (orig./DG)

  14. Overcoming some limitations of imprecise reliability models

    DEFF Research Database (Denmark)

    Kozine, Igor; Krymsky, Victor

    2011-01-01

    The application of imprecise reliability models is often hindered by the rapid growth in imprecision that occurs when many components constitute a system and by the fact that time to failure is bounded from above. The latter results in the necessity to explicitly introduce an upper bound on time ...

  15. System reliability time-dependent models

    International Nuclear Information System (INIS)

    Debernardo, H.D.

    1991-06-01

A probabilistic methodology for the evaluation of safety system technical specifications was developed. The method for Surveillance Test Interval (S.T.I.) evaluation basically means an optimization of the S.T.I.s of the most important periodically tested system components. For Allowed Outage Time (A.O.T.) calculations, the method uses system reliability time-dependent models (a computer code called FRANTIC III). A new approximation to compute system unavailability, called Independent Minimal Cut Sets (A.C.I.), was also developed. This approximation is better than the Rare Event Approximation (A.E.R.), and the extra computing cost is negligible. A.C.I. was added to FRANTIC III to replace A.E.R. in future applications. The case study evaluations verified that this methodology provides a useful probabilistic assessment of surveillance test intervals and allowed outage times for many plant components. The studied system is a typical configuration of nuclear power plant safety systems (two-of-three logic). Because of the good results, these procedures will be used by the Argentine nuclear regulatory authorities in the evaluation of the technical specifications of the Atucha I and Embalse nuclear power plant safety systems. (Author) [es

  16. Reliable RANSAC Using a Novel Preprocessing Model

    Directory of Open Access Journals (Sweden)

    Xiaoyan Wang

    2013-01-01

    Full Text Available Geometric assumption and verification with RANSAC has become a crucial step for matching local features, due to its wide applications in biomedical feature analysis and vision computing. However, conventional RANSAC is very time-consuming because of redundant sampling, especially when dealing with numerous matching pairs. This paper presents a novel preprocessing model that extracts a reduced set of reliable correspondences from the initial matching dataset. Both geometric model generation and verification are carried out on this reduced set, which leads to considerable speedups. The paper then proposes a reliable RANSAC framework using this preprocessing model, which was implemented and verified using Harris and SIFT features, respectively. Compared with traditional RANSAC, experimental results show that our method is more efficient.
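    For readers unfamiliar with the baseline being accelerated, a minimal generic RANSAC loop for 2D line fitting looks like this (our illustration of conventional RANSAC, not the paper's preprocessing model):

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Generic RANSAC: repeatedly fit a line to a random 2-point sample
    and keep the model with the most inliers. Illustrates the sampling
    cost that a preprocessing step aims to reduce."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:  # skip degenerate vertical samples
            continue
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = [(x, y) for x, y in points if abs(y - (m * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 9.0), (7, 1.0)]
print(len(ransac_line(pts)))  # 10 inliers on the line y = 2x + 1
```

    The preprocessing idea is to shrink `points` to a reliable subset before this loop runs, so fewer iterations are wasted on samples that contain outliers.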

  17. A Framework to Improve Communication and Reliability Between Cloud Consumer and Provider in the Cloud

    OpenAIRE

    Vivek Sridhar

    2014-01-01

    Cloud services consumers demand reliable methods for choosing an appropriate cloud service provider for their requirements. The number of cloud consumers is increasing day by day, and so is the number of cloud providers; hence the requirement for a common platform for interaction between cloud providers and cloud consumers is also on the rise. This paper introduces the Cloud Providers Market Platform Dashboard. This will act not only as a cloud provider discoverability tool but will also provide timely reports to consumers on cloud ser...

  18. Quantitative metal magnetic memory reliability modeling for welded joints

    Science.gov (United States)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to inspect welded joints. However, load levels, the environmental magnetic field, and measurement noise make MMM data dispersive and hinder quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens are tested along longitudinal and horizontal lines with a TSC-2M-8 instrument during tensile fatigue experiments. X-ray testing is carried out synchronously to verify the MMM results. It is found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of K_vs is investigated, which shows that K_vs obeys a Gaussian distribution. K_vs is therefore a suitable MMM parameter for establishing a reliability model of welded joints. Finally, an original quantitative MMM reliability model is presented based on improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases with decreasing residual life ratio T, and the maximal error between the predicted reliability degree R_1 and the verification reliability degree R_2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
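    The reliability degree in stress-strength interference theory follows from the overlap of the strength and stress distributions; a minimal sketch for the classical Gaussian case (the textbook formula with invented parameters, not the paper's improved variant):

```python
from math import erf, sqrt

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """Classical stress-strength interference: R = P(strength > stress)
    for independent Gaussian strength and stress,
    R = Phi((mu_S - mu_s) / sqrt(sd_S**2 + sd_s**2))."""
    z = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Example: strength ~ N(500, 40), stress ~ N(380, 30) (units arbitrary)
print(round(interference_reliability(500, 40, 380, 30), 4))  # 0.9918
```

    As the stress distribution creeps toward the strength distribution (e.g. with accumulating damage), the interference region grows and R falls, which is the qualitative behavior the abstract reports against the residual life ratio.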

  19. Centralized Bayesian reliability modelling with sensor networks

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Sečkárová, Vladimíra

    2013-01-01

    Roč. 19, č. 5 (2013), s. 471-482 ISSN 1387-3954 R&D Projects: GA MŠk 7D12004 Grant - others:GA MŠk(CZ) SVV-265315 Keywords : Bayesian modelling * Sensor network * Reliability Subject RIV: BD - Theory of Information Impact factor: 0.984, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/dedecius-0392551.pdf

  20. Stochastic models in reliability and maintenance

    CERN Document Server

    2002-01-01

    Our daily lives are sustained by high-technology systems; computer systems are typical examples. We can enjoy modern life by using many computer systems. More importantly, we have to maintain such systems without failure, yet we cannot predict when they will fail or how to fix them without delay. A stochastic process is a set of outcomes of a random experiment indexed by time, and is one of the key tools needed to analyze future behavior quantitatively. Reliability and maintainability technologies are of great interest and importance for the maintenance of such systems. Many mathematical models have been and will be proposed to describe reliability and maintainability of systems by using stochastic processes. The theme of this book is "Stochastic Models in Reliability and Maintainability." This book consists of 12 chapters on this theme from different viewpoints of stochastic modeling. Chapter 1 is devoted to "Renewal Processes," under which cla...

  1. An integrated approach to human reliability analysis -- decision analytic dynamic reliability model

    International Nuclear Information System (INIS)

    Holmberg, J.; Hukki, K.; Norros, L.; Pulkkinen, U.; Pyy, P.

    1999-01-01

    The reliability of human operators in process control is sensitive to context. Many contemporary human reliability analysis (HRA) methods do not take this sufficiently into account. The aim of this article is to attempt an integration of probabilistic and psychological approaches to human reliability. This is achieved, first, by adopting methods that adequately reflect the essential features of the process control activity and, secondly, by carrying out an interactive HRA process. Description of the activity context, probabilistic modeling, and psychological analysis form an iterative interdisciplinary sequence of analysis in which the results of one sub-task may be input to another. The analysis of the context is carried out first with the help of a common set of conceptual tools. The resulting descriptions of the context support the probabilistic modeling, through which new results regarding the probabilistic dynamics can be achieved. These can be incorporated in the context descriptions used as reference in the psychological analysis of actual performance. The results also provide new knowledge of the constraints on activity, by providing information on the premises of the operator's actions. Finally, the stochastic marked point process model gives a tool by which psychological methodology may be interpreted and utilized for reliability analysis

  2. Cost Calculation Model for Logistics Service Providers

    Directory of Open Access Journals (Sweden)

    Zoltán Bokor

    2012-11-01

    Full Text Available The exact calculation of logistics costs has become a real challenge in logistics and supply chain management. It is essential to gain reliable and accurate costing information to attain efficient resource allocation within logistics service provider companies. Traditional costing approaches, however, may not be sufficient to reach this aim in the case of complex and heterogeneous logistics service structures. This paper therefore explores ways of improving the cost calculation regimes of logistics service providers and shows how to adopt the multi-level full cost allocation technique in logistics practice. After determining the methodological framework, a sample cost calculation scheme is developed and tested using estimated input data. Based on the theoretical findings and the experiences of the pilot project, it can be concluded that the improved costing model contributes to making logistics costing more accurate and transparent. Moreover, the relations between costs and performances also become more visible, which significantly enhances the effectiveness of logistics planning and controlling.

  3. Data Used in Quantified Reliability Models

    Science.gov (United States)

    DeMott, Diana; Kleinhammer, Roger K.; Kahn, C. J.

    2014-01-01

    Data is the crux of developing quantitative risk and reliability models; without data there is no quantification. Finding and identifying reliability data or failure numbers to quantify fault tree models during conceptual and design phases is often the quagmire that precludes early decision makers' consideration of potential risk drivers that will influence design. The analyst tasked with addressing system or product reliability depends on the availability of data. But where does that data come from, and what does it really apply to? Commercial industries, government agencies, and other international sources might have data similar to what you are looking for. In general, internal and external technical reports and data based on similar and dissimilar equipment are often the first and only places checked. A common philosophy is "I have a number - that is good enough." But is it? Have you ever considered the difference in reported data from various federal datasets and technical reports when compared to similar national and/or international datasets? Just how well does your data compare? Understanding how the reported data was derived, and interpreting the information and details associated with the data, is as important as the data itself.

  4. Modeling of system reliability Petri nets with aging tokens

    International Nuclear Information System (INIS)

    Volovoi, V.

    2004-01-01

    The paper addresses the dynamic modeling of degrading and repairable complex systems. Emphasis is placed on the convenience of modeling for the end user, with special attention being paid to the modeling part of a problem, which is considered to be decoupled from the choice of solution algorithms. Depending on the nature of the problem, these solution algorithms can include discrete event simulation or numerical solution of the differential equations that govern underlying stochastic processes. Such modularity allows a focus on the needs of system reliability modeling and tailoring of the modeling formalism accordingly. To this end, several salient features are chosen from the multitude of existing extensions of Petri nets, and a new concept of aging tokens (tokens with memory) is introduced. The resulting framework provides for flexible and transparent graphical modeling with excellent representational power that is particularly suited for system reliability modeling with non-exponentially distributed firing times. The new framework is compared with existing Petri-net approaches and other system reliability modeling techniques such as reliability block diagrams and fault trees. The relative differences are emphasized and illustrated with several examples, including modeling of load sharing, imperfect repair of pooled items, multiphase missions, and damage-tolerant maintenance. Finally, a simple implementation of the framework using discrete event simulation is described
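    The aging-token idea (firing clocks that persist with the token) can be illustrated with a minimal Monte Carlo sketch of our own, not the paper's formalism: one component with Weibull time to failure and instantaneous minimal repairs, where retaining the token's age across repairs models non-exponential firing with memory, and resetting it models an ordinary renewal.

```python
import random
from math import log

def remaining_life(age, shape, scale, u):
    """Sample the remaining life of a Weibull(shape, scale) item that has
    already survived to 'age', by inverse-transform sampling."""
    total = scale * ((age / scale) ** shape - log(u)) ** (1.0 / shape)
    return total - age

def count_failures(t_mission, shape, scale, retain_age, rng):
    """Failures over a mission with instantaneous repairs. retain_age=True
    mimics an aging token (memory kept across repairs); False renews it."""
    t, age, n = 0.0, 0.0, 0
    while True:
        r = remaining_life(age, shape, scale, rng.random())
        t += r
        if t > t_mission:
            return n
        n += 1
        age = age + r if retain_age else 0.0  # aging token vs. renewal

rng = random.Random(1)
runs = 2000
aging = sum(count_failures(5.0, 2.0, 1.0, True, rng) for _ in range(runs)) / runs
renew = sum(count_failures(5.0, 2.0, 1.0, False, rng) for _ in range(runs)) / runs
print(aging > renew)  # True: a wearing-out component fails more often
```

    With an increasing Weibull hazard (shape > 1), the retained-age run accumulates far more failures than the renewal run, which is exactly the distinction a memoryless (exponential) model cannot express.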

  5. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications, providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  6. Transparent reliability model for fault-tolerant safety systems

    International Nuclear Information System (INIS)

    Bodsberg, Lars; Hokstad, Per

    1997-01-01

    A reliability model is presented which may serve as a tool for the identification of cost-effective configurations and operating philosophies of computer-based process safety systems. The main merit of the model is the explicit relationship in the mathematical formulas between failure cause and the means used to improve system reliability, such as self-test, redundancy, preventive maintenance and corrective maintenance. A component failure taxonomy has been developed which allows the analyst to treat hardware failures, human failures, and software failures of automatic systems in an integrated manner. Furthermore, the taxonomy distinguishes between failures due to excessive environmental stresses and failures initiated by humans during engineering and operation. Attention has been given to developing a transparent model that provides predictions in good agreement with observed system performance and that is applicable by non-experts in the field of reliability.
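    To illustrate how such explicit formulas link a test policy to unavailability, here is a sketch using the standard low-demand approximations for the average probability of failure on demand (generic textbook formulas with invented rates, not this paper's model; common-cause failures are ignored):

```python
def pfd_1oo1(lam_du, tau):
    """Average probability of failure on demand for a single channel with
    dangerous-undetected failure rate lam_du and proof-test interval tau."""
    return lam_du * tau / 2.0

def pfd_1oo2(lam_du, tau):
    """Two redundant channels, both must fail; common cause neglected."""
    return (lam_du * tau) ** 2 / 3.0

lam, tau = 2e-6, 8760.0  # per-hour failure rate, yearly proof test
print(round(pfd_1oo1(lam, tau), 6))  # 0.00876
print(round(pfd_1oo2(lam, tau), 8))  # 0.00010232
```

    The formulas make the trade-offs explicit: halving the test interval halves the 1oo1 PFD, while redundancy buys roughly two orders of magnitude here, until common-cause effects (excluded from this sketch) dominate.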

  7. Reliability model of SNS linac (spallation neutron source-ORNL)

    International Nuclear Information System (INIS)

    Pitigoi, A.; Fernandez, P.

    2015-01-01

    A reliability model of the SNS LINAC (Spallation Neutron Source at Oak Ridge National Laboratory) has been developed using Risk Spectrum reliability analysis software, and an analysis of the accelerator system's reliability has been performed. The analysis results have been evaluated by comparing them with SNS operational data. This paper presents the main results and conclusions, focusing on the identification of design weaknesses, and provides recommendations to improve the reliability of the MYRRHA linear accelerator. The reliability results show that the most affected SNS LINAC parts/systems are: 1) the SCL (superconducting linac) and front-end systems: IS, LEBT (low-energy beam transport line), MEBT (medium-energy beam transport line), diagnostics and controls; 2) the RF systems (especially the SCL RF system); 3) power supplies and PS controllers. These results are in line with the records in the SNS logbook. The reliability issue that needs to be enforced in the linac design is redundancy of the systems, subsystems and components most affected by failures. For compensation purposes, there is a need for intelligent fail-over redundancy implementation in controllers. Enough diagnostics has to be implemented to allow reliable functioning of the redundant solutions and to ensure the compensation function

  8. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable to handling the heterogeneity of the time of failure of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are resolved.

  9. Standardized Patients Provide a Reliable Assessment of Athletic Training Students' Clinical Skills

    Science.gov (United States)

    Armstrong, Kirk J.; Jarriel, Amanda J.

    2016-01-01

    Context: Providing students reliable objective feedback regarding their clinical performance is of great value for ongoing clinical skill assessment. Since a standardized patient (SP) is trained to consistently portray the case, students can be assessed and receive immediate feedback within the same clinical encounter; however, no research, to our…

  10. Inter-Rater Reliability of Provider Interpretations of Irritable Bowel Syndrome Food and Symptom Journals.

    Science.gov (United States)

    Zia, Jasmine; Chung, Chia-Fang; Xu, Kaiyuan; Dong, Yi; Schenk, Jeanette M; Cain, Kevin; Munson, Sean; Heitkemper, Margaret M

    2017-11-04

    There are currently no standardized methods for identifying trigger food(s) from irritable bowel syndrome (IBS) food and symptom journals. The primary aim of this study was to assess the inter-rater reliability of providers' interpretations of IBS journals. A second aim was to describe whether these interpretations varied for each patient. Eight providers reviewed 17 IBS journals and rated how likely key food groups (fermentable oligo-di-monosaccharides and polyols, high-calorie, gluten, caffeine, high-fiber) were to trigger IBS symptoms for each patient. Agreement of trigger food ratings was calculated using Krippendorff's α-reliability estimate. Providers were also asked to write down recommendations they would give to each patient. Estimates of agreement of trigger food likelihood ratings were poor (average α = 0.07). Most providers gave similar trigger food likelihood ratings for over half the food groups. Four providers gave the exact same written recommendation(s) (range 3-7) to over half the patients. Inter-rater reliability of provider interpretations of IBS food and symptom journals was poor. Providers favored certain trigger food likelihood ratings and written recommendations. This supports the need for a more standardized method for interpreting these journals and/or more rigorous techniques to accurately identify personalized IBS food triggers.
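    Krippendorff's α for interval-scale ratings, as used for the agreement estimate above, can be computed with a short routine (a minimal sketch of the standard coincidence-based formula, not the authors' code; each unit lists the ratings it received, with missing raters simply omitted):

```python
from itertools import permutations

def krippendorff_alpha_interval(ratings):
    """ratings: list of units, each a list of numeric ratings.
    Interval metric delta(v, w) = (v - w)**2; alpha = 1 - Do/De."""
    units = [u for u in ratings if len(u) >= 2]  # only pairable units
    n = sum(len(u) for u in units)
    # Observed disagreement: ordered within-unit pairs, weighted 1/(m_u - 1)
    Do = sum((v - w) ** 2 / (len(u) - 1)
             for u in units for v, w in permutations(u, 2)) / n
    # Expected disagreement: all ordered pairs of the pooled values
    vals = [v for u in units for v in u]
    De = sum((v - w) ** 2 for v, w in permutations(vals, 2)) / (n * (n - 1))
    return 1.0 - Do / De

print(krippendorff_alpha_interval([[1, 1], [2, 2], [3, 3]]))  # 1.0
```

    Perfect agreement yields α = 1, chance-level agreement yields α ≈ 0, and systematic disagreement drives α negative, so a study-level average of 0.07 indicates essentially chance-level concordance.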

  11. System Statement of Tasks of Calculating and Providing the Reliability of Heating Cogeneration Plants in Power Systems

    Science.gov (United States)

    Biryuk, V. V.; Tsapkova, A. B.; Larin, E. A.; Livshiz, M. Y.; Sheludko, L. P.

    2018-01-01

    A set of mathematical models for calculating the reliability indexes of structurally complex multifunctional combined installations in heat and power supply systems was developed. Reliability of energy supply is considered a necessary condition for the creation and operation of heat and power supply systems. The optimal value of the power supply system coefficient F is based on an economic assessment of the consumers' losses caused by the under-supply of electric power and the additional system expenses for the creation and operation of an emergency capacity reserve. Rationing of the reliability indexes (RI) of industrial heat supply is based on the concept of the technological margin of safety of technological processes. The definition of rationed RI values of heat supply for communal consumers is based on the air temperature level inside the heated premises. The complex allows solving a number of practical tasks for ensuring the reliability of heat supply for consumers. A probabilistic model is developed for calculating the reliability indexes of combined multipurpose heat and power plants in heat-and-power supply systems. The complex of models and calculation programs can be used to solve a wide range of specific tasks of optimization of schemes and parameters of combined heat and power plants and systems, as well as to determine the efficiency of various redundancy methods to ensure specified reliability of power supply.

  12. Inter-Rater Reliability of Provider Interpretations of Irritable Bowel Syndrome Food and Symptom Journals

    Directory of Open Access Journals (Sweden)

    Jasmine Zia

    2017-11-01

    Full Text Available There are currently no standardized methods for identifying trigger food(s) from irritable bowel syndrome (IBS) food and symptom journals. The primary aim of this study was to assess the inter-rater reliability of providers' interpretations of IBS journals. A second aim was to describe whether these interpretations varied for each patient. Eight providers reviewed 17 IBS journals and rated how likely key food groups (fermentable oligo-di-monosaccharides and polyols, high-calorie, gluten, caffeine, high-fiber) were to trigger IBS symptoms for each patient. Agreement of trigger food ratings was calculated using Krippendorff's α-reliability estimate. Providers were also asked to write down recommendations they would give to each patient. Estimates of agreement of trigger food likelihood ratings were poor (average α = 0.07). Most providers gave similar trigger food likelihood ratings for over half the food groups. Four providers gave the exact same written recommendation(s) (range 3–7) to over half the patients. Inter-rater reliability of provider interpretations of IBS food and symptom journals was poor. Providers favored certain trigger food likelihood ratings and written recommendations. This supports the need for a more standardized method for interpreting these journals and/or more rigorous techniques to accurately identify personalized IBS food triggers.

  13. Reliability modeling and analysis of smart power systems

    CERN Document Server

    Karki, Rajesh; Verma, Ajit Kumar

    2014-01-01

    The volume presents research work in understanding, modeling and quantifying the risks associated with different ways of implementing smart grid technology in power systems, in order to plan and operate a modern power system with an acceptable level of reliability. Power systems throughout the world are undergoing significant changes, creating new challenges to system planning and operation in order to provide reliable and efficient use of electrical energy. The appropriate use of smart grid technology is an important driver in mitigating these problems and requires considerable research acti

  14. Interrater reliability of Violence Risk Appraisal Guide scores provided in Canadian criminal proceedings.

    Science.gov (United States)

    Edens, John F; Penson, Brittany N; Ruchensky, Jared R; Cox, Jennifer; Smith, Shannon Toney

    2016-12-01

    Published research suggests that most violence risk assessment tools have relatively high levels of interrater reliability, but recent evidence of inconsistent scores among forensic examiners in adversarial settings raises concerns about the "field reliability" of such measures. This study specifically examined the reliability of Violence Risk Appraisal Guide (VRAG) scores in Canadian criminal cases identified in the legal database LexisNexis. Over 250 reported cases were located that made mention of the VRAG, with 42 of these cases containing 2 or more scores that could be submitted to interrater reliability analyses. Overall, scores were skewed toward higher risk categories. The intraclass correlation (ICC(A,1)) was .66, with pairs of forensic examiners placing defendants into the same VRAG risk "bin" in 68% of the cases. For categorical risk statements (i.e., low, moderate, high), examiners provided converging assessment results in most instances (86%). In terms of potential predictors of rater disagreement, there was no evidence of adversarial allegiance in our sample. Rater disagreement in the scoring of 1 VRAG item (Psychopathy Checklist-Revised; Hare, 2003), however, strongly predicted rater disagreement in the scoring of the VRAG (r = .58). (PsycINFO Database Record (c) 2016 APA, all rights reserved).
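    An intraclass correlation can be computed directly from a targets-by-raters score matrix; the sketch below implements the simpler one-way random-effects ICC(1), rather than the two-way ICC(A,1) variant reported above:

```python
def icc_oneway(scores):
    """One-way random-effects ICC(1) for an n_targets x k_raters matrix
    of complete numeric ratings, via the between/within mean squares."""
    n = len(scores)
    k = len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(scores, row_means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

print(icc_oneway([[1, 1], [2, 2], [3, 3]]))  # 1.0 (perfect agreement)
```

    A consistent rater offset (e.g. one examiner always scoring one point higher) lowers ICC(1) but not a consistency-type ICC, which is why the choice of variant matters when interpreting a value like .66.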

  15. Modeling high-Power Accelerators Reliability-SNS LINAC (SNS-ORNL); MAX LINAC (MYRRHA)

    International Nuclear Information System (INIS)

    Pitigoi, A. E.; Fernandez Ramos, P.

    2013-01-01

    Improving reliability has recently become a very important objective in the field of particle accelerators. Particle accelerators in operation are constantly undergoing modifications, and improvements are implemented using new technologies, more reliable components, or redundant schemes (to obtain more reliability, strength, more power, etc.). Within the MAX project, a reliability model of the SNS (Spallation Neutron Source) LINAC has been developed and an analysis of the accelerator systems' reliability has been performed using the Risk Spectrum reliability analysis software. The analysis results have been evaluated by comparison with SNS operational data. Results and conclusions are presented in this paper, oriented to identifying design weaknesses and providing recommendations for improving the reliability of the MYRRHA linear accelerator. The SNS reliability model developed for the MAX preliminary design phase indicates possible avenues for further investigation that could be needed to improve the reliability of high-power accelerators, in view of the future reliability targets of ADS accelerators.

  16. Evaluating the reliability of predictions made using environmental transfer models

    International Nuclear Information System (INIS)

    1989-01-01

    The development and application of mathematical models for predicting the consequences of releases of radionuclides into the environment, from normal operations in the nuclear fuel cycle and in hypothetical accident conditions, has increased dramatically in the last two decades. This Safety Practice publication has been prepared to provide guidance on the available methods for evaluating the reliability of environmental transfer model predictions. It provides a practical introduction to the subject, and particular emphasis has been given to worked examples in the text. It is intended to supplement existing IAEA publications on environmental assessment methodology. 60 refs, 17 figs, 12 tabs

  17. Modeling human reliability analysis using MIDAS

    International Nuclear Information System (INIS)

    Boring, R. L.

    2006-01-01

    This paper documents current efforts to infuse human reliability analysis (HRA) into human performance simulation. The Idaho National Laboratory has teamed with NASA Ames Research Center to bridge the SPAR-H HRA method with NASA's Man-machine Integration Design and Analysis System (MIDAS) for use in simulating and modeling the human contribution to risk in nuclear power plant control room operations. It is anticipated that the union of MIDAS and SPAR-H will pave the way for cost-effective, timely, and valid simulated control room operators for studying current and next-generation control room configurations. This paper highlights considerations for creating the dynamic HRA framework necessary for simulation, including event dependency and granularity. It also highlights how the SPAR-H performance shaping factors can be modeled in MIDAS across the static, dynamic, and initiator conditions common to control room scenarios. The paper concludes with a discussion of the relationship between the workload factors currently in MIDAS and the performance shaping factors in SPAR-H. (authors)

  18. An architectural model for software reliability quantification: sources of data

    International Nuclear Information System (INIS)

    Smidts, C.; Sova, D.

    1999-01-01

    Software reliability assessment models in use today treat software as a monolithic block. An aversion towards 'atomic' models seems to exist. These models appear to add complexity to the modeling and the data collection, and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements, which captures both functional and nonfunctional requirements, and on a generic classification of functions, attributes and failure modes. The model focuses on evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information, such as results of developers' testing or historical information on a specific functionality and its attributes, and is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at the functional level as well as at the system level. In order to quantify the probability of failure (or the probability of success) of a specific element of our architecture, data are needed. The term 'element of the architecture' is used here in its broadest sense to mean a single failure mode or a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development. Next, the mechanisms for incorporating these sources of relevant data into the FASRE model are identified
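    Propagating failure-mode probabilities to the system level through fault trees, as described above, reduces to combining independent probabilities at AND and OR gates; a minimal sketch with hypothetical numbers:

```python
def or_gate(probs):
    """P(at least one input fails), inputs assumed independent."""
    q = 1.0
    for p in probs:
        q *= 1.0 - p
    return 1.0 - q

def and_gate(probs):
    """P(all inputs fail), inputs assumed independent."""
    r = 1.0
    for p in probs:
        r *= p
    return r

# Hypothetical top event: system fails if function A fails OR both
# redundant channels implementing function B fail.
p_system = or_gate([0.01, and_gate([0.05, 0.05])])
print(round(p_system, 6))  # 0.012475
```

    In a Bayesian setting such as FASRE's, the leaf probabilities would be posterior estimates rather than point values, but the gate arithmetic for propagating them upward is the same.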

  19. Human reliability data collection and modelling

    International Nuclear Information System (INIS)

    1991-09-01

    The main purpose of this document is to review and outline the current state of the art of Human Reliability Assessment (HRA) used for the quantitative assessment of nuclear power plants' safe and economical operation. Another objective is to consider Human Performance Indicators (HPI), which can alert plant managers and regulators to departures from states of normal and acceptable operation. These two objectives are met in the three sections of this report. The first objective has been divided into two areas, based on the location of the human actions being considered: the modelling and data collection associated with control room actions are addressed in chapter 1, while actions outside the control room (including maintenance) are addressed in chapter 2. Both chapters 1 and 2 present a brief outline of the current status of HRA in these areas and the major outstanding issues. Chapter 3 discusses HPI. Such performance indicators can signal, at various levels, changes in factors which influence human performance. The final section of this report consists of papers presented by the participants of the Technical Committee Meeting. A separate abstract was prepared for each of these papers. Refs, figs and tabs

  20. Providing reliable energy in a time of constraints : a North American concern

    International Nuclear Information System (INIS)

    Egan, T.; Turk, E.

    2008-04-01

    The reliability of the North American electricity grid was discussed. Government initiatives designed to control carbon dioxide (CO2) and other emissions in some regions of Canada may lead to electricity supply constraints in other regions. A lack of investment in transmission infrastructure has resulted in constraints within the North American transmission grid, and the growth of smaller projects is now raising concerns about transmission capacity. Labour supply shortages in the electricity industry are also creating concerns about the long-term security of the electricity market. Measures to address constraints must be considered in the current context of the North American electricity system. The extensive transmission interconnections and integration between the United States and Canada will provide a framework for greater trade and market opportunities between the two countries. Coordinated actions and increased integration will enable Canada and the United States to increase the reliability of electricity supply. However, both countries must work cooperatively to increase generation supply using both mature and emerging technologies. The cross-border transmission grid must be enhanced by increasing transmission capacity as well as by implementing new reliability rules, building new infrastructure, and ensuring infrastructure protection. Barriers to cross-border electricity trade must be identified and avoided. Demand-side and energy efficiency measures must also be implemented. It was concluded that both countries must focus on developing strategies for addressing the environmental concerns related to electricity production. 6 figs

  1. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    OpenAIRE

    Hai An; Ling Zhou; Hui Sun

    2016-01-01

To address the coexistence of a variety of uncertainty variables in engineering structural reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article. The convergent solving method is also presented. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new...

  2. Reliability modelling - PETROBRAS 2010 integrated gas supply chain

    Energy Technology Data Exchange (ETDEWEB)

    Faertes, Denise; Heil, Luciana; Saker, Leonardo; Vieira, Flavia; Risi, Francisco; Domingues, Joaquim; Alvarenga, Tobias; Carvalho, Eduardo; Mussel, Patricia

    2010-09-15

The purpose of this paper is to present the innovative reliability modeling of the Petrobras 2010 integrated gas supply chain. The model represents a challenge in terms of complexity and software robustness. It was jointly developed by the PETROBRAS Gas and Power Department and Det Norske Veritas. It was carried out with the objective of evaluating the security of supply of the 2010 gas network design, which was conceived to connect the Brazilian Northeast and Southeast regions. To provide best-in-class analysis, state-of-the-art software was used to quantify the availability and the efficiency of the overall network and its individual components.

  3. Building and integrating reliability models in a Reliability-Centered-Maintenance approach

    International Nuclear Information System (INIS)

    Verite, B.; Villain, B.; Venturini, V.; Hugonnard, S.; Bryla, P.

    1998-03-01

Electricite de France (EDF) has recently developed its OMF-Structures method, designed to optimize preventive maintenance of passive structures such as pipes and supports, based on risk. In particular, the reliability performance of components needs to be determined; it is a two-step process, consisting of a qualitative sort followed by a quantitative evaluation, involving two types of models. Initially, degradation models are widely used to exclude some components from the field of preventive maintenance. The reliability of the remaining components is then evaluated by means of quantitative reliability models. The results are then included in a risk indicator that is used to directly optimize preventive maintenance tasks. (author)

  4. Reliability Model of Power Transformer with ONAN Cooling

    OpenAIRE

    M. Sefidgaran; M. Mirzaie; A. Ebrahimzadeh

    2010-01-01

Reliability of a power system is considerably influenced by its equipment. Power transformers are among the most critical and expensive equipment of a power system, and their proper functioning is vital for substations and utilities. Therefore, the reliability model of a power transformer is very important in the risk assessment of engineering systems. This model shows the characteristics and functions of a transformer in the power system. In this paper the reliability model...

  5. Development of Model for Providing Feasible Scholarship

    Directory of Open Access Journals (Sweden)

    Harry Dhika

    2016-05-01

Full Text Available The current work focuses on the development of a model to determine feasible scholarship recipients on the basis of the naïve Bayes method using very simple and limited attributes. Those attributes are the applicant's academic year, represented by their semester; academic performance, represented by their GPA; socioeconomic ability, representing the economic capability to attend a higher education institution; and their level of social involvement. To establish and evaluate the model performance, empirical data are collected: the data of 100 students are divided into 80 records for model training and the remaining 20 records for model testing. The results suggest that the model is capable of providing recommendations for potential scholarship recipients at an accuracy level of 95%.
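The classification scheme described above can be sketched with a minimal categorical naive Bayes classifier. This is a generic illustration, not the paper's implementation: the attribute values, the toy records, and the "grant"/"reject" labels below are hypothetical stand-ins for the study's dataset.

```python
from collections import Counter, defaultdict

def train_nb(rows, labels, alpha=1.0):
    """Fit a categorical naive Bayes classifier with Laplace smoothing
    and return a predict() function."""
    n = len(rows)
    class_counts = Counter(labels)
    # counts[c][j][v] = occurrences of value v for attribute j within class c
    counts = {c: defaultdict(Counter) for c in class_counts}
    for row, y in zip(rows, labels):
        for j, v in enumerate(row):
            counts[y][j][v] += 1
    # distinct values per attribute, for the smoothing denominator
    n_values = [len({r[j] for r in rows}) for j in range(len(rows[0]))]

    def predict(row):
        best_class, best_score = None, -1.0
        for c, nc in class_counts.items():
            score = nc / n  # class prior
            for j, v in enumerate(row):
                score *= (counts[c][j][v] + alpha) / (nc + alpha * n_values[j])
            if score > best_score:
                best_class, best_score = c, score
        return best_class

    return predict

# Hypothetical records: (semester stage, GPA band, economic need, involvement)
rows = [("early", "high", "high", "active"), ("late", "low", "low", "inactive"),
        ("early", "high", "high", "active"), ("late", "low", "high", "inactive"),
        ("early", "high", "low", "active"), ("late", "low", "low", "active")]
labels = ["grant", "reject", "grant", "reject", "grant", "reject"]
predict = train_nb(rows, labels)
print(predict(("early", "high", "high", "active")))  # prints "grant"
```

In practice the paper's 80/20 split would be applied before training, with accuracy measured on the held-out 20 records.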

  6. Reliability physics and engineering time-to-failure modeling

    CERN Document Server

    McPherson, J W

    2013-01-01

Reliability Physics and Engineering provides critically important information that is needed for designing and building reliable cost-effective products. Key features include: Materials/Device Degradation; Degradation Kinetics; Time-To-Failure Modeling; Statistical Tools; Failure-Rate Modeling; Accelerated Testing; Ramp-To-Failure Testing; Important Failure Mechanisms for Integrated Circuits; Important Failure Mechanisms for Mechanical Components; Conversion of Dynamic Stresses into Static Equivalents; Small Design Changes Producing Major Reliability Improvements; Screening Methods; Heat Generation and Dissipation; Sampling Plans and Confidence Intervals. This textbook includes numerous example problems with solutions. Also, exercise problems along with the answers are included at the end of each chapter. Relia...

  7. Model-based human reliability analysis: prospects and requirements

    International Nuclear Information System (INIS)

    Mosleh, A.; Chang, Y.H.

    2004-01-01

Major limitations of the conventional methods for human reliability analysis (HRA), particularly those developed for operator response analysis in probabilistic safety assessments (PSA) of nuclear power plants, are summarized as a motivation for the need, and a basis for developing requirements, for the next generation of HRA methods. It is argued that a model-based approach that provides explicit cognitive causal links between operator behaviors and directly or indirectly measurable causal factors should be at the core of the advanced methods. An example of such a causal model is briefly reviewed; due to its complexity and input requirements, it can currently be implemented only in a dynamic PSA environment. The computer simulation code developed for this purpose is also described briefly, together with current limitations in the models, data, and the computer implementation

  8. Using the Weibull distribution reliability, modeling and inference

    CERN Document Server

    McCool, John I

    2012-01-01

    Understand and utilize the latest developments in Weibull inferential methods While the Weibull distribution is widely used in science and engineering, most engineers do not have the necessary statistical training to implement the methodology effectively. Using the Weibull Distribution: Reliability, Modeling, and Inference fills a gap in the current literature on the topic, introducing a self-contained presentation of the probabilistic basis for the methodology while providing powerful techniques for extracting information from data. The author explains the use of the Weibull distribution
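As a small illustration of the distribution this book is built around, the Weibull reliability (survival) function, hazard rate, and mean time to failure can be computed directly. This is a generic sketch of the standard two-parameter Weibull formulas, not code from the book.

```python
import math

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)^beta): probability of surviving past time t."""
    return math.exp(-((t / eta) ** beta))

def weibull_hazard(t, beta, eta):
    """h(t) = (beta/eta) * (t/eta)^(beta-1): instantaneous failure rate."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def weibull_mttf(beta, eta):
    """MTTF = eta * Gamma(1 + 1/beta)."""
    return eta * math.gamma(1.0 + 1.0 / beta)

# beta = 1 reduces to the exponential distribution with constant hazard 1/eta;
# beta > 1 models wear-out (increasing hazard), beta < 1 infant mortality.
print(round(weibull_reliability(2.0, 1.0, 2.0), 4))  # 0.3679 = exp(-1)
print(weibull_hazard(5.0, 1.0, 2.0))                 # 0.5, independent of t
```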

  9. Supersonic shear imaging provides a reliable measurement of resting muscle shear elastic modulus

    International Nuclear Information System (INIS)

    Lacourpaille, Lilian; Hug, François; Bouillard, Killian; Nordez, Antoine; Hogrel, Jean-Yves

    2012-01-01

The aim of the present study was to assess the reliability of shear elastic modulus measurements performed using supersonic shear imaging (SSI) in nine resting muscles (i.e. gastrocnemius medialis, tibialis anterior, vastus lateralis, rectus femoris, triceps brachii, biceps brachii, brachioradialis, adductor pollicis obliquus and abductor digiti minimi) of different architectures and typologies. Thirty healthy subjects were randomly assigned to the intra-session reliability (n = 20), inter-day reliability (n = 21) and inter-observer reliability (n = 16) experiments. Muscle shear elastic modulus ranged from 2.99 kPa (gastrocnemius medialis) to 4.50 kPa (adductor digiti minimi and tibialis anterior). On the whole, very good reliability was observed, with a coefficient of variation (CV) ranging from 4.6% to 8%, except for the inter-observer reliability of adductor pollicis obliquus (CV = 11.5%). The intraclass correlation coefficients were good (0.871 ± 0.045 for the intra-session reliability, 0.815 ± 0.065 for the inter-day reliability and 0.709 ± 0.141 for the inter-observer reliability). Both the reliability and the ease of use of SSI make it a potentially interesting technique that would be of benefit to fundamental, applied and clinical research projects that need an accurate assessment of muscle mechanical properties. (note)
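The reliability statistics reported here, coefficient of variation and intraclass correlation, can be computed as in the following generic sketch. The one-way ICC(1,1) form and the toy measurements are illustrative assumptions; the study's exact ICC variant and data are not specified in this snippet.

```python
def cv_percent(xs):
    """Coefficient of variation: sample SD as a percentage of the mean."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / (n - 1)) ** 0.5
    return 100.0 * sd / mean

def icc_oneway(table):
    """One-way random ICC(1,1): rows = subjects, columns = repeated measures.
    ICC = (MSB - MSW) / (MSB + (k-1) * MSW) from the one-way ANOVA."""
    n, k = len(table), len(table[0])
    grand = sum(sum(row) for row in table) / (n * k)
    means = [sum(row) / k for row in table]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2 for row, m in zip(table, means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfectly repeatable measurements give ICC = 1
print(icc_oneway([[3.0, 3.0], [4.0, 4.0], [5.0, 5.0]]))  # 1.0
```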

  10. Time domain series system definition and gear set reliability modeling

    International Nuclear Information System (INIS)

    Xie, Liyang; Wu, Ningxiang; Qian, Wenxue

    2016-01-01

Time-dependent multi-configuration is a typical feature for mechanical systems such as gear trains and chain drives. As a series system, a gear train is distinct from a traditional series system, such as a chain, in load transmission path, system-component relationship, system functioning manner, as well as time-dependent system configuration. Firstly, the present paper defines the time-domain series system, to which the traditional series system reliability model is not adequate. Then, a system-specific reliability modeling technique is proposed for gear sets, including component (tooth) and subsystem (tooth-pair) load history description, material prior/posterior strength expression, time-dependent and system-specific load-strength interference analysis, as well as treatment of statistically dependent failure events. Consequently, several system reliability models are developed for gear sets with different tooth numbers in the scenario of tooth root material ultimate tensile strength failure. The application of the models is discussed in the last part, and the differences between the system-specific reliability model and the traditional series system reliability model are illustrated by virtue of several numerical examples. - Highlights: • A new type of series system, i.e. the time-domain multi-configuration series system, is defined, which is of great significance to reliability modeling. • A multi-level statistical analysis based reliability modeling method is presented for gear transmission systems. • Several system-specific reliability models are established for gear set reliability estimation. • The differences between the traditional series system reliability model and the new model are illustrated.

  11. Models on reliability of non-destructive testing

    International Nuclear Information System (INIS)

    Simola, K.; Pulkkinen, U.

    1998-01-01

The reliability of ultrasonic inspections has been studied in e.g. the international PISC (Programme for the Inspection of Steel Components) exercises. These exercises have produced a large amount of information on the effect of various factors on the reliability of inspections. The information obtained from reliability experiments is used to model the dependency of flaw detection probability on various factors and to evaluate the performance of inspection equipment, including the sizing accuracy. The information from experiments is utilised most effectively when mathematical models are applied. Here, some statistical models for the reliability of non-destructive tests are introduced. In order to demonstrate the use of inspection reliability models, they have been applied to the inspection results of intergranular stress corrosion cracking (IGSCC) type flaws in the PISC III exercise (PISC 1995). The models are applied to both the flaw detection frequency data of all inspection teams and the flaw sizing data of one participating team. (author)

  12. An FEC Adaptive Multicast MAC Protocol for Providing Reliability in WLANs

    Science.gov (United States)

    Basalamah, Anas; Sato, Takuro

For wireless multicast applications like multimedia conferencing, voice over IP and video/audio streaming, reliable transmission of packets within a short delivery delay is needed. Moreover, reliability is crucial to the performance of error-intolerant applications like file transfer, distributed computing, chat and whiteboard sharing. Forward Error Correction (FEC) is frequently used in wireless multicast to enhance Packet Error Rate (PER) performance, but cannot assure full reliability unless coupled with Automatic Repeat Request, forming what is known as Hybrid-ARQ. While reliable FEC can be deployed at different levels of the protocol stack, it cannot be deployed on the MAC layer of the unreliable IEEE802.11 WLAN due to its inability to exchange ACKs with multiple recipients. In this paper, we propose a Multicast MAC protocol that enhances WLAN reliability by using Adaptive FEC and study its performance through mathematical analysis and simulation. Our results show that our protocol can deliver high reliability and throughput performance.
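The benefit of FEC that such protocols exploit can be illustrated with the standard block-code calculation: a packet whose n symbols are protected by a code correcting up to t symbol errors is recovered iff at most t symbols are corrupted. This is a generic binomial-tail sketch under an independent-error assumption, not the paper's analysis.

```python
from math import comb

def fec_packet_success(n, t, p):
    """Probability a packet is recovered when its n symbols are protected by
    a code correcting up to t symbol errors, each symbol independently
    corrupted with probability p (binomial tail)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(t + 1))

# With no correction capability (t = 0) this reduces to (1 - p)**n
print(round(fec_packet_success(10, 0, 0.01), 4))  # 0.9044
print(fec_packet_success(10, 2, 0.01))            # residual PER drops sharply
```

Hybrid-ARQ then retransmits only the (now rare) packets that still fail, which is why coupling FEC with ARQ achieves full reliability at modest cost.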

  13. Reliability modelling and simulation of switched linear system ...

    African Journals Online (AJOL)

    Reliability modelling and simulation of switched linear system control using temporal databases. ... design of fault-tolerant real-time switching systems control and modelling embedded micro-schedulers for complex systems maintenance.

  14. A possibilistic uncertainty model in classical reliability theory

    International Nuclear Information System (INIS)

    De Cooman, G.; Capelle, B.

    1994-01-01

The authors argue that a possibilistic uncertainty model can be used to represent linguistic uncertainty about the states of a system and of its components. Furthermore, the basic properties of the application of this model to classical reliability theory are studied. The notion of the possibilistic reliability of a system or a component is defined. Based on the concept of a binary structure function, the important notion of a possibilistic function is introduced. It allows one to calculate the possibilistic reliability of a system in terms of the possibilistic reliabilities of its components
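The composition idea can be sketched for the two basic structure functions: under the usual possibilistic (min/max) calculus, a series system is only as possible-to-work as its least possible component, while a parallel system is governed by its most possible one. This is a generic illustration of that calculus, not the authors' formal development.

```python
def series_possibility(components):
    """Possibilistic reliability of a series system: min over components
    (every component must work, so the weakest one dominates)."""
    return min(components)

def parallel_possibility(components):
    """Possibilistic reliability of a parallel system: max over components
    (one working component suffices, so the best one dominates)."""
    return max(components)

# A series of a strong and a weak component is limited by the weak one...
print(series_possibility([0.9, 0.4]))    # 0.4
# ...while redundancy (parallel) is governed by the best component.
print(parallel_possibility([0.9, 0.4]))  # 0.9
```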

  15. Development of a Conservative Model Validation Approach for Reliable Analysis

    Science.gov (United States)

    2015-01-01

CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA [DRAFT] DETC2015-46982. ...obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the... In Section 3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account

  16. Developing Fast and Reliable Flood Models

    DEFF Research Database (Denmark)

    Thrysøe, Cecilie; Toke, Jens; Borup, Morten

    2016-01-01

A surrogate model is set up for a case study area in Aarhus, Denmark, to replace a MIKE FLOOD model. The drainage surrogates are able to reproduce the MIKE URBAN results for a set of rain inputs. The coupled drainage-surface surrogate model lacks details in the surface description, which reduces its overall accuracy. The model shows no instability, hence larger time steps can be applied, which reduces the computational time by more than a factor of 1400. In conclusion, surrogate models show great potential for usage in urban water modelling.

  17. Evaluation of mobile ad hoc network reliability using propagation-based link reliability model

    International Nuclear Information System (INIS)

    Padmavathy, N.; Chaturvedi, Sanjay K.

    2013-01-01

A wireless mobile ad hoc network (MANET) is a collection of solely independent nodes (that can move randomly around the area of deployment) making the topology highly dynamic; nodes communicate with each other by forming a single-hop/multi-hop network and maintain connectivity in a decentralized manner. MANET is modelled using geometric random graphs rather than random graphs because link existence in MANET is a function of the geometric distance between the nodes and the transmission range of the nodes. Among the many factors that contribute to MANET reliability, the reliability of these networks also depends on the robustness of the links between the mobile nodes of the network. Recently, the reliability of such networks has been evaluated for imperfect nodes (transceivers) with a binary model of communication links based on the transmission range of the mobile nodes and the distance between them. However, in reality, the probability of successful communication decreases as the signal strength deteriorates due to noise, fading or interference effects, even within the nodes' transmission range. Hence, in this paper, using a propagation-based link reliability model rather than a binary model, with nodes following a known failure distribution, to evaluate the network reliability (2TRm, ATRm and AoTRm) of MANET through Monte Carlo simulation is proposed. The method is illustrated with an application and some imperative results are also presented
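A minimal Monte Carlo sketch of this idea follows: node positions, an exponential distance-decay link model, and all parameters below are illustrative assumptions, not the paper's actual propagation model or network. Each trial samples node failures and probabilistic links, then checks source-to-terminal connectivity.

```python
import math
import random

def link_prob(d, r, gamma=2.0):
    """Assumed propagation-based link model: success probability decays
    smoothly with distance instead of the binary 'connected iff d <= r' rule."""
    return math.exp(-((d / r) ** gamma))

def two_terminal_reliability(nodes, r, s, t, p_node=0.95, trials=2000, seed=1):
    """Monte Carlo estimate of 2-terminal reliability between nodes s and t,
    with imperfect nodes and probabilistic links."""
    rng = random.Random(seed)
    n = len(nodes)
    hits = 0
    for _ in range(trials):
        up = [rng.random() < p_node for _ in range(n)]  # imperfect transceivers
        adj = [[] for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                d = math.dist(nodes[i], nodes[j])
                if up[i] and up[j] and rng.random() < link_prob(d, r):
                    adj[i].append(j)
                    adj[j].append(i)
        seen, stack = {s}, [s]  # flood fill: only connectivity matters
        while stack:
            for v in adj[stack.pop()]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        hits += t in seen
    return hits / trials

nodes = [(0, 0), (1, 0), (2, 0), (1, 1)]  # a small, tightly packed topology
print(two_terminal_reliability(nodes, r=3.0, s=0, t=2))
```

All-terminal (ATR) and all-operating-terminal (AoTR) variants differ only in the connectivity predicate checked per trial.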

  18. Experiment research on cognition reliability model of nuclear power plant

    International Nuclear Information System (INIS)

    Zhao Bingquan; Fang Xiang

    1999-01-01

The objective of this paper is to improve the operational reliability of operators in real nuclear power plants through simulation research on the cognition reliability of nuclear power plant operators. The research method is to use a nuclear power plant simulator as the research platform and, taking as reference the current international model of human cognition reliability based on the three-parameter Weibull distribution, to develop a research model for Chinese nuclear power plant operators based on the two-parameter Weibull distribution. Using the two-parameter Weibull distribution model of cognition reliability, experiments on the cognition reliability of nuclear power plant operators have been carried out. Compared with the results of other countries such as the USA and Hungary, the same results can be obtained, which benefits the safe operation of nuclear power plants

  19. Do downscaled general circulation models reliably simulate historical climatic conditions?

    Science.gov (United States)

    Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight

    2018-01-01

The accuracy of statistically downscaled (SD) general circulation model (GCM) simulations of monthly surface climate for historical conditions (1950–2005) was assessed for the conterminous United States (CONUS). The SD monthly precipitation (PPT) and temperature (TAVE) from 95 GCMs from phases 3 and 5 of the Coupled Model Intercomparison Project (CMIP3 and CMIP5) were used as inputs to a monthly water balance model (MWBM). Distributions of MWBM input (PPT and TAVE) and output [runoff (RUN)] variables derived from gridded station data (GSD) and historical SD climate were compared using the Kolmogorov–Smirnov (KS) test. For all three variables considered, the KS test results showed that variables simulated using CMIP5 generally are more reliable than those derived from CMIP3, likely due to improvements in PPT simulations. At most locations across the CONUS, the largest differences between GSD and SD PPT and RUN occurred in the lowest part of the distributions (i.e., low-flow RUN and low-magnitude PPT). Results indicate that for the majority of the CONUS, there are downscaled GCMs that can reliably simulate historical climatic conditions. But, in some geographic locations, none of the SD GCMs replicated historical conditions for two of the three variables (PPT and RUN) based on the KS test, with a significance level of 0.05. In these locations, improved GCM simulations of PPT are needed to more reliably estimate components of the hydrologic cycle. Simple metrics and statistical tests, such as those described here, can provide an initial set of criteria to help simplify GCM selection.
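The distribution comparison the study performs can be sketched with a hand-rolled two-sample KS statistic: the maximum vertical distance between the two empirical CDFs. The samples below are made up; the study applies the test to gridded-station versus downscaled distributions.

```python
def ks_two_sample(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: sup |ECDF_a(x) - ECDF_b(x)|.
    The supremum is attained at one of the pooled sample points, so it
    suffices to evaluate both step functions there."""
    def ecdf(sample, x):
        return sum(v <= x for v in sample) / len(sample)
    xs = sorted(set(a) | set(b))
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in xs)

print(ks_two_sample([1, 2, 3, 4], [1, 2, 3, 4]))      # 0.0 for identical samples
print(ks_two_sample([1, 2, 3, 4], [11, 12, 13, 14]))  # 1.0 for disjoint ranges
```

In the study's setting, a KS statistic small enough to be non-significant at the 0.05 level marks a downscaled GCM as reliably reproducing the historical distribution.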

  20. Wireless Channel Modeling Perspectives for Ultra-Reliable Communications

    DEFF Research Database (Denmark)

    Eggers, Patrick Claus F.; Popovski, Petar

    2018-01-01

    Ultra-Reliable Communication (URC) is one of the distinctive features of the upcoming 5G wireless communication. The level of reliability, going down to packet error rates (PER) of $10^{-9}$, should be sufficiently convincing in order to remove cables in an industrial setting or provide remote co...

  1. An interval-valued reliability model with bounded failure rates

    DEFF Research Database (Denmark)

    Kozine, Igor; Krymsky, Victor

    2012-01-01

The approach to deriving interval-valued reliability measures described in this paper is distinctive from other imprecise reliability models in that it overcomes the issue of having to impose an upper bound on time to failure. It rests on the presupposition that a constant interval-valued failure rate is known, possibly along with other reliability measures, precise or imprecise. The Lagrange method is used to solve the constrained optimization problem to derive new reliability measures of interest. The obtained results call for an exponential-wise approximation of failure probability density...
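For the simplest special case of this setup, a constant failure rate known only to within bounds, the exponential model immediately brackets the reliability function. This sketch illustrates only that base case; the paper's Lagrange-based machinery handles much more general constraints.

```python
import math

def interval_reliability(t, lam_lo, lam_hi):
    """With a constant failure rate known only to lie in [lam_lo, lam_hi],
    the exponential model brackets reliability at time t:
    R(t) in [exp(-lam_hi * t), exp(-lam_lo * t)]."""
    return math.exp(-lam_hi * t), math.exp(-lam_lo * t)

lo, hi = interval_reliability(100.0, 1e-3, 2e-3)
print(round(lo, 4), round(hi, 4))  # 0.8187 0.9048
```

A precise failure rate collapses the interval to a single exponential reliability value.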

  2. Analytical modeling of nuclear power station operator reliability

    International Nuclear Information System (INIS)

    Sabri, Z.A.; Husseiny, A.A.

    1979-01-01

    The operator-plant interface is a critical component of power stations which requires the formulation of mathematical models to be applied in plant reliability analysis. The human model introduced here is based on cybernetic interactions and allows for use of available data from psychological experiments, hot and cold training and normal operation. The operator model is identified and integrated in the control and protection systems. The availability and reliability are given for different segments of the operator task and for specific periods of the operator life: namely, training, operation and vigilance or near retirement periods. The results can be easily and directly incorporated in system reliability analysis. (author)

  3. Reliability modeling of Clinch River breeder reactor electrical shutdown systems

    International Nuclear Information System (INIS)

    Schatz, R.A.; Duetsch, K.L.

    1974-01-01

The initial simulation of the probabilistic properties of the Clinch River Breeder Reactor Plant (CRBRP) electrical shutdown systems is described. A model of the reliability (and availability) of the systems is presented utilizing Success State and continuous-time, discrete-state Markov modeling techniques as significant elements of an overall reliability assessment process capable of demonstrating the achievement of program goals. This model is examined for its sensitivity to safe/unsafe failure rates, subsystem redundant configurations, test and repair intervals, monitoring by reactor operators; and the control exercised over system reliability by design modifications and the selection of system operating characteristics. (U.S.)
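The continuous-time Markov technique mentioned above can be illustrated with the simplest repairable-system case: a two-state up/down model with failure rate lambda and repair rate mu. This generic sketch is not the CRBRP model itself, which has many more states.

```python
def steady_state_availability(lam, mu):
    """Steady-state availability of a two-state (up/down) Markov model
    with failure rate lam and repair rate mu: A = mu / (lam + mu)."""
    return mu / (lam + mu)

def availability_over_time(lam, mu, t, steps=10000):
    """Transient availability A(t), starting in the 'up' state, by Euler
    integration of dA/dt = -lam * A + mu * (1 - A)."""
    a, dt = 1.0, t / steps
    for _ in range(steps):
        a += (-lam * a + mu * (1.0 - a)) * dt
    return a

print(steady_state_availability(0.01, 0.99))              # 0.99
print(round(availability_over_time(0.1, 0.9, 100.0), 3))  # 0.9, the steady state
```

Test and repair intervals enter such models through mu; redundancy multiplies the state space, which is where numerical Markov solvers earn their keep.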

  4. Modeling and Forecasting (Un)Reliable Realized Covariances for More Reliable Financial Decisions

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Patton, Andrew J.; Quaedvlieg, Rogier

We propose a new framework for modeling and forecasting common financial risks based on (un)reliable realized covariance measures constructed from high-frequency intraday data. Our new approach explicitly incorporates the effect of measurement errors and time-varying attenuation biases into the c...

  5. Conceptual Software Reliability Prediction Models for Nuclear Power Plant Safety Systems

    International Nuclear Information System (INIS)

    Johnson, G.; Lawrence, D.; Yu, H.

    2000-01-01

The objective of this project is to develop a method to predict the potential reliability of software to be used in a digital instrumentation and control system. The reliability prediction is to make use of existing measures of software reliability such as those described in IEEE Std 982 and 982.2. This prediction must be of sufficient accuracy to provide a value for uncertainty that could be used in a nuclear power plant probabilistic risk assessment (PRA). For the purposes of the project, reliability was defined to be the probability that the digital system will successfully perform its intended safety function (for the distribution of conditions under which it is expected to respond) upon demand with no unintended functions that might affect system safety. The ultimate objective is to use the identified measures to develop a method for predicting the potential quantitative reliability of a digital system. The reliability prediction models proposed in this report are conceptual in nature. That is, possible prediction techniques are proposed and trial models are built, but in order to become a useful tool for predicting reliability, the models must be tested, modified according to the results, and validated. Using methods outlined by this project, models could be constructed to develop reliability estimates for elements of software systems. This would require careful review and refinement of the models, development of model parameters from actual experience data or expert elicitation, and careful validation. By combining these reliability estimates (generated from the validated models for the constituent parts) in structural software models, the reliability of the software system could then be predicted. Modeling digital system reliability will also require that methods be developed for combining reliability estimates for hardware and software. System structural models must also be developed in order to predict system reliability based upon the reliability

  6. Models for Battery Reliability and Lifetime

    Energy Technology Data Exchange (ETDEWEB)

    Smith, K.; Wood, E.; Santhanagopalan, S.; Kim, G. H.; Neubauer, J.; Pesaran, A.

    2014-03-01

    Models describing battery degradation physics are needed to more accurately understand how battery usage and next-generation battery designs can be optimized for performance and lifetime. Such lifetime models may also reduce the cost of battery aging experiments and shorten the time required to validate battery lifetime. Models for chemical degradation and mechanical stress are reviewed. Experimental analysis of aging data from a commercial iron-phosphate lithium-ion (Li-ion) cell elucidates the relative importance of several mechanical stress-induced degradation mechanisms.

  7. RELIABILITY MODELING BASED ON INCOMPLETE DATA: OIL PUMP APPLICATION

    Directory of Open Access Journals (Sweden)

    Ahmed HAFAIFA

    2014-07-01

Full Text Available Reliability analysis for industrial maintenance is now increasingly demanded by industrialists worldwide. Indeed, modern manufacturing facilities are equipped with data acquisition and monitoring systems that generate large volumes of data, which can be used to inform future decisions affecting the state of the exploited equipment. However, in most practical cases the data used in reliability modelling are incomplete or unreliable. In this context, to analyze the reliability of an oil pump, this work proposes to examine and treat incomplete, incorrect or aberrant data in the reliability modelling of the pump. The objective of this paper is to propose a suitable methodology for replacing the incomplete data using a regression method.

  8. BUILDING MODEL ANALYSIS APPLICATIONS WITH THE JOINT UNIVERSAL PARAMETER IDENTIFICATION AND EVALUATION OF RELIABILITY (JUPITER) API

    Science.gov (United States)

    The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input ...

  9. MODELING HUMAN RELIABILITY ANALYSIS USING MIDAS

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; Donald D. Dudenhoeffer; Bruce P. Hallbert; Brian F. Gore

    2006-05-01

    This paper summarizes an emerging collaboration between Idaho National Laboratory and NASA Ames Research Center regarding the utilization of high-fidelity MIDAS simulations for modeling control room crew performance at nuclear power plants. The key envisioned uses for MIDAS-based control room simulations are: (i) the estimation of human error with novel control room equipment and configurations, (ii) the investigative determination of risk significance in recreating past event scenarios involving control room operating crews, and (iii) the certification of novel staffing levels in control rooms. It is proposed that MIDAS serves as a key component for the effective modeling of risk in next generation control rooms.

  10. Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2012-01-01

    A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…

  11. Markerless motion capture can provide reliable 3D gait kinematics in the sagittal and frontal plane

    DEFF Research Database (Denmark)

    Sandau, Martin; Koblauch, Henrik; Moeslund, Thomas B.

    2014-01-01

Estimating 3D joint rotations in the lower extremities accurately and reliably remains unresolved in markerless motion capture, despite extensive studies in the past decades. The main problems have been ascribed to the limited accuracy of the 3D reconstructions. Accordingly, the purpose ... subjects in whom hip, knee and ankle joint were analysed. Flexion/extension angles as well as hip abduction/adduction closely resembled those obtained from the marker-based system. However, the internal/external rotations, knee abduction/adduction and ankle inversion/eversion were less reliable.

  12. Software reliability growth model for safety systems of nuclear reactor

    International Nuclear Information System (INIS)

    Thirugnana Murthy, D.; Murali, N.; Sridevi, T.; Satya Murty, S.A.V.; Velusamy, K.

    2014-01-01

The demand for complex software systems has increased more rapidly than the ability to design, implement, test, and maintain them, and the reliability of software systems has become a major concern for our modern society. Software failures have impaired several high-visibility programs in the space, telecommunications, defense and health industries; besides the costs involved, they have set back the projects. This raises the need for ways of quantifying reliability and using it for improvement and control of the software development and maintenance process, which consumes a major share of project development resources. This paper discusses the need for systematic approaches for measuring and assuring software reliability. It covers reliability models with a focus on 'reliability growth'. It includes data collection on reliability, statistical estimation and prediction, metrics and attributes of product architecture, design, software development, and the operational environment. Besides its use for operational decisions like deployment, it includes guiding software architecture, development, testing and verification and validation. (author)

  13. A Reliability Based Model for Wind Turbine Selection

    Directory of Open Access Journals (Sweden)

    A.K. Rajeevan

    2013-06-01

    Full Text Available A wind turbine generator output at a specific site depends on many factors, particularly the cut-in, rated and cut-out wind speed parameters. Hence power output varies from turbine to turbine. The objective of this paper is to develop a mathematical relationship between reliability and wind power generation. The analytical computation of monthly wind power is obtained from a Weibull statistical model using the cubic mean cube root of wind speed. Reliability calculation is based on failure probability analysis. There are many different types of wind turbines commercially available in the market. From a reliability point of view, to get optimum reliability in power generation, it is desirable to select a wind turbine generator which is best suited for a site. The mathematical relationship developed in this paper can be used for site-matching turbine selection from a reliability point of view.
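
    The "cubic mean cube root of wind speed" mentioned above has a closed form under a Weibull wind-speed model: E[V^3] = c^3 Γ(1 + 3/k), so the cubic mean cube root is c·Γ(1 + 3/k)^(1/3). A minimal sketch of that computation — the shape/scale values and power-coefficient defaults are illustrative, not the paper's site data.

```python
import math

def cubic_mean_cube_root(shape_k: float, scale_c: float) -> float:
    # (E[V^3])^(1/3) for a Weibull(k, c) wind-speed distribution
    return scale_c * math.gamma(1.0 + 3.0 / shape_k) ** (1.0 / 3.0)

def mean_wind_power(shape_k: float, scale_c: float,
                    rho: float = 1.225, area: float = 1.0, cp: float = 0.4) -> float:
    # Mean power (W) through swept area `area` (m^2): 0.5 * rho * A * Cp * E[V^3]
    return 0.5 * rho * area * cp * cubic_mean_cube_root(shape_k, scale_c) ** 3
```

For k = 2 (a Rayleigh wind regime) the gamma term is Γ(2.5) ≈ 1.329, so the cubic mean cube root exceeds the scale parameter — gusts contribute disproportionately to mean power.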

  14. How many neurologists/epileptologists are needed to provide reliable descriptions of seizure types?

    NARCIS (Netherlands)

    van Ast, J. F.; Talmon, J. L.; Renier, W. O.; Hasman, A.

    2003-01-01

    We are developing seizure descriptions as a basis for decision support. Based on an existing dataset we used the Spearman-Brown prophecy formula to estimate how many neurologist/epileptologists are needed to obtain reliable seizure descriptions (rho = 0.9). By extending the number of participants to
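
    The Spearman-Brown prophecy formula used in this study predicts the reliability of k averaged raters from a single-rater reliability: rho_k = k·rho / (1 + (k-1)·rho), which can be inverted to find the number of raters needed for a target such as rho = 0.9. A small sketch — the single-rater reliability of 0.5 below is an invented example, not the study's estimate.

```python
import math

def spearman_brown(rho_single: float, k: float) -> float:
    # Predicted reliability when the judgements of k raters are averaged
    return k * rho_single / (1.0 + (k - 1.0) * rho_single)

def raters_needed(rho_single: float, rho_target: float) -> int:
    # Invert the formula for k, then round up to a whole number of raters;
    # the small tolerance guards against floating-point round-up
    k = rho_target * (1.0 - rho_single) / (rho_single * (1.0 - rho_target))
    return math.ceil(k - 1e-9)
```

With a single-rater reliability of 0.5, nine raters would be predicted to reach rho = 0.9.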

  15. A Survey of Software Reliability Modeling and Estimation

    Science.gov (United States)

    1983-09-01

    considered include: the Jelinski-Moranda Model, the Geometric Model, and Musa's Model. A Monte-Carlo study of the behavior of the least squares ... Proceedings Number 261, 1979, pp. 34-1 to 34-11. ... Sukert, Alan and Goel, Amrit, "A Guidebook for Software Reliability Assessment," 1980

  16. Power plant reliability calculation with Markov chain models

    International Nuclear Information System (INIS)

    Senegacnik, A.; Tuma, M.

    1998-01-01

    In the paper power plant operation is modelled using continuous time Markov chains with discrete state space. The model is used to compute the power plant reliability and the importance and influence of individual states, as well as the transition probabilities between states. For comparison the model is fitted to data for coal and nuclear power plants recorded over several years. (orig.) [de
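
    For the simplest continuous-time Markov chain of this kind — two states (up/down) with failure rate λ and repair rate μ — the point availability has a closed form, A(t) = μ/(λ+μ) + (λ/(λ+μ))·e^(-(λ+μ)t). The sketch below shows that special case only; the paper fits richer multi-state models, and the rates here are illustrative.

```python
import math

def availability(t: float, lam: float, mu: float) -> float:
    # Point availability of a two-state (up/down) Markov model,
    # starting from the "up" state at t = 0
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

def steady_state_availability(lam: float, mu: float) -> float:
    # Long-run fraction of time in the "up" state
    return mu / (lam + mu)
```

The transient term decays at rate λ+μ, after which only the steady-state availability μ/(λ+μ) remains.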

  17. System Reliability Analysis Capability and Surrogate Model Application in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Huang, Dongli [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gleicher, Frederick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Wang, Bei [Idaho National Lab. (INL), Idaho Falls, ID (United States); Adbel-Khalik, Hany S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pascucci, Valerio [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-11-01

    This report collects the effort performed to improve the reliability analysis capabilities of the RAVEN code and explores new opportunities in the usage of surrogate models, by extending the current RAVEN capabilities to multi-physics surrogate models and the construction of surrogate models for high-dimensionality fields.

  18. Reliability assessment using degradation models: bayesian and classical approaches

    Directory of Open Access Journals (Sweden)

    Marta Afonso Freitas

    2010-04-01

    Full Text Available Traditionally, reliability assessment of devices has been based on (accelerated) life tests. However, for highly reliable products, little information about reliability is provided by life tests in which few or no failures are typically observed. Since most failures arise from a degradation mechanism at work, with characteristics that degrade over time, one alternative is to monitor the device for a period of time and assess its reliability from the changes in performance (degradation) observed during that period. The goal of this article is to illustrate how degradation data can be modeled and analyzed by using "classical" and Bayesian approaches. Four methods of data analysis based on classical inference are presented. Next we show how Bayesian methods can also be used to provide a natural approach to analyzing degradation data. The approaches are applied to a real data set regarding train wheel degradation.
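
    One of the simplest classical degradation analyses is the "pseudo failure time" approach: fit each unit's degradation path and extrapolate to the failure threshold. The sketch below illustrates that idea in general terms; it is not necessarily one of the four methods in the article, and the threshold is an invented example.

```python
def fit_line(times, ys):
    # Least-squares slope and intercept for one unit's degradation path
    n = len(times)
    mt, my = sum(times) / n, sum(ys) / n
    sxx = sum((t - mt) ** 2 for t in times)
    sxy = sum((t - mt) * (y - my) for t, y in zip(times, ys))
    slope = sxy / sxx
    return my - slope * mt, slope  # (intercept, slope)

def pseudo_failure_time(times, ys, threshold):
    # Time at which the fitted linear path crosses the failure threshold
    b0, b1 = fit_line(times, ys)
    return (threshold - b0) / b1
```

The pseudo failure times from all monitored units can then be analyzed with an ordinary lifetime distribution, even though no unit actually failed during monitoring.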

  19. A Novel OBDD-Based Reliability Evaluation Algorithm for Wireless Sensor Networks on the Multicast Model

    Directory of Open Access Journals (Sweden)

    Zongshuai Yan

    2015-01-01

    Full Text Available The two-terminal reliability calculation for wireless sensor networks (WSNs is a #P-hard problem. The reliability calculation of WSNs on the multicast model provides an even worse combinatorial explosion of node states with respect to the calculation of WSNs on the unicast model; yet many real WSNs require the multicast model to deliver information. This research first provides a formal definition for the WSN on the multicast model. Next, a symbolic OBDD_Multicast algorithm is proposed to evaluate the reliability of WSNs on the multicast model. Furthermore, our research on OBDD_Multicast construction avoids the problem of invalid expansion, which reduces the number of subnetworks by identifying the redundant paths of two adjacent nodes and s-t unconnected paths. Experiments show that OBDD_Multicast both reduces the complexity of the WSN reliability analysis and has a lower running time than Xing’s OBDD-based (ordered binary decision diagram) algorithm.
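
    For intuition about the problem being solved, multicast reliability can be computed exactly on tiny networks by enumerating every combination of up/down edge states and checking that the source reaches all terminals. This brute-force sketch (undirected links, independent edge failures, illustrative probabilities) is exactly the exponential blow-up that OBDD-based algorithms are designed to avoid.

```python
from itertools import product

def multicast_reliability(edges, probs, source, terminals):
    """Exact source -> {terminals} reliability by state enumeration.
    Exponential in len(edges) -- only feasible for very small networks."""
    total = 0.0
    for states in product([0, 1], repeat=len(edges)):
        p = 1.0
        for up, pe in zip(states, probs):
            p *= pe if up else (1.0 - pe)
        # Flood-fill over the edges that are up in this state
        seen, stack = {source}, [source]
        up_edges = [e for e, s in zip(edges, states) if s]
        while stack:
            u = stack.pop()
            for a, b in up_edges:
                v = b if a == u else a if b == u else None
                if v is not None and v not in seen:
                    seen.add(v)
                    stack.append(v)
        if all(t in seen for t in terminals):
            total += p
    return total
```

Two parallel links of reliability 0.5 give 0.75; a two-hop chain of 0.9 links delivering to both downstream nodes gives 0.81.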

  20. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    Science.gov (United States)

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.
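
    As a baseline for the count-data regression the abstract discusses, a plain (non-Bayesian, non-max-margin) Poisson regression with a log link can be fitted by gradient ascent on its concave log-likelihood. This sketch is a self-contained illustration of that baseline only — the tiny data set in the test is fabricated, and the paper's actual model is far richer.

```python
import math

def poisson_regression(X, y, lr=0.01, iters=20000):
    """Poisson GLM with log link (defect count lambda_i = exp(x_i . w)),
    fitted by gradient ascent on the log-likelihood. The log-likelihood is
    concave, so a small fixed step size converges on tame data."""
    w = [0.0] * len(X[0])
    for _ in range(iters):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            lam = math.exp(sum(wj * xj for wj, xj in zip(w, xi)))
            for j, xj in enumerate(xi):
                grad[j] += (yi - lam) * xj  # score: sum_i (y_i - lambda_i) x_i
        w = [wj + lr * g for wj, g in zip(w, grad)]
    return w
```

On counts that exactly double per unit of the covariate, the fitted slope recovers log 2.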

  1. Estimation of some stochastic models used in reliability engineering

    International Nuclear Information System (INIS)

    Huovinen, T.

    1989-04-01

    The work aims to study the estimation of some stochastic models used in reliability engineering. In reliability engineering, continuous probability distributions have been used as models for the lifetime of technical components. We consider here the following distributions: exponential, 2-mixture exponential, conditional exponential, Weibull, lognormal and gamma. The maximum likelihood method is used to estimate distributions from observed data, which may be either complete or censored. We consider models based on homogeneous Poisson processes, such as the gamma-Poisson and lognormal-Poisson models, for analysis of failure intensity. We also study a beta-binomial model for analysis of failure probability. The parameters of these models are estimated by the method of matching moments and, in the case of the gamma-Poisson and beta-binomial models, also by the maximum likelihood method. A great deal of the mathematical and statistical problems that arise in reliability engineering can be solved by utilizing point processes. Here we consider the statistical analysis of non-homogeneous Poisson processes to describe the failure phenomena of a set of components with a Weibull intensity function. We use the method of maximum likelihood to estimate the parameters of the Weibull model. A common cause failure can seriously reduce the reliability of a system. We consider a binomial failure rate (BFR) model as an application of marked point processes for modelling common cause failures in a system. The parameters of the binomial failure rate model are estimated with the maximum likelihood method
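
    For the simplest of the listed models — the exponential distribution with right-censored data — the maximum likelihood estimate has a well-known closed form: the number of observed failures divided by the total time on test. A sketch of that estimator (the data in the test are invented):

```python
def exponential_mle(times, observed):
    """MLE of the exponential failure rate lambda with right-censoring.
    `times[i]` is the failure or censoring time of unit i;
    `observed[i]` is True for an observed failure, False for censoring."""
    failures = sum(observed)        # number of actual failures
    total_time = sum(times)         # total time on test (all units)
    return failures / total_time
```

With two failures in ten unit-hours of total exposure, the estimated rate is 0.2 per hour; the censored units still contribute their exposure time to the denominator.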

  2. Multi-state reliability for coolant pump based on dependent competitive failure model

    International Nuclear Information System (INIS)

    Shang Yanlong; Cai Qi; Zhao Xinwen; Chen Ling

    2013-01-01

    By taking into account the effect of degradation due to internal vibration and external shocks, and based on the service environment and degradation mechanism of the nuclear power plant coolant pump, a multi-state reliability model of the coolant pump was proposed for a system that involves a competitive failure process between shocks and degradation. Using this model, the degradation state probability and system reliability were obtained under consideration of internal vibration and external shocks for the degraded coolant pump. It provides an effective method for reliability analysis of coolant pumps in nuclear power plants based on the operating environment. The results can provide a decision-making basis for design changes and maintenance optimization. (authors)

  3. Modeling cognition dynamics and its application to human reliability analysis

    International Nuclear Information System (INIS)

    Mosleh, A.; Smidts, C.; Shen, S.H.

    1996-01-01

    For the past two decades, a number of approaches have been proposed for the identification and estimation of the likelihood of human errors, particularly for use in the risk and reliability studies of nuclear power plants. Despite the wide-spread use of the most popular among these methods, their fundamental weaknesses are widely recognized, and the treatment of human reliability has been considered as one of the soft spots of risk studies of large technological systems. To alleviate the situation, new efforts have focused on the development of human reliability models based on a more fundamental understanding of operator response and its cognitive aspects

  4. Reliability model for common mode failures in redundant safety systems

    International Nuclear Information System (INIS)

    Fleming, K.N.

    1974-12-01

    A method is presented for computing the reliability of redundant safety systems, considering both independent and common mode type failures. The model developed for the computation is a simple extension of classical reliability theory. The feasibility of the method is demonstrated with the use of an example. The probability of failure of a typical diesel-generator emergency power system is computed based on data obtained from U. S. diesel-generator operating experience. The results are compared with reliability predictions based on the assumption that all failures are independent. The comparison shows a significant increase in the probability of redundant system failure, when common failure modes are considered. (U.S.)
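
    A standard way to express the effect described here is the beta-factor model, in which a fraction β of each train's failure probability is attributed to a common cause that fails all trains at once. The sketch below shows why this dominates a redundant pair; the numbers are illustrative, not the diesel-generator data from the report.

```python
def redundant_pair_unreliability(q: float, beta: float = 0.0) -> float:
    """Failure probability of a 1-out-of-2 redundant system under the
    beta-factor common-cause model: either the independent parts of both
    trains fail, or a single common-cause event takes out both."""
    q_independent = (1.0 - beta) * q   # per-train independent contribution
    q_common = beta * q                # shared common-cause contribution
    return q_independent ** 2 + q_common
```

With q = 0.01 per train, pure independence predicts 1e-4 for the pair, but even a modest β = 0.1 raises the system figure above 1e-3 — an order of magnitude, matching the kind of increase the comparison in the abstract reports.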

  5. Guide to Working with Model Providers.

    Science.gov (United States)

    Walter, Katie; Hassel, Bryan C.

    Often a central feature of a school's improvement efforts is the adoption of a Comprehensive School Reform (CSR) model, an externally developed research-based design for school improvement. Adopting a model is only the first step in CSR. Another important step is forging partnerships with developers of CSR models. This guide aims to help schools…

  6. Learning reliable manipulation strategies without initial physical models

    Science.gov (United States)

    Christiansen, Alan D.; Mason, Matthew T.; Mitchell, Tom M.

    1990-01-01

    A description is given of a robot, possessing limited sensory and effectory capabilities but no initial model of the effects of its actions on the world, that acquires such a model through exploration, practice, and observation. By acquiring an increasingly correct model of its actions, it generates increasingly successful plans to achieve its goals. In an apparently nondeterministic world, achieving reliability requires the identification of reliable actions and a preference for using such actions. Furthermore, by selecting its training actions carefully, the robot can significantly improve its learning rate.

  7. A new model for reliability optimization of series-parallel systems with non-homogeneous components

    International Nuclear Information System (INIS)

    Feizabadi, Mohammad; Jahromi, Abdolhamid Eshraghniaye

    2017-01-01

    In discussions related to reliability optimization using redundancy allocation, one of the structures that has attracted the attention of many researchers is the series-parallel structure. In models previously presented for reliability optimization of series-parallel systems, there is a restricting assumption under which all components of a subsystem must be homogeneous. This constraint limits system designers in selecting components and prevents achieving higher levels of reliability. In this paper, a new model is proposed for reliability optimization of series-parallel systems, which makes possible the use of non-homogeneous components in each subsystem. As a result of this flexibility, the process of supplying system components will be easier. To solve the proposed model, since the redundancy allocation problem (RAP) belongs to the NP-hard class of optimization problems, a genetic algorithm (GA) is developed. The computational results of the designed GA are indicative of the high performance of the proposed model in increasing system reliability and decreasing costs. - Highlights: • In this paper, a new model is proposed for reliability optimization of series-parallel systems. • In the previous models, there is a restricting assumption under which all components of a subsystem must be homogeneous. • The presented model provides a possibility for the subsystems’ components to be non-homogeneous in the required conditions. • The computational results demonstrate the high performance of the proposed model in improving reliability and reducing costs.
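
    The reliability objective such a GA would maximize is simple to evaluate once non-homogeneous components are allowed: each parallel subsystem fails only if all of its (possibly different) components fail, and the subsystems are in series. A sketch of that objective function (component reliabilities below are illustrative):

```python
def series_parallel_reliability(subsystems):
    """Reliability of a series chain of parallel subsystems.
    Each subsystem is a list of component reliabilities, which may all
    differ -- i.e. non-homogeneous redundancy is allowed."""
    r_system = 1.0
    for components in subsystems:
        q_subsystem = 1.0
        for r in components:
            q_subsystem *= (1.0 - r)      # every redundant component must fail
        r_system *= (1.0 - q_subsystem)   # subsystems are in series
    return r_system
```

A subsystem mixing a 0.9 and a 0.8 component reaches 0.98, and in series with a single 0.95 component the system reliability is 0.931.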

  8. Using LISREL to Evaluate Measurement Models and Scale Reliability.

    Science.gov (United States)

    Fleishman, John; Benson, Jeri

    1987-01-01

    The LISREL program was used to examine measurement model assumptions and to assess the reliability of the Coopersmith Self-Esteem Inventory for Children, Form B. Data on 722 third- to sixth-graders from over 70 schools in a large urban school district were used. The LISREL program assessed (1) the nature of the basic measurement model for the scale, (2) scale invariance across…

  9. Travel Time Reliability for Urban Networks : Modelling and Empirics

    NARCIS (Netherlands)

    Zheng, F.; Liu, Xiaobo; van Zuylen, H.J.; Li, Jie; Lu, Chao

    2017-01-01

    The importance of travel time reliability in traffic management, control, and network design has received a lot of attention in the past decade. In this paper, a network travel time distribution model based on the Johnson curve system is proposed. The model is applied to field travel time data

  10. An analytical model for computation of reliability of waste management facilities with intermediate storages

    International Nuclear Information System (INIS)

    Kallweit, A.; Schumacher, F.

    1977-01-01

    High reliability is required of waste management facilities within the fuel cycle of nuclear power stations; this can be achieved by providing intermediate storage facilities and reserve capacities. In this report a model based on the theory of Markov processes is described which allows computation of reliability characteristics of waste management facilities containing intermediate storage facilities. The application of the model is demonstrated by an example. (orig.) [de

  11. Models for reliability and management of NDT data

    International Nuclear Information System (INIS)

    Simola, K.

    1997-01-01

    In this paper the reliability of NDT measurements is approached from three directions: we model the flaw sizing performance, model the probability of flaw detection, and develop models to update the knowledge of the true flaw size based on sequential measurement results and the flaw sizing reliability model. In the discussed models the measured flaw characteristics (depth, length) are assumed to be simple functions of the true characteristics and random noise corresponding to measurement errors, and the models are based on logarithmic transforms. Models for Bayesian updating of the flaw size distributions were developed. Using these models, it is possible to take into account prior information on the flaw size and combine it with the measured results. A Bayesian approach could contribute, e.g., to the definition of an appropriate combination of practical assessments and technical justifications in NDT system qualifications, as expressed by the European regulatory bodies.
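
    With the logarithmic measurement model described (measured log size = true log size + Gaussian noise), a normal prior on the true log size updates conjugately after each measurement. A sketch of one such Bayesian update — the prior and noise variances below are illustrative, not values from the paper:

```python
def update_flaw_size(prior_mu, prior_var, measured_log, noise_var):
    """Posterior of the true log flaw size after one measurement,
    assuming log(measured) = log(true) + N(0, noise_var) and a
    N(prior_mu, prior_var) prior on log(true)."""
    weight = prior_var / (prior_var + noise_var)   # trust in the measurement
    post_mu = prior_mu + weight * (measured_log - prior_mu)
    post_var = prior_var * noise_var / (prior_var + noise_var)
    return post_mu, post_var
```

Applied sequentially, the posterior from one inspection becomes the prior for the next, so repeated NDT measurements progressively narrow the flaw-size uncertainty.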

  12. Reliability Analysis of Wireless Sensor Networks Using Markovian Model

    Directory of Open Access Journals (Sweden)

    Jin Zhu

    2012-01-01

    Full Text Available This paper investigates reliability analysis of wireless sensor networks whose topology is switching among possible connections governed by a Markovian chain. We give the quantitative relations between network topology, data acquisition rate, nodes' calculation ability, and network reliability. By applying the Lyapunov method, sufficient conditions for network reliability are proposed for such topology-switching networks with constant or varying data acquisition rates. With the conditions satisfied, the quantity of data transported over a wireless network node will not exceed the node capacity, so that reliability is ensured. Our theoretical work helps to provide a deeper understanding of real-world wireless sensor networks, which may find its application in the fields of network design and topology control.

  13. Fuse Modeling for Reliability Study of Power Electronic Circuits

    DEFF Research Database (Denmark)

    Bahman, Amir Sajjad; Iannuzzo, Francesco; Blaabjerg, Frede

    2017-01-01

    This paper describes a comprehensive modeling approach to the reliability of fuses used in power electronic circuits. When fuses are subjected to current pulses, cyclic temperature stress is introduced to the fuse element and will wear out the component. Furthermore, the fuse may be used in a large......, and rated voltage/current are prone to shift in time, causing early breaking during the normal operation of the circuit. Therefore, in such cases, the reliable protection required for the other circuit components will not be achieved. The thermo-mechanical models, fatigue analysis and thermo...

  14. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    Directory of Open Access Journals (Sweden)

    Hai An

    2016-08-01

    Full Text Available Aiming to resolve the problems posed by the variety of uncertainty variables that coexist in engineering structural reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article, together with a convergent method for solving it. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new hybrid reliability index definition is presented based on the random–fuzzy–interval model. Furthermore, the calculation flowchart of the hybrid reliability index is presented, and the index is solved using a modified limit-step-length iterative algorithm, which ensures convergence. The validity of the convergent algorithm for the hybrid reliability model is verified through calculation examples from the literature. In the end, a numerical example demonstrates that the hybrid reliability index is applicable for the wear reliability assessment of mechanisms in which truncated random variables, fuzzy random variables, and interval variables coexist. The demonstration also shows the good convergence of the iterative algorithm proposed in this article.

  15. Providing reliable equipment to IAEA through a low risk transition plan

    International Nuclear Information System (INIS)

    Green, L.; Weinstock, E.V.; Karlin, E.W.

    1988-01-01

    The development and production of safeguards equipment is a complex process containing many potential pitfalls between the conceptual design and its implementation in the field. The conditions of equipment use are especially demanding. At the same time, the consequences of failure may be serious. Repeated failure may result in the loss of credibility of safeguards. Expensive backup measures, such as re-verification of inventories, may be required. Inspectors may come to distrust the equipment. Finally, the expense of maintaining the equipment may be excessive. It is therefore essential that the process for bringing equipment from the conceptual stage to actual routine use minimizes the risk of producing equipment that is unsuitable for the job. Fortunately, approaches for accomplishing this have already been developed in both the industrial and commercial sectors. One such approach, the Low Risk Transition Plan (LRTP), is described to show how it can be applied to the production of reliable safeguards equipment

  16. A model for assessing human cognitive reliability in PRA studies

    International Nuclear Information System (INIS)

    Hannaman, G.W.; Spurgin, A.J.; Lukic, Y.

    1985-01-01

    This paper summarizes the status of a research project sponsored by EPRI as part of the Probabilistic Risk Assessment (PRA) technology improvement program and conducted by NUS Corporation to develop a model of Human Cognitive Reliability (HCR). The model was synthesized from features identified in a review of existing models. The model development was based on the hypothesis that the key factors affecting crew response times are separable. The inputs to the model consist of key parameters the values of which can be determined by PRA analysts for each accident situation being assessed. The output is a set of curves which represent the probability of control room crew non-response as a function of time for different conditions affecting their performance. The non-response probability is then a contributor to the overall non-success of operating crews to achieve a functional objective identified in the PRA study. Simulator data and some small scale tests were utilized to illustrate the calibration of interim HCR model coefficients for different types of cognitive processing since the data were sparse. The model can potentially help PRA analysts make human reliability assessments more explicit. The model incorporates concepts from psychological models of human cognitive behavior, information from current collections of human reliability data sources and crew response time data from simulator training exercises

  17. THE SIMULATION DIAGNOSTIC METHODS AND REGENERATION WAYS OF REINFORCED - CONCRETE CONSTRUCTIONS OF BRIDGES IN PROVIDING THEIR OPERATING RELIABILITY AND LONGEVITY

    Directory of Open Access Journals (Sweden)

    B. V. Savchinskiy

    2010-03-01

    Full Text Available Based on an analysis of existing diagnostic methods and regeneration techniques for reinforced-concrete bridge structures, recommendations are offered for introducing modern renewal technologies for reinforced-concrete bridge structures that provide their operating reliability and longevity.

  19. Why We Need Reliable, Valid, and Appropriate Learning Disability Assessments: The Perspective of a Postsecondary Disability Service Provider

    Science.gov (United States)

    Wolforth, Joan

    2012-01-01

    This paper discusses issues regarding the validity and reliability of psychoeducational assessments provided to Disability Services Offices at Canadian Universities. Several vignettes illustrate some current issues and the potential consequences when university students are given less than thorough disability evaluations and ascribed diagnoses.…

  20. The reliability of the Adelaide in-shoe foot model.

    Science.gov (United States)

    Bishop, Chris; Hillier, Susan; Thewlis, Dominic

    2017-07-01

    Understanding the biomechanics of the foot is essential for many areas of research and clinical practice such as orthotic interventions and footwear development. Despite the widespread attention paid to the biomechanics of the foot during gait, what largely remains unknown is how the foot moves inside the shoe. This study investigated the reliability of the Adelaide In-Shoe Foot Model, which was designed to quantify in-shoe foot kinematics and kinetics during walking. Intra-rater reliability was assessed in 30 participants over five walking trials whilst wearing shoes during two data collection sessions, separated by one week. Sufficient reliability for use was interpreted as a coefficient of multiple correlation and intra-class correlation coefficient of >0.61. Inter-rater reliability was investigated separately in a second sample of 10 adults by two researchers with experience in applying markers for the purpose of motion analysis. The results indicated good consistency in waveform estimation for most kinematic and kinetic data, as well as good inter- and intra-rater reliability. The exceptions are the peak medial ground reaction force, the minimum abduction angle and the peak abduction/adduction external hindfoot joint moments, which showed less than acceptable repeatability. Based on our results, the Adelaide In-Shoe Foot Model can be used with confidence for 24 commonly measured biomechanical variables during shod walking. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Modeling of humidity-related reliability in enclosures with electronics

    DEFF Research Database (Denmark)

    Hygum, Morten Arnfeldt; Popok, Vladimir

    2015-01-01

    Reliability of electronics that operate outdoors is strongly affected by environmental factors such as temperature and humidity. Fluctuations of these parameters can lead to water condensation inside enclosures. Therefore, modelling of humidity distribution in a container with air and freely exposed...

  2. Models of Information Security Highly Reliable Computing Systems

    Directory of Open Access Journals (Sweden)

    Vsevolod Ozirisovich Chukanov

    2016-03-01

    Full Text Available Methods of combined redundancy are considered. Reliability models of systems that account for the restoration and preventive-maintenance parameters of system blocks are described. Expressions for the average number of preventive-maintenance actions and for the availability factor of system blocks are given.

  3. Design Protocols and Analytical Strategies that Incorporate Structural Reliability Models

    Science.gov (United States)

    Duffy, Stephen F.

    1997-01-01

    Ceramic matrix composites (CMC) and intermetallic materials (e.g., single crystal nickel aluminide) are high performance materials that exhibit attractive mechanical, thermal and chemical properties. These materials are critically important in advancing certain performance aspects of gas turbine engines. From an aerospace engineer's perspective the new generation of ceramic composites and intermetallics offers a significant potential for raising the thrust/weight ratio and reducing NO(x) emissions of gas turbine engines. These aspects have increased interest in utilizing these materials in the hot sections of turbine engines. However, as these materials evolve and their performance characteristics improve a persistent need exists for state-of-the-art analytical methods that predict the response of components fabricated from CMC and intermetallic material systems. This need provided the motivation for the technology developed under this research effort. Continuous ceramic fiber composites exhibit an increase in work of fracture, which allows for "graceful" rather than catastrophic failure. When loaded in the fiber direction, these composites retain substantial strength capacity beyond the initiation of transverse matrix cracking despite the fact that neither of its constituents would exhibit such behavior if tested alone. As additional load is applied beyond first matrix cracking, the matrix tends to break in a series of cracks bridged by the ceramic fibers. Any additional load is born increasingly by the fibers until the ultimate strength of the composite is reached. Thus modeling efforts supported under this research effort have focused on predicting this sort of behavior. For single crystal intermetallics the issues that motivated the technology development involved questions relating to material behavior and component design. Thus the research effort supported by this grant had to determine the statistical nature and source of fracture in a high strength, Ni

  4. Stochastic Differential Equation-Based Flexible Software Reliability Growth Model

    Directory of Open Access Journals (Sweden)

    P. K. Kapur

    2009-01-01

    Full Text Available Several software reliability growth models (SRGMs) have been developed by software developers for tracking and measuring the growth of reliability. As the size of a software system is large and the number of faults detected during the testing phase becomes large, the change in the number of faults that are detected and removed through each debugging becomes sufficiently small compared with the initial fault content at the beginning of the testing phase. In such a situation, we can model the software fault detection process as a stochastic process with a continuous state space. In this paper, we propose a new software reliability growth model based on an Itô-type stochastic differential equation. We consider an SDE-based generalized Erlang model with a logistic error detection function. The model is estimated and validated on real-life data sets cited in the literature to show its flexibility. The proposed model, integrated with the concept of the stochastic differential equation, performs comparatively better than the existing NHPP-based models.
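
    An Itô-type SDE fault-detection process of the general family described can be simulated with the Euler-Maruyama scheme. The drift/diffusion below, dN = b(a − N)dt + σ(a − N)dW, is a simplified stand-in for illustration — it is not the paper's generalized Erlang model, and the parameters are invented.

```python
import math
import random

def simulate_fault_detection(a, b, sigma, t_end, dt=0.01, seed=1):
    """One Euler-Maruyama sample path of a simple SDE-type SRGM:
    dN = b*(a - N)*dt + sigma*(a - N)*dW,
    where a is the total fault content and b the detection rate."""
    rng = random.Random(seed)
    n, t = 0.0, 0.0
    path = [(t, n)]
    while t < t_end:
        dw = rng.gauss(0.0, math.sqrt(dt))          # Brownian increment
        n += b * (a - n) * dt + sigma * (a - n) * dw
        t += dt
        path.append((t, n))
    return path
```

Setting σ = 0 recovers the deterministic exponential growth curve N(t) = a(1 − e^(−bt)) of the classical NHPP mean value function, which makes the scheme easy to sanity-check.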

  5. Modular reliability modeling of the TJNAF personnel safety system

    International Nuclear Information System (INIS)

    Cinnamon, J.; Mahoney, K.

    1997-01-01

    A reliability model for the Thomas Jefferson National Accelerator Facility (formerly CEBAF) personnel safety system has been developed. The model, which was implemented using an Excel spreadsheet, allows simulation of all or parts of the system. Modularity of the model's implementation allows rapid "what if" case studies to simulate changes in safety system parameters such as redundancy, diversity, and failure rates. Particular emphasis is given to the prediction of failure modes which would result in the failure of both of the redundant safety interlock systems. In addition to the calculation of the predicted reliability of the safety system, the model also calculates availability of the same system. Such calculations allow the user to make tradeoff studies between reliability and availability, and to target resources to improving those parts of the system which would most benefit from redesign or upgrade. The model includes calculated data, manufacturer's data, and Jefferson Lab field data. This paper describes the model, the methods used, and a comparison of calculated to actual data for the Jefferson Lab personnel safety system. Examples are given to illustrate the model's utility and ease of use.
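The reliability-versus-availability tradeoff mentioned above rests on standard relations for series/parallel structures and steady-state availability. A minimal sketch with illustrative numbers (generic textbook formulas, not the Jefferson Lab spreadsheet model):

```python
def series_reliability(rs):
    """Series structure: the system works only if every component works."""
    p = 1.0
    for r in rs:
        p *= r
    return p

def parallel_reliability(rs):
    """Redundant (parallel) structure: the system fails only if all fail."""
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability from mean time between failures (MTBF)
    and mean time to repair (MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Two redundant interlock chains, each with reliability 0.99 over a mission:
print(parallel_reliability([0.99, 0.99]))   # 0.9999
# A channel with 10,000 h MTBF and 8 h MTTR:
print(availability(10000.0, 8.0))
```

Redundancy raises mission reliability sharply, while availability is driven mostly by repair time, which is why the two can be traded against each other.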

  6. Modeling and Analysis of Component Faults and Reliability

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter

    2016-01-01

    This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault-affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.

  7. Using Model Replication to Improve the Reliability of Agent-Based Models

    Science.gov (United States)

    Zhong, Wei; Kim, Yushim

    The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the artificial society and simulation community due to the challenges of model verification and validation. Illustrating the replication in NetLogo, by a different author, of an ABM representing fraudulent behavior in a public service delivery system originally developed in the Java-based MASON toolkit, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, it helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.

  8. Maintenance overtime policies in reliability theory models with random working cycles

    CERN Document Server

    Nakagawa, Toshio

    2015-01-01

    This book introduces a new concept of replacement in maintenance and reliability theory. Replacement overtime, where replacement occurs at the first completion of a working cycle over a planned time, is a new research topic in maintenance theory and also serves to provide a fresh optimization technique in reliability engineering. In comparing replacement overtime with standard and random replacement techniques theoretically and numerically, 'Maintenance Overtime Policies in Reliability Theory' highlights the key benefits to be gained by adopting this new approach and shows how they can be applied to inspection policies, parallel systems and cumulative damage models. Utilizing the latest research in replacement overtime by internationally recognized experts, readers are introduced to new topics and methods, and learn how to practically apply this knowledge to actual reliability models. This book will serve as an essential guide to a new subject of study for graduate students and researchers and also provides a...

  9. A general graphical user interface for automatic reliability modeling

    Science.gov (United States)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1991-01-01

    Reported here is a general Graphical User Interface (GUI) for automatic reliability modeling of Processor Memory Switch (PMS) structures using a Markov model. This GUI is based on a hierarchy of windows. One window has graphical editing capabilities for specifying the system's communication structure, hierarchy, reconfiguration capabilities, and requirements. Other windows have field texts, popup menus, and buttons for specifying parameters and selecting actions. An example application of the GUI is given.

  10. Photovoltaic Reliability Performance Model v 2.0

    Energy Technology Data Exchange (ETDEWEB)

    2016-12-16

    PV-RPM is intended to address more “real world” situations by coupling a photovoltaic system performance model with a reliability model so that inverters, modules, combiner boxes, etc. can experience failures and be repaired (or left unrepaired). The model can also include other effects, such as module output degradation over time or disruptions such as electrical grid outages. In addition, PV-RPM is a dynamic probabilistic model that can be used to run many realizations (i.e., possible future outcomes) of a system’s performance using probability distributions to represent uncertain parameter inputs.
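The dynamic probabilistic approach described above can be sketched in miniature: the toy Monte Carlo below draws exponential times-to-failure for a single component and averages the operating fraction across realizations. All rates and the single-component scope are invented for illustration; PV-RPM itself couples many failing/repairable components to a full performance model.

```python
import random

def simulate_availability(fail_rate_per_hr=1e-4, repair_hrs=72.0,
                          horizon_hrs=8760.0, n_realizations=200, seed=7):
    """Average fraction of a one-year horizon spent operating, over many
    realizations of an exponential failure / fixed-delay repair process."""
    rng = random.Random(seed)
    total_up = 0.0
    for _ in range(n_realizations):
        t, up = 0.0, 0.0
        while t < horizon_hrs:
            ttf = rng.expovariate(fail_rate_per_hr)   # time to next failure
            up += min(ttf, horizon_hrs - t)           # operating time accrued
            t += ttf + repair_hrs                     # failure + repair outage
        total_up += up / horizon_hrs
    return total_up / n_realizations
```

The Monte Carlo estimate converges toward the analytic steady-state availability MTBF/(MTBF+MTTR), here about 0.993.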

  11. NHPP-Based Software Reliability Models Using Equilibrium Distribution

    Science.gov (United States)

    Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi

    Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
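For concreteness, the familiar exponential (Goel-Okumoto) special case illustrates the quantities an NHPP-based SRM estimates; the equilibrium-distribution construction proposed in the record is not reproduced here, and the parameter values are illustrative:

```python
import math

def go_mean_value(t, a, b):
    """Goel-Okumoto NHPP mean value function m(t) = a*(1 - exp(-b*t)):
    the expected number of faults detected by time t, out of a total a."""
    return a * (1.0 - math.exp(-b * t))

def conditional_reliability(x, t, a, b):
    """Software reliability for an NHPP: probability of no failure in
    (t, t+x], R(x|t) = exp(-(m(t+x) - m(t)))."""
    return math.exp(-(go_mean_value(t + x, a, b) - go_mean_value(t, a, b)))
```

As testing time t grows, the expected remaining faults a - m(t) shrink and the conditional reliability over a fixed window rises, which is the basis for release-timing decisions.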

  12. Predicting Flow Breakdown Probability and Duration in Stochastic Network Models: Impact on Travel Time Reliability

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Jing [ORNL; Mahmassani, Hani S. [Northwestern University, Evanston

    2011-01-01

    This paper proposes a methodology to produce random flow breakdown endogenously in a mesoscopic operational model by capturing breakdown probability and duration. It builds on previous research findings that the probability of flow breakdown can be represented as a function of flow rate and that the duration can be characterized by a hazard model. By generating random flow breakdown at various levels and capturing the traffic characteristics at the onset of the breakdown, the stochastic network simulation model provides a tool for evaluating travel time variability. The proposed model can be used for (1) providing reliability-related traveler information; (2) designing ITS (intelligent transportation systems) strategies to improve reliability; and (3) evaluating reliability-related performance measures of the system.
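A hedged sketch of the two ingredients, with invented coefficients: a logistic link mapping flow rate to breakdown probability, and a Weibull hazard model for breakdown duration (the paper's actual functional forms and calibrated parameters are not reproduced):

```python
import math
import random

def breakdown_probability(flow_veh_hr, beta0=-12.0, beta1=0.006):
    """Illustrative logistic link: probability that flow breaks down in a
    given interval as a function of flow rate (coefficients are made up)."""
    z = beta0 + beta1 * flow_veh_hr
    return 1.0 / (1.0 + math.exp(-z))

def sample_breakdown_duration(rng, scale_min=12.0, shape=1.5):
    """Draw a breakdown duration (minutes) from a Weibull hazard model
    with illustrative scale and shape parameters."""
    return rng.weibullvariate(scale_min, shape)
```

In a simulation loop, each time step would draw a Bernoulli trial from `breakdown_probability` at the prevailing flow and, on breakdown, hold the reduced-capacity state for a sampled duration.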

  13. Structural reliability in context of statistical uncertainties and modelling discrepancies

    International Nuclear Information System (INIS)

    Pendola, Maurice

    2000-01-01

    Structural reliability methods have been largely improved during the last years and have shown their ability to deal with uncertainties during the design stage or to optimize the functioning and maintenance of industrial installations. They are based on a mechanical modeling of the structural behavior according to the considered failure modes and on a probabilistic representation of the input parameters of this modeling. In practice, only limited statistical information is available to build the probabilistic representation, and different sophistication levels of the mechanical modeling may be introduced. Thus, besides the physical randomness, other uncertainties occur in such analyses. The aim of this work is threefold: 1. first, to propose a methodology able to characterize the statistical uncertainties due to the limited number of data in order to take them into account in the reliability analyses; the obtained reliability index measures the confidence in the structure considering the statistical information available. 2. Then, to show a methodology leading to reliability results evaluated from a particular mechanical modeling but using a less sophisticated one; the objective is then to decrease the computational effort required by the reference modeling. 3. Finally, to propose partial safety factors that evolve as a function of the number of statistical data available and as a function of the sophistication level of the mechanical modeling that is used. The concepts are illustrated in the case of a welded pipe and in the case of a natural draught cooling tower. The results show the interest of the methodologies in an industrial context.

  14. Providing Reliability Services through Demand Response: A Prelimnary Evaluation of the Demand Response Capabilities of Alcoa Inc.

    Energy Technology Data Exchange (ETDEWEB)

    Starke, Michael R [ORNL; Kirby, Brendan J [ORNL; Kueck, John D [ORNL; Todd, Duane [Alcoa; Caulfield, Michael [Alcoa; Helms, Brian [Alcoa

    2009-02-01

    Demand response is the largest underutilized reliability resource in North America. Historic demand response programs have focused on reducing overall electricity consumption (increasing efficiency) and shaving peaks, but have not typically been used for immediate reliability response. Many of these programs have been successful, but demand response remains a limited resource. The Federal Energy Regulatory Commission (FERC) report, 'Assessment of Demand Response and Advanced Metering' (FERC 2006), found that only five percent of customers are on some form of demand response program. Collectively they represent an estimated 37,000 MW of response potential. These programs reduce overall energy consumption, lower greenhouse gas emissions by allowing fossil fuel generators to operate at increased efficiency, and reduce stress on the power system during periods of peak loading. As the country continues to restructure energy markets with sophisticated marginal cost models that attempt to minimize total energy costs, the ability of demand response to create meaningful shifts in the supply and demand equations is critical to creating a sustainable and balanced economic response to energy issues. Restructured energy market prices are set by the cost of the next incremental unit of energy, so that as additional generation is brought into the market, the cost for the entire market increases. The benefit of demand response is that it reduces overall demand and shifts the entire market to a lower pricing level. This can be very effective in mitigating price volatility or scarcity pricing as the power system responds to changing demand schedules, loss of large generators, or loss of transmission. As a global producer of alumina, primary aluminum, and fabricated aluminum products, Alcoa Inc. has the capability to provide demand response services through its manufacturing facilities and uniquely through its aluminum smelting facilities. For a typical aluminum smelter

  15. Reliability assessment of competing risks with generalized mixed shock models

    International Nuclear Information System (INIS)

    Rafiee, Koosha; Feng, Qianmei; Coit, David W.

    2017-01-01

    This paper investigates reliability modeling for systems subject to dependent competing risks considering the impact from a new generalized mixed shock model. Two dependent competing risks are soft failure due to a degradation process, and hard failure due to random shocks. The shock process contains fatal shocks that can cause hard failure instantaneously, and nonfatal shocks that impact the system in three different ways: 1) damaging the unit by immediately increasing the degradation level, 2) speeding up the deterioration by accelerating the degradation rate, and 3) weakening the unit strength by reducing the hard failure threshold. While the first impact from nonfatal shocks comes from each individual shock, the other two impacts are realized when the condition for a new generalized mixed shock model is satisfied. Unlike most existing mixed shock models that consider a combination of two shock patterns, our new generalized mixed shock model includes three classic shock patterns. According to the proposed generalized mixed shock model, the degradation rate and the hard failure threshold can simultaneously shift multiple times, whenever the condition for one of these three shock patterns is satisfied. An example using micro-electro-mechanical systems devices illustrates the effectiveness of the proposed approach with sensitivity analysis. - Highlights: • A rich reliability model for systems subject to dependent failures is proposed. • The degradation rate and the hard failure threshold can shift simultaneously. • The shift is triggered by a new generalized mixed shock model. • The shift can occur multiple times under the generalized mixed shock model.
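The competing-risks mechanism described above can be sketched with a toy simulation: gradual degradation toward a soft-failure threshold, plus randomly arriving shocks that are either fatal (instant hard failure) or nonfatal (an immediate degradation jump). The rate acceleration and threshold reduction of the generalized mixed shock model are deliberately omitted, and every parameter value below is invented:

```python
import random

def simulate_failure_time(rng, rate=0.5, shock_rate=0.1, p_fatal=0.05,
                          shock_damage=2.0, soft_threshold=100.0, dt=0.1):
    """Toy dependent-competing-risks simulation.
    Returns (failure_time, mode) with mode in {'soft', 'hard'}."""
    level, t = 0.0, 0.0
    while True:
        t += dt
        level += rate * dt                     # gradual degradation
        if rng.random() < shock_rate * dt:     # a shock arrives in this step
            if rng.random() < p_fatal:
                return t, 'hard'               # fatal shock: hard failure
            level += shock_damage              # nonfatal shock: damage jump
        if level >= soft_threshold:
            return t, 'soft'                   # degradation passes threshold
```

The two risks are dependent because the same shock process both triggers hard failures and accelerates the path to soft failure; with `shock_rate=0` the model reduces to deterministic degradation failing at `soft_threshold / rate`.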

  16. Specification and Design of a Fault Recovery Model for the Reliable Multicast Protocol

    Science.gov (United States)

    Montgomery, Todd; Callahan, John R.; Whetten, Brian

    1996-01-01

    The Reliable Multicast Protocol (RMP) provides a unique, group-based model for distributed programs that need to handle reconfiguration events at the application layer. This model, called membership views, provides an abstraction in which events such as site failures, network partitions, and normal join-leave events are viewed as group reformations. RMP provides access to this model through an application programming interface (API) that notifies an application when a group is reformed as the result of some event. RMP provides applications with reliable delivery of messages to other group members in a distributed environment, using an underlying IP Multicast medium, even in the case of reformations. A distributed application can use various Quality of Service (QoS) levels provided by RMP to tolerate group reformations. This paper explores the implementation details of the mechanisms in RMP that provide distributed applications with membership view information and fault recovery capabilities.

  17. Testing the reliability of ice-cream cone model

    Science.gov (United States)

    Pan, Zonghao; Shen, Chenglong; Wang, Chuanbing; Liu, Kai; Xue, Xianghui; Wang, Yuming; Wang, Shui

    2015-04-01

    The properties of Coronal Mass Ejections (CMEs) are important not only to the physics itself but also to space-weather prediction. Several models (such as the cone model, the GCS model, and so on) have been proposed to remove the projection effects from the properties observed by spacecraft. According to SOHO/LASCO observations, we obtain the 'real' 3D parameters of all the FFHCMEs (front-side full halo Coronal Mass Ejections) within the 24th solar cycle till July 2012 by the ice-cream cone model. Considering that methods which derive 3D parameters from multi-satellite, multi-angle CME observations have higher accuracy, we use the GCS model to obtain the real propagation parameters of these CMEs in 3D space and compare the results with those from the ice-cream cone model. We then discuss the reliability of the ice-cream cone model.

  18. Creation and Reliability Analysis of Vehicle Dynamic Weighing Model

    Directory of Open Access Journals (Sweden)

    Zhi-Ling XU

    2014-08-01

    Full Text Available In this paper, the portable axle-load meter of a dynamic weighing system is modeled using ADAMS. By controlling a single variable in the simulated weighing process, simulation weighing data are obtained for different speeds and weights; simultaneously, a portable weighing system with the same parameters is used for actual measurements, and the simulation results are compared under the same conditions. At 30 km/h or less, the simulated and measured values differ by no more than 5 %. This not only verifies the reliability of the dynamic weighing model but also makes it possible to improve the efficiency of algorithm studies by using dynamic weighing model simulation.

  19. Human Performance Modeling for Dynamic Human Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald Laurids [Idaho National Laboratory; Joe, Jeffrey Clark [Idaho National Laboratory; Mandelli, Diego [Idaho National Laboratory

    2015-08-01

    Part of the U.S. Department of Energy’s (DOE’s) Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk framework. In this paper, we review simulation-based and non-simulation-based human reliability analysis (HRA) methods. This paper summarizes the foundational information needed to develop a feasible approach to modeling human interactions in RISMC simulations.

  20. Imperfect Preventive Maintenance Model Study Based On Reliability Limitation

    Directory of Open Access Journals (Sweden)

    Zhou Qian

    2016-01-01

    Full Text Available Effective maintenance is crucial for equipment performance in industry, and imperfect maintenance conforms to the actual failure process. Taking the dynamic preventive maintenance cost into account, a preventive maintenance model was constructed using an age reduction factor. The model takes the minimization of the repair cost rate as its final target and uses the smallest allowed reliability as the replacement condition. Equipment life was assumed to follow a two-parameter Weibull distribution, since it is one of the most commonly adopted distributions for fitting cumulative failure problems. Finally, an example verifies the rationality and benefits of the model.
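The core calculation of such a reliability-limited policy can be sketched as follows, under the stated assumptions (two-parameter Weibull life, an age reduction factor applied at each PM). The parameter values are illustrative and the paper's cost-rate optimization is not reproduced:

```python
import math

def weibull_reliability(t, beta, eta):
    """Two-parameter Weibull reliability R(t) = exp(-(t/eta)**beta)."""
    return math.exp(-((t / eta) ** beta))

def pm_interval_for_reliability(r_min, beta, eta, age_reduction=0.3,
                                effective_age=0.0):
    """Time from the current effective age until reliability falls to the
    smallest allowed value r_min; after the (imperfect) PM, the effective
    age is multiplied by the age reduction factor rather than reset to 0."""
    target_age = eta * (-math.log(r_min)) ** (1.0 / beta)
    interval = target_age - effective_age
    new_age = age_reduction * target_age   # residual age left by imperfect PM
    return interval, new_age
```

Because each imperfect PM leaves residual age behind, successive intervals shrink, which is what eventually triggers replacement under the reliability limit.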

  1. DJFS: Providing Highly Reliable and High‐Performance File System with Small‐Sized

    Directory of Open Access Journals (Sweden)

    Junghoon Kim

    2017-11-01

    Full Text Available File systems and applications try to implement their own update protocols to guarantee data consistency, which is one of the most crucial aspects of computing systems. However, we found that storage devices are substantially under-utilized when preserving data consistency, because they generate massive storage write traffic with many disk cache flush operations and force-unit-access (FUA) commands. In this paper, we present DJFS (Delta-Journaling File System), which provides both a high level of performance and data consistency for different applications. We made three technical contributions to achieve our goal. First, to remove all storage accesses with disk cache flush operations and FUA commands, DJFS uses small-sized NVRAM for a file system journal. Second, to reduce the access latency and space requirements of NVRAM, DJFS journals the compressed differences of the modified blocks. Finally, to relieve explicit checkpointing overhead, DJFS aggressively reflects the checkpoint transactions to the file system area in units of a specified region. Our evaluation on the TPC-C SQLite benchmark shows that, using our novel optimization schemes, DJFS outperforms Ext4 by up to 64.2 times with only 128 MB of NVRAM.

  2. Modelling and estimating degradation processes with application in structural reliability

    International Nuclear Information System (INIS)

    Chiquet, J.

    2007-06-01

    The characteristic level of degradation of a given structure is modeled through a stochastic process called the degradation process. The random evolution of the degradation process is governed by a differential system with a Markovian environment. We set up the associated reliability framework by considering that the structure fails once the degradation process reaches a critical threshold. A closed-form solution of the reliability function is obtained thanks to Markov renewal theory. We then build an estimation methodology for the parameters of the stochastic processes involved. The estimation methods and theoretical results, as well as the associated numerical algorithms, are validated on simulated data sets. Our method is applied to the modelling of a real degradation mechanism, known as crack growth, for which an experimental data set is considered. (authors)

  3. Structural reliability analysis under evidence theory using the active learning kriging model

    Science.gov (United States)

    Yang, Xufeng; Liu, Yongshou; Ma, Panke

    2017-11-01

    Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on the active learning kriging model which only correctly predicts the sign of the performance function is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.

  4. Fuzzy Goal Programming Approach in Selective Maintenance Reliability Model

    Directory of Open Access Journals (Sweden)

    Neha Gupta

    2013-12-01

    Full Text Available In the present paper, we have considered the allocation problem of repairable components for a parallel-series system as a multi-objective optimization problem and have discussed two different models. In the first model the reliabilities of the subsystems are considered as different objectives. In the second model the cost and time spent on repairing the components are considered as two different objectives. These two models are formulated as multi-objective Nonlinear Programming Problems (MONLPP), and a fuzzy goal programming method is used to work out the compromise allocation in the multi-objective selective maintenance reliability model. We define the membership functions of each objective function, transform them into equivalent linear membership functions by first-order Taylor series, and finally, by forming a fuzzy goal programming model, obtain a desired compromise allocation of maintenance components. A numerical example is also worked out to illustrate the computational details of the method.

  5. Business Cases for Microgrids: Modeling Interactions of Technology Choice, Reliability, Cost, and Benefit

    Science.gov (United States)

    Hanna, Ryan

    Distributed energy resources (DERs), and increasingly microgrids, are becoming an integral part of modern distribution systems. Interest in microgrids--which are insular and autonomous power networks embedded within the bulk grid--stems largely from the vast array of flexibilities and benefits they can offer stakeholders. Managed well, they can improve grid reliability and resiliency, increase end-use energy efficiency by coupling electric and thermal loads, reduce transmission losses by generating power locally, and may reduce system-wide emissions, among many others. Whether these public benefits are realized, however, depends on whether private firms see a "business case", or private value, in investing. To this end, firms need models that evaluate costs, benefits, risks, and assumptions that underlie decisions to invest. The objectives of this dissertation are to assess the business case for microgrids that provide what industry analysts forecast as two primary drivers of market growth--that of providing energy services (similar to an electric utility) as well as reliability service to customers within. Prototypical first adopters are modeled--using an existing model to analyze energy services and a new model that couples that analysis with one of reliability--to explore interactions between technology choice, reliability, costs, and benefits. The new model has a bi-level hierarchy; it uses heuristic optimization to select and size DERs and analytical optimization to schedule them. It further embeds Monte Carlo simulation to evaluate reliability as well as regression models for customer damage functions to monetize reliability. It provides least-cost microgrid configurations for utility customers who seek to reduce interruption and operating costs. Lastly, the model is used to explore the impact of such adoption on system-wide greenhouse gas emissions in California. 
Results indicate that there are, at present, co-benefits for emissions reductions when customers

  6. Software reliability growth models with normal failure time distributions

    International Nuclear Information System (INIS)

    Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji

    2013-01-01

    This paper proposes software reliability growth models (SRGMs) in which the software failure time follows a normal distribution. The proposed model is mathematically tractable and fits software failure data well. In particular, we consider the parameter estimation algorithm for the SRGM with normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is quite simple to implement as a software application. Numerical experiments investigate the fitting ability of the SRGMs with normal distribution through 16 types of failure time data collected in real software projects.
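When the failure time follows a normal distribution, the SRGM's mean value function takes the form m(t) = a·Φ((t−μ)/σ), where a is the expected total number of faults and Φ is the standard normal CDF. A minimal sketch of evaluating that curve (parameter values are illustrative; the paper's EM estimation procedure is not reproduced):

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def srgm_mean_value(t, a, mu, sigma):
    """Mean value function of an SRGM with normally distributed failure
    times: expected cumulative number of faults detected by time t."""
    return a * normal_cdf((t - mu) / sigma)
```

The resulting S-shaped curve passes through a/2 at t = μ and saturates at the total fault content a, which is the behavior fitted to the failure data sets.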

  7. Power Electronic Packaging Design, Assembly Process, Reliability and Modeling

    CERN Document Server

    Liu, Yong

    2012-01-01

    Power Electronic Packaging presents an in-depth overview of power electronic packaging design, assembly, reliability and modeling. Since there is a drastic difference between IC fabrication and power electronic packaging, the book systematically introduces typical power electronic packaging design, assembly, reliability and failure analysis and material selection so readers can clearly understand each task's unique characteristics. Power electronic packaging is one of the fastest growing segments in the power electronic industry, due to the rapid growth of power integrated circuit (IC) fabrication, especially for applications like portable, consumer, home, computing and automotive electronics. This book also covers how advances in both semiconductor content and power advanced package design have helped cause advances in power device capability in recent years. The author extrapolates the most recent trends in the book's areas of focus to highlight where further improvement in materials and techniques can d...

  8. SIERRA - A 3-D device simulator for reliability modeling

    Science.gov (United States)

    Chern, Jue-Hsien; Arledge, Lawrence A., Jr.; Yang, Ping; Maeda, John T.

    1989-05-01

    SIERRA is a three-dimensional general-purpose semiconductor-device simulation program which serves as a foundation for investigating integrated-circuit (IC) device and reliability issues. This program solves the Poisson and continuity equations in silicon under dc, transient, and small-signal conditions. Executing on a vector/parallel minisupercomputer, SIERRA utilizes a matrix solver which uses an incomplete LU (ILU) preconditioned conjugate gradient square (CGS, BCG) method. The ILU-CGS method provides a good compromise between memory size and convergence rate. The authors have observed a 5x to 7x speedup over standard direct methods in simulations of transient problems containing highly coupled Poisson and continuity equations such as those found in reliability-oriented simulations. The application of SIERRA to parasitic CMOS latchup and dynamic random-access memory single-event-upset studies is described.

  9. Reliability analysis and prediction of mixed mode load using Markov Chain Model

    International Nuclear Information System (INIS)

    Nikabdullah, N.; Singh, S. S. K.; Alebrahim, R.; Azizi, M. A.; K, Elwaleed A.; Noorani, M. S. M.

    2014-01-01

    The aim of this paper is to present the reliability analysis and prediction of mixed-mode loading by using a simple two-state Markov Chain Model for an automotive crankshaft. Reliability analysis and prediction for any automotive component or structure is important for analyzing and measuring failure, in order to increase the design life, eliminate or reduce the likelihood of failures, and reduce safety risk. The mechanical failures of the crankshaft are due to high bending and torsional stress concentrations arising from high-cycle rotating bending and torsional loads. The Markov Chain was used to model the two states based on the probability of failure due to bending and torsional stress. Most investigations reveal that bending stress is more severe than torsional stress; therefore, the probability criterion for the bending state is higher than for the torsion state. A statistical comparison between the developed Markov Chain Model and field data was done to observe the percentage of error. The reliability analysis and prediction derived from the Markov Chain Model are illustrated through the Weibull probability and cumulative distribution functions, the hazard rate and reliability curves, and the bathtub curve. It can be concluded that the Markov Chain Model is able to generate data close to the field data with a minimal percentage of error; for practical application, the proposed model provides good accuracy in determining the reliability of the crankshaft under mixed-mode loading.
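A two-state Markov chain of the kind described, with states for bending-dominant and torsion-dominant stress, is fully characterized by two transition probabilities. A minimal sketch (the transition probabilities below are illustrative, not the paper's fitted values):

```python
def steady_state_2x2(p_bt, p_tb):
    """Stationary distribution of a two-state Markov chain, where p_bt is
    P(bending -> torsion) per cycle and p_tb is P(torsion -> bending)."""
    pi_bending = p_tb / (p_bt + p_tb)
    return pi_bending, 1.0 - pi_bending

def step(dist, p_bt, p_tb):
    """One transition of the chain; dist = (P(bending), P(torsion))."""
    b, t = dist
    return (b * (1.0 - p_bt) + t * p_tb,
            b * p_bt + t * (1.0 - p_tb))
```

The stationary probabilities give the long-run fraction of load cycles spent in each stress state, which can then weight the bending and torsion failure probabilities in the reliability prediction.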

  10. Reliability modeling of digital RPS with consideration of undetected software faults

    Energy Technology Data Exchange (ETDEWEB)

    Khalaquzzaman, M.; Lee, Seung Jun; Jung, Won Dea [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Man Cheol [Chung Ang Univ., Seoul (Korea, Republic of)

    2013-10-15

    This paper provides an overview of different software reliability methodologies and proposes a technique for estimating the reliability of an RPS with consideration of undetected software faults. Software reliability analysis of safety-critical software has remained challenging despite the huge effort spent on developing a large number of software reliability models, and no consensus has yet been reached on an appropriate modeling methodology. However, it is realized that the combined application of a BBN-based SDLC fault prediction method and random black-box testing of software would provide a better ground for reliability estimation of safety-critical software. Digitalization of the reactor protection systems of nuclear power plants was initiated several decades ago, and full digitalization has now been adopted in the new generation of NPPs around the world, because digital I and C systems have many better technical features, such as easier configurability and maintainability, than analog I and C systems. Digital I and C systems are also drift-free, and incorporation of new features is much easier. Rules and regulations for the safe operation of NPPs are established and practiced by the operators as well as the regulators of NPPs to ensure safety. The failure mechanisms of hardware and analog systems are well understood, and the risk analysis methods for these components and systems are well established. However, digitalization of I and C systems in NPPs introduces difficulties and uncertainty into the reliability analysis methods for digital systems/components, because software failure mechanisms are still unclear.

  11. Reliable software systems via chains of object models with provably correct behavior

    International Nuclear Information System (INIS)

    Yakhnis, A.; Yakhnis, V.

    1996-01-01

    This work addresses the specification and design of reliable safety-critical systems, such as nuclear reactor control systems. Reliability concerns are addressed in complementary fashion by different fields. Reliability engineers build software reliability models, etc. Safety engineers focus on the prevention of potentially harmful effects of systems on the environment. Software/hardware correctness engineers focus on the production of reliable systems on the basis of mathematical proofs. The authors think that correctness may be a crucial guiding issue in the development of reliable safety-critical systems. However, purely formal approaches are not adequate for the task, because they neglect the connection with the informal customer requirements. They alleviate this as follows. First, on the basis of the requirements, they build a model of the system's interactions with the environment, where the system is viewed as a black box. They will provide foundations for automated tools which will (a) demonstrate to the customer that all of the scenarios of system behavior are present in the model, (b) uncover scenarios not present in the requirements, and (c) uncover inconsistent scenarios. The developers will work with the customer until the black box model no longer possesses scenarios of types (b) and (c) above. Second, the authors build a chain of several increasingly detailed models, where the first model is the black box model and the last model serves to automatically generate proven executable code. The behavior of each model is proved to conform to the behavior of the previous one. Each model is built as a cluster of interactive concurrent objects, allowing both top-down and bottom-up development.

  12. Assessment of leg muscles mechanical capacities: Which jump, loading, and variable type provide the most reliable outcomes?

    Science.gov (United States)

    García-Ramos, Amador; Feriche, Belén; Pérez-Castilla, Alejandro; Padial, Paulino; Jaric, Slobodan

    2017-07-01

    This study aimed to explore the strength of the force-velocity (F-V) relationship of lower limb muscles and the reliability of its parameters (maximum force [F0], slope [a], maximum velocity [V0], and maximum power [P0]). Twenty-three men were tested in two different jump types (squat and countermovement jump: SJ and CMJ), performed under two different loading conditions (free weight and Smith machine: Free and Smith) with 0, 17, 30, 45, 60, and 75 kg loads. The maximum and averaged values of F and V were obtained for the F-V relationship modelling. All F-V relationships were strong and linear, whether observed from data averaged across the participants (r ≥ 0.98) or from individual data (r = 0.94-0.98), while their parameters were generally highly reliable (F0 [CV: 4.85%, ICC: 0.87], V0 [CV: 6.10%, ICC: 0.82], a [CV: 10.5%, ICC: 0.81], and P0 [CV: 3.5%, ICC: 0.93]). Both the strength of the F-V relationships and the reliability of their parameters were significantly higher for (1) the CMJ over the SJ, (2) the Free over the Smith loading type, and (3) the maximum over the averaged F and V variables. In conclusion, although the F-V relationships obtained from all the jumps tested were linear and generally highly reliable, the least appropriate choice for testing the F-V relationship would be the averaged F and V data obtained from the SJ, performed with either free weights or a Smith machine. Insubstantial differences exist among the other combinations tested.
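    The F-V parameter extraction can be sketched numerically. The data below are invented (and exactly linear, unlike real jump measurements): an ordinary least-squares fit of F = F0 + aV yields F0 as the intercept, V0 = -F0/a, and, for a linear profile, P0 = F0·V0/4.

```python
# Minimal sketch of fitting a linear force-velocity relationship,
# F = F0 + a*V, using invented data (not the study's measurements).
velocities = [0.5, 1.0, 1.5, 2.0, 2.5]            # m/s
forces = [1750.0, 1500.0, 1250.0, 1000.0, 750.0]  # N, exactly linear here

n = len(velocities)
mean_v = sum(velocities) / n
mean_f = sum(forces) / n

# Ordinary least squares for slope a and intercept F0.
a = sum((v - mean_v) * (f - mean_f) for v, f in zip(velocities, forces)) \
    / sum((v - mean_v) ** 2 for v in velocities)
F0 = mean_f - a * mean_v          # maximum force (at V = 0)
V0 = -F0 / a                      # maximum velocity (at F = 0)
P0 = F0 * V0 / 4                  # maximum power of a linear F-V profile

print(F0, a, V0, P0)
```

    In practice the fit is repeated per participant, and the reliability statistics (CV, ICC) are then computed across repeated test sessions.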

  13. Assessing the Reliability of Curriculum-Based Measurement: An Application of Latent Growth Modeling

    Science.gov (United States)

    Yeo, Seungsoo; Kim, Dong-Il; Branum-Martin, Lee; Wayman, Miya Miura; Espin, Christine A.

    2012-01-01

    The purpose of this study was to demonstrate the use of Latent Growth Modeling (LGM) as a method for estimating the reliability of Curriculum-Based Measurement (CBM) progress-monitoring data. The LGM approach permits the error associated with each measure to differ at each time point, thus providing an alternative method for examining the…

  14. Stochastic reliability and maintenance modeling essays in honor of Professor Shunji Osaki on his 70th birthday

    CERN Document Server

    Nakagawa, Toshio

    2013-01-01

    In honor of the work of Professor Shunji Osaki, Stochastic Reliability and Maintenance Modeling provides a comprehensive study of the legacy of and ongoing research in stochastic reliability and maintenance modeling. Covering associated application areas such as dependable computing, performance evaluation, software engineering, and communication engineering, distinguished researchers review and build on the contributions made over the last four decades by Professor Shunji Osaki. Fundamental yet significant research results are presented and discussed clearly alongside new ideas and topics on stochastic reliability and maintenance modeling to inspire future research. Across 15 chapters readers gain the knowledge and understanding to apply reliability and maintenance theory to computer and communication systems. Stochastic Reliability and Maintenance Modeling is ideal for graduate students and researchers in reliability engineering, and workers, managers and engineers engaged in computer, maintenance and management wo...

  15. Reliability Analysis of Sealing Structure of Electromechanical System Based on Kriging Model

    Science.gov (United States)

    Zhang, F.; Wang, Y. M.; Chen, R. W.; Deng, W. W.; Gao, Y.

    2018-05-01

    The sealing performance of an aircraft electromechanical system has a great influence on flight safety, and the reliability of its typical seal structures has been analyzed by researchers. In this paper, we take a reciprocating seal structure as the research object for studying structural reliability. Based on the finite element numerical simulation method, the contact stress between the rubber sealing ring and the cylinder wall is calculated, the relationship between the contact stress and the pressure of the hydraulic medium is built, and the friction forces under different working conditions are compared. Through co-simulation, an adaptive Kriging model obtained by the EFF learning mechanism is used to describe the failure probability of the seal ring, so as to evaluate the reliability of the sealing structure. This article proposes a new approach to numerical evaluation for the reliability analysis of sealing structures, and also provides a theoretical basis for their optimal design.

  16. Stochastic process corrosion growth models for pipeline reliability

    International Nuclear Information System (INIS)

    Bazán, Felipe Alexander Vargas; Beck, André Teófilo

    2013-01-01

    Highlights: •Novel non-linear stochastic process corrosion growth model is proposed. •Corrosion rate modeled as random Poisson pulses. •Time to corrosion initiation and inherent time-variability properly represented. •Continuous corrosion growth histories obtained. •Model is shown to precisely fit actual corrosion data at two time points. -- Abstract: Linear random variable corrosion models are extensively employed in reliability analysis of pipelines. However, linear models grossly neglect well-known characteristics of the corrosion process. Herein, a non-linear model is proposed, where corrosion rate is represented as a Poisson square wave process. The resulting model represents inherent time-variability of corrosion growth, produces continuous growth and leads to mean growth at less-than-one power of time. Different corrosion models are adjusted to the same set of actual corrosion data for two inspections. The proposed non-linear random process corrosion growth model leads to the best fit to the data, while better representing problem physics
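    The Poisson square wave idea can be sketched in a few lines. The following is a minimal illustration with invented parameters (the pulse rate, mean corrosion rate, and initiation time are not the paper's fitted values): the corrosion rate is redrawn at Poisson-distributed pulse times and held constant in between, producing a continuous, non-decreasing depth history that is zero before corrosion initiation.

```python
import random

random.seed(1)

def corrosion_depth(t_end, pulse_rate=0.5, mean_rate=0.1, t_init=2.0):
    """Hypothetical Poisson-square-wave corrosion growth (illustrative
    parameters, not the paper's). Corrosion starts at t_init (years); the
    rate (mm/yr) is redrawn at Poisson pulse times and held constant
    between pulses, giving continuous growth of the defect depth."""
    t, depth = t_init, 0.0
    rate = random.expovariate(1.0 / mean_rate)   # current corrosion rate
    while t < t_end:
        dt = random.expovariate(pulse_rate)      # time to next pulse
        step = min(dt, t_end - t)
        depth += rate * step                     # continuous accumulation
        t += step
        rate = random.expovariate(1.0 / mean_rate)
    return depth

depths = [corrosion_depth(t) for t in (1.0, 10.0, 20.0)]
print(depths)  # zero before initiation, positive afterwards
```

    In a reliability analysis many such histories would be sampled and combined with a limit-state function for the pipeline wall.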

  17. Understanding software faults and their role in software reliability modeling

    Science.gov (United States)

    Munson, John C.

    1994-01-01

    This study is a direct result of an on-going project to model the reliability of a large real-time control avionics system. In previous modeling efforts with this system, hardware reliability models were applied to model the reliability behavior of this system. In an attempt to enhance the performance of the adapted reliability models, certain software attributes were introduced into these models to control for differences between programs and also between sequential executions of the same program. As the basic nature of the software attributes that affect software reliability becomes better understood in the modeling process, this information begins to have important implications for the software development process. A significant problem arises when raw attribute measures are to be used in statistical models as predictors, for example, of measures of software quality. This is because many of the metrics are highly correlated. Consider two attributes: lines of code, LOC, and number of program statements, Stmts. In this case, it is quite obvious that a program with a high value of LOC will probably also have a relatively high value of Stmts. In the case of low level languages, such as assembly language programs, there might be a one-to-one relationship between the statement count and the lines of code. When there is a complete absence of linear relationship among the metrics, they are said to be orthogonal or uncorrelated. Usually the lack of orthogonality is not serious enough to affect a statistical analysis. However, for the purposes of some statistical analyses such as multiple regression, the software metrics are so strongly interrelated that the regression results may be ambiguous and possibly even misleading. Typically, it is difficult to estimate the unique effects of individual software metrics in the regression equation. The estimated values of the coefficients are very sensitive to slight changes in the data and to the addition or deletion of variables in the
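    The LOC/Stmts example can be made concrete. The sketch below uses invented measurements for ten programs and computes the Pearson correlation by hand; the point is only that the two metrics are far from orthogonal.

```python
# Illustrative (invented) measurements for ten programs: lines of code
# and statement counts rise together, which is exactly the
# multicollinearity problem described in the abstract.
loc =   [120, 340, 560, 150, 900, 420, 230, 780, 510, 660]
stmts = [100, 300, 480, 140, 810, 350, 200, 700, 450, 580]

n = len(loc)
mean_x = sum(loc) / n
mean_y = sum(stmts) / n

cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(loc, stmts))
var_x = sum((x - mean_x) ** 2 for x in loc)
var_y = sum((y - mean_y) ** 2 for y in stmts)

r = cov / (var_x * var_y) ** 0.5
print(round(r, 3))  # close to 1: LOC and Stmts are nearly collinear
```

    With r this close to 1, a regression using both predictors cannot attribute a unique effect to either one, which is the ambiguity the abstract warns about.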

  18. Reliable low precision simulations in land surface models

    Science.gov (United States)

    Dawson, Andrew; Düben, Peter D.; MacLeod, David A.; Palmer, Tim N.

    2017-12-01

    Weather and climate models must continue to increase in both resolution and complexity in order that forecasts become more accurate and reliable. Moving to lower numerical precision may be an essential tool for coping with the demand for ever increasing model complexity in addition to increasing computing resources. However, there have been some concerns in the weather and climate modelling community over the suitability of lower precision for climate models, particularly for representing processes that change very slowly over long time-scales. These processes are difficult to represent using low precision due to time increments being systematically rounded to zero. Idealised simulations are used to demonstrate that a model of deep soil heat diffusion that fails when run in single precision can be modified to work correctly using low precision, by splitting up the model into a small higher precision part and a low precision part. This strategy retains the computational benefits of reduced precision whilst preserving accuracy. This same technique is also applied to a full complexity land surface model, resulting in rounding errors that are significantly smaller than initial condition and parameter uncertainties. Although lower precision will present some problems for the weather and climate modelling community, many of the problems can likely be overcome using a straightforward and physically motivated application of reduced precision.
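    The splitting strategy can be illustrated outside any land surface model. The sketch below uses invented numbers and emulates single precision with the standard `struct` module: a slow increment vanishes when added directly to a large state, but survives when the small, fast-varying part is accumulated separately, in the spirit of the paper's split into higher and lower precision parts.

```python
import struct

def f32(x):
    """Round a Python float to the nearest single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

increment = 1e-6   # slow tendency per step (illustrative number)
steps = 10000

# Naive single-precision accumulation: each tiny increment is smaller
# than half a unit-in-the-last-place of the large state, so every
# addition rounds straight back and the state never changes.
naive = f32(300.0)
for _ in range(steps):
    naive = f32(naive + increment)

# Split accumulation: keep the large, slowly-varying anchor separate and
# accumulate only the small, fast-varying part, where single precision
# still has plenty of relative accuracy.
anchor = 300.0
delta = f32(0.0)
for _ in range(steps):
    delta = f32(delta + increment)

print(naive, anchor + delta)  # 300.0 versus roughly 300.01
```

    The same rounding-to-zero failure mode is what the abstract describes for slowly evolving soil heat diffusion, and the anchor/delta separation is the essence of the proposed fix.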

  19. Reliability Measure Model for Assistive Care Loop Framework Using Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Venki Balasubramanian

    2010-01-01

    Full Text Available Body area wireless sensor networks (BAWSNs) are time-critical systems that rely on the collective data of a group of sensor nodes. Reliable data received at the sink is based on the collective data provided by all the source sensor nodes, not on individual data. Unlike in conventional reliability, retransmission is inapplicable in a BAWSN and would only lead to delayed data arrival, which is not acceptable for a time-critical application. Time-driven applications require high data reliability to maintain detection and response. Hence, the transmission reliability for a BAWSN should be based on the critical time. In this paper, we develop a theoretical model to measure a BAWSN's transmission reliability based on the critical time. The proposed model is evaluated through simulation and then compared with experimental results obtained in our existing Active Care Loop Framework (ACLF). We further show the effect of the sink buffer on transmission reliability after a detailed study of various other co-existing parameters.
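    The idea of collective, critical-time reliability can be sketched with an assumed delay model (the exponential delays and independence below are illustrative assumptions, not the paper's exact formulation): the sink needs a packet from every source node within the critical time, so the collective reliability is the product of the per-node on-time probabilities.

```python
import math

# Toy model of collective, critical-time reliability (assumed form):
# each source node's delivery delay is exponential with its own mean,
# nodes are independent, and the sink needs every node's data within
# the critical time Tc.
def collective_reliability(mean_delays, t_critical):
    r = 1.0
    for m in mean_delays:
        r *= 1.0 - math.exp(-t_critical / m)   # P(node delay <= Tc)
    return r

r = collective_reliability([0.1, 0.1, 0.2], t_critical=0.5)
print(r)  # slightly above 0.9 for these invented delays
```

    Note that the collective figure is dominated by the slowest node, which is why per-node reliability alone is an inadequate measure for a BAWSN.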

  20. Reliable critical sized defect rodent model for cleft palate research.

    Science.gov (United States)

    Mostafa, Nesrine Z; Doschak, Michael R; Major, Paul W; Talwar, Reena

    2014-12-01

    Suitable animal models are necessary to test the efficacy of new bone grafting therapies in cleft palate surgery. Rodent models of cleft palate are available but have limitations. This study compared and modified mid-palate cleft (MPC) and alveolar cleft (AC) models to determine the most reliable and reproducible model for bone grafting studies. Published MPC model (9 × 5 × 3 mm³) lacked sufficient information for tested rats. Our initial studies utilizing AC model (7 × 4 × 3 mm³) in 8 and 16 weeks old Sprague Dawley (SD) rats revealed injury to adjacent structures. After comparing anteroposterior and transverse maxillary dimensions in 16 weeks old SD and Wistar rats, virtual planning was performed to modify MPC and AC defects dimensions, taking the adjacent structures into consideration. Modified MPC (7 × 2.5 × 1 mm³) and AC (5 × 2.5 × 1 mm³) defects were employed in 16 weeks old Wistar rats and healing was monitored by micro-computed tomography and histology. Maxillary dimensions in SD and Wistar rats were not significantly different. Preoperative virtual planning enhanced postoperative surgical outcomes. Bone healing occurred at defect margin leaving central bone void confirming the critical size nature of the modified MPC and AC defects. Presented modifications for MPC and AC models created clinically relevant and reproducible defects. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  1. Usage models in reliability assessment of software-based systems

    Energy Technology Data Exchange (ETDEWEB)

    Haapanen, P.; Pulkkinen, U. [VTT Automation, Espoo (Finland); Korhonen, J. [VTT Electronics, Espoo (Finland)

    1997-04-01

    This volume in the OHA-project report series deals with the statistical reliability assessment of software based systems on the basis of dynamic test results and qualitative evidence from the system design process. Other reports to be published later on in the OHA-project report series will handle the diversity requirements in safety critical software-based systems, generation of test data from operational profiles and handling of programmable automation in plant PSA-studies. In this report the issues related to the statistical testing and especially automated test case generation are considered. The goal is to find an efficient method for building usage models for the generation of statistically significant set of test cases and to gather practical experiences from this method by applying it in a case study. The scope of the study also includes the tool support for the method, as the models may grow quite large and complex. (32 refs., 30 figs.).

  2. Reliability model for offshore wind farms; Paalidelighedsmodel for havvindmoelleparker

    Energy Technology Data Exchange (ETDEWEB)

    Christensen, P.; Lundtang Paulsen, J.; Lybech Toegersen, M.; Krogh, T. [Risoe National Lab., Roskilde (Denmark); Raben, N.; Donovan, M.H.; Joergensen, L. [SEAS (Denmark); Winther-Jensen, M.

    2002-05-01

    A method for the prediction of the mean availability of an offshore wind farm has been developed. Factors considered are the reliability of the single turbine, the strategy for preventive maintenance, the climate, the number of repair teams, and the type of boats available for transport. The mean availability is defined as the sum of the fractions of time where each turbine is available for production. The project has been carried out together with SEAS Wind Technique, and their site Roedsand was chosen as the example for the work. A climate model was created based on actual site measurements. The prediction of the availability is done with a Monte Carlo simulation. Software was developed for the preparation of the climate model from weather measurements as well as for the Monte Carlo simulation. Three examples have been simulated, one with guessed parameters, and the other two with parameters closer to the Roedsand case. (au)
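    The availability calculation can be sketched with a toy Monte Carlo model. The parameters below are invented, and the real model additionally accounts for weather windows, repair teams, and boat types:

```python
import random

random.seed(42)

def simulate_availability(n_turbines=10, horizon=8760.0,
                          mtbf=1000.0, mttr=50.0, n_samples=200):
    """Toy Monte Carlo availability model (illustrative parameters, not
    the Roedsand study's): each turbine alternates exponential up-times
    (mean mtbf, in hours) and repair outages (mean mttr). Availability
    is the mean fraction of time turbines can produce."""
    up_total = 0.0
    for _ in range(n_samples):
        for _ in range(n_turbines):
            t = up = 0.0
            while t < horizon:
                ttf = random.expovariate(1.0 / mtbf)   # time to failure
                up += min(ttf, horizon - t)
                t += ttf
                t += random.expovariate(1.0 / mttr)    # repair outage
            up_total += up / horizon
    return up_total / (n_samples * n_turbines)

avail = simulate_availability()
print(avail)  # close to mtbf / (mtbf + mttr), i.e. roughly 0.95 here
```

    Weather constraints on boat access would lengthen the effective repair times, which is why the climate model is essential in the real study.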

  3. Usage models in reliability assessment of software-based systems

    International Nuclear Information System (INIS)

    Haapanen, P.; Pulkkinen, U.; Korhonen, J.

    1997-04-01

    This volume in the OHA-project report series deals with the statistical reliability assessment of software based systems on the basis of dynamic test results and qualitative evidence from the system design process. Other reports to be published later on in the OHA-project report series will handle the diversity requirements in safety critical software-based systems, generation of test data from operational profiles and handling of programmable automation in plant PSA-studies. In this report the issues related to the statistical testing and especially automated test case generation are considered. The goal is to find an efficient method for building usage models for the generation of statistically significant set of test cases and to gather practical experiences from this method by applying it in a case study. The scope of the study also includes the tool support for the method, as the models may grow quite large and complex. (32 refs., 30 figs.)

  4. Scalable Joint Models for Reliable Uncertainty-Aware Event Prediction.

    Science.gov (United States)

    Soleimani, Hossein; Hensman, James; Saria, Suchi

    2017-08-21

    Missing data and noisy observations pose significant challenges for reliably predicting events from irregularly sampled multivariate time series (longitudinal) data. Imputation methods, which are typically used for completing the data prior to event prediction, lack a principled mechanism to account for the uncertainty due to missingness. Alternatively, state-of-the-art joint modeling techniques can be used for jointly modeling the longitudinal and event data and compute event probabilities conditioned on the longitudinal observations. These approaches, however, make strong parametric assumptions and do not easily scale to multivariate signals with many observations. Our proposed approach consists of several key innovations. First, we develop a flexible and scalable joint model based upon sparse multiple-output Gaussian processes. Unlike state-of-the-art joint models, the proposed model can explain highly challenging structure including non-Gaussian noise while scaling to large data. Second, we derive an optimal policy for predicting events using the distribution of the event occurrence estimated by the joint model. The derived policy trades-off the cost of a delayed detection versus incorrect assessments and abstains from making decisions when the estimated event probability does not satisfy the derived confidence criteria. Experiments on a large dataset show that the proposed framework significantly outperforms state-of-the-art techniques in event prediction.
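    The abstaining decision policy described above can be sketched as a simple threshold rule (the thresholds are invented; in the paper they are derived from the relative costs of delayed detection and incorrect assessment):

```python
# Sketch of an abstaining decision policy: act on the estimated event
# probability only when it clears a confidence band, otherwise abstain
# and wait for more observations. Thresholds here are illustrative.
def decide(p_event, lower=0.2, upper=0.8):
    if p_event >= upper:
        return "alarm"       # confident the event will occur
    if p_event <= lower:
        return "no-alarm"    # confident it will not
    return "abstain"         # too uncertain: defer the decision

print([decide(p) for p in (0.05, 0.5, 0.95)])
```

    In the paper, `p_event` comes from the joint Gaussian-process model, so the abstention region naturally widens when the longitudinal data are sparse or noisy.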

  5. Development of web-based reliability data analysis algorithm model and its application

    International Nuclear Information System (INIS)

    Hwang, Seok-Won; Oh, Ji-Yong; Moosung-Jae

    2010-01-01

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for the various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussel-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systemic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.

  6. Development of web-based reliability data analysis algorithm model and its application

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Seok-Won, E-mail: swhwang@khnp.co.k [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Oh, Ji-Yong [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Moosung-Jae [Department of Nuclear Engineering Hanyang University 17 Haengdang, Sungdong, Seoul (Korea, Republic of)

    2010-02-15

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for the various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussel-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systemic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.

  7. Study on reliability analysis based on multilevel flow models and fault tree method

    International Nuclear Information System (INIS)

    Chen Qiang; Yang Ming

    2014-01-01

    Multilevel flow models (MFM) and the fault tree method describe system knowledge in different forms, so the two methods express an equivalent logic of system reliability under the same boundary conditions and assumptions. Based on this, and combined with the characteristics of MFM, a method for mapping MFM to fault trees is put forward, providing a way to establish fault trees rapidly and to realize qualitative reliability analysis based on MFM. Taking the safety injection system of a pressurized water reactor nuclear power plant as an example, its MFM was established and its reliability analyzed qualitatively. The analysis result shows that the logic of mapping MFM to a fault tree is correct. MFM models are easily understood, created and modified. Compared with traditional fault tree analysis, the workload is greatly reduced and modeling time is saved. (authors)
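    The qualitative analysis step can be illustrated with a tiny fault tree (invented here; in the paper the tree is generated from the MFM): expanding AND/OR gates yields cut sets, and discarding supersets leaves the minimal cut sets.

```python
# Minimal sketch of qualitative fault tree analysis: minimal cut sets
# of a tiny invented tree, TOP = OR(AND(A, B), C).
def cut_sets(node):
    """Expand a nested ('AND'|'OR', children...) tree into cut sets."""
    if isinstance(node, str):                 # basic event
        return [frozenset([node])]
    op, *children = node
    child_sets = [cut_sets(c) for c in children]
    if op == 'OR':                            # union of children's sets
        return [s for sets in child_sets for s in sets]
    result = [frozenset()]                    # 'AND': cross-product merge
    for sets in child_sets:
        result = [a | b for a in result for b in sets]
    return result

def minimal(sets):
    """Drop any cut set that is a proper superset of another."""
    return {s for s in sets if not any(t < s for t in sets)}

top = ('OR', ('AND', 'A', 'B'), 'C')
ms = minimal(cut_sets(top))
print(ms)  # two minimal cut sets: {C} and {A, B}
```

    A single-event cut set such as {C} flags a component whose failure alone fails the system, which is exactly the kind of qualitative insight the MFM-to-fault-tree mapping is meant to deliver quickly.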

  8. A discrete-time Bayesian network reliability modeling and analysis framework

    International Nuclear Information System (INIS)

    Boudali, H.; Dugan, J.B.

    2005-01-01

    Dependability tools are becoming indispensable for modeling and analyzing (critical) systems. However, the growing complexity of such systems calls for increasing sophistication of these tools. Dependability tools need not only to capture the complex dynamic behavior of the system components, but must also be easy to use, intuitive, and computationally efficient. In general, current tools have a number of shortcomings, including lack of modeling power, incapacity to efficiently handle general component failure distributions, and ineffectiveness in solving large models that exhibit complex dependencies between their components. We propose a novel reliability modeling and analysis framework based on the Bayesian network (BN) formalism. The overall approach is to investigate timed Bayesian networks and to find a suitable reliability framework for dynamic systems. We have applied our methodology to two example systems and preliminary results are promising. We have defined a discrete-time BN reliability formalism and demonstrated its capabilities from a modeling and analysis point of view. This research shows that a BN-based reliability formalism is a powerful potential solution for modeling and analyzing various kinds of system component behaviors and interactions. Moreover, being based on the BN formalism, the framework is easy to use and intuitive for non-experts, and provides a basis for more advanced and useful analyses such as system diagnosis.
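    For the special case of independent components, the discrete-time view reduces to elementary probability. The sketch below (with illustrative failure probabilities; the paper's framework also handles dependencies and general failure distributions) computes the failure probability of a two-component parallel system over discrete time steps:

```python
# Discrete-time sketch of a two-component parallel system: independent
# components, i.e. a Bayesian network with no dependency arcs, and
# invented per-step failure probabilities. Each step a working
# component fails with fixed probability; the system fails once both
# components have failed.
def p_system_failed(p1, p2, steps):
    p_c1 = 1 - (1 - p1) ** steps   # P(component 1 failed by `steps`)
    p_c2 = 1 - (1 - p2) ** steps
    return p_c1 * p_c2             # parallel redundancy: both must fail

for t in (1, 10, 100):
    print(t, p_system_failed(0.01, 0.02, t))
```

    The value of the BN formalism is precisely that it keeps this computation tractable when the components are *not* independent, e.g. under shared loads or functional dependencies.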

  9. The application of cognitive models to the evaluation and prediction of human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.; Reason, J.T.

    1986-01-01

    The first section of the paper provides a brief overview of a number of important principles relevant to human reliability modeling that have emerged from cognitive models, and presents a synthesis of these approaches in the form of a Generic Error Modeling System (GEMS). The next section illustrates the application of GEMS to some well known nuclear power plant (NPP) incidents in which human error was a major contributor. The way in which design recommendations can emerge from analyses of this type is illustrated. The third section describes the use of cognitive models in the classification of human errors for prediction and data collection purposes. The final section addresses the predictive modeling of human error as part of human reliability assessment in Probabilistic Risk Assessment

  10. Model reliability and software quality assurance in simulation of nuclear fuel waste management systems

    International Nuclear Information System (INIS)

    Oeren, T.I.; Elzas, M.S.; Sheng, G.; Wageningen Agricultural Univ., Netherlands; McMaster Univ., Hamilton, Ontario)

    1985-01-01

    As is the case with all scientific simulation studies, computerized simulation of nuclear fuel waste management systems can introduce and hide various types of errors. Frameworks to clarify issues of model reliability and software quality assurance are offered. Potential problems with reference to the main areas of concern for reliability and quality are discussed; e.g., experimental issues, decomposition, scope, fidelity, verification, requirements, testing, correctness, robustness are treated with reference to the experience gained in the past. A list comprising over 80 most common computerization errors is provided. Software tools and techniques used to detect and to correct computerization errors are discussed

  11. reliability reliability

    African Journals Online (AJOL)

    eobe

    Corresponding author, Tel: +234-703. RELIABILITY .... V , , given by the code of practice. However, checks must .... an optimization procedure over the failure domain F corresponding .... of Concrete Members based on Utility Theory,. Technical ...

  12. Comprehensive Care For Joint Replacement Model - Provider Data

    Data.gov (United States)

    U.S. Department of Health & Human Services — Comprehensive Care for Joint Replacement Model - provider data. This data set includes provider data for two quality measures tracked during an episode of care:...

  13. A Dialogue about MCQs, Reliability, and Item Response Modelling

    Science.gov (United States)

    Wright, Daniel B.; Skagerberg, Elin M.

    2006-01-01

    Multiple choice questions (MCQs) are becoming more common in UK psychology departments and the need to assess their reliability is apparent. Having examined the reliability of MCQs in our department we faced many questions from colleagues about why we were examining reliability, what it was that we were doing, and what should be reported when…

  14. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason of which is the unreliability of the process. While...... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input...... parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established....

  15. Reliability of Coulomb stress changes inferred from correlated uncertainties of finite-fault source models

    KAUST Repository

    Woessner, J.

    2012-07-14

    Static stress transfer is one physical mechanism to explain triggered seismicity. Coseismic stress-change calculations strongly depend on the parameterization of the causative finite-fault source model. These models are uncertain due to uncertainties in input data, model assumptions, and modeling procedures. However, fault model uncertainties have usually been ignored in stress-triggering studies and have not been propagated to assess the reliability of Coulomb failure stress change (ΔCFS) calculations. We show how these uncertainties can be used to provide confidence intervals for co-seismic ΔCFS-values. We demonstrate this for the MW = 5.9 June 2000 Kleifarvatn earthquake in southwest Iceland and systematically map these uncertainties. A set of 2500 candidate source models from the full posterior fault-parameter distribution was used to compute 2500 ΔCFS maps. We assess the reliability of the ΔCFS-values from the coefficient of variation (CV) and deem ΔCFS-values to be reliable where they are at least twice as large as the standard deviation (CV ≤ 0.5). Unreliable ΔCFS-values are found near the causative fault and between lobes of positive and negative stress change, where a small change in fault strike causes ΔCFS-values to change sign. The most reliable ΔCFS-values are found away from the source fault in the middle of positive and negative ΔCFS-lobes, a likely general pattern. Using the reliability criterion, our results support the static stress-triggering hypothesis. Nevertheless, our analysis also suggests that results from previous stress-triggering studies not considering source model uncertainties may have led to a biased interpretation of the importance of static stress-triggering.
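    The reliability criterion can be sketched directly. Below, two small invented ensembles stand in for the 2500 ΔCFS samples at two grid points: one in the middle of a lobe with a stable sign, and one near a nodal zone where the sign flips between samples.

```python
# Sketch of the CV-based reliability criterion: for each grid point, an
# ensemble of Coulomb stress changes (invented numbers, arbitrary units)
# yields a mean and standard deviation; a value is deemed reliable
# where the coefficient of variation CV = std / |mean| <= 0.5.
ensembles = {
    "mid-lobe":   [0.30, 0.28, 0.33, 0.31, 0.29],    # stable sign
    "nodal-zone": [0.05, -0.04, 0.02, -0.06, 0.01],  # sign flips
}

def cv(values):
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return (var ** 0.5) / abs(mean)

reliable = {name: cv(vals) <= 0.5 for name, vals in ensembles.items()}
print(reliable)  # only the mid-lobe point passes the criterion
```

    Applied over a whole map, the same mask suppresses exactly the near-fault and inter-lobe regions the abstract identifies as unreliable.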

  16. Providing all global energy with wind, water, and solar power, Part II: Reliability, system and transmission costs, and policies

    International Nuclear Information System (INIS)

    Delucchi, Mark A.; Jacobson, Mark Z.

    2011-01-01

    This is Part II of two papers evaluating the feasibility of providing all energy for all purposes (electric power, transportation, and heating/cooling), everywhere in the world, from wind, water, and the sun (WWS). In Part I, we described the prominent renewable energy plans that have been proposed and discussed the characteristics of WWS energy systems, the global demand for and availability of WWS energy, quantities and areas required for WWS infrastructure, and supplies of critical materials. Here, we discuss methods of addressing the variability of WWS energy to ensure that power supply reliably matches demand (including interconnecting geographically dispersed resources, using hydroelectricity, using demand-response management, storing electric power on site, over-sizing peak generation capacity and producing hydrogen with the excess, storing electric power in vehicle batteries, and forecasting weather to project energy supplies), the economics of WWS generation and transmission, the economics of WWS use in transportation, and policy measures needed to enhance the viability of a WWS system. We find that the cost of energy in a 100% WWS will be similar to the cost today. We conclude that barriers to a 100% conversion to WWS power worldwide are primarily social and political, not technological or even economic. - Research highlights: → We evaluate the feasibility of global energy supply from wind, water, and solar energy. → WWS energy can be supplied reliably and economically to all energy-use sectors. → The social cost of WWS energy generally is less than the cost of fossil-fuel energy. → Barriers to 100% WWS power worldwide are socio-political, not techno-economic.

  17. Role of frameworks, models, data, and judgment in human reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hannaman, G W

    1986-05-01

    Many advancements in the methods for treating human interactions in PRA studies have occurred in the last decade. These advancements appear to increase the capability of PRAs to extend beyond just the assessment of the human's importance to safety. However, variations in the application of these advanced models, data, and judgments in recent PRAs make quantitative comparisons among studies extremely difficult. This uncertainty in the analysis diminishes the usefulness of the PRA study for upgrading procedures, enhancing training, simulator design, technical specification guidance, and man-machine interface design. Hence, there is a need for a framework to guide analysts in incorporating human interactions into the PRA systems analyses so that future users of a PRA study will have a clear understanding of the approaches, models, data, and assumptions which were employed in the initial study. This paper describes the role of the systematic human action reliability procedure (SHARP) in providing a road map through the complex terrain of human reliability that promises to improve the reproducibility of such analyses in the areas of selecting the models, data, representations, and assumptions. Also described is the role that a human cognitive reliability model can have in collecting data from simulators and helping analysts assign human reliability parameters in a PRA study. Use of these systematic approaches to perform or upgrade existing PRAs promises to make PRA studies more useful as risk management tools.

  18. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    OpenAIRE

    Chassin, David P.; Posse, Christian

    2004-01-01

    The reliability of electric transmission systems is examined using a scale-free model of network structure and failure propagation. The topologies of the North American eastern and western electric networks are analyzed to estimate their reliability based on the Barabasi-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using s...

  19. Customer-Provider Strategic Alignment: A Maturity Model

    Science.gov (United States)

    Luftman, Jerry; Brown, Carol V.; Balaji, S.

    This chapter presents a new model for assessing the maturity of a customer-provider relationship from a collaborative service delivery perspective: the Customer-Provider Strategic Alignment Maturity (CPSAM) Model. This model builds on recent research for effectively managing the customer-provider relationship in IT service outsourcing contexts and a validated model for assessing alignment across internal IT service units and their business customers within the same organization. After reviewing relevant literature by service science and information systems researchers, the six overarching components of the maturity model are presented: value measurements, governance, partnership, communications, human resources and skills, and scope and architecture. A key assumption of the model is that all of the components need to be addressed to assess and improve customer-provider alignment. Examples of specific metrics for measuring the maturity level of each component over the five levels of maturity are also presented.

  20. Modeling Market Shares of Competing (e)Care Providers

    Science.gov (United States)

    van Ooteghem, Jan; Tesch, Tom; Verbrugge, Sofie; Ackaert, Ann; Colle, Didier; Pickavet, Mario; Demeester, Piet

    In order to address the increasing costs of providing care to the growing group of elderly, efficiency gains through eCare solutions seem an obvious solution. Unfortunately, not many techno-economic business models to evaluate the return on these investments are available. The intended application of the model is the construction of a business case for care of the elderly as they move through different levels of dependency, including the effect of introducing an eCare service. The simulation model presented in this paper allows for modeling the evolution of market shares of competing care providers. Four tiers are defined, based on the dependency level of the elderly, for which the market shares are determined. The model takes into account the available capacity of the different care providers, the in- and outflow distribution between tiers, and churn between providers within tiers.
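The tier flow at the heart of such a model can be sketched as a simple transition system. The four tiers and all transition probabilities below are invented for illustration; the paper's model additionally handles provider capacities and churn between providers, which this sketch omits.

```python
# Hypothetical four-tier dependency model: each period an elderly person
# stays in their tier or flows to a more dependent one. Probabilities
# are illustrative, not calibrated values from the paper.
transition = {
    0: {0: 0.85, 1: 0.15},   # independent
    1: {1: 0.80, 2: 0.20},   # light assistance
    2: {2: 0.75, 3: 0.25},   # heavy assistance
    3: {3: 1.00},            # residential care (absorbing)
}

population = {0: 1000.0, 1: 400.0, 2: 200.0, 3: 100.0}

def step(pop):
    """Advance the population one period through the tier transitions."""
    nxt = {t: 0.0 for t in pop}
    for tier, count in pop.items():
        for dest, p in transition[tier].items():
            nxt[dest] += count * p
    return nxt

for _ in range(5):
    population = step(population)
print({t: round(c) for t, c in population.items()})
```

Because each transition row sums to one, total population is conserved while mass drifts toward the higher-dependency tiers, which is the demand evolution the business case has to price.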

  1. On New Cautious Structural Reliability Models in the Framework of imprecise Probabilities

    DEFF Research Database (Denmark)

    Utkin, Lev V.; Kozine, Igor

    2010-01-01

    Uncertainty of parameters in engineering design has been modeled in different frameworks such as interval analysis, fuzzy set and possibility theories, random set theory and imprecise probability theory. The authors of this paper for many years have been developing new imprecise reliability models and generalizing conventional ones to imprecise probabilities. The theoretical setup employed for this purpose is imprecise statistical reasoning (Walley 1991), whose general framework is provided by upper and lower previsions (expectations). The appeal of this theory is its ability to capture both aleatory (stochastic) and epistemic uncertainty and the flexibility with which information can be represented. The previous research of the authors related to generalizing structural reliability models to imprecise statistical measures is summarized in Utkin & Kozine (2002) and Utkin (2004...

  2. Reliability and Maintainability model (RAM) user and maintenance manual. Part 2

    Science.gov (United States)

    Ebeling, Charles E.

    1995-01-01

    This report documents the procedures for utilizing and maintaining the Reliability and Maintainability Model (RAM) developed by the University of Dayton for the NASA Langley Research Center (LaRC). The RAM model predicts reliability and maintainability (R&M) parameters for conceptual space vehicles using parametric relationships between vehicle design and performance characteristics and subsystem mean time between maintenance actions (MTBM) and manhours per maintenance action (MH/MA). These parametric relationships were developed using aircraft R&M data from over thirty different military aircraft of all types. This report describes the general methodology used within the model, the execution and computational sequence, the input screens and data, the output displays and reports, and study analyses and procedures. A source listing is provided.

  3. Modeling Message Queueing Services with Reliability Guarantee in Cloud Computing Environment Using Colored Petri Nets

    Directory of Open Access Journals (Sweden)

    Jing Li

    2015-01-01

    Full Text Available Motivated by the need for loosely coupled and asynchronous dissemination of information, message queues are widely used in large-scale application areas. With the advent of virtualization technology, cloud-based message queueing services (CMQSs with distributed computing and storage are widely adopted to improve availability, scalability, and reliability; however, a critical issue is its performance and the quality of service (QoS. While numerous approaches evaluating system performance are available, there is no modeling approach for estimating and analyzing the performance of CMQSs. In this paper, we employ both the analytical and simulation modeling to address the performance of CMQSs with reliability guarantee. We present a visibility-based modeling approach (VMA for simulation model using colored Petri nets (CPN. Our model incorporates the important features of message queueing services in the cloud such as replication, message consistency, resource virtualization, and especially the mechanism named visibility timeout which is adopted in the services to guarantee system reliability. Finally, we evaluate our model through different experiments under varied scenarios to obtain important performance metrics such as total message delivery time, waiting number, and components utilization. Our results reveal considerable insights into resource scheduling and system configuration for service providers to estimate and gain performance optimization.

  4. Practical applications of age-dependent reliability models and analysis of operational data

    Energy Technology Data Exchange (ETDEWEB)

    Lannoy, A.; Nitoi, M.; Backstrom, O.; Burgazzi, L.; Couallier, V.; Nikulin, M.; Derode, A.; Rodionov, A.; Atwood, C.; Fradet, F.; Antonov, A.; Berezhnoy, A.; Choi, S.Y.; Starr, F.; Dawson, J.; Palmen, H.; Clerjaud, L

    2005-07-01

    The purpose of the workshop was to present the experience of practical application of time-dependent reliability models. The program of the workshop comprises the following sessions: -) aging management and aging PSA (Probabilistic Safety Assessment), -) modeling, -) operation experience, and -) accelerating aging tests. In order to introduce time aging effect of particular component to the PSA model, it has been proposed to use the constant unavailability values on the short period of time (one year for example) calculated on the basis of age-dependent reliability models. As for modeling, it appears that the problem of too detailed statistical models for application is the lack of data for required parameters. As for operating experience, several methods of operating experience analysis have been presented (algorithms for reliability data elaboration and statistical identification of aging trend). As for accelerated aging tests, it is demonstrated that a combination of operating experience analysis with the results of accelerated aging tests of naturally aged equipment could provide a good basis for continuous operation of instrumentation and control systems.

  5. Practical applications of age-dependent reliability models and analysis of operational data

    International Nuclear Information System (INIS)

    Lannoy, A.; Nitoi, M.; Backstrom, O.; Burgazzi, L.; Couallier, V.; Nikulin, M.; Derode, A.; Rodionov, A.; Atwood, C.; Fradet, F.; Antonov, A.; Berezhnoy, A.; Choi, S.Y.; Starr, F.; Dawson, J.; Palmen, H.; Clerjaud, L.

    2005-01-01

    The purpose of the workshop was to present the experience of practical application of time-dependent reliability models. The program of the workshop comprises the following sessions: -) aging management and aging PSA (Probabilistic Safety Assessment), -) modeling, -) operation experience, and -) accelerating aging tests. In order to introduce time aging effect of particular component to the PSA model, it has been proposed to use the constant unavailability values on the short period of time (one year for example) calculated on the basis of age-dependent reliability models. As for modeling, it appears that the problem of too detailed statistical models for application is the lack of data for required parameters. As for operating experience, several methods of operating experience analysis have been presented (algorithms for reliability data elaboration and statistical identification of aging trend). As for accelerated aging tests, it is demonstrated that a combination of operating experience analysis with the results of accelerated aging tests of naturally aged equipment could provide a good basis for continuous operation of instrumentation and control systems

  6. Reliability and Maintainability Model (RAM): User and Maintenance Manual. Part 2; Improved Supportability Analysis

    Science.gov (United States)

    Ebeling, Charles E.

    1996-01-01

    This report documents the procedures for utilizing and maintaining the Reliability & Maintainability Model (RAM) developed by the University of Dayton for the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC). The purpose of the grant is to provide support to NASA in establishing operational and support parameters and costs of proposed space systems. As part of this research objective, the model described here was developed. This Manual updates and supersedes the 1995 RAM User and Maintenance Manual. Changes and enhancements from the 1995 version of the model are primarily a result of the addition of more recent aircraft and shuttle R&M data.

  7. Approach for an integral power transformer reliability model

    NARCIS (Netherlands)

    Schijndel, van A.; Wouters, P.A.A.F.; Steennis, E.F.; Wetzer, J.M.

    2012-01-01

    In electrical power transmission and distribution networks power transformers represent a crucial group of assets both in terms of reliability and investments. In order to safeguard the required quality at acceptable costs, decisions must be based on a reliable forecast of future behaviour. The aim

  8. Modeling patients' acceptance of provider-delivered e-health.

    Science.gov (United States)

    Wilson, E Vance; Lankton, Nancy K

    2004-01-01

    Health care providers are beginning to deliver a range of Internet-based services to patients; however, it is not clear which of these e-health services patients need or desire. The authors propose that patients' acceptance of provider-delivered e-health can be modeled in advance of application development by measuring the effects of several key antecedents to e-health use and applying models of acceptance developed in the information technology (IT) field. This study tested three theoretical models of IT acceptance among patients who had recently registered for access to provider-delivered e-health. An online questionnaire administered items measuring perceptual constructs from the IT acceptance models (intrinsic motivation, perceived ease of use, perceived usefulness/extrinsic motivation, and behavioral intention to use e-health) and five hypothesized antecedents (satisfaction with medical care, health care knowledge, Internet dependence, information-seeking preference, and health care need). Responses were collected and stored in a central database. All tested IT acceptance models performed well in predicting patients' behavioral intention to use e-health. Antecedent factors of satisfaction with provider, information-seeking preference, and Internet dependence uniquely predicted constructs in the models. Information technology acceptance models provide a means to understand which aspects of e-health are valued by patients and how this may affect future use. In addition, antecedents to the models can be used to predict e-health acceptance in advance of system development.

  9. Reliability of multi-model and structurally different single-model ensembles

    Energy Technology Data Exchange (ETDEWEB)

    Yokohata, Tokuta [National Institute for Environmental Studies, Center for Global Environmental Research, Tsukuba, Ibaraki (Japan); Annan, James D.; Hargreaves, Julia C. [Japan Agency for Marine-Earth Science and Technology, Research Institute for Global Change, Yokohama, Kanagawa (Japan); Collins, Matthew [University of Exeter, College of Engineering, Mathematics and Physical Sciences, Exeter (United Kingdom); Jackson, Charles S.; Tobis, Michael [The University of Texas at Austin, Institute of Geophysics, 10100 Burnet Rd., ROC-196, Mail Code R2200, Austin, TX (United States); Webb, Mark J. [Met Office Hadley Centre, Exeter (United Kingdom)

    2012-08-15

    The performance of several state-of-the-art climate model ensembles, including two multi-model ensembles (MMEs) and four structurally different (perturbed parameter) single model ensembles (SMEs), are investigated for the first time using the rank histogram approach. In this method, the reliability of a model ensemble is evaluated from the point of view of whether the observations can be regarded as being sampled from the ensemble. Our analysis reveals that, in the MMEs, the climate variables we investigated are broadly reliable on the global scale, with a tendency towards overdispersion. On the other hand, in the SMEs, the reliability differs depending on the ensemble and variable field considered. In general, the mean state and historical trend of surface air temperature, and the mean state of precipitation, are reliable in the SMEs. However, variables such as sea level pressure or top-of-atmosphere clear-sky shortwave radiation do not cover a sufficiently wide range in some ensembles. It is not possible to assess whether this is a fundamental feature of SMEs generated with a particular model, or a consequence of the algorithm used to select and perturb the values of the parameters. As under-dispersion is a potentially more serious issue when using ensembles to make projections, we recommend the application of rank histograms to assess reliability when designing and running perturbed physics SMEs. (orig.)
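The rank histogram check described here is easy to sketch: at each observation point, count how many ensemble members fall below the observation, then histogram those ranks. The ensemble sizes and synthetic data below are invented; for a reliable ensemble (observations statistically indistinguishable from members) the histogram is approximately flat, while a U-shape indicates under-dispersion and a dome indicates over-dispersion.

```python
import numpy as np

rng = np.random.default_rng(0)

n_members, n_points = 20, 5000
ensemble = rng.normal(size=(n_members, n_points))  # synthetic members
obs = rng.normal(size=n_points)                    # synthetic observations
                                                   # drawn from the same law

# Rank of each observation within the ensemble at that point:
# the number of members below it, giving ranks 0 .. n_members.
ranks = (ensemble < obs).sum(axis=0)
hist = np.bincount(ranks, minlength=n_members + 1)

# Flat histogram target: each of the n_members + 1 bins should hold
# roughly n_points / (n_members + 1) observations.
expected = n_points / (n_members + 1)
print(hist.tolist(), round(expected, 1))
```

Here the observations are sampled from the same distribution as the members, so the histogram comes out close to flat, which is the paper's operational definition of a reliable ensemble.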

  10. Models and data requirements for human reliability analysis

    International Nuclear Information System (INIS)

    1989-03-01

    It has been widely recognised for many years that the safety of the nuclear power generation depends heavily on the human factors related to plant operation. This has been confirmed by the accidents at Three Mile Island and Chernobyl. Both these cases revealed how human actions can defeat engineered safeguards and the need for special operator training to cover the possibility of unexpected plant conditions. The importance of the human factor also stands out in the analysis of abnormal events and insights from probabilistic safety assessments (PSA's), which reveal a large proportion of cases having their origin in faulty operator performance. A consultants' meeting, organized jointly by the International Atomic Energy Agency (IAEA) and the International Institute for Applied Systems Analysis (IIASA) was held at IIASA in Laxenburg, Austria, December 7-11, 1987, with the aim of reviewing existing models used in Probabilistic Safety Assessment (PSA) for Human Reliability Analysis (HRA) and of identifying the data required. The report collects both the contributions offered by the members of the Expert Task Force and the findings of the extensive discussions that took place during the meeting. Refs, figs and tabs

  11. Reliability of Soft Tissue Model Based Implant Surgical Guides; A Methodological Mistake.

    Science.gov (United States)

    Sabour, Siamak; Dastjerdi, Elahe Vahid

    2012-08-20

    Abstract We were interested to read the paper by Maney P and colleagues published in the July 2012 issue of J Oral Implantol. The authors, aiming to assess the reliability of soft tissue model based implant surgical guides, reported that accuracy was evaluated using software.1 I found the manuscript title of Maney P, et al. incorrect and misleading. Moreover, they reported that twenty-two sites (46.81%) were considered accurate (13 of 24 maxillary and 9 of 23 mandibular sites). As the authors point out in their conclusion, soft tissue models do not always provide sufficient accuracy for implant surgical guide fabrication. Reliability (precision) and validity (accuracy) are two different methodological issues in research. Sensitivity, specificity, PPV, NPV, likelihood ratio positive (sensitivity/(1 - specificity)) and likelihood ratio negative ((1 - sensitivity)/specificity), as well as the odds ratio (true results/false results - preferably more than 50), are among the tests used to evaluate the validity (accuracy) of a single test compared to a gold standard.2-4 It is not clear which of the above-mentioned estimates the reported twenty-two accurate sites (46.81%) relate to. Reliability (repeatability or reproducibility) is assessed by different statistical tests; Pearson r, least squares, and the paired t-test are all among the common mistakes in reliability analysis.5 Briefly, for quantitative variables the Intra-Class Correlation Coefficient (ICC) should be used, and for qualitative variables weighted kappa, with caution, because kappa has its own limitations too. Regarding reliability or agreement, it is good to know that in computing the kappa value only concordant cells are considered, whereas discordant cells should also be taken into account in order to reach a correct estimate of agreement (weighted kappa).2-4 As a take-home message, for reliability and validity analysis, appropriate tests should be
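The letter's point about concordant versus discordant cells can be made concrete with a small agreement example. The 3x3 two-rater contingency table below is invented for illustration: unweighted kappa credits only exact agreement (diagonal cells), while linearly weighted kappa also gives partial credit to near-agreement in the off-diagonal cells.

```python
# Invented two-rater contingency table over three ordinal categories;
# rows = rater A, columns = rater B.
table = [
    [20, 5, 0],
    [4, 15, 6],
    [1, 5, 24],
]

n = sum(sum(row) for row in table)
k = len(table)
row_tot = [sum(row) for row in table]
col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]

def kappa(weight):
    """Generic (weighted) kappa with agreement weights in [0, 1]."""
    po = sum(weight(i, j) * table[i][j]
             for i in range(k) for j in range(k)) / n
    pe = sum(weight(i, j) * row_tot[i] * col_tot[j]
             for i in range(k) for j in range(k)) / n**2
    return (po - pe) / (1 - pe)

# Unweighted: credit only the concordant (diagonal) cells.
unweighted = kappa(lambda i, j: 1.0 if i == j else 0.0)
# Linear weights: discordant cells get partial credit by closeness.
linear = kappa(lambda i, j: 1.0 - abs(i - j) / (k - 1))
print(f"kappa={unweighted:.3f}, weighted kappa={linear:.3f}")
```

Because most disagreements in this table are only one category apart, the weighted estimate is higher, illustrating why ignoring discordant cells can misstate the agreement.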

  12. Integrating software reliability concepts into risk and reliability modeling of digital instrumentation and control systems used in nuclear power plants

    International Nuclear Information System (INIS)

    Arndt, S. A.

    2006-01-01

    As software-based digital systems are becoming more and more common in all aspects of industrial process control, including the nuclear power industry, it is vital that the current state of the art in quality, reliability, and safety analysis be advanced to support the quantitative review of these systems. Several research groups throughout the world are working on the development and assessment of software-based digital system reliability methods and their applications in the nuclear power, aerospace, transportation, and defense industries. However, these groups are hampered by the fact that software experts and probabilistic safety assessment experts view reliability engineering very differently. This paper discusses the characteristics of a common vocabulary and modeling framework. (authors)

  13. Accounting for Model Uncertainties Using Reliability Methods - Application to Carbon Dioxide Geologic Sequestration System. Final Report

    International Nuclear Information System (INIS)

    Mok, Chin Man; Doughty, Christine; Zhang, Keni; Pruess, Karsten; Kiureghian, Armen; Zhang, Miao; Kaback, Dawn

    2010-01-01

    A new computer code, CALRELTOUGH, which uses reliability methods to incorporate parameter sensitivity and uncertainty analysis into subsurface flow and transport models, was developed by Geomatrix Consultants, Inc. in collaboration with Lawrence Berkeley National Laboratory and the University of California at Berkeley. The CALREL reliability code was developed at the University of California at Berkeley for geotechnical applications, and the TOUGH family of codes was developed at Lawrence Berkeley National Laboratory for subsurface flow and transport applications. The integration of the two codes provides a new approach to dealing with uncertainties in flow and transport modeling of the subsurface, such as those associated with hydrogeology parameters, boundary conditions, and initial conditions of subsurface flow and transport, using data from site characterization and monitoring for conditioning. The new code enables computation of the reliability of a system and the components that make up the system, instead of calculating the complete probability distributions of model predictions at all locations at all times. The new CALRELTOUGH code has tremendous potential to advance subsurface understanding for a variety of applications including subsurface energy storage, nuclear waste disposal, carbon sequestration, extraction of natural resources, and environmental remediation. The new code was tested on a carbon sequestration problem as part of the Phase I project. Phase II was not awarded.

  14. Modeling Parameters of Reliability of Technological Processes of Hydrocarbon Pipeline Transportation

    Directory of Open Access Journals (Sweden)

    Shalay Viktor

    2016-01-01

    Full Text Available On the basis of system analysis methods and parametric reliability theory, mathematical modeling of the operation of oil and gas equipment was conducted for reliability monitoring according to dispatching data. To check the goodness of fit of the empirical distributions, an algorithm and mathematical methods of analysis were worked out for on-line use under changing operating conditions. An analysis of the physical cause-and-effect mechanism between the key factors and the changing parameters of technical systems at oil and gas facilities is made, and the basic types of distribution parameters are defined. The adequacy of the assumed distribution type for the analyzed parameters is evaluated using the Kolmogorov criterion, as the most universal, accurate and adequate test for verifying the distribution of continuous processes in complex multi-unit technical systems. Calculation methods are provided for supervision by independent bodies for risk assessment and facility safety.

  15. Modeling, implementation, and validation of arterial travel time reliability : [summary].

    Science.gov (United States)

    2013-11-01

    Travel time reliability (TTR) has been proposed as a better measure of a facility's performance than a statistical measure like peak hour demand. TTR is based on more information about average traffic flows and longer time periods, thus inc...

  16. Modeling, implementation, and validation of arterial travel time reliability.

    Science.gov (United States)

    2013-11-01

    Previous research funded by Florida Department of Transportation (FDOT) developed a method for estimating travel time reliability for arterials. This method was not initially implemented or validated using field data. This project evaluated and r...

  17. Study of redundant Models in reliability prediction of HXMT's HES

    International Nuclear Information System (INIS)

    Wang Jinming; Liu Congzhan; Zhang Zhi; Ji Jianfeng

    2010-01-01

    First, two redundant equipment structures for HXMT's HES are proposed: block backup and dual-system cold redundancy. Reliability is then predicted using the parts count method, and the two proposals are compared and analyzed. It is concluded that a redundant equipment structure based on block backup offers higher reliability and a longer service life. (authors)
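The parts-count style comparison behind such a study can be sketched with textbook series/parallel formulas. The component reliabilities below are invented numbers, not HXMT/HES data, and the redundancy is modeled as hot parallel (any one copy working suffices), a simplification of the cold-standby case analyzed in the paper.

```python
from functools import reduce

def series(rs):
    """Series string: all components must work."""
    return reduce(lambda a, b: a * b, rs, 1.0)

def parallel(r, n=2):
    """n identical hot-redundant blocks: system works if any one works."""
    return 1.0 - (1.0 - r) ** n

components = [0.99, 0.98, 0.97]          # invented component reliabilities
r_series = series(components)            # no redundancy
r_block_backup = parallel(r_series)      # duplicate the whole string
r_per_part = series(parallel(r) for r in components)  # duplicate each part

print(f"series={r_series:.4f}  block backup={r_block_backup:.4f}  "
      f"per-part redundancy={r_per_part:.4f}")
```

Either redundancy scheme lifts the system well above the bare series figure; which scheme wins in practice depends on failure modes, switching reliability, and mass/power budgets, which is exactly the trade the paper's comparison addresses.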

  18. An auto-focusing heuristic model to increase the reliability of a scientific mission

    International Nuclear Information System (INIS)

    Gualdesi, Lavinio

    2006-01-01

    Researchers invest a lot of time and effort on the design and development of components used in a scientific mission. To capitalize on this investment and on the operational experience of the researchers, it is useful to adopt a quantitative data base to monitor the history and usage of the components. This work describes a model to monitor the reliability level of components. The model is very flexible and allows users to compose systems using the same components in different configurations as required by each mission. This tool provides availability and reliability figures for the configuration requested, derived from historical data of the components' previous performance. The system is based on preliminary checklists to establish standard operating procedures (SOP) for all components life phases. When an infringement to the SOP occurs, a quantitative ranking is provided in order to quantify the risk associated with this deviation. The final agreement between field data and expected performance of the component makes the model converge onto a heuristic monitoring system. The model automatically focuses on points of failure at the detailed component element level, calculates risks, provides alerts when a demonstrated risk to safety is encountered, and advises when there is a mismatch between component performance and mission requirements. This model also helps the mission to focus resources on critical tasks where they are most needed

  19. Physics-based process modeling, reliability prediction, and design guidelines for flip-chip devices

    Science.gov (United States)

    Michaelides, Stylianos

    Flip Chip on Board (FCOB) and Chip-Scale Packages (CSPs) are relatively new technologies that are being increasingly used in the electronic packaging industry. Compared to the more widely used face-up wirebonding and TAB technologies, flip-chips and most CSPs provide the shortest possible leads, lower inductance, higher frequency, better noise control, higher density, greater input/output (I/O), smaller device footprint and lower profile. However, due to the short history and due to the introduction of several new electronic materials, designs, and processing conditions, very limited work has been done to understand the role of material, geometry, and processing parameters on the reliability of flip-chip devices. Also, with the ever-increasing complexity of semiconductor packages and with the continued reduction in time to market, it is too costly to wait until the later stages of design and testing to discover that the reliability is not satisfactory. The objective of the research is to develop integrated process-reliability models that will take into consideration the mechanics of assembly processes to be able to determine the reliability of face-down devices under thermal cycling and long-term temperature dwelling. The models incorporate the time and temperature-dependent constitutive behavior of various materials in the assembly to be able to predict failure modes such as die cracking and solder cracking. In addition, the models account for process-induced defects and macro-micro features of the assembly. Creep-fatigue and continuum-damage mechanics models for the solder interconnects and fracture-mechanics models for the die have been used to determine the reliability of the devices. The results predicted by the models have been successfully validated against experimental data. The validated models have been used to develop qualification and test procedures for implantable medical devices. In addition, the research has helped develop innovative face

  20. Modeling the bathtub shape hazard rate function in terms of reliability

    International Nuclear Information System (INIS)

    Wang, K.S.; Hsu, F.S.; Liu, P.P.

    2002-01-01

    In this paper, a general form of bathtub-shaped hazard rate function is proposed in terms of reliability. The degradation of system reliability comes from different failure mechanisms, in particular those related to (1) random failures, (2) cumulative damage, (3) man-machine interference, and (4) adaptation. The first item refers to the modeling of unpredictable failures in a Poisson process, i.e. it is represented by a constant. Cumulative damage emphasizes failures owing to strength deterioration, so the possibility of the system sustaining the normal operation load decreases with time; it depends on the failure probability, 1-R. This representation captures the memory characteristic of the second failure cause. Man-machine interference may have a positive effect on the failure rate due to learning and correction, or a negative one resulting from inappropriate human habits in system operation; it is suggested that this item is correlated with the reliability, R, as well as the failure probability. Adaptation concerns the continuous adjustment between mating subsystems: when a new system is put on duty, hidden defects are exposed and eventually eliminated, so the reliability decays together with a decreasing failure rate, which is expressed as a power of reliability. Each of these phenomena brings about failures independently and is described by an additive term in the hazard rate function h(R); thus the overall failure behavior, governed by a number of parameters, is found by fitting the evidence data. The proposed model is meaningful in capturing the physical phenomena occurring during the system lifetime and provides simpler and more effective parameter fitting than the usually adopted 'bathtub' procedures. Five examples of different types of failure mechanisms are taken in the validation of the proposed model. Satisfactory results are found from the comparisons
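
    The additive structure described in the abstract can be sketched as follows. The specific functional forms (constant, 1-R, R(1-R), and a power of R) follow the four mechanisms listed above, but the coefficient values and the exponent are illustrative assumptions, not the paper's fitted parameters.

```python
# Illustrative sketch (assumed coefficients): an additive bathtub hazard
# rate expressed in terms of reliability R, one term per failure mechanism.
def hazard(R, a=0.01, b=0.05, c=0.02, d=0.30, n=8):
    random_failures = a                 # (1) constant, Poisson-type failures
    cumulative_damage = b * (1.0 - R)   # (2) memory effect, grows as R decays
    man_machine = c * R * (1.0 - R)     # (3) coupled to both R and 1-R
    adaptation = d * R**n               # (4) early-life defects, power of R
    return random_failures + cumulative_damage + man_machine + adaptation
```

    With these assumed values the hazard is high for a new system (R near 1, adaptation term dominates), drops in mid-life, and rises again as R decays (cumulative-damage term dominates), reproducing the bathtub shape.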

  1. Parameter estimation of component reliability models in PSA model of Krsko NPP

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Vrbanic, I.

    2001-01-01

    In the paper, the uncertainty analysis of component reliability models for independent failures is shown. The present approach to parameter estimation of component reliability models at NPP Krsko is presented. Mathematical approaches for different types of uncertainty analyses are introduced and used in accordance with some predisposed requirements. Results of the uncertainty analyses are shown in an example for time-related components. Bayesian estimation with numerical estimation of the posterior, which can be approximated with an appropriate probability distribution (in this paper a lognormal distribution), proved to be the most appropriate uncertainty analysis. (author)

  2. Reliability model analysis and primary experimental evaluation of laser triggered pulse trigger

    International Nuclear Information System (INIS)

    Chen Debiao; Yang Xinglin; Li Yuan; Li Jin

    2012-01-01

    A high-performance pulse trigger can enhance the performance and stability of the PPS. It is necessary to evaluate the reliability of the LTGS pulse trigger, so we establish a reliability analysis model of this pulse trigger based on the CARMES software; the reliability evaluation is in accord with the statistical results. (authors)

  3. 78 FR 45447 - Revisions to Modeling, Data, and Analysis Reliability Standard

    Science.gov (United States)

    2013-07-29

    ...; Order No. 782] Revisions to Modeling, Data, and Analysis Reliability Standard AGENCY: Federal Energy... Analysis (MOD) Reliability Standard MOD- 028-2, submitted to the Commission for approval by the North... Organization. The Commission finds that the proposed Reliability Standard represents an improvement over the...

  4. Computer Model to Estimate Reliability Engineering for Air Conditioning Systems

    International Nuclear Information System (INIS)

    Afrah Al-Bossly, A.; El-Berry, A.; El-Berry, A.

    2012-01-01

    Reliability engineering is used to predict the performance and optimize the design and maintenance of air conditioning systems. Air conditioning systems are exposed to a number of failures. Failures such as failure to turn on, loss of cooling capacity, reduced output air temperatures, loss of cool air supply, and complete loss of air flow can be due to a variety of problems with one or more components of an air conditioner or air conditioning system. Forecasting system failure rates is very important for maintenance. This paper focuses on the reliability of air conditioning systems, using statistical distributions commonly applied in reliability settings: the standard (2-parameter) Weibull and Gamma distributions. After the distribution parameters had been estimated, reliability estimations and predictions were used for evaluation. To evaluate good operating conditions in a building, the reliability of the air conditioning system that supplies conditioned air to the company's several departments was studied. This air conditioning system is divided into two parts, namely the main chilled water system and the ten air handling systems that serve the ten departments. In a chilled-water system the air conditioner cools water down to 40-45 °F (4-7 °C); the chilled water is distributed throughout the building in a piping system and connected to air conditioning cooling units wherever needed. Data analysis was performed with the support of computer-aided reliability software; the Weibull and Gamma distributions indicated that the reliability of the systems equals 86.012% and 77.7%, respectively. A comparison between the two important families of distribution functions, namely the Weibull and Gamma families, was studied, and it was found that the Weibull method performed better for decision making.

  5. Possibilities and limitations of applying software reliability growth models to safety-critical software

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Jang, Seung Cheol; Ha, Jae Joo

    2007-01-01

    It is generally known that software reliability growth models such as the Jelinski-Moranda model and the Goel-Okumoto's Non-Homogeneous Poisson Process (NHPP) model cannot be applied to safety-critical software due to a lack of software failure data. In this paper, by applying two of the most widely known software reliability growth models to sample software failure data, we demonstrate the possibility of using the software reliability growth models to prove the high reliability of safety-critical software. The high sensitivity of a piece of software's reliability to software failure data, as well as a lack of sufficient software failure data, is also identified as a possible limitation when applying the software reliability growth models to safety-critical software
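
    As a concrete illustration of one of the models named above, the Goel-Okumoto NHPP model has mean value function m(t) = a(1 - e^(-bt)), and the probability of no failure in an interval (t, t+x] is exp(-(m(t+x) - m(t))). The parameter values below are assumed for illustration only.

```python
import math

# Goel-Okumoto NHPP: expected cumulative failures m(t) = a * (1 - exp(-b*t)),
# where a = expected total number of faults and b = per-fault detection rate
# (both values here are illustrative assumptions).
def mean_failures(t, a=100.0, b=0.05):
    return a * (1.0 - math.exp(-b * t))

def reliability(x, t, a=100.0, b=0.05):
    # Probability of observing no failure in (t, t+x] after testing up to t.
    return math.exp(-(mean_failures(t + x, a, b) - mean_failures(t, a, b)))
```

    Reliability growth shows up directly: for fixed mission length x, reliability(x, t) increases with accumulated test time t as the expected remaining faults a*e^(-bt) shrink.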

  6. Competing risk models in reliability systems, a Weibull distribution model with Bayesian analysis approach

    International Nuclear Information System (INIS)

    Iskandar, Ismed; Gondokaryono, Yudi Satria

    2016-01-01

    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that systems are described and explained as simply functioning or failed. In many real situations, failures may arise from many causes, depending upon the age and the environment of the system and its components. Another problem in reliability theory is that of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation analyses allow us to combine past knowledge or experience, in the form of an a priori distribution, with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation analyses to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using Bayesian and maximum likelihood analyses. The simulation results show that a change in the true value of one parameter relative to another changes the standard deviation in the opposite direction. With perfect information on the prior distribution, the estimation methods of the Bayesian analyses are better than those of the maximum likelihood. The sensitivity analyses show some amount of sensitivity to shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range
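
    With independent competing causes of failure, as assumed above, the system fails at the earliest latent failure time, so the system reliability is the product of the per-cause survival functions. A minimal sketch for Weibull causes, with (shape, scale) pairs assumed for illustration:

```python
import math

# Competing-risk system with independent Weibull causes of failure:
# R_sys(t) = prod_j exp(-(t/eta_j)^beta_j). Parameter values are assumed.
def system_reliability(t, causes):
    r = 1.0
    for beta, eta in causes:
        r *= math.exp(-((t / eta) ** beta))
    return r

# Two hypothetical causes: an early wear-out mode and an infant-mortality mode.
causes = [(1.2, 500.0), (0.8, 2000.0)]  # (shape beta, scale eta) per cause
```

    Because the exponents add, the system's cumulative hazard is simply the sum of the per-cause Weibull cumulative hazards.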

  7. Reliability modeling and analysis for a novel design of modular converter system of wind turbines

    International Nuclear Information System (INIS)

    Zhang, Cai Wen; Zhang, Tieling; Chen, Nan; Jin, Tongdan

    2013-01-01

    Converters play a vital role in wind turbines. The concept of modularity is gaining in popularity in converter design for modern wind turbines in order to achieve high reliability as well as cost-effectiveness. In this study, we are concerned with a novel topology of modular converter invented by Hjort (Modular converter system with interchangeable converter modules. World Intellectual Property Organization, Pub. No. WO29027520 A2; 5 March 2009). In this architecture, the converter comprises a number of identical and interchangeable basic modules. Each module can operate in either AC/DC or DC/AC mode, depending on whether it functions on the generator or the grid side. Moreover, each module can be reconfigured from one side to the other, depending on the system's operational requirements, making this a clear example of fully modular design. This paper aims to model and analyze the reliability of such a modular converter. A Markov modeling approach is applied to the system reliability analysis. In particular, six feasible converter system models based on Hjort's architecture are investigated. Through numerical analyses and comparison, we provide insights and guidance for converter designers in their decision-making.
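
    A Markov availability model of the kind applied here can be sketched as a birth-death chain on the number of failed modules. The k-out-of-n structure, the single repair crew, and all rate values below are assumptions for illustration, not the paper's six system models.

```python
# Steady-state availability of a k-out-of-n pool of identical repairable
# modules, modeled as a birth-death CTMC on the number of failed modules.
# Assumptions: every working module fails at rate lam, one repair crew
# restores modules at rate mu, and the system is up while >= k modules work.
def steady_state_availability(n, k, lam, mu):
    # Balance equations give pi_{i+1} = pi_i * (n - i) * lam / mu, where
    # i is the number of failed modules (unnormalized weights below).
    weights = [1.0]
    for i in range(n):
        weights.append(weights[-1] * (n - i) * lam / mu)
    total = sum(weights)
    pi = [w / total for w in weights]
    # Up states: at most n - k modules failed.
    return sum(pi[: n - k + 1])
```

    For n = k = 1 this reduces to the familiar single-unit result mu / (mu + lam), a useful sanity check on the balance equations.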

  8. An Assessment of the VHTR Safety Distance Using the Reliability Physics Model

    International Nuclear Information System (INIS)

    Lee, Joeun; Kim, Jintae; Jae, Moosung

    2015-01-01

    In Korea, plans to produce hydrogen using high-temperature heat from nuclear power are in progress. To produce hydrogen from nuclear plants, a supply temperature above 800 °C is required. Therefore, the Very High Temperature Reactor (VHTR), which is able to provide about 950 °C, is suitable. In a situation of high temperature and corrosion where hydrogen might be released easily, a hydrogen production facility coupled to a VHTR carries a danger of explosion. Moreover, an explosion has a bad influence not only on the facility itself but also on the VHTR; such explosions result in unsafe situations that cause serious damage. From a thermal-hydraulic point of view, however, a long distance means low efficiency. Thus, in this study, a methodology for assessing the safety distance between the hydrogen production facilities and the VHTR is developed with a reliability physics model. Based on the standard safety criterion, which is a value of 1 x 10^-6, the safety distance between the hydrogen production facilities and the VHTR calculated using the reliability physics model is 60-100 m. In the future, detailed assessment of the characteristics of the VHTR, the capacity to resist pressure from an outside hydrogen explosion, and the overpressure for a large detonation volume is expected to identify a more precise safety distance using this reliability physics model

  9. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - II: Application to IFMIF reliability assessment

    Energy Technology Data Exchange (ETDEWEB)

    Cacuci, D. G. [Commiss Energy Atom, Direct Energy Nucl, Saclay, (France); Cacuci, D. G.; Balan, I. [Univ Karlsruhe, Inst Nucl Technol and Reactor Safety, Karlsruhe, (Germany); Ionescu-Bujor, M. [Forschungszentrum Karlsruhe, Fus Program, D-76021 Karlsruhe, (Germany)

    2008-07-01

    In Part II of this work, the adjoint sensitivity analysis procedure developed in Part I is applied to perform sensitivity analysis of several dynamic reliability models of systems of increasing complexity, culminating with the consideration of the International Fusion Materials Irradiation Facility (IFMIF) accelerator system. Section II presents the main steps of a procedure for the automated generation of Markov chains for reliability analysis, including the abstraction of the physical system, construction of the Markov chain, and the generation and solution of the ensuing set of differential equations; all of these steps have been implemented in a stand-alone computer code system called QUEFT/MARKOMAG-S/MCADJSEN. This code system has been applied to sensitivity analysis of dynamic reliability measures for a paradigm '2-out-of-3' system comprising five components and also to a comprehensive dynamic reliability analysis of the IFMIF accelerator system facilities for the average availability and, respectively, the system's availability at the final mission time. The QUEFT/MARKOMAG-S/MCADJSEN has been used to efficiently compute sensitivities to 186 failure and repair rates characterizing components and subsystems of the first-level fault tree of the IFMIF accelerator system. (authors)

  10. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - II: Application to IFMIF reliability assessment

    International Nuclear Information System (INIS)

    Cacuci, D. G.; Cacuci, D. G.; Balan, I.; Ionescu-Bujor, M.

    2008-01-01

    In Part II of this work, the adjoint sensitivity analysis procedure developed in Part I is applied to perform sensitivity analysis of several dynamic reliability models of systems of increasing complexity, culminating with the consideration of the International Fusion Materials Irradiation Facility (IFMIF) accelerator system. Section II presents the main steps of a procedure for the automated generation of Markov chains for reliability analysis, including the abstraction of the physical system, construction of the Markov chain, and the generation and solution of the ensuing set of differential equations; all of these steps have been implemented in a stand-alone computer code system called QUEFT/MARKOMAG-S/MCADJSEN. This code system has been applied to sensitivity analysis of dynamic reliability measures for a paradigm '2-out-of-3' system comprising five components and also to a comprehensive dynamic reliability analysis of the IFMIF accelerator system facilities for the average availability and, respectively, the system's availability at the final mission time. The QUEFT/MARKOMAG-S/MCADJSEN has been used to efficiently compute sensitivities to 186 failure and repair rates characterizing components and subsystems of the first-level fault tree of the IFMIF accelerator system. (authors)

  11. Probabilistic risk assessment for a loss of coolant accident in McMaster Nuclear Reactor and application of reliability physics model for modeling human reliability

    Science.gov (United States)

    Ha, Taesung

    A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequences identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for the human error probability (HEP) estimation of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The human error probability (HEP) for the core relocation was estimated from these two competing quantities: phenomenological time and operators' performance time. The sensitivity of each probability distribution in human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and in the model's parameters. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability and its potential
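
    The reliability physics model described above treats the HEP as the probability that the operators' performance time exceeds the phenomenological time available. A minimal Monte Carlo sketch of that competition; both distributions and their parameters are assumed for illustration, not the study's fitted ones.

```python
import random

# HEP = P(performance time > phenomenological time). Both competing times
# are modeled here as lognormal random variables with assumed parameters.
def estimate_hep(n_samples=100_000, seed=1):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        phenomenological = rng.lognormvariate(3.0, 0.4)  # time available (min)
        performance = rng.lognormvariate(2.2, 0.5)       # time needed (min)
        if performance > phenomenological:
            failures += 1
    return failures / n_samples
```

    In practice the phenomenological-time samples would come from the thermal-hydraulic response surface and the performance-time distribution from operator interviews, as the abstract describes.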

  12. Reliability models for a nonrepairable system with heterogeneous components having a phase-type time-to-failure distribution

    International Nuclear Information System (INIS)

    Kim, Heungseob; Kim, Pansoo

    2017-01-01

    This research paper presents practical stochastic models for designing and analyzing the time-dependent reliability of nonrepairable systems. The models are formulated for nonrepairable systems with heterogeneous components having phase-type time-to-failure distributions, using a structured continuous-time Markov chain (CTMC). The versatility of phase-type distributions enhances the flexibility and practicality of the models, and by virtue of these benefits the study advances beyond previous work in reliability engineering. This study attempts to solve a redundancy allocation problem (RAP) by using these new models. The implications of mixing components, redundancy levels, and redundancy strategies are simultaneously considered to maximize the reliability of a system. An imperfect switching case in a standby redundant system is also considered. Furthermore, experimental results for a well-known RAP benchmark problem are presented to demonstrate the approximation error of the previous reliability function for a standby redundant system and the usefulness of the current research. - Highlights: • Phase-type time-to-failure distributions are used for components. • A reliability model for nonrepairable systems is developed using a Markov chain. • The system is composed of heterogeneous components. • The model provides the exact value of standby system reliability, not an approximation. • A redundancy allocation problem is used to show the usefulness of the model.
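
    For a phase-type time to failure with initial phase vector alpha and sub-generator T, the reliability is R(t) = alpha · exp(Tt) · 1. The sketch below is a self-contained illustration on a two-phase (Erlang-2) example; the pure-Python truncated Taylor series stands in for a production matrix-exponential routine, and the rate value is assumed.

```python
# Reliability R(t) = alpha @ expm(T*t) @ 1 for a phase-type distribution.

def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def expm(T, t, terms=60):
    # Truncated Taylor series exp(Tt) = sum_k (Tt)^k / k!; adequate for the
    # tiny, well-scaled matrix used in this illustration.
    n = len(T)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    Tt = [[T[i][j] * t for j in range(n)] for i in range(n)]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in mat_mul(term, Tt)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

def phase_type_reliability(alpha, T, t):
    E = expm(T, t)
    return sum(alpha[i] * sum(E[i]) for i in range(len(alpha)))

# Erlang-2 with rate lam: two sequential exponential phases (assumed rate).
lam = 0.5
alpha = [1.0, 0.0]
T = [[-lam, lam], [0.0, -lam]]
```

    The Erlang-2 case has the closed form R(t) = e^(-lam*t)(1 + lam*t), which provides a check on the matrix-exponential computation.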

  13. Levels of Interaction Provided by Online Distance Education Models

    Science.gov (United States)

    Alhih, Mohammed; Ossiannilsson, Ebba; Berigel, Muhammet

    2017-01-01

    Interaction plays a significant role in fostering usability and quality in online education. It is one of the quality standards used to reveal evidence of practice in online distance education models. This research study aims to evaluate the levels of interaction in the practices of distance education centres. It is aimed to provide online distance…

  14. Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory

    Directory of Open Access Journals (Sweden)

    Kaijuan Yuan

    2016-01-01

    Full Text Available Sensor data fusion plays an important role in fault diagnosis. Dempster-Shafer (D-S) evidence theory is widely used in fault diagnosis, since it is efficient at combining evidence from different sensors. However, in situations where the evidence highly conflicts, it may produce a counterintuitive result. To address this issue, a new method is proposed in this paper. Not only the static sensor reliability, but also the dynamic sensor reliability is taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to evidence according to sensor reliability. The proposed method performs better in conflict management and fault diagnosis because the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is illustrated to show the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to the existing methods.
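
    The combination step that the proposed method modifies is Dempster's rule, which multiplies masses on intersecting focal elements and renormalizes by the non-conflicting mass. A minimal two-sensor sketch over a frame {A, B}; the mass values are assumed, not taken from the paper's experiment.

```python
from itertools import product

# Dempster's rule for combining two mass functions over subsets of a frame
# of discernment; focal elements are represented as frozensets.
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for (s1, v1), (s2, v2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: masses cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

A, B = frozenset({"A"}), frozenset({"B"})
AB = A | B
m1 = {A: 0.6, B: 0.3, AB: 0.1}  # sensor 1 report (assumed masses)
m2 = {A: 0.5, B: 0.4, AB: 0.1}  # sensor 2 report (assumed masses)
```

    The weighted-averaging modification in the paper replaces the raw reports with a reliability-weighted average mass before applying this rule, which is what tames highly conflicting evidence.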

  15. Impact of Loss Synchronization on Reliable High Speed Networks: A Model Based Simulation

    Directory of Open Access Journals (Sweden)

    Suman Kumar

    2014-01-01

    Full Text Available The contemporary nature of network evolution demands simulation models that are flexible, scalable, and easily implementable. In this paper, we propose a fluid-based model for performance analysis of reliable high speed networks. In particular, this paper aims to study the dynamic relationship between congestion control algorithms and queue management schemes, in order to develop a better understanding of the causal linkages between the two. We propose a loss synchronization module which is user configurable. We validate our model through simulations under controlled settings. Also, we present a performance analysis to provide insights into two important issues concerning 10 Gbps high speed networks: (i) the impact of bottleneck buffer size on the performance of a 10 Gbps high speed network and (ii) the impact of the level of loss synchronization on link utilization-fairness tradeoffs. The practical impact of the proposed work is to provide design guidelines along with a powerful simulation tool to protocol designers and network developers.

  16. Modeling the reliability and maintenance costs of wind turbines using Weibull analysis

    Energy Technology Data Exchange (ETDEWEB)

    Vachon, W.A. [W.A. Vachon & Associates, Inc., Manchester, MA (United States)

    1996-12-31

    A general description is provided of the basic mathematics and use of Weibull statistical models for modeling component failures and maintenance costs as a function of time. The applicability of the model to wind turbine components and subsystems is discussed with illustrative examples of typical component reliabilities drawn from actual field experiences. Example results indicate the dominant role of key subsystems based on a combination of their failure frequency and repair/replacement costs. The value of the model is discussed as a means of defining (1) maintenance practices, (2) areas in which to focus product improvements, (3) spare parts inventory, and (4) long-term trends in maintenance costs as an important element in project cash flow projections used by developers, investors, and lenders. 6 refs., 8 figs., 3 tabs.
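
    The component-level Weibull mathematics described above leads directly to simple maintenance-cost figures: the mean time between failures is MTBF = eta * Gamma(1 + 1/beta), and an expected annual repair cost follows from the failure frequency. The shape/scale values and cost figure below are assumed for illustration, not drawn from the field data.

```python
import math

# Weibull component model: MTBF and a simple expected annual maintenance
# cost (expected failures per year * cost per repair). Values are assumed.
def mtbf(beta, eta):
    # Mean of the Weibull distribution: eta * Gamma(1 + 1/beta).
    return eta * math.gamma(1.0 + 1.0 / beta)

def annual_cost(beta, eta_hours, repair_cost, hours_per_year=8760.0):
    expected_failures = hours_per_year / mtbf(beta, eta_hours)
    return expected_failures * repair_cost
```

    Ranking subsystems by annual_cost (failure frequency combined with repair cost) reproduces the kind of dominant-subsystem comparison the abstract describes.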

  17. Modeling of seismic hazards for dynamic reliability analysis

    International Nuclear Information System (INIS)

    Mizutani, M.; Fukushima, S.; Akao, Y.; Katukura, H.

    1993-01-01

    This paper investigates appropriate indices of seismic hazard curves (SHCs) for seismic reliability analysis. In most seismic reliability analyses of structures, the seismic hazards are defined in the form of SHCs of peak ground accelerations (PGAs). Usually PGAs play a significant role in characterizing ground motions. However, the PGA is not always a suitable index of seismic motions. When random vibration theory developed in the frequency domain is employed to obtain statistics of responses, it is more convenient for the implementation of dynamic reliability analysis (DRA) to utilize an index which can be determined in the frequency domain. In this paper, we summarize relationships among the indices which characterize ground motions. The relationships between the indices and the magnitude M are arranged as well. In this consideration, duration time plays an important role in relating two distinct classes, i.e. the energy class and the power class. Fourier and energy spectra belong to the energy class, while power and response spectra and PGAs belong to the power class. These relationships are also investigated by using ground motion records. Through these investigations, we have shown the efficiency of employing the total energy as an index of SHCs, which can be determined in both the time and frequency domains and has less variance than the other indices. In addition, we have proposed a procedure of DRA based on total energy. (author)
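
    The property that total energy "can be determined in the time and frequency domains" is Parseval's theorem: the integral of the squared acceleration equals the (scaled) sum of squared Fourier magnitudes. A minimal sketch on a discretized record; the signal here is synthetic and the O(N^2) DFT is for illustration only.

```python
import cmath
import math

# Total-energy index of a sampled acceleration record a[n] with step dt:
# E = sum a[n]^2 * dt (time domain) = (dt/N) * sum |A[k]|^2 (frequency
# domain, Parseval), where A is the DFT of the record.
def total_energy_time(accel, dt):
    return sum(a * a for a in accel) * dt

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def total_energy_freq(accel, dt):
    N = len(accel)
    return sum(abs(A) ** 2 for A in dft(accel)) * dt / N

signal = [math.sin(0.3 * n) for n in range(16)]  # synthetic record
```

    Agreement of the two computations is what makes total energy convenient for frequency-domain DRA while remaining checkable against time-domain records.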

  18. Sensitivity of Reliability Estimates in Partially Damaged RC Structures subject to Earthquakes, using Reduced Hysteretic Models

    DEFF Research Database (Denmark)

    Iwankiewicz, R.; Nielsen, Søren R. K.; Skjærbæk, P. S.

    The subject of the paper is the investigation of the sensitivity of structural reliability estimation by a reduced hysteretic model for a reinforced concrete frame under an earthquake excitation.

  19. A Model for the Growth of Network Service Providers

    Science.gov (United States)

    2011-12-01

    We make use of the Abilene dataset as input to the network provisioning model and assume that the NSP is new to the market and is building an... has to decide on the connections to build and the markets to serve in order to maximize its profits. The NSP makes these decisions based on the market

  20. Protein Based Molecular Markers Provide Reliable Means to Understand Prokaryotic Phylogeny and Support Darwinian Mode of Evolution

    Directory of Open Access Journals (Sweden)

    Vaibhav Bhandari

    2012-07-01

    Full Text Available The analyses of genome sequences have led to the proposal that lateral gene transfers (LGTs among prokaryotes are so widespread that they disguise the interrelationships among these organisms. This has led to questioning whether the Darwinian model of evolution is applicable to the prokaryotic organisms. In this review, we discuss the usefulness of taxon-specific molecular markers such as conserved signature indels (CSIs and conserved signature proteins (CSPs for understanding the evolutionary relationships among prokaryotes and to assess the influence of LGTs on prokaryotic evolution. The analyses of genomic sequences have identified large numbers of CSIs and CSPs that are unique properties of different groups of prokaryotes ranging from phylum to genus levels. The species distribution patterns of these molecular signatures strongly support a tree-like vertical inheritance of the genes containing these molecular signatures that is consistent with phylogenetic trees. Recent detailed studies in this regard on Thermotogae and Archaea, which are reviewed here, have identified large numbers of CSIs and CSPs that are specific for the species from these two taxa and a number of their major clades. The genetic changes responsible for these CSIs (and CSPs initially likely occurred in the common ancestors of these taxa and were then vertically transferred to various descendants. Although some CSIs and CSPs in unrelated groups of prokaryotes were identified, their small numbers and random occurrence have no apparent influence on the consistent tree-like branching pattern emerging from other markers. These results provide evidence that although LGT is an important evolutionary force, it does not mask the tree-like branching pattern of prokaryotes or understanding of their evolutionary relationships. The identified CSIs and CSPs also provide novel and highly specific means for identification of different groups of microbes and for taxonomical and biochemical

  1. Protein based molecular markers provide reliable means to understand prokaryotic phylogeny and support Darwinian mode of evolution.

    Science.gov (United States)

    Bhandari, Vaibhav; Naushad, Hafiz S; Gupta, Radhey S

    2012-01-01

    The analyses of genome sequences have led to the proposal that lateral gene transfers (LGTs) among prokaryotes are so widespread that they disguise the interrelationships among these organisms. This has led to questioning of whether the Darwinian model of evolution is applicable to prokaryotic organisms. In this review, we discuss the usefulness of taxon-specific molecular markers such as conserved signature indels (CSIs) and conserved signature proteins (CSPs) for understanding the evolutionary relationships among prokaryotes and to assess the influence of LGTs on prokaryotic evolution. The analyses of genomic sequences have identified large numbers of CSIs and CSPs that are unique properties of different groups of prokaryotes ranging from phylum to genus levels. The species distribution patterns of these molecular signatures strongly support a tree-like vertical inheritance of the genes containing these molecular signatures that is consistent with phylogenetic trees. Recent detailed studies in this regard on the Thermotogae and Archaea, which are reviewed here, have identified large numbers of CSIs and CSPs that are specific for the species from these two taxa and a number of their major clades. The genetic changes responsible for these CSIs (and CSPs) initially likely occurred in the common ancestors of these taxa and were then vertically transferred to various descendants. Although some CSIs and CSPs in unrelated groups of prokaryotes were identified, their small numbers and random occurrence have no apparent influence on the consistent tree-like branching pattern emerging from other markers. These results provide evidence that although LGT is an important evolutionary force, it does not mask the tree-like branching pattern of prokaryotes or understanding of their evolutionary relationships. The identified CSIs and CSPs also provide novel and highly specific means for identification of different groups of microbes and for taxonomical and biochemical studies.

  2. Cloud-based calculators for fast and reliable access to NOAA's geomagnetic field models

    Science.gov (United States)

    Woods, A.; Nair, M. C.; Boneh, N.; Chulliat, A.

    2017-12-01

    While the Global Positioning System (GPS) provides accurate point locations, it does not provide pointing directions. Therefore, the absolute directional information provided by the Earth's magnetic field is of primary importance for navigation and for the pointing of technical devices such as aircraft, satellites and, lately, mobile phones. The major magnetic sources that affect compass-based navigation are the Earth's core, its magnetized crust and the electric currents in the ionosphere and magnetosphere. The NOAA/CIRES Geomagnetism (ngdc.noaa.gov/geomag/) group develops and distributes models that describe all these important sources to aid navigation. Our geomagnetic models are used on a variety of platforms including airplanes, ships, submarines and smartphones. While the magnetic field from the Earth's core can be described with relatively few parameters and is suitable for offline computation, the magnetic sources from the Earth's crust, ionosphere and magnetosphere require either significant computational resources or real-time capabilities and are not suitable for offline calculation. This is especially important for small navigational devices or embedded systems, where computational resources are limited. Recognizing the need for fast and reliable access to our geomagnetic field models, we developed cloud-based application program interfaces (APIs) for NOAA's ionospheric and magnetospheric magnetic field models. In this paper we will describe the need for reliable magnetic calculators, the challenges faced in running geomagnetic field models in the cloud in real time and the feedback from our user community. We discuss lessons learned harvesting and validating the data which powers our cloud services, as well as our strategies for maintaining near real-time service, including load-balancing, real-time monitoring, and instance cloning. We will also briefly talk about the progress we achieved on NOAA's Big Earth Data Initiative (BEDI) funded project to develop API

  3. Conceptual Models of the Individual Public Service Provider

    DEFF Research Database (Denmark)

    Andersen, Lotte Bøgh; Pedersen, Lene Holm; Bhatti, Yosef

    Individual public service providers’ motivation can be conceptualized as either extrinsic, autonomous or prosocial, and the question is how we can best theoretically understand this complexity without losing too much coherence and parsimony. Drawing on Allison’s approach (1969), three perspectives are used to gain insight on the motivation of public service providers; namely principal-agent theory, self-determination theory and public service motivation theory. We situate the theoretical discussions in the context of public service providers being transferred to private organizations. The aim is primarily theoretical – to develop a coherent model of individual public service providers – but the empirical illustration also contributes to our understanding of motivation in the context of public sector outsourcing.

  4. Model of Providing Assistive Technologies in Special Education Schools.

    Science.gov (United States)

    Lersilp, Suchitporn; Putthinoi, Supawadee; Chakpitak, Nopasit

    2015-05-14

    Most students diagnosed with disabilities in Thai special education schools received assistive technologies, but this did not guarantee the greatest benefits. The purpose of this study was to survey the provision, use and needs of assistive technologies, as well as the perspectives of key informants regarding a model of providing them in special education schools. The participants were selected by the purposive sampling method, and they comprised 120 students with visual, physical, hearing or intellectual disabilities from four special education schools in Chiang Mai, Thailand; and 24 key informants such as parents or caregivers, teachers, school principals and school therapists. The instruments consisted of an assistive technology checklist and a semi-structured interview. Results showed that a category of assistive technologies was provided for students with disabilities, with the highest being "services", followed by "media" and then "facilities". Furthermore, students with physical disabilities were provided with assistive technologies most often, but those with visual disabilities needed them more. Finally, the model of providing assistive technologies was composed of five components: Collaboration; Holistic perspective; Independent management of schools; Learning systems and a production manual for users; and Development of an assistive technology center, driven by three major sources: government organizations, private organizations and schools.

  5. Evaluating a nurse-led model for providing neonatal care.

    Science.gov (United States)

    2004-07-01

    The paper presents an overview of a multi-dimensional, prospective, comparative 5-year audit of the quality of the neonatal care provided by a maternity unit in the UK delivering 2000 babies a year, where all neonatal care after 1995 was provided by advanced neonatal nurse practitioners, in relation to that provided by a range of other medically staffed comparator units. The audit includes 11 separate comparative studies supervised by a panel of independent external advisors. Data on intrapartum and neonatal mortality are reported. A review of resuscitation at birth, and a two-tier confidential inquiry into sentinel events in six units, were carried out. The reliability of the routine predischarge neonatal examination was studied and, in particular, the recognition of congenital heart disease. A review of the quality of postdischarge letters was undertaken alongside an interview survey to elicit parental views on care provision. An audit of all hospital readmissions within 28 days of birth is reported. Other areas of study include management of staff stress, perceived adequacy of the training of nurse practitioners coming into post, and an assessment of unit costs. Intrapartum and neonatal death among women with a singleton pregnancy originally booked for delivery in Ashington fell 39% between 1991-1995 and 1996-2000 (5.12 vs. 3.11 deaths per 1000 births); the decline for the whole region was 27% (4.10 vs. 2.99). By all other indicators the quality of care in the nurse-managed unit was as good as, or better than, that in the medically staffed comparator units. An appropriately trained, stable team with a store of experience can deliver cot-side care of a higher quality than staff rostered to this task for a few months to gain experience, and this is probably more important than their medical or nursing background. Factors limiting the on-site availability of medical staff with paediatric expertise do not need to dictate the future disposition of maternity services.

  6. Skill and reliability of climate model ensembles at the Last Glacial Maximum and mid-Holocene

    Directory of Open Access Journals (Sweden)

    J. C. Hargreaves

    2013-03-01

    Paleoclimate simulations provide us with an opportunity to critically confront and evaluate the performance of climate models in simulating the response of the climate system to changes in radiative forcing and other boundary conditions. Hargreaves et al. (2011) analysed the reliability of the Paleoclimate Modelling Intercomparison Project (PMIP2) model ensemble with respect to the MARGO sea surface temperature data synthesis (MARGO Project Members, 2009) for the Last Glacial Maximum (LGM, 21 ka BP). Here we extend that work to include a new comprehensive collection of land surface data (Bartlein et al., 2011), and introduce a novel analysis of the predictive skill of the models. We include output from the PMIP3 experiments, from the two models for which suitable data are currently available. We also perform the same analyses for the PMIP2 mid-Holocene (6 ka BP) ensembles and available proxy data sets. Our results are predominantly positive for the LGM, suggesting that as well as the global mean change, the models can reproduce the observed pattern of change on the broadest scales, such as the overall land–sea contrast and polar amplification, although the more detailed sub-continental scale patterns of change remain elusive. In contrast, our results for the mid-Holocene are substantially negative, with the models failing to reproduce the observed changes with any degree of skill. One cause of this problem could be that the globally- and annually-averaged forcing anomaly is very weak at the mid-Holocene, and so the results are dominated by the more localised regional patterns in the parts of the globe for which data are available. The root cause of the model–data mismatch at these scales is unclear. If the proxy calibration is itself reliable, then representativity error in the data–model comparison and missing climate feedbacks in the models are other possible sources of error.

  7. Circuit design for reliability

    CERN Document Server

    Cao, Yu; Wirth, Gilson

    2015-01-01

    This book presents physical understanding, modeling and simulation, on-chip characterization, layout solutions, and design techniques that are effective in enhancing the reliability of various circuit units. The authors provide readers with techniques for state-of-the-art and future technologies, ranging from technology modeling, fault detection and analysis, and circuit hardening, to reliability management. Provides a comprehensive review of various reliability mechanisms at sub-45nm nodes; Describes practical modeling and characterization techniques for reliability; Includes a thorough presentation of robust design techniques for major VLSI design units; Promotes physical understanding with first-principle simulations.

  8. Possibilities and Limitations of Applying Software Reliability Growth Models to Safety- Critical Software

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Jang, Seung Cheol; Ha, Jae Joo

    2006-01-01

    As digital systems are gradually introduced to nuclear power plants (NPPs), the need for quantitatively analyzing the reliability of the digital systems is also increasing. Kang and Sung identified (1) software reliability, (2) common-cause failures (CCFs), and (3) fault coverage as the three most critical factors in the reliability analysis of digital systems. For the reliability estimation of safety-critical software (the software that is used in safety-critical digital systems), Bayesian Belief Networks (BBNs) seem to be most widely used. The use of BBNs in reliability estimation of safety-critical software is basically a process of indirectly assigning a reliability based on various observed information and experts' opinions. When software testing results or software failure histories are available, we can use a process of directly estimating the reliability of the software using various software reliability growth models such as the Jelinski-Moranda model and Goel-Okumoto's nonhomogeneous Poisson process (NHPP) model. Even though it is generally known that software reliability growth models cannot be applied to safety-critical software due to the small number of failure data expected from the testing of safety-critical software, we try to find the possibilities and corresponding limitations of applying software reliability growth models to safety-critical software.
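    The Goel-Okumoto NHPP model named above can be sketched in a few lines. Here `a` (the expected total number of faults) and `b` (the per-fault detection rate) are illustrative parameter values, which in practice would be estimated from observed failure data.

```python
import math

def go_mean_failures(t, a, b):
    """Goel-Okumoto NHPP mean value function m(t) = a * (1 - e^(-b*t)):
    expected cumulative number of failures observed by testing time t."""
    return a * (1.0 - math.exp(-b * t))

def go_reliability(x, t, a, b):
    """Conditional reliability: probability of no failure in (t, t+x]
    given that testing has run until time t."""
    return math.exp(-(go_mean_failures(t + x, a, b) - go_mean_failures(t, a, b)))

# Illustrative parameters (not from the paper): a = 100 total faults, b = 0.05 per hour.
remaining = 100.0 - go_mean_failures(50.0, 100.0, 0.05)   # faults still expected after 50 h
r = go_reliability(10.0, 50.0, 100.0, 0.05)               # reliability over the next 10 h
```

    As testing time t grows, m(t) saturates toward a and the conditional reliability over a fixed mission time increases, which is the "growth" the record refers to.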

  9. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    DEFF Research Database (Denmark)

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin

    2013-01-01

    Model based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account for animal models. The study data is extracted from … so that model reliabilities could be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by the animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally, the bivariate blending method was, on the other hand, lighter than the single-step method.

  10. Modelling catchment areas for secondary care providers: a case study.

    Science.gov (United States)

    Jones, Simon; Wardlaw, Jessica; Crouch, Susan; Carolan, Michelle

    2011-09-01

    Hospitals need to understand patient flows in an increasingly competitive health economy. New initiatives like Patient Choice and the Darzi Review further increase this demand. Essential to understanding patient flows are demographic and geographic profiles of health care service providers, known as 'catchment areas' and 'catchment populations'. This information helps Primary Care Trusts (PCTs) to review how their populations are accessing services, measure inequalities and commission services; likewise it assists Secondary Care Providers (SCPs) to measure and assess potential gains in market share, redesign services, evaluate admission thresholds and plan financial budgets. Unlike PCTs, SCPs do not operate within fixed geographic boundaries. Traditionally, SCPs have used administrative boundaries or arbitrary drive times to model catchment areas. Neither approach satisfactorily represents current patient flows. Furthermore, these techniques are time-consuming and can be challenging for healthcare managers to exploit. This paper presents three different approaches to define catchment areas, each more detailed than the previous method. The first approach 'First Past the Post' defines catchment areas by allocating a dominant SCP to each Census Output Area (OA). The SCP with the highest proportion of activity within each OA is considered the dominant SCP. The second approach 'Proportional Flow' allocates activity proportionally to each OA. This approach allows for cross-boundary flows to be captured in a catchment area. The third and final approach uses a gravity model to define a catchment area, which incorporates drive or travel time into the analysis. Comparing approaches helps healthcare providers to understand whether using more traditional and simplistic approaches to define catchment areas and populations achieves the same or similar results as complex mathematical modelling. This paper has demonstrated, using a case study of Manchester, that when estimating
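    The first two approaches described in this record can be sketched as follows. The `activity` mapping of `(output_area, provider)` pairs to episode counts is a hypothetical data structure chosen for illustration, not the format used in the study.

```python
from collections import defaultdict

def first_past_the_post(activity):
    """'First Past the Post': allocate each Census Output Area (OA) to the
    single provider with the largest share of that OA's activity."""
    totals = defaultdict(lambda: defaultdict(int))
    for (oa, scp), n in activity.items():
        totals[oa][scp] += n
    return {oa: max(by_scp, key=by_scp.get) for oa, by_scp in totals.items()}

def proportional_flow(activity):
    """'Proportional Flow': share each OA's activity proportionally across
    providers, so cross-boundary flows are retained in the catchment."""
    totals = defaultdict(lambda: defaultdict(int))
    for (oa, scp), n in activity.items():
        totals[oa][scp] += n
    return {oa: {scp: n / sum(by_scp.values()) for scp, n in by_scp.items()}
            for oa, by_scp in totals.items()}

# Hypothetical example: OA1 sends 8 episodes to provider A and 2 to provider B.
activity = {("OA1", "A"): 8, ("OA1", "B"): 2, ("OA2", "B"): 5}
```

    The gravity-model variant would additionally weight each flow by a decreasing function of drive or travel time, which is why it is described as the most detailed of the three.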

  11. Procedure for Application of Software Reliability Growth Models to NPP PSA

    International Nuclear Information System (INIS)

    Son, Han Seong; Kang, Hyun Gook; Chang, Seung Cheol

    2009-01-01

    As the use of software increases at nuclear power plants (NPPs), the necessity of including software reliability and/or safety in the NPP Probabilistic Safety Assessment (PSA) rises. This work proposes an application procedure for software reliability growth models (RGMs), which are most widely used to quantify software reliability, to NPP PSA. Through the proposed procedure, it can be determined whether a software reliability growth model can be applied to the NPP PSA before its actual application. The procedure proposed in this work is expected to be very helpful for incorporating software into NPP PSA.

  12. Time-dependent reliability analysis of nuclear reactor operators using probabilistic network models

    International Nuclear Information System (INIS)

    Oka, Y.; Miyata, K.; Kodaira, H.; Murakami, S.; Kondo, S.; Togo, Y.

    1987-01-01

    Human factors are very important for the reliability of a nuclear power plant. Human behavior has an essentially time-dependent nature. The details of thinking and decision-making processes are important for detailed analysis of human reliability. They have, however, not been well considered by conventional methods of human reliability analysis. The present paper describes models for time-dependent and detailed human reliability analysis. Recovery by an operator is taken into account, and two-operator models are also presented.

  13. Piping reliability model development, validation and its applications to light water reactor piping

    International Nuclear Information System (INIS)

    Woo, H.H.

    1983-01-01

    A brief description is provided of a three-year effort undertaken by the Lawrence Livermore National Laboratory for the piping reliability project. The ultimate goal of this project is to provide guidance for nuclear piping design so that high-reliability piping systems can be built. Based on the results studied so far, it is concluded that the reliability approach can undoubtedly help in understanding not only how to assess and improve the safety of the piping systems but also how to design more reliable piping systems

  14. Dynamic reliability modeling of three-state networks

    OpenAIRE

    Ashrafi, S.; Asadi, M.

    2014-01-01

    This paper is an investigation into the reliability and stochastic properties of three-state networks. We consider a single-step network consisting of n links and we assume that the links are subject to failure. We assume that the network can be in three states, up (K = 2), partial performance (K = 1), and down (K = 0). Using the concept of the two-dimensional signature, we study the residual lifetimes of the networks under different scenarios on the states and the number of...

  15. Suncor maintenance and reliability

    Energy Technology Data Exchange (ETDEWEB)

    Little, S. [Suncor Energy, Calgary, AB (Canada)

    2006-07-01

    Fleet maintenance and reliability at Suncor Energy was discussed in this presentation, with reference to Suncor Energy's primary and support equipment fleets. This paper also discussed Suncor Energy's maintenance and reliability standard involving people, processes and technology. An organizational maturity chart that graphed organizational learning against organizational performance was illustrated. The presentation also reviewed the maintenance and reliability framework; maintenance reliability model; the process overview of the maintenance and reliability standard; a process flow chart of maintenance strategies and programs; and an asset reliability improvement process flow chart. An example of an improvement initiative was included, with reference to a shovel reliability review; a dipper trip reliability investigation; bucket related failures by type and frequency; root cause analysis of the reliability process; and additional actions taken. Last, the presentation provided a graph of the results of the improvement initiative and presented the key lessons learned. tabs., figs.

  16. The cognitive environment simulation as a tool for modeling human performance and reliability

    International Nuclear Information System (INIS)

    Woods, D.D.; Pople, H. Jr.; Roth, E.M.

    1990-01-01

    The US Nuclear Regulatory Commission is sponsoring a research program to develop improved methods to model the cognitive behavior of nuclear power plant (NPP) personnel. Under this program, a tool for simulating how people form intentions to act in NPP emergency situations was developed using artificial intelligence (AI) techniques. This tool is called the Cognitive Environment Simulation (CES). The Cognitive Reliability Assessment Technique (or CREATE) was also developed to specify how CES can be used to enhance the measurement of the human contribution to risk in probabilistic risk assessment (PRA) studies. The next step in the research program was to evaluate the modeling tool and the method for using the tool for Human Reliability Analysis (HRA) in PRAs. Three evaluation activities were conducted. First, a panel of highly distinguished experts in cognitive modeling, AI, PRA and HRA provided a technical review of the simulation development work. Second, based on panel recommendations, CES was exercised on a family of steam generator tube rupture incidents where empirical data on operator performance already existed. Third, a workshop with HRA practitioners was held to analyze a worked example of the CREATE method to evaluate the role of CES/CREATE in HRA. The results of all three evaluations indicate that CES/CREATE represents a promising approach to modeling operator intention formation during emergency operations.

  17. Recommendations for Assessment of the Reliability, Sensitivity, and Validity of Data Provided by Wearable Sensors Designed for Monitoring Physical Activity.

    Science.gov (United States)

    Düking, Peter; Fuss, Franz Konstantin; Holmberg, Hans-Christer; Sperlich, Billy

    2018-04-30

    Although it is becoming increasingly popular to monitor parameters related to training, recovery, and health with wearable sensor technology (wearables), scientific evaluation of the reliability, sensitivity, and validity of such data is limited and, where available, has involved a wide variety of approaches. To improve the trustworthiness of data collected by wearables and facilitate comparisons, we have outlined recommendations for standardized evaluation. We discuss the wearable devices themselves, as well as experimental and statistical considerations. Adherence to these recommendations should be beneficial not only for the individual, but also for regulatory organizations and insurance companies. ©Peter Düking, Franz Konstantin Fuss, Hans-Christer Holmberg, Billy Sperlich. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 30.04.2018.

  18. Digital photography provides a fast, reliable, and noninvasive method to estimate anthocyanin pigment concentration in reproductive and vegetative plant tissues.

    Science.gov (United States)

    Del Valle, José C; Gallardo-López, Antonio; Buide, Mª Luisa; Whittall, Justen B; Narbona, Eduardo

    2018-03-01

    Anthocyanin pigments have become a model trait for evolutionary ecology as they often provide adaptive benefits for plants. Anthocyanins have been traditionally quantified biochemically or, more recently, using spectral reflectance. However, both methods require destructive sampling and can be labor intensive and challenging with small samples. Recent advances in digital photography and image processing make it the method of choice for measuring color in the wild. Here, we use digital images as a quick, noninvasive method to estimate relative anthocyanin concentrations in species exhibiting color variation. Using a consumer-level digital camera and a free image processing toolbox, we extracted RGB values from digital images to generate color indices. We tested petals, stems, pedicels, and calyces of six species, which contain different types of anthocyanin pigments and exhibit different pigmentation patterns. Color indices were assessed by their correlation to biochemically determined anthocyanin concentrations. For comparison, we also calculated color indices from spectral reflectance and tested the correlation with anthocyanin concentration. Indices perform differently depending on the nature of the color variation. For both digital images and spectral reflectance, the most accurate estimates of anthocyanin concentration emerge from anthocyanin content-chroma ratio, anthocyanin content-chroma basic, and strength of green indices. Color indices derived from both digital images and spectral reflectance strongly correlate with biochemically determined anthocyanin concentration; however, the estimates from digital images performed better than spectral reflectance in terms of r² and normalized root-mean-square error. This was particularly noticeable in a species with striped petals, but in the case of striped calyces, both methods showed a comparable relationship with anthocyanin concentration. Using digital images brings new opportunities to accurately quantify the
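    As an illustration of the kind of RGB-based index involved, a "strength of green" ratio can be computed directly from pixel values. This particular formula is a hypothetical simplification for illustration only, not necessarily the exact index definition used in the study; anthocyanin-rich (red/purple) tissue would score low on it.

```python
def strength_of_green(r, g, b):
    """Fraction of total RGB intensity contributed by the green channel.
    Low values suggest red/purple (anthocyanin-rich) tissue; hypothetical index form."""
    total = r + g + b
    return g / total if total else 0.0

# A reddish petal pixel vs. a green leaf pixel (illustrative values):
petal = strength_of_green(180, 40, 60)   # low green fraction
leaf = strength_of_green(60, 160, 50)    # high green fraction
```

    Such an index would then be calibrated against biochemically determined anthocyanin concentrations, as the record describes.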

  19. Power electronics reliability analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Mark A.; Atcitty, Stanley

    2009-12-01

    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
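    The fault-tree approach described above reduces, in the simplest case, to combining component reliabilities through series (any failure fails the system, an OR of failure events) and parallel/redundant (all components must fail, an AND of failure events) structures. This sketch assumes statistically independent component failures, which a real fault-tree analysis would need to justify.

```python
def series_reliability(rs):
    """System works only if every component works (failure OR gate)."""
    p = 1.0
    for r in rs:
        p *= r
    return p

def parallel_reliability(rs):
    """Redundant system fails only if all components fail (failure AND gate)."""
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

# Two components at 0.9 reliability each:
# in series the system is less reliable (0.81); in parallel, more (0.99).
```

    Nesting these two functions over the tree structure yields the system-level reliability from component-level data, which is the derivation step the report describes.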

  20. MAPPS (Maintenance Personnel Performance Simulation): a computer simulation model for human reliability analysis

    International Nuclear Information System (INIS)

    Knee, H.E.; Haas, P.M.

    1985-01-01

    A computer model has been developed, sensitivity tested, and evaluated that is capable of generating reliable estimates of human performance measures in the nuclear power plant (NPP) maintenance context. The model, entitled MAPPS (Maintenance Personnel Performance Simulation), is of the simulation type and is task-oriented. It addresses a number of person-machine, person-environment, and person-person variables and is capable of providing the user with a rich spectrum of important performance measures, including the mean time for successful task performance by a maintenance team and the maintenance team's probability of task success. These two measures are particularly important as input to probabilistic risk assessment (PRA) studies, which were the primary impetus for the development of MAPPS. The simulation nature of the model, along with its generous input parameters and output variables, allows its usefulness to extend beyond providing input to PRA.

  1. Theory model and experiment research about the cognition reliability of nuclear power plant operators

    International Nuclear Information System (INIS)

    Fang Xiang; Zhao Bingquan

    2000-01-01

    In order to improve the reliability of NPP operation, simulation research on the reliability of nuclear power plant operators is needed. Using a nuclear power plant simulator as the research platform, and taking the current international reliability research model, human cognitive reliability, for reference, part of the model is modified according to the actual status of Chinese nuclear power plant operators, and a research model for Chinese nuclear power plant operators is obtained based on the two-parameter Weibull distribution. Experiments on the reliability of nuclear power plant operators are carried out using the two-parameter Weibull distribution research model, and the results agree with those reported internationally. The research would be beneficial to the operational safety of nuclear power plants.
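    As a minimal sketch, a two-parameter Weibull model of the kind referred to above gives the probability that an operator has not yet responded by time t. The scale `eta` and shape `beta` values below are illustrative assumptions, not the parameters estimated in the study.

```python
import math

def weibull_nonresponse(t, eta, beta):
    """Two-parameter Weibull survival function: probability that the
    operator has NOT yet responded by (normalized) time t."""
    return math.exp(-((t / eta) ** beta))

# Illustrative parameters: scale eta = 2.0, shape beta = 1.5.
# Non-response probability decreases monotonically from 1.0 at t = 0,
# passing through e^-1 at t = eta.
p_early = weibull_nonresponse(0.5, 2.0, 1.5)
p_late = weibull_nonresponse(4.0, 2.0, 1.5)
```

    Fitting eta and beta to simulator-experiment response times is what tailors the model to a specific operator population.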

  2. Stochastic modeling for reliability shocks, burn-in and heterogeneous populations

    CERN Document Server

    Finkelstein, Maxim

    2013-01-01

    Focusing on shock modeling, burn-in and heterogeneous populations, Stochastic Modeling for Reliability naturally combines these three topics in a unified stochastic framework and presents numerous practical examples that illustrate recent theoretical findings of the authors. The populations of manufactured items in industry are usually heterogeneous. However, conventional reliability analysis is performed under the implicit assumption of homogeneity, which can result in distortion of the corresponding reliability indices and various misconceptions. Stochastic Modeling for Reliability fills this gap and presents the basics and further developments of reliability theory for heterogeneous populations. Specifically, the authors consider burn-in as a method of eliminating ‘weak’ items from heterogeneous populations. Real-life objects operate in a changing environment. One of the ways to model the impact of this environment is via external shocks occurring in accordance with some stocha...

  3. Research on cognitive reliability model for main control room considering human factors in nuclear power plants

    International Nuclear Information System (INIS)

    Jiang Jianjun; Zhang Li; Wang Yiqun; Zhang Kun; Peng Yuyuan; Zhou Cheng

    2012-01-01

    To address the shortcomings of traditional cognitive factors and cognitive models, this paper presents a Bayesian network cognitive reliability model, taking the main control room as the reference background and human factors as the key points. The model mainly analyzes the cognitive reliability affected by human factors, and for each cognitive node and the influence factors corresponding to it, a series of methods and function formulas to compute the node cognitive reliability are proposed. The model and corresponding methods can be applied to the evaluation of the cognitive process of nuclear power plant operators and have a certain significance for the prevention of safety accidents in nuclear power plants. (authors)

  4. Model case IRS-RWE for the determination of reliability data in practical operation

    Energy Technology Data Exchange (ETDEWEB)

    Hoemke, P; Krause, H

    1975-11-01

    Reliability and availability analyses are carried out to assess the safety of nuclear power plants. The paper deals in its first part with the accuracy requirements for the input data of such analyses, and in its second part with the prototype collection of reliability data 'Model case IRS-RWE'. The objectives and the structure of the data collection are described. The present results show that the estimation of reliability data in power plants is possible and gives reasonable results.

  5. Estimating the Parameters of Software Reliability Growth Models Using the Grey Wolf Optimization Algorithm

    OpenAIRE

    Alaa F. Sheta; Amal Abdel-Raouf

    2016-01-01

    In this age of technology, building quality software is essential to competing in the business market. One of the major principles required for any quality and business software product for value fulfillment is reliability. Estimating software reliability early during the software development life cycle saves time and money as it prevents spending larger sums fixing a defective software product after deployment. The Software Reliability Growth Model (SRGM) can be used to predict the number of...

  6. National Water Model: Providing the Nation with Actionable Water Intelligence

    Science.gov (United States)

    Aggett, G. R.; Bates, B.

    2017-12-01

    The National Water Model (NWM) provides national, street-level detail of water movement through time and space. Operating hourly, this flood of information offers enormous benefits in the form of water resource management, natural disaster preparedness, and the protection of life and property. The Geo-Intelligence Division at the NOAA National Water Center supplies forecasters and decision-makers with timely, actionable water intelligence through the processing of billions of NWM data points every hour. These datasets include current streamflow estimates, short and medium range streamflow forecasts, and many other ancillary datasets. The sheer amount of NWM data produced yields a dataset too large to allow for direct human comprehension. As such, it is necessary to undergo model data post-processing, filtering, and data ingestion by visualization web apps that make use of cartographic techniques to bring attention to the areas of highest urgency. This poster illustrates NWM output post-processing and cartographic visualization techniques being developed and employed by the Geo-Intelligence Division at the NOAA National Water Center to provide national actionable water intelligence.

  7. The Healthcare Improvement Scotland evidence note rapid review process: providing timely, reliable evidence to inform imperative decisions on healthcare.

    Science.gov (United States)

    McIntosh, Heather M; Calvert, Julie; Macpherson, Karen J; Thompson, Lorna

    2016-06-01

    Rapid review has become widely adopted by health technology assessment agencies in response to demand for evidence-based information to support imperative decisions. Concern about the credibility of rapid reviews and the reliability of their findings has prompted a call for wider publication of their methods. In publishing this overview of the accredited rapid review process developed by Healthcare Improvement Scotland, we aim to raise awareness of our methods and advance the discourse on best practice. Healthcare Improvement Scotland produces rapid reviews called evidence notes using a process that has achieved external accreditation through the National Institute for Health and Care Excellence. Key components include a structured approach to topic selection, initial scoping, considered stakeholder involvement, streamlined systematic review, internal quality assurance, external peer review and updating. The process was introduced in 2010 and continues to be refined over time in response to user feedback and operational experience. Decision-makers value the responsiveness of the process and perceive it as being a credible source of unbiased evidence-based information supporting advice for NHSScotland. Many agencies undertaking rapid reviews are striving to balance efficiency with methodological rigour. We agree that there is a need for methodological guidance and that it should be informed by better understanding of current approaches and the consequences of different approaches to streamlining systematic review methods. Greater transparency in the reporting of rapid review methods is essential to enable that to happen.

  8. Reliability-cost models for the power switching devices of wind power converters

    DEFF Research Database (Denmark)

    Ma, Ke; Blaabjerg, Frede

    2012-01-01

    In order to satisfy the growing reliability requirements for the wind power converters with more cost-effective solution, the target of this paper is to establish a new reliability-cost model which can connect the relationship between reliability performances and corresponding semiconductor cost...... temperature mean value Tm and fluctuation amplitude ΔTj of power devices, are presented. With the proposed reliability-cost model, it is possible to enable future reliability-oriented design of the power switching devices for wind power converters, and also an evaluation benchmark for different wind power...... for power switching devices. First the conduction loss, switching loss as well as thermal impedance models of power switching devices (IGBT module) are related to the semiconductor chip number information respectively. Afterwards simplified analytical solutions, which can directly extract the junction...

  9. The explicit treatment of model uncertainties in the presence of aleatory and epistemic parameter uncertainties in risk and reliability analysis

    International Nuclear Information System (INIS)

    Ahn, Kwang Il; Yang, Joon Eon

    2003-01-01

In the risk and reliability analysis of complex technological systems, the primary concern of formal uncertainty analysis is to understand why uncertainties arise, and to evaluate how they impact the results of the analysis. In recent times, many of the uncertainty analyses have focused on parameters of the risk and reliability analysis models, whose values are uncertain in an aleatory or an epistemic way. As the field of parametric uncertainty analysis matures, however, more attention is being paid to the explicit treatment of uncertainties that are addressed in the predictive model itself, as well as the accuracy of the predictive model. The essential steps for evaluating the impacts of these model uncertainties in the presence of parameter uncertainties are to determine rigorously the various sources of uncertainty to be addressed in the underlying model itself and in turn in the model parameters, based on our state of knowledge and relevant evidence. Answering clearly the question of how to characterize and treat explicitly the foregoing different sources of uncertainty is particularly important for practical aspects such as risk and reliability optimization of systems, as well as for more transparent risk information and decision-making under various uncertainties. The main purpose of this paper is to provide practical guidance for quantitatively treating various model uncertainties that are often encountered in the risk and reliability modeling process of complex technological systems.

  10. Reliability Based Optimal Design of Vertical Breakwaters Modelled as a Series System Failure

    DEFF Research Database (Denmark)

    Christiani, E.; Burcharth, H. F.; Sørensen, John Dalsgaard

    1996-01-01

    Reliability based design of monolithic vertical breakwaters is considered. Probabilistic models of important failure modes such as sliding and rupture failure in the rubble mound and the subsoil are described. Characterisation of the relevant stochastic parameters are presented, and relevant design...... variables are identified and an optimal system reliability formulation is presented. An illustrative example is given....
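The abstract treats the breakwater as a series system: the structure fails if any single failure mode (sliding, rubble-mound rupture, subsoil rupture) occurs. A minimal sketch of that system-level failure probability, using made-up annual mode probabilities and an independence assumption that the paper's probabilistic models would of course refine:

```python
# Hypothetical illustration: failure probability of a series system
# (the breakwater fails if ANY failure mode occurs), assuming
# independent failure modes with invented annual probabilities.

def series_failure_prob(p_modes):
    """P(system fails) = 1 - product of mode survival probabilities."""
    survival = 1.0
    for p in p_modes:
        survival *= (1.0 - p)
    return 1.0 - survival

# Illustrative annual probabilities: sliding, rubble-mound rupture, subsoil rupture
p_modes = [0.01, 0.005, 0.002]
p_sys = series_failure_prob(p_modes)

# Simple bounds that hold regardless of dependence between the modes:
lower = max(p_modes)             # fully correlated failure modes
upper = min(1.0, sum(p_modes))   # Boole's inequality upper bound
assert lower <= p_sys <= upper
```

The bounds show how much the dependence structure matters, which is one reason the paper builds explicit stochastic models of the individual modes.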

  11. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  12. Reliability Models Applied to a System of Power Converters in Particle Accelerators

    OpenAIRE

    Siemaszko, D; Speiser, M; Pittet, S

    2012-01-01

    Several reliability models are studied when applied to a power system containing a large number of power converters. A methodology is proposed and illustrated in the case study of a novel linear particle accelerator designed for reaching high energies. The proposed methods result in the prediction of both reliability and availability of the considered system for optimisation purposes.

  13. Governance, Government, and the Search for New Provider Models

    Directory of Open Access Journals (Sweden)

    Richard B. Saltman

    2016-01-01

Full Text Available A central problem in designing effective models of provider governance in health systems has been to ensure an appropriate balance between the concerns of public sector and/or government decision-makers, on the one hand, and of non-governmental health services actors in civil society and private life, on the other. In tax-funded European health systems up to the 1980s, the state and other public sector decision-makers played a dominant role over health service provision, typically operating hospitals through national or regional governments on a command-and-control basis. In a number of countries, however, this state role has started to change, with governments first stepping out of direct service provision and now de facto pushed to focus more on steering provider organizations rather than on direct public management. In this new approach to provider governance, the state has pulled back into a regulatory role that introduces market-like incentives and management structures, which then apply to both public and private sector providers alike. This article examines some of the main operational complexities in implementing this new governance reality/strategy, specifically from a service provision (as opposed to mostly a financing or even regulatory) perspective. After briefly reviewing some of the key theoretical dilemmas, the paper presents two case studies where this new approach was put into practice: primary care in Sweden and hospitals in Spain. The article concludes that good governance today needs to reflect practical operational realities if it is to have the desired effect on health sector reform outcome.

  14. Governance, Government, and the Search for New Provider Models.

    Science.gov (United States)

    Saltman, Richard B; Duran, Antonio

    2015-11-03

    A central problem in designing effective models of provider governance in health systems has been to ensure an appropriate balance between the concerns of public sector and/or government decision-makers, on the one hand, and of non-governmental health services actors in civil society and private life, on the other. In tax-funded European health systems up to the 1980s, the state and other public sector decision-makers played a dominant role over health service provision, typically operating hospitals through national or regional governments on a command-and-control basis. In a number of countries, however, this state role has started to change, with governments first stepping out of direct service provision and now de facto pushed to focus more on steering provider organizations rather than on direct public management. In this new approach to provider governance, the state has pulled back into a regulatory role that introduces market-like incentives and management structures, which then apply to both public and private sector providers alike. This article examines some of the main operational complexities in implementing this new governance reality/strategy, specifically from a service provision (as opposed to mostly a financing or even regulatory) perspective. After briefly reviewing some of the key theoretical dilemmas, the paper presents two case studies where this new approach was put into practice: primary care in Sweden and hospitals in Spain. The article concludes that good governance today needs to reflect practical operational realities if it is to have the desired effect on health sector reform outcome. © 2016 by Kerman University of Medical Sciences.

  15. Analysis of Statistical Distributions Used for Modeling Reliability and Failure Rate of Temperature Alarm Circuit

    International Nuclear Information System (INIS)

    EI-Shanshoury, G.I.

    2011-01-01

Several statistical distributions are used to model various reliability and maintainability parameters. Which distribution applies depends on the nature of the data being analyzed. The present paper analyzes some statistical distributions used in reliability in order to identify the best-fitting distribution. The calculations rely on circuit quantity parameters obtained by using the Relex 2009 computer program. The statistical analysis of ten different distributions indicated that the Weibull distribution gives the best fit for modeling the reliability of the data set of the Temperature Alarm Circuit (TAC). However, the Exponential distribution is found to be the best fit for modeling the failure rate.
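The kind of best-fit analysis the abstract describes (performed there with Relex) can be approximated by hand with a classic Weibull probability plot: linearize the Weibull CDF and fit a straight line by least squares. A rough sketch on synthetic failure times; all numbers are illustrative, not the TAC data:

```python
import math
import random

def weibull_plot_fit(samples):
    """Estimate Weibull shape (beta) and scale (eta) by least squares on
    the linearized CDF:  ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta)."""
    xs = sorted(samples)
    n = len(xs)
    # Median-rank plotting positions (Benard's approximation)
    pts = [(i + 1 - 0.3) / (n + 0.4) for i in range(n)]
    X = [math.log(t) for t in xs]
    Y = [math.log(-math.log(1.0 - F)) for F in pts]
    mx, my = sum(X) / n, sum(Y) / n
    beta = sum((x - mx) * (y - my) for x, y in zip(X, Y)) / \
           sum((x - mx) ** 2 for x in X)
    eta = math.exp(mx - my / beta)   # from the fitted intercept
    return beta, eta

random.seed(1)
# Synthetic failure times drawn from Weibull(scale=1000, shape=1.8)
data = [random.weibullvariate(1000.0, 1.8) for _ in range(500)]
beta, eta = weibull_plot_fit(data)  # estimates should be near 1.8 and 1000
```

A goodness-of-fit statistic (e.g. Kolmogorov-Smirnov) computed for each candidate distribution is then what ranks Weibull against the Exponential and the rest, as in the paper's ten-distribution comparison.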

  16. Reliability Modeling of Electromechanical System with Meta-Action Chain Methodology

    Directory of Open Access Journals (Sweden)

    Genbao Zhang

    2018-01-01

Full Text Available To establish a more flexible and accurate reliability model, this paper uses reliability modeling and a solving algorithm based on the meta-action chain concept. Instead of estimating the reliability of the whole system only in the standard operating mode, it adopts the structure chain and the operating action chain for system reliability modeling. The failure information and structure information for each component are integrated into the model to overcome the given factors applied in traditional modeling. In industrial applications, there may be different operating modes for a multicomponent system. The meta-action chain methodology can estimate the system reliability under different operating modes by modeling the components with a variety of failure sensitivities. The approach has been validated by computing several electromechanical system cases. The results indicate that the process can improve system reliability estimation. It is an effective tool for solving the reliability estimation problem in systems under various operating modes.

  17. Development of an Environment for Software Reliability Model Selection

    Science.gov (United States)

    1992-09-01

now is directed to other related problems such as tools for model selection, multiversion programming, and software fault tolerance modeling... Hardware can be repaired by spare modules, which is not the case for software... Preventive maintenance is very important

  18. Fatigue reliability and effective turbulence models in wind farms

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Frandsen, Sten Tronæs; Tarp-Johansen, N.J.

    2007-01-01

    behind wind turbines can imply a significant reduction in the fatigue lifetime of wind turbines placed in wakes. In this paper the design code model in the wind turbine code IEC 61400-1 (2005) is evaluated from a probabilistic point of view, including the importance of modeling the SN-curve by linear...

  19. Powering stochastic reliability models by discrete event simulation

    DEFF Research Database (Denmark)

    Kozine, Igor; Wang, Xiaoyun

    2012-01-01

    it difficult to find a solution to the problem. The power of modern computers and recent developments in discrete-event simulation (DES) software enable to diminish some of the drawbacks of stochastic models. In this paper we describe the insights we have gained based on using both Markov and DES models...
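The trade-off the abstract discusses can be seen on the smallest possible case, a single repairable component: the two-state Markov model gives steady-state availability mu/(lambda+mu) in closed form, and a crude discrete-event simulation should converge to the same number. A hedged sketch; the rates are made-up and the paper's models are far richer:

```python
import random

def simulate_availability(lam, mu, horizon, seed=0):
    """Crude discrete-event simulation of one repairable component:
    alternate exponential up-times (failure rate lam) and repair
    times (repair rate mu), and measure the fraction of the horizon
    spent in the 'up' state."""
    rng = random.Random(seed)
    t, up_time, state_up = 0.0, 0.0, True
    while t < horizon:
        dt = rng.expovariate(lam if state_up else mu)
        dt = min(dt, horizon - t)   # do not run past the horizon
        if state_up:
            up_time += dt
        t += dt
        state_up = not state_up
    return up_time / horizon

lam, mu = 0.01, 0.1            # illustrative failure and repair rates (1/h)
a_sim = simulate_availability(lam, mu, horizon=1e6)
a_markov = mu / (lam + mu)     # closed-form 2-state Markov availability
```

The DES estimate agrees with the Markov result here, but unlike the Markov model it generalizes directly to non-exponential repair times, queued repair crews, and other features that make analytic solutions intractable, which is the paper's point.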

  20. Wind Farm Reliability Modelling Using Bayesian Networks and Semi-Markov Processes

    Directory of Open Access Journals (Sweden)

    Robert Adam Sobolewski

    2015-09-01

Full Text Available Technical reliability plays an important role among factors affecting the power output of a wind farm. The reliability is determined by an internal collection grid topology and reliability of its electrical components, e.g. generators, transformers, cables, switch breakers, protective relays, and busbars. A wind farm reliability’s quantitative measure can be the probability distribution of combinations of operating and failed states of the farm’s wind turbines. The operating state of a wind turbine is its ability to generate power and to transfer it to an external power grid, which means the availability of the wind turbine and other equipment necessary for the power transfer to the external grid. This measure can be used for quantitative analysis of the impact of various wind farm topologies and the reliability of individual farm components on the farm reliability, and for determining the expected farm output power with consideration of the reliability. This knowledge may be useful in an analysis of power generation reliability in power systems. The paper presents probabilistic models that quantify the wind farm reliability taking into account the above-mentioned technical factors. To formulate the reliability models Bayesian networks and semi-Markov processes were used. Using Bayesian networks the wind farm structural reliability was mapped, as well as quantitative characteristics describing equipment reliability. To determine the characteristics semi-Markov processes were used. The paper presents an example calculation of: (i) probability distribution of the combination of both operating and failed states of four wind turbines included in the wind farm, and (ii) expected wind farm output power with consideration of its reliability.
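For the first of the two example calculations mentioned above, a toy version is easy to write down if one assumes independent turbines; the paper's Bayesian-network models exist precisely to relax that assumption by capturing shared grid components. The availabilities and rated power below are invented:

```python
from itertools import product

def state_distribution(avail):
    """Probability of each operating(1)/failed(0) combination for
    independent turbines, given each turbine's availability.
    (Toy assumption: real farms share cables, busbars, etc.)"""
    dist = {}
    for states in product([1, 0], repeat=len(avail)):
        p = 1.0
        for s, a in zip(states, avail):
            p *= a if s else (1.0 - a)
        dist[states] = p
    return dist

avail = [0.95, 0.95, 0.95, 0.95]   # illustrative turbine availabilities
rated = 2.0                         # illustrative rated power per turbine, MW

dist = state_distribution(avail)    # 16 state combinations for 4 turbines
expected_mw = sum(p * sum(states) * rated for states, p in dist.items())
```

With independent turbines the expected output collapses to n * availability * rated power; the value of the full state distribution is that it survives when dependence (a failed busbar taking out several turbines) is introduced.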

  1. Cognitive modelling: a basic complement of human reliability analysis

    International Nuclear Information System (INIS)

    Bersini, U.; Cacciabue, P.C.; Mancini, G.

    1988-01-01

    In this paper the issues identified in modelling humans and machines are discussed in the perspective of the consideration of human errors managing complex plants during incidental as well as normal conditions. The dichotomy between the use of a cognitive versus a behaviouristic model approach is discussed and the complementarity aspects rather than the differences of the two methods are identified. A cognitive model based on a hierarchical goal-oriented approach and driven by fuzzy logic methodology is presented as the counterpart to the 'classical' THERP methodology for studying human errors. Such a cognitive model is discussed at length and its fundamental components, i.e. the High Level Decision Making and the Low Level Decision Making models, are reviewed. Finally, the inadequacy of the 'classical' THERP methodology to deal with cognitive errors is discussed on the basis of a simple test case. For the same case the cognitive model is then applied showing the flexibility and adequacy of the model to dynamic configuration with time-dependent failures of components and with consequent need for changing of strategy during the transient itself. (author)

  2. Construction of a reliable model pyranometer for irradiance ...

    African Journals Online (AJOL)

    USER

    2010-03-22

    Mar 22, 2010 ... hour, latitude and cloud cover are the most widely or commonly used ... models in the Nigerian environment include that of Burari and Sambo .... influence the stability of the assembly (reducing its phase ... earth's surface.

  3. Reliability of Current Biokinetic and Dosimetric Models for Radionuclides: A Pilot Study

    Energy Technology Data Exchange (ETDEWEB)

    Leggett, Richard Wayne [ORNL; Eckerman, Keith F [ORNL; Meck, Robert A. [U.S. Nuclear Regulatory Commission

    2008-10-01

This report describes the results of a pilot study of the reliability of the biokinetic and dosimetric models currently used by the U.S. Nuclear Regulatory Commission (NRC) as predictors of dose per unit internal or external exposure to radionuclides. The study examines the feasibility of critically evaluating the accuracy of these models for a comprehensive set of radionuclides of concern to the NRC. Each critical evaluation would include: identification of discrepancies between the models and current databases; characterization of uncertainties in model predictions of dose per unit intake or unit external exposure; characterization of variability in dose per unit intake or unit external exposure; and evaluation of prospects for development of more accurate models. Uncertainty refers here to the level of knowledge of a central value for a population, and variability refers to quantitative differences between different members of a population. This pilot study provides a critical assessment of models for selected radionuclides representing different levels of knowledge of dose per unit exposure. The main conclusions of this study are as follows: (1) To optimize the use of available NRC resources, the full study should focus on radionuclides most frequently encountered in the workplace or environment. A list of 50 radionuclides is proposed. (2) The reliability of a dose coefficient for inhalation or ingestion of a radionuclide (i.e., an estimate of dose per unit intake) may depend strongly on the specific application. Multiple characterizations of the uncertainty in a dose coefficient for inhalation or ingestion of a radionuclide may be needed for different forms of the radionuclide and different levels of information of that form available to the dose analyst. (3) A meaningful characterization of variability in dose per unit intake of a radionuclide requires detailed information on the biokinetics of the radionuclide and hence is not feasible for many infrequently

  4. Providing surgical care in Somalia: A model of task shifting.

    Science.gov (United States)

    Chu, Kathryn M; Ford, Nathan P; Trelles, Miguel

    2011-07-15

Somalia is one of the most politically unstable countries in the world. Ongoing insecurity has forced an inconsistent medical response by the international community, with little data collection. This paper describes the "remote" model of surgical care by Medecins Sans Frontieres in Guri-El, Somalia. The challenges of providing the necessary prerequisites for safe surgery are discussed, as well as the successes and limitations of task shifting in this resource-limited context. In January 2006, MSF opened a project in Guri-El, located between Mogadishu and Galcayo. The objectives were to reduce mortality due to complications of pregnancy and childbirth and from violent and non-violent trauma. At the start of the program, expatriate surgeons and anesthesiologists established safe surgical practices and performed surgical procedures. After January 2008, expatriates were evacuated due to insecurity and surgical care has been provided by local Somalian doctors and nurses with periodic supervisory visits from expatriate staff. Between October 2006 and December 2009, 2086 operations were performed on 1602 patients. The majority (1049, 65%) were male and the median age was 22 (interquartile range, 17-30). 1460 (70%) of interventions were emergent. Trauma accounted for 76% (1585) of all surgical pathology; gunshot wounds accounted for 89% (584) of violent injuries. Operative mortality (0.5% of all surgical interventions) was not higher when Somalian staff provided care than when expatriate surgeons and anesthesiologists did. The delivery of surgical care in any conflict setting is difficult, but in situations where international support is limited, the challenges are more extreme. In this model, task shifting, or the provision of services by less-trained cadres, was utilized, and peri-operative mortality remained low, demonstrating that safe surgical practices can be accomplished even without the presence of fully trained surgeons and anesthesiologists. If security improves

  5. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    Science.gov (United States)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

An aero-engine is a complex mechanical-electronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. Until now, only the two-parameter and three-parameter Weibull distribution models have been widely used. Because of the diversity of engine failure modes, a single Weibull distribution model carries a large error; by contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, so it is a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation coefficient optimization method is applied to enhance the Weibull distribution model and make the reliability estimation more accurate, greatly improving the precision of the mixed-distribution reliability model. All of this helps popularize the Weibull distribution model in engineering applications.
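A mixed Weibull reliability function of the kind the abstract argues for is simply a weighted sum of Weibull survival functions, one per failure mode. A minimal sketch with two invented modes; the fixed weights here stand in for the paper's dynamic weight coefficients, and its correlation-coefficient optimization is not reproduced:

```python
import math

def mixed_weibull_reliability(t, components):
    """R(t) for a Weibull mixture: sum_i w_i * exp(-(t/eta_i)**beta_i).
    Each component is (weight w, shape beta, scale eta); the weights
    are the illustrative proportions of each failure mode."""
    return sum(w * math.exp(-(t / eta) ** beta) for w, beta, eta in components)

# Two hypothetical failure modes of an engine component:
modes = [(0.3, 0.8, 500.0),    # early-failure mode (shape < 1)
         (0.7, 2.5, 2000.0)]   # wear-out mode (shape > 1)

r_1000 = mixed_weibull_reliability(1000.0, modes)  # reliability at t = 1000 h
```

Because the shapes differ, the mixture can reproduce a bathtub-like behavior that no single two- or three-parameter Weibull model captures, which is the motivation the abstract gives for the mixed model.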

  6. Observation Likelihood Model Design and Failure Recovery Scheme toward Reliable Localization of Mobile Robots

    Directory of Open Access Journals (Sweden)

    Chang-bae Moon

    2011-01-01

Full Text Available Although there has been much research on mobile robot localization, it is still difficult to obtain reliable localization performance in a real environment co-existing with humans. Reliability of localization is highly dependent upon the developer's experience, because uncertainty is caused by a variety of reasons. We have developed a range-sensor-based integrated localization scheme for various indoor service robots. Through this experience, we found that there are several significant experimental issues. In this paper, we provide useful solutions for the following questions, which are frequently faced in practical applications: 1) How to design an observation likelihood model? 2) How to detect localization failure? 3) How to recover from localization failure? We present design guidelines for the observation likelihood model. Localization failure detection and recovery schemes are presented, focusing on abrupt wheel slippage. Experiments were carried out in a typical office building environment. The proposed scheme to identify the localizer status is useful in practical environments. Moreover, the semi-global localization is a computationally efficient recovery scheme from localization failure. The results of experiments and analysis clearly present the usefulness of the proposed solutions.

  7. Observation Likelihood Model Design and Failure Recovery Scheme Toward Reliable Localization of Mobile Robots

    Directory of Open Access Journals (Sweden)

    Chang-bae Moon

    2010-12-01

Full Text Available Although there has been much research on mobile robot localization, it is still difficult to obtain reliable localization performance in a real environment co-existing with humans. Reliability of localization is highly dependent upon the developer's experience, because uncertainty is caused by a variety of reasons. We have developed a range-sensor-based integrated localization scheme for various indoor service robots. Through this experience, we found that there are several significant experimental issues. In this paper, we provide useful solutions for the following questions, which are frequently faced in practical applications: 1) How to design an observation likelihood model? 2) How to detect localization failure? 3) How to recover from localization failure? We present design guidelines for the observation likelihood model. Localization failure detection and recovery schemes are presented, focusing on abrupt wheel slippage. Experiments were carried out in a typical office building environment. The proposed scheme to identify the localizer status is useful in practical environments. Moreover, the semi-global localization is a computationally efficient recovery scheme from localization failure. The results of experiments and analysis clearly present the usefulness of the proposed solutions.

  8. Modeling and reliability analysis of three phase z-source AC-AC converter

    Directory of Open Access Journals (Sweden)

    Prasad Hanuman

    2017-12-01

Full Text Available This paper presents the small-signal modeling, using the state-space averaging technique, and reliability analysis of a three-phase z-source ac-ac converter. By controlling the shoot-through duty ratio, it can operate in buck-boost mode and maintain the desired output voltage during voltage sag and surge conditions. It has a faster dynamic response and higher efficiency than the traditional voltage regulator. Small-signal analysis derives the different control transfer functions, which leads to the design of a suitable controller for a closed-loop system under supply voltage variation. The closed-loop system of the converter with a PID controller eliminates the transients in the output voltage and provides a steady-state regulated output. The proposed model was designed in RT-LAB and executed in a field-programmable gate array (FPGA)-based real-time digital simulator at a fixed time step of 10 μs and a constant switching frequency of 10 kHz. The simulator was developed using the very-high-speed integrated circuit hardware description language (VHDL), making it versatile and portable. Hardware-in-the-loop (HIL) simulation results are presented to corroborate the MATLAB simulation results during supply voltage variation of the three-phase z-source ac-ac converter. Reliability analysis has been applied to the converter to find the failure rates of its different components.

  9. Software reliability

    CERN Document Server

    Bendell, A

    1986-01-01

    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth mo

  10. Charge transport models for reliability engineering of semiconductor devices

    International Nuclear Information System (INIS)

    Bina, M.

    2014-01-01

The simulation of semiconductor devices is important for the assessment of device lifetimes before production. In this context, this work investigates the influence of the charge carrier transport model on the accuracy of bias temperature instability and hot-carrier degradation models in MOS devices. For this purpose, a four-state defect model based on non-radiative multi-phonon (NMP) theory is implemented to study the bias temperature instability. However, the doping concentrations typically used in nano-scale devices correspond to only a small number of dopants in the channel, leading to fluctuations of the electrostatic potential. Thus, the granularity of the doping cannot be ignored in these devices. To study the bias temperature instability in the presence of fluctuations of the electrostatic potential, the advanced drift-diffusion device simulator Minimos-NT is employed. In a first effort to understand the bias temperature instability in p-channel MOSFETs at elevated temperatures, data from direct-current-current-voltage measurements is successfully reproduced using a four-state defect model. Differences between the four-state defect model and the commonly employed trapping model of Shockley, Read and Hall (SRH) have been investigated, showing that the SRH model is incapable of reproducing the measurement data. This is in good agreement with the literature, where it has been extensively shown that a model based on SRH theory cannot reproduce the characteristic time constants found in BTI recovery traces. Upon inspection of recorded recovery traces after bias temperature stress in n-channel MOSFETs, it is found that the gate current is strongly correlated with the drain current (recovery trace). Using a random discrete dopant model and non-equilibrium Green's functions, it is shown that direct tunnelling cannot explain the magnitude of the gate current reduction. Instead it is found that trap-assisted tunnelling, modelled using NMP theory, is the cause of this

  11. Reliability modeling of digital component in plant protection system with various fault-tolerant techniques

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang, Hyun Gook; Kim, Hee Eun; Lee, Seung Jun; Seong, Poong Hyun

    2013-01-01

Highlights: • Integrated fault coverage is introduced to reflect the characteristics of fault-tolerant techniques in the reliability model of the digital protection system in NPPs. • The integrated fault coverage considers the process of fault-tolerant techniques from detection through the fail-safe generation process. • With integrated fault coverage, the unavailability of a repairable component of the DPS can be estimated. • The newly developed reliability model can reveal the effects of fault-tolerant techniques explicitly for risk analysis. • The reliability model makes it possible to track changes in unavailability with the variation of diverse factors. - Abstract: With the improvement of digital technologies, the digital protection system (DPS) incorporates multiple sophisticated fault-tolerant techniques (FTTs) in order to increase fault detection and to help the system safely perform the required functions in spite of the possible presence of faults. Fault detection coverage is a vital factor in the reliability of an FTT. However, fault detection coverage alone is insufficient to reflect the effects of various FTTs in a reliability model. To reflect the characteristics of FTTs in the reliability model, integrated fault coverage is introduced. The integrated fault coverage considers the process of an FTT from detection through the fail-safe generation process. A model has been developed to estimate the unavailability of a repairable component of the DPS using the integrated fault coverage. The newly developed model can quantify unavailability under a diversity of conditions. Sensitivity studies are performed to ascertain the important variables which affect the integrated fault coverage and unavailability
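The role fault coverage plays in component unavailability can be illustrated with a standard first-order PSA-style approximation: covered faults are revealed immediately and repaired, while uncovered faults stay latent until the next periodic test. This is a generic textbook formula with invented numbers, not the integrated-fault-coverage model the paper develops:

```python
def mean_unavailability(lam, coverage, mttr, test_interval):
    """Toy steady-state unavailability of a repairable digital component.
    Detected faults (fraction 'coverage') are revealed at once and repaired
    in 'mttr' hours; undetected faults stay latent until the next periodic
    test (mean latency = test_interval / 2). First-order approximation,
    valid for small lam * test_interval."""
    detected = coverage * lam * mttr
    latent = (1.0 - coverage) * lam * (test_interval / 2.0)
    return detected + latent

# Invented parameters: failure rate 1e-5 /h, 99% coverage,
# 8 h repair time, semi-annual (~730 h... here monthly-ish) test interval
u = mean_unavailability(lam=1e-5, coverage=0.99, mttr=8.0, test_interval=730.0)
```

Even in this crude form the sensitivity is visible: the latent term dominates unless coverage is very high, which is why coverage (and, in the paper, the whole detection-to-fail-safe process behind it) is the key variable in the sensitivity studies.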

  12. Reliability Assessment of IGBT Modules Modeled as Systems with Correlated Components

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2013-01-01

    configuration. The estimated system reliability by the proposed method is a conservative estimate. Application of the suggested method could be extended for reliability estimation of systems composing of welding joints, bolts, bearings, etc. The reliability model incorporates the correlation between...... was applied for the systems failure functions estimation. It is desired to compare the results with the true system failure function, which is possible to estimate using simulation techniques. Theoretical model development should be applied for the further research. One of the directions for it might...... be modeling the system based on the Sequential Order Statistics, by considering the failure of the minimum (weakest component) at each loading level. The proposed idea to represent the system by the independent components could also be used for modeling reliability by Sequential Order Statistics....

  13. Reliability Modeling Development and Its Applications for Ceramic Capacitors with Base-Metal Electrodes (BMEs)

    Science.gov (United States)

    Liu, Donhang

    2014-01-01

    This presentation includes a summary of NEPP-funded deliverables for the Base-Metal Electrodes (BMEs) capacitor task, development of a general reliability model for BME capacitors, and a summary and future work.

  14. Microstructural Modeling of Brittle Materials for Enhanced Performance and Reliability.

    Energy Technology Data Exchange (ETDEWEB)

    Teague, Melissa Christine [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Rodgers, Theron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Grutzik, Scott Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Meserole, Stephen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]

    2017-08-01

    Brittle failure is often influenced by difficult to measure and variable microstructure-scale stresses. Recent advances in photoluminescence spectroscopy (PLS), including improved confocal laser measurement and rapid spectroscopic data collection have established the potential to map stresses with microscale spatial resolution (<2 microns). Advanced PLS was successfully used to investigate both residual and externally applied stresses in polycrystalline alumina at the microstructure scale. The measured average stresses matched those estimated from beam theory to within one standard deviation, validating the technique. Modeling the residual stresses within the microstructure produced general agreement in comparison with the experimentally measured results. Microstructure scale modeling is primed to take advantage of advanced PLS to enable its refinement and validation, eventually enabling microstructure modeling to become a predictive tool for brittle materials.

  15. Modeling human intention formation for human reliability assessment

    International Nuclear Information System (INIS)

    Woods, D.D.; Roth, E.M.; Pople, H. Jr.

    1988-01-01

    This paper describes a dynamic simulation capability for modeling how people form intentions to act in nuclear power plant emergency situations. This modeling tool, Cognitive Environment Simulation or CES, was developed based on techniques from artificial intelligence. It simulates the cognitive processes that determine situation assessment and intention formation. It can be used to investigate analytically what situations and factors lead to intention failures, what actions follow from intention failures (e.g. errors of omission, errors of commission, common mode errors), the ability to recover from errors or additional machine failures, and the effects of changes in the NPP person machine system. One application of the CES modeling environment is to enhance the measurement of the human contribution to risk in probabilistic risk assessment studies. (author)

  16. Modelling Reliability of Supply and Infrastructural Dependency in Energy Distribution Systems

    OpenAIRE

    Helseth, Arild

    2008-01-01

    This thesis presents methods and models for assessing reliability of supply and infrastructural dependency in energy distribution systems with multiple energy carriers. The three energy carriers of electric power, natural gas and district heating are considered. Models and methods for assessing reliability of supply in electric power systems are well documented, frequently applied in the industry and continuously being subject to research and improvement. On the contrary, there are compar...

  17. Parametric and semiparametric models with applications to reliability, survival analysis, and quality of life

    CERN Document Server

    Nikulin, M; Mesbah, M; Limnios, N

    2004-01-01

    Parametric and semiparametric models are tools with a wide range of applications to reliability, survival analysis, and quality of life. This self-contained volume examines these tools in survey articles written by experts currently working on the development and evaluation of models and methods. While a number of chapters deal with general theory, several explore more specific connections and recent results in "real-world" reliability theory, survival analysis, and related fields.

  18. Appraisal and Reliability of Variable Engagement Model Prediction ...

    African Journals Online (AJOL)

    The variable engagement model based on the stress - crack opening displacement relationship and, which describes the behaviour of randomly oriented steel fibres composite subjected to uniaxial tension has been evaluated so as to determine the safety indices associated when the fibres are subjected to pullout and with ...

  19. Reliability Evaluation for the Surface to Air Missile Weapon Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Deng Jianjun

    2015-01-01

    Full Text Available Fuzziness and randomness are integrated by using digital characteristics such as Expected value, Entropy and Hyper entropy. A cloud model adapted to reliability evaluation of the surface-to-air missile weapon is put forward. The cloud scale of the qualitative evaluation is constructed, and the quantitative and qualitative variables in the system reliability evaluation are put into correspondence. The practical calculation results show that it is more effective to analyze the reliability of the surface-to-air missile weapon in this way. The results also show that the model expressed by cloud theory is more consistent with the human thinking style of uncertainty.
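
The three digital characteristics named above (Expected value Ex, Entropy En, Hyper entropy He) drive the standard forward normal cloud generator. The sketch below shows its mechanics; the parameter values are hypothetical and not taken from the paper:

```python
import math
import random

def cloud_drops(Ex, En, He, n, seed=0):
    """Forward normal cloud generator: produce n cloud drops
    (x, membership) from Expected value Ex, Entropy En and
    Hyper entropy He."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        En_i = rng.gauss(En, He)        # randomized entropy sample
        x = rng.gauss(Ex, abs(En_i))    # drop position
        mu = math.exp(-(x - Ex) ** 2 / (2 * En_i ** 2))  # membership degree
        drops.append((x, mu))
    return drops

# Illustrative qualitative reliability grade centred at 0.85.
drops = cloud_drops(Ex=0.85, En=0.05, He=0.005, n=1000)
```

Larger He spreads the drops further from the ideal bell curve, which is how the model expresses second-order (hyper) uncertainty about the qualitative grade.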

  20. Modeling reliability measurement of interface on information system: Towards the forensic of rules

    Science.gov (United States)

    Nasution, M. K. M.; Sitompul, Darwin; Harahap, Marwan

    2018-02-01

    Today almost all machines depend on software. A software and hardware system also depends on rules, that is, the procedures for its use. If a procedure or program can be reliably characterized by involving the concepts of graph, logic, and probability, then regulatory strength can also be measured accordingly. Therefore, this paper initiates an enumeration model to measure the reliability of interfaces, based on the case of information systems supported by rules of use issued by the relevant agencies. The enumeration model is obtained based on software reliability calculation.

  1. Herbarium records are reliable sources of phenological change driven by climate and provide novel insights into species' phenological cueing mechanisms.

    Science.gov (United States)

    Davis, Charles C; Willis, Charles G; Connolly, Bryan; Kelly, Courtland; Ellison, Aaron M

    2015-10-01

    Climate change has resulted in major changes in the phenology of some species but not others. Long-term field observational records provide the best assessment of these changes, but geographic and taxonomic biases limit their utility. Plant specimens in herbaria have been hypothesized to provide a wealth of additional data for studying phenological responses to climatic change. However, no study to our knowledge has comprehensively addressed whether herbarium data are accurate measures of phenological response and thus applicable to addressing such questions. We compared flowering phenology determined from field observations (years 1852-1858, 1875, 1878-1908, 2003-2006, 2011-2013) and herbarium records (1852-2013) of 20 species from New England, United States. Earliest flowering date estimated from herbarium records faithfully reflected field observations of first flowering date and substantially increased the sampling range across climatic conditions. Additionally, although most species demonstrated a response to interannual temperature variation, long-term temporal changes in phenological response were not detectable. Our findings support the use of herbarium records for understanding plant phenological responses to changes in temperature, and also importantly establish a new use of herbarium collections: inferring primary phenological cueing mechanisms of individual species (e.g., temperature, winter chilling, photoperiod). These latter data are lacking from most investigations of phenological change, but are vital for understanding differential responses of individual species to ongoing climate change. © 2015 Botanical Society of America.

  2. Reliability prediction system based on the failure rate model for electronic components

    International Nuclear Information System (INIS)

    Lee, Seung Woo; Lee, Hwa Ki

    2008-01-01

    Although many methodologies for predicting the reliability of electronic components have been developed, their reliability might be subjective according to a particular set of circumstances, and therefore it is not easy to quantify their reliability. Among the reliability prediction methods are the statistical analysis based method, the similarity analysis method based on an external failure rate database, and the method based on the physics-of-failure model. In this study, we developed a system by which the reliability of electronic components can be predicted most easily, building on the statistical analysis method of reliability prediction. The failure rate models that were applied are MIL-HDBK-217F N2, PRISM, and Telcordia (Bellcore), and these were compared with a general purpose system in order to validate the effectiveness of the developed system. Being able to predict the reliability of electronic components from the design stage, the system that we have developed is expected to contribute to enhancing the reliability of electronic components.
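
A failure-rate-model prediction in the MIL-HDBK-217F parts-count style can be sketched as below. The part list, generic failure rates, and quality factors here are invented for illustration only; they are not handbook values:

```python
# (part name, quantity, generic failure rate per 1e6 h, quality factor)
# All numbers are illustrative placeholders, not MIL-HDBK-217F data.
parts = [
    ("microcircuit", 4, 0.075, 1.0),
    ("resistor",    25, 0.002, 3.0),
    ("capacitor",   18, 0.004, 3.0),
]

def parts_count_failure_rate(parts):
    """Parts-count prediction: lambda_equip = sum(N_i * lambda_g_i * pi_Q_i),
    i.e. quantity times generic failure rate times quality factor."""
    return sum(n * lg * piq for _, n, lg, piq in parts)

lam = parts_count_failure_rate(parts)   # failures per 1e6 hours
mtbf_hours = 1e6 / lam                  # predicted mean time between failures
```

In a real prediction the generic rates and pi factors come from the handbook tables for the chosen environment and quality level; the arithmetic stays exactly this simple summation.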

  3. Model uncertainty and multimodel inference in reliability estimation within a longitudinal framework.

    Science.gov (United States)

    Alonso, Ariel; Laenen, Annouschka

    2013-05-01

    Laenen, Alonso, and Molenberghs (2007) and Laenen, Alonso, Molenberghs, and Vangeneugden (2009) proposed a method to assess the reliability of rating scales in a longitudinal context. The methodology is based on hierarchical linear models, and reliability coefficients are derived from the corresponding covariance matrices. However, finding a good parsimonious model to describe complex longitudinal data is a challenging task. Frequently, several models fit the data equally well, raising the problem of model selection uncertainty. When model uncertainty is high one may resort to model averaging, where inferences are based not on one but on an entire set of models. We explored the use of different model building strategies, including model averaging, in reliability estimation. We found that the approach introduced by Laenen et al. (2007, 2009) combined with some of these strategies may yield meaningful results in the presence of high model selection uncertainty and when all models are misspecified, in so far as some of them manage to capture the most salient features of the data. Nonetheless, when all models omit prominent regularities in the data, misleading results may be obtained. The main ideas are further illustrated on a case study in which the reliability of the Hamilton Anxiety Rating Scale is estimated. Importantly, the ambit of model selection uncertainty and model averaging transcends the specific setting studied in the paper and may be of interest in other areas of psychometrics. © 2012 The British Psychological Society.
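
Model averaging of the kind discussed above can be illustrated with Akaike weights, one common weighting scheme (the paper's own weighting may differ). The reliability coefficients and AIC values below are hypothetical:

```python
import math

def akaike_weights(aics):
    """Akaike weights for multimodel inference:
    w_i proportional to exp(-0.5 * (AIC_i - AIC_min))."""
    amin = min(aics)
    raw = [math.exp(-0.5 * (a - amin)) for a in aics]
    total = sum(raw)
    return [r / total for r in raw]

def model_averaged(estimates, aics):
    """Model-averaged estimate: Akaike-weighted sum of per-model values."""
    return sum(w * e for w, e in zip(akaike_weights(aics), estimates))

# Hypothetical reliability coefficients from three candidate
# covariance structures, with their (hypothetical) AICs.
r_bar = model_averaged([0.78, 0.81, 0.74], [1002.3, 1001.1, 1008.9])
```

When several covariance structures fit almost equally well, the averaged coefficient hedges against picking the wrong single model, which is exactly the situation of high model selection uncertainty described above.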

  4. Quantification of Wave Model Uncertainties Used for Probabilistic Reliability Assessments of Wave Energy Converters

    DEFF Research Database (Denmark)

    Ambühl, Simon; Kofoed, Jens Peter; Sørensen, John Dalsgaard

    2015-01-01

    Wave models used for site assessments are subjected to model uncertainties, which need to be quantified when using wave model results for probabilistic reliability assessments. This paper focuses on determination of wave model uncertainties. Four different wave models are considered, and validation data are collected from published scientific research. The bias and the root-mean-square error, as well as the scatter index, are considered for the significant wave height as well as the mean zero-crossing wave period. Based on an illustrative generic example, this paper presents how the quantified uncertainties can be implemented in probabilistic reliability assessments.
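
The three validation statistics named above (bias, root-mean-square error, and scatter index) can be computed as in this sketch; the buoy and model values are made up for illustration:

```python
import math

def wave_model_skill(observed, modelled):
    """Bias, RMSE and scatter index (RMSE normalised by the
    observed mean) for a wave-model validation sample."""
    n = len(observed)
    bias = sum(m - o for o, m in zip(observed, modelled)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for o, m in zip(observed, modelled)) / n)
    si = rmse / (sum(observed) / n)     # scatter index
    return bias, rmse, si

# Hypothetical significant wave heights (m): buoy observations vs. model.
Hs_obs = [1.2, 2.4, 3.1, 0.8, 1.9]
Hs_mod = [1.3, 2.2, 3.4, 0.9, 1.8]
bias, rmse, si = wave_model_skill(Hs_obs, Hs_mod)
```

The same three statistics would be computed separately for the mean zero-crossing wave period, giving the uncertainty inputs for the probabilistic reliability assessment.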

  5. Determination of Wave Model Uncertainties used for Probabilistic Reliability Assessments of Wave Energy Devices

    DEFF Research Database (Denmark)

    Ambühl, Simon; Kofoed, Jens Peter; Sørensen, John Dalsgaard

    2014-01-01

    Wave models used for site assessments are subject to model uncertainties, which need to be quantified when using wave model results for probabilistic reliability assessments. This paper focuses on determination of wave model uncertainties. Considered are four different wave models, and validation data is collected from published scientific research. The bias and the root-mean-square error, as well as the scatter index, are considered for the significant wave height as well as the mean zero-crossing wave period. Based on an illustrative generic example it is shown how the estimated uncertainties can be implemented in probabilistic reliability assessments.

  6. On new cautious structural reliability models in the framework of imprecise probabilities

    DEFF Research Database (Denmark)

    Utkin, Lev; Kozine, Igor

    2010-01-01

    New imprecise structural reliability models are described in this paper. They are developed based on the imprecise Bayesian inference and are imprecise Dirichlet, imprecise negative binomial, gamma-exponential and normal models. The models are applied to computing cautious structural reliability measures when the number of events of interest or observations is very small. The main feature of the models is that prior ignorance is not modelled by a fixed single prior distribution, but by a class of priors which is defined by upper and lower probabilities that can converge as statistical data accumulate.
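
For the imprecise Dirichlet model specifically, the upper and lower probabilities of failure take a simple closed form. This sketch assumes the standard parameterization with prior strength s; the counts are illustrative:

```python
def idm_interval(failures, trials, s=2.0):
    """Imprecise Dirichlet model bounds on a failure probability:
    lower = k/(n+s), upper = (k+s)/(n+s), with prior strength s.
    With no data the interval is the vacuous [0, 1]."""
    lower = failures / (trials + s)
    upper = (failures + s) / (trials + s)
    return lower, upper

lo0, hi0 = idm_interval(0, 0)      # prior ignorance: the whole unit interval
lo, hi = idm_interval(1, 50)       # interval narrows as observations accrue
```

This is the behaviour the abstract describes: prior ignorance is a whole class of priors (the vacuous interval), and the upper and lower probabilities converge toward each other as statistical data accumulate.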

  7. A Structural Reliability Business Process Modelling with System Dynamics Simulation

    OpenAIRE

    Lam, C. Y.; Chan, S. L.; Ip, W. H.

    2010-01-01

    Business activity flow analysis enables organizations to manage structured business processes, and can thus help them to improve performance. The six types of business activities identified here (i.e., SOA, SEA, MEA, SPA, MSA and FIA) are correlated and interact with one another, and the decisions from any business activity form feedback loops with previous and succeeding activities, thus allowing the business process to be modelled and simulated. For instance, for any company that is eager t...

  8. Solitary mammals provide an animal model for autism spectrum disorders.

    Science.gov (United States)

    Reser, Jared Edward

    2014-02-01

    Species of solitary mammals are known to exhibit specialized, neurological adaptations that prepare them to focus working memory on food procurement and survival rather than on social interaction. Solitary and nonmonogamous mammals, which do not form strong social bonds, have been documented to exhibit behaviors and biomarkers that are similar to endophenotypes in autism. Both individuals on the autism spectrum and certain solitary mammals have been reported to be low on measures of affiliative need, bodily expressiveness, bonding and attachment, direct and shared gazing, emotional engagement, conspecific recognition, partner preference, separation distress, and social approach behavior. Solitary mammals also exhibit certain biomarkers that are characteristic of autism, including diminished oxytocin and vasopressin signaling, dysregulation of the endogenous opioid system, increased Hypothalamic-pituitary-adrenal axis (HPA) activity to social encounters, and reduced HPA activity to separation and isolation. The extent of these similarities suggests that solitary mammals may offer a useful model of autism spectrum disorders and an opportunity for investigating genetic and epigenetic etiological factors. If the brain in autism can be shown to exhibit distinct homologous or homoplastic similarities to the brains of solitary animals, it will reveal that they may be central to the phenotype and should be targeted for further investigation. Research of the neurological, cellular, and molecular basis of these specializations in other mammals may provide insight for behavioral analysis, communication intervention, and psychopharmacology for autism.

  9. Intraclass reliability for assessing how well Taiwan constrained hospital-provided medical services using statistical process control chart techniques.

    Science.gov (United States)

    Chien, Tsair-Wei; Chou, Ming-Ting; Wang, Wen-Chung; Tsai, Li-Shu; Lin, Weir-Sen

    2012-05-15

    Few studies discuss the indicators used to assess the effect on cost containment in healthcare across hospitals in a single-payer national healthcare system with constrained medical resources. We present the intraclass correlation coefficient (ICC) to assess how well Taiwan constrained hospital-provided medical services in such a system. A custom Excel-VBA routine to record the distances of standard deviations (SDs) from the central line (the mean over the previous 12 months) of a control chart was used to construct and scale annual medical expenditures sequentially from 2000 to 2009 for 421 hospitals in Taiwan to generate the ICC. The ICC was then used to evaluate Taiwan's year-based convergent power to remain unchanged in hospital-provided constrained medical services. A bubble chart of SDs for a specific month was generated to present the effects of using control charts in a national healthcare system. ICCs were generated for Taiwan's year-based convergent power to constrain its medical services from 2000 to 2009. All hospital groups showed a gradually well-controlled supply of services that decreased from 0.772 to 0.415. The bubble chart identified outlier hospitals that required investigation of possible excessive reimbursements in a specific time period. We recommend using the ICC to annually assess a nation's year-based convergent power to constrain medical services across hospitals. Using sequential control charts to regularly monitor hospital reimbursements is required to achieve financial control in a single-payer nationwide healthcare system.
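
A one-way random-effects ICC of the kind used above as the convergence indicator can be sketched as follows. The exact ICC variant used in the study may differ, and the hospital data here are hypothetical:

```python
def icc_oneway(groups):
    """One-way random-effects ICC(1): (MSB - MSW) / (MSB + (k-1)*MSW),
    where k is the (common) number of ratings per subject."""
    n = len(groups)                 # number of subjects (hospitals)
    k = len(groups[0])              # ratings (periods) per subject
    grand = sum(sum(g) for g in groups) / (n * k)
    means = [sum(g) / k for g in groups]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2
              for g, m in zip(groups, means) for x in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical control-chart SD-distances for three hospitals, four periods.
icc = icc_oneway([[0.9, 1.0, 1.1, 1.0],
                  [2.0, 2.1, 1.9, 2.0],
                  [3.1, 2.9, 3.0, 3.0]])
```

A high ICC means hospitals differ from each other far more than each hospital varies over time, i.e. each hospital's expenditure profile is stable; a falling ICC over the years is what the study reads as increasingly well-controlled supply.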

  10. A General Reliability Model for Ni-BaTiO3-Based Multilayer Ceramic Capacitors

    Science.gov (United States)

    Liu, Donhang

    2014-01-01

    The evaluation of multilayer ceramic capacitors (MLCCs) with Ni electrodes and BaTiO3 dielectric material for potential space project applications requires an in-depth understanding of their reliability. A general reliability model for Ni-BaTiO3 MLCCs is developed and discussed. The model consists of three parts: a statistical distribution; an acceleration function that describes how a capacitor's reliability life responds to external stresses; and an empirical function that defines the contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size, and capacitor chip size A. Application examples are also discussed based on the proposed reliability model for Ni-BaTiO3 MLCCs.
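
A common choice for the acceleration-function part of such a capacitor model is the Prokopowicz-Vaskas voltage-temperature equation. Whether the paper uses exactly this form is not stated here, and the voltage exponent, activation energy, and test conditions below are illustrative assumptions:

```python
import math

K_BOLTZ_EV = 8.617e-5  # Boltzmann constant, eV/K

def pv_acceleration_factor(v_use, v_test, t_use_K, t_test_K, n=3.0, ea_eV=1.1):
    """Prokopowicz-Vaskas acceleration factor between test and use
    conditions: (V_test/V_use)**n * exp(Ea/k * (1/T_use - 1/T_test)).
    The exponent n and activation energy Ea here are illustrative."""
    voltage_term = (v_test / v_use) ** n
    thermal_term = math.exp(ea_eV / K_BOLTZ_EV
                            * (1.0 / t_use_K - 1.0 / t_test_K))
    return voltage_term * thermal_term

# Hypothetical highly accelerated life test vs. rated use condition.
af = pv_acceleration_factor(v_use=6.3, v_test=18.9, t_use_K=358.0, t_test_K=398.0)
```

The statistical-distribution and empirical parts of the model then scale the accelerated-test lifetimes back to use conditions through this factor.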

  11. Thiamine primed defense provides reliable alternative to systemic fungicide carbendazim against sheath blight disease in rice (Oryza sativa L.).

    Science.gov (United States)

    Bahuguna, Rajeev Nayan; Joshi, Rohit; Shukla, Alok; Pandey, Mayank; Kumar, J

    2012-08-01

    A novel pathogen defense strategy by thiamine priming was evaluated for its efficacy against sheath blight pathogen, Rhizoctonia solani AG-1A, of rice and compared with that of systemic fungicide, carbendazim (BCM). Seeds of semidwarf, high yielding, basmati rice variety Vasumati were treated with thiamine (50 mM) and BCM (4 mM). The pot cultured plants were challenge inoculated with R. solani after 40 days of sowing and effect of thiamine and BCM on rice growth and yield traits was examined. Higher hydrogen peroxide content, total phenolics accumulation, phenylalanine ammonia lyase (PAL) activity and superoxide dismutase (SOD) activity under thiamine treatment displayed elevated level of systemic resistance, which was further augmented under challenging pathogen infection. High transcript level of phenylalanine ammonia lyase (PAL) and manganese superoxide dismutase (MnSOD) validated mode of thiamine primed defense. Though minimum disease severity was observed under BCM treatment, thiamine produced comparable results, with 18.12 per cent lower efficacy. Along with fortifying defense components and minor influence on photosynthetic pigments and nitrate reductase (NR) activity, thiamine treatment significantly reduced pathogen-induced loss in photosynthesis, stomatal conductance, chlorophyll fluorescence, NR activity and NR transcript level. Physiological traits affected under pathogen infection were found signatory for characterizing plant's response under disease and were detectable at early stage of infection. These findings provide a novel paradigm for developing alternative, environmentally safe strategies to control plant diseases. Copyright © 2012 Elsevier Masson SAS. All rights reserved.

  12. Tracking reliability for space cabin-borne equipment in development by Crow model.

    Science.gov (United States)

    Chen, J D; Jiao, S J; Sun, H L

    2001-12-01

    Objective. To study and track the reliability growth of manned spaceflight cabin-borne equipment in the course of its development. Method. A new technique of reliability growth estimation and prediction, which is composed of the Crow model and test data conversion (TDC) method was used. Result. The estimation and prediction value of the reliability growth conformed to its expectations. Conclusion. The method could dynamically estimate and predict the reliability of the equipment by making full use of various test information in the course of its development. It offered not only a possibility of tracking the equipment reliability growth, but also the reference for quality control in manned spaceflight cabin-borne equipment design and development process.
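
The Crow (AMSAA) model tracks reliability growth through a power-law cumulative failure count N(t) = lambda * t**beta. Maximum-likelihood estimates for a time-truncated test can be sketched as below; the failure times are hypothetical, not data from the study:

```python
import math

def crow_amsaa_mle(times, T):
    """Crow (AMSAA) MLEs from failure times on [0, T] (time-truncated
    test): beta = n / sum(ln(T/t_i)), lambda = n / T**beta."""
    n = len(times)
    beta = n / sum(math.log(T / t) for t in times)
    lam = n / T ** beta
    return lam, beta

def instantaneous_mtbf(lam, beta, T):
    """Current MTBF implied by the growth curve at time T."""
    return 1.0 / (lam * beta * T ** (beta - 1))

# Hypothetical failure times (h) accumulated during development testing.
times = [4.3, 14.8, 32.0, 65.0, 120.0, 310.0, 620.0]
lam, beta = crow_amsaa_mle(times, T=1000.0)
```

An estimated beta below 1 indicates that failures are arriving ever more slowly, i.e. reliability growth; tracking beta and the instantaneous MTBF over successive test phases is the kind of monitoring the abstract describes.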

  13. Can diligent and extensive mapping of faults provide reliable estimates of the expected maximum earthquakes at these faults? No. (Invited)

    Science.gov (United States)

    Bird, P.

    2010-12-01

    [Bird, 2009, JGR] to model neotectonics of the active fault network in the western United States found that only 2/3 of Pacific-North America relative motion in California occurs by slip on faults included in seismic hazard models by the 2007 Working Group on California Earthquake Probabilities [2008; USGS OFR 2007-1437]. (Whether the missing distributed permanent deformation is seismogenic has not yet been determined.) 5. Even outside of broad orogens, dangerous intraplate faulting is evident in catalogs: (a) About 3% of shallow earthquakes in the Global CMT catalog are Intraplate [Bird et al., 2010, SRL]; (b) Intraplate earthquakes have higher stress-drops by about a factor-of-two [Kanamori & Anderson, 1975, BSSA; Allmann & Shearer, 2009, JGR]; (c) The corner magnitude of intraplate earthquakes is >7.6, and unconstrained from above, on the moment magnitude scale [Bird & Kagan, 2004, BSSA]. For some intraplate earthquakes, the causative fault is mapped only (if at all) by its aftershocks.

  14. A Review on VSC-HVDC Reliability Modeling and Evaluation Techniques

    Science.gov (United States)

    Shen, L.; Tang, Q.; Li, T.; Wang, Y.; Song, F.

    2017-05-01

    With the fast development of power electronics, voltage-source converter (VSC) HVDC technology presents cost-effective ways for bulk power transmission. An increasing number of VSC-HVDC projects have been installed worldwide. Their reliability affects the profitability of the system and therefore has a major impact on potential investors. In this paper, an overview of the recent advances in the area of reliability evaluation for VSC-HVDC systems is provided. Taking into account the latest multi-level converter topology, the VSC-HVDC system is categorized into several sub-systems, and the reliability data for the key components are discussed based on sources with academic and industrial backgrounds. The development of reliability evaluation methodologies is reviewed and the issues surrounding the different computation approaches are briefly analysed. A general VSC-HVDC reliability evaluation procedure is illustrated in this paper.

  15. Assessing the reliability of predictive activity coefficient models for molecules consisting of several functional groups

    Directory of Open Access Journals (Sweden)

    R. P. Gerber

    2013-03-01

    Full Text Available Currently, the most successful predictive models for activity coefficients are those based on functional groups, such as UNIFAC. However, these models require a large amount of experimental data for the determination of their parameter matrix. A more recent alternative is the class of models based on COSMO, for which only a small set of universal parameters must be calibrated. In this work, a recalibrated COSMO-SAC model was compared with the UNIFAC (Do) model employing experimental infinite dilution activity coefficient data for 2236 non-hydrogen-bonding binary mixtures at different temperatures. As expected, UNIFAC (Do) presented better overall performance, with a mean absolute error of 0.12 ln-units against 0.22 for our COSMO-SAC implementation. However, in cases involving molecules with several functional groups or when functional groups appear in an unusual way, the deviation for UNIFAC was 0.44 as opposed to 0.20 for COSMO-SAC. These results show that COSMO-SAC provides more reliable predictions for multi-functional or more complex molecules, reaffirming its future prospects.
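
The comparison metric above, mean absolute error in ln-units of the infinite-dilution activity coefficient, is straightforward to compute. The gamma-infinity values below are invented for illustration:

```python
import math

def mae_ln(gamma_exp, gamma_pred):
    """Mean absolute error in ln-units between experimental and
    predicted infinite-dilution activity coefficients."""
    return sum(abs(math.log(p) - math.log(e))
               for e, p in zip(gamma_exp, gamma_pred)) / len(gamma_exp)

# Hypothetical gamma_infinity values for a few binary mixtures.
err = mae_ln([2.1, 5.4, 1.3, 8.8], [2.3, 4.9, 1.2, 9.6])
```

Working in ln-units puts small and large activity coefficients on an even footing, which is why both the UNIFAC and COSMO-SAC deviations in the abstract are quoted this way.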

  16. Modeling Manufacturing Impacts on Aging and Reliability of Polyurethane Foams

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Rekha R.; Roberts, Christine Cardinal; Mondy, Lisa Ann; Soehnel, Melissa Marie; Johnson, Kyle; Lorenzo, Henry T.

    2016-10-01

    Polyurethane is a complex multiphase material that evolves from a viscous liquid to a system of percolating bubbles, which are created via a CO2 generating reaction. The continuous phase polymerizes to a solid during the foaming process generating heat. Foams introduced into a mold increase their volume up to tenfold, and the dynamics of the expansion process may lead to voids and will produce gradients in density and degree of polymerization. These inhomogeneities can lead to structural stability issues upon aging. For instance, structural components in weapon systems have been shown to change shape as they age depending on their molding history, which can threaten critical tolerances. The purpose of this project is to develop a Cradle-to-Grave multiphysics model, which allows us to predict the material properties of foam from its birth through aging in the stockpile, where its dimensional stability is important.

  17. Microgrid Reliability Modeling and Battery Scheduling Using Stochastic Linear Programming

    Energy Technology Data Exchange (ETDEWEB)

    Cardoso, Goncalo; Stadler, Michael; Siddiqui, Afzal; Marnay, Chris; DeForest, Nicholas; Barbosa-Povoa, Ana; Ferrao, Paulo

    2013-05-23

    This paper describes the introduction of stochastic linear programming into Operations DER-CAM, a tool used to obtain optimal operating schedules for a given microgrid under local economic and environmental conditions. This application follows previous work on optimal scheduling of a lithium-iron-phosphate battery given the output uncertainty of a 1 MW molten carbonate fuel cell. Both are in the Santa Rita Jail microgrid, located in Dublin, California. This fuel cell has proven unreliable, partially justifying the consideration of storage options. Several stochastic DER-CAM runs are executed to compare different scenarios to values obtained by a deterministic approach. Results indicate that using a stochastic approach provides a conservative yet more lucrative battery schedule. Given fuel cell outages, lower expected energy bills result, with potential savings exceeding 6 percent.

  18. Assessing Reliability of Cellulose Hydrolysis Models to Support Biofuel Process Design – Identifiability and Uncertainty Analysis

    DEFF Research Database (Denmark)

    Sin, Gürkan; Meyer, Anne S.; Gernaey, Krist

    2010-01-01

    The reliability of cellulose hydrolysis models is studied using the NREL model. An identifiability analysis revealed that only 6 out of 26 parameters are identifiable from the available data (typical hydrolysis experiments). Attempting to identify a higher number of parameters (as done in the ori...

  19. A data-informed PIF hierarchy for model-based Human Reliability Analysis

    International Nuclear Information System (INIS)

    Groth, Katrina M.; Mosleh, Ali

    2012-01-01

    This paper addresses three problems associated with the use of Performance Shaping Factors in Human Reliability Analysis. (1) There are more than a dozen Human Reliability Analysis (HRA) methods that use Performance Influencing Factors (PIFs) or Performance Shaping Factors (PSFs) to model human performance, but there is not a standard set of PIFs used among the methods, nor is there a framework available to compare the PIFs used in various methods. (2) The PIFs currently in use are not defined specifically enough to ensure consistent interpretation of similar PIFs across methods. (3) There are few rules governing the creation, definition, and usage of PIF sets. This paper introduces a hierarchical set of PIFs that can be used for both qualitative and quantitative HRA. The proposed PIF set is arranged in a hierarchy that can be collapsed or expanded to meet multiple objectives. The PIF hierarchy has been developed with respect to a set of fundamental principles necessary for PIF sets, which are also introduced in this paper. This paper includes definitions of the PIFs to allow analysts to map the proposed PIFs onto current and future HRA methods. The standardized PIF hierarchy will allow analysts to combine different types of data and will therefore make the best use of the limited data in HRA. The collapsible hierarchy provides the structure necessary to combine multiple types of information without reducing the quality of the information.

  20. Polarimetric SAR interferometry-based decomposition modelling for reliable scattering retrieval

    Science.gov (United States)

    Agrawal, Neeraj; Kumar, Shashi; Tolpekin, Valentyn

    2016-05-01

    Fully polarimetric SAR (PolSAR) data are used to retrieve scattering information from a single SAR resolution cell. A single SAR resolution cell may contain contributions from more than one scattering object, so single- or dual-polarized data do not provide all the possible scattering information; fully polarimetric data are used to overcome this problem. It was observed in a previous study that fully polarimetric data of different dates provide different scattering values for the same object, and the coefficient of determination obtained from linear regression between volume scattering and aboveground biomass (AGB) shows different values for SAR datasets of different dates. Scattering values are important input elements for modelling of forest aboveground biomass. In this research work an approach is proposed to obtain reliable scattering from an interferometric pair of fully polarimetric RADARSAT-2 data. The field survey for data collection was carried out in Barkot forest during November 10th to December 5th, 2014. Stratified random sampling was used to collect field data for circumference at breast height (CBH) and tree height measurement. Field-measured AGB was compared with the volume scattering elements obtained from decomposition modelling of individual PolSAR images and the PolInSAR coherency matrix. Yamaguchi 4-component decomposition was implemented to retrieve scattering elements from the SAR data. PolInSAR-based decomposition was the great challenge in this work, and it was implemented with certain assumptions to create a Hermitian coherency matrix from the co-registered polarimetric interferometric pair of SAR data. Regression analysis between field-measured AGB and the volume scattering element obtained from PolInSAR data showed the highest coefficient of determination (0.589). The same regression with volume scattering elements of the individual SAR images showed coefficients of determination of 0.49 and 0.50 for the master and slave images respectively. This study recommends use of

  1. Maintenance personnel performance simulation (MAPPS): a model for predicting maintenance performance reliability in nuclear power plants

    International Nuclear Information System (INIS)

    Knee, H.E.; Krois, P.A.; Haas, P.M.; Siegel, A.I.; Ryan, T.G.

    1983-01-01

    The NRC has developed a structured, quantitative, predictive methodology in the form of a computerized simulation model for assessing maintainer task performance. The objective of the overall program is to develop, validate, and disseminate a practical, useful, and acceptable methodology for the quantitative assessment of NPP maintenance personnel reliability. The program was organized into four phases: (1) scoping study, (2) model development, (3) model evaluation, and (4) model dissemination. The program is currently nearing completion of Phase 2 - Model Development.

  2. On reliability and maintenance modelling of ageing equipment in electric power systems

    International Nuclear Information System (INIS)

    Lindquist, Tommie

    2008-04-01

    Maintenance optimisation is essential to achieve cost-efficiency, availability and reliability of supply in electric power systems. The process of maintenance optimisation requires information about the costs of preventive and corrective maintenance, as well as the costs of failures borne by both electricity suppliers and customers. To calculate expected costs, information is needed about equipment reliability characteristics and the way in which maintenance affects equipment reliability. The aim of this Ph.D. work has been to develop equipment reliability models taking the effect of maintenance into account. The research has focussed on the interrelated areas of condition estimation, reliability modelling and maintenance modelling, which have been investigated in a number of case studies. In the area of condition estimation two methods to quantitatively estimate the condition of disconnector contacts have been developed, which utilise results from infrared thermography inspections and contact resistance measurements. The accuracy of these methods was investigated in two case studies. Reliability models have been developed and implemented for SF6 circuit-breakers, disconnector contacts and XLPE cables in three separate case studies. These models were formulated using both empirical and physical modelling approaches. To improve confidence in such models a Bayesian statistical method incorporating information from the equipment design process was also developed. This method was illustrated in a case study of SF6 circuit-breaker operating rods. Methods for quantifying the effect of maintenance on equipment condition and reliability have been investigated in case studies on disconnector contacts and SF6 circuit-breakers. The inputs required by these methods are condition measurements and historical failure and maintenance data, respectively. This research has demonstrated that the effect of maintenance on power system equipment may be quantified using available data

  3. Damage Model for Reliability Assessment of Solder Joints in Wind Turbines

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    environmental factors. Reliability assessment for such type of products conventionally is performed by classical reliability techniques based on test data. Usually conventional reliability approaches are time and resource consuming activities. Thus in this paper we choose a physics of failure approach to define...... damage model by Miner’s rule. Our attention is focused on crack propagation in solder joints of electrical components due to the temperature loadings. Based on the proposed method it is described how to find the damage level for a given temperature loading profile. The proposed method is discussed...

  4. The model case IRS-RWE for the determination of reliability data in practical operation

    International Nuclear Information System (INIS)

    Hoemke, P.; Krause, H.

    1975-11-01

    Reliability and availability analyses are carried out to assess the safety of nuclear power plants. The first part of this paper deals with the accuracy requirements for the input data of such analyses, and the second part with the prototype reliability data collection 'Model case IRS-RWE'. The objectives and the structure of the data collection are described. The present results show that the estimation of reliability data in power plants is possible and gives reasonable results. (orig.) [de

  5. Investigation of reliability indicators of information analysis systems based on Markov’s absorbing chain model

    Science.gov (United States)

    Gilmanshin, I. R.; Kirpichnikov, A. P.

    2017-09-01

    A study of the functioning algorithm of the early-detection module for excessive losses proves that the module can be modelled using absorbing Markov chains. Of particular interest are the probabilistic characteristics of this algorithm, which reveal the relationship between the reliability indicators of individual elements (or the probabilities of occurrence of certain events) and the likelihood that reliable information is transmitted. The relations identified during the analysis allow thresholds to be set for the reliability characteristics of the system components.
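The absorbing-chain analysis described above rests on standard fundamental-matrix machinery, which a short sketch can illustrate. The transition probabilities below are hypothetical placeholders, not values from the paper:

```python
import numpy as np

# Hypothetical 3-state model of the loss-detection module:
# transient state 0 = "checking", transient state 1 = "retransmit",
# absorbing state 2 = "reliable report delivered".
Q = np.array([[0.2, 0.5],     # transitions among transient states
              [0.3, 0.1]])
R = np.array([[0.3],          # transitions from transient states into absorption
              [0.6]])

# Fundamental matrix N = (I - Q)^-1: N[i, j] is the expected number of visits
# to transient state j when the chain starts in transient state i.
N = np.linalg.inv(np.eye(2) - Q)

# B = N @ R gives the absorption probabilities; with a single absorbing
# state each row must equal 1 (absorption is certain).
B = N @ R

# Expected number of steps before absorption from each starting state.
t = N @ np.ones(2)
```

From these quantities one can read off exactly the kind of relationship the abstract mentions: how the element-level probabilities in `Q` and `R` determine the likelihood and expected delay of delivering reliable information.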

  6. Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models

    Science.gov (United States)

    Al Hassan, Mohammad; Novack, Steven

    2015-01-01

    Bayesian reliability requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable where all possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions mainly from reliability predictions.
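One common way to turn a generic point estimate into a prior, in the spirit of the discussion above, is a lognormal distribution whose spread is widened when data applicability is weak. The function, parameter values, and the widening heuristic below are an illustrative sketch, not the presentation's actual procedure:

```python
import math

# Hypothetical sketch: build a lognormal prior for a failure rate from a generic
# database median and an error factor EF (ratio of the 95th to 50th percentile),
# then inflate the log-spread to represent data-applicability uncertainty.
def lognormal_prior(median, error_factor, applicability_factor=1.0):
    mu = math.log(median)
    sigma = math.log(error_factor) / 1.645   # 1.645 = z-score of the 95th percentile
    sigma *= applicability_factor            # heuristic widening for weak applicability
    mean = math.exp(mu + 0.5 * sigma ** 2)   # lognormal mean exceeds the median
    p95 = math.exp(mu + 1.645 * sigma)
    return mean, p95

# Same generic median, but poor applicability doubles the log-spread:
good = lognormal_prior(1e-5, 3.0)
poor = lognormal_prior(1e-5, 3.0, applicability_factor=2.0)
```

With the spread doubled, the 95th percentile moves from median × EF to median × EF², showing how applicability uncertainty widens the prior without shifting its median.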

  7. Reliable design of a closed loop supply chain network under uncertainty: An interval fuzzy possibilistic chance-constrained model

    Science.gov (United States)

    Vahdani, Behnam; Tavakkoli-Moghaddam, Reza; Jolai, Fariborz; Baboli, Arman

    2013-06-01

    This article seeks to offer a systematic approach to establishing a reliable network of facilities in closed loop supply chains (CLSCs) under uncertainties. Facilities that are located in this article concurrently satisfy both traditional objective functions and reliability considerations in CLSC network designs. To attack this problem, a novel mathematical model is developed that integrates the network design decisions in both forward and reverse supply chain networks. The model also utilizes an effective reliability approach to find a robust network design. In order to make the results of this article more realistic, a CLSC for a case study in the iron and steel industry has been explored. The considered CLSC is multi-echelon, multi-facility, multi-product and multi-supplier. Furthermore, multiple facilities exist in the reverse logistics network leading to high complexities. Since the collection centres play an important role in this network, the reliability concept of these facilities is taken into consideration. To solve the proposed model, a novel interactive hybrid solution methodology is developed by combining a number of efficient solution approaches from the recent literature. The proposed solution methodology is a bi-objective interval fuzzy possibilistic chance-constraint mixed integer linear programming (BOIFPCCMILP). Finally, computational experiments are provided to demonstrate the applicability and suitability of the proposed model in a supply chain environment and to help decision makers facilitate their analyses.

  8. Modeling and simulation of a controlled steam generator in the context of dynamic reliability using a Stochastic Hybrid Automaton

    International Nuclear Information System (INIS)

    Babykina, Génia; Brînzei, Nicolae; Aubry, Jean-François; Deleuze, Gilles

    2016-01-01

    The paper proposes a modeling framework to support Monte Carlo simulations of the behavior of a complex industrial system. The aim is to analyze the system dependability in the presence of random events, described by any type of probability distributions. Continuous dynamic evolutions of physical parameters are taken into account by a system of differential equations. Dynamic reliability is chosen as theoretical framework. Based on finite state automata theory, the formal model is built by parallel composition of elementary sub-models using a bottom-up approach. Considerations of a stochastic nature lead to a model called the Stochastic Hybrid Automaton. The Scilab/Scicos open source environment is used for implementation. The case study is carried out on an example of a steam generator of a nuclear power plant. The behavior of the system is studied by exploring its trajectories. Possible system trajectories are analyzed both empirically, using the results of Monte Carlo simulations, and analytically, using the formal system model. The obtained results are shown to be relevant. The Stochastic Hybrid Automaton appears to be a suitable tool to address the dynamic reliability problem and to model real systems of high complexity; the bottom-up design provides precision and coherency of the system model. - Highlights: • A part of a nuclear power plant is modeled in the context of dynamic reliability. • Stochastic Hybrid Automaton is used as an input model for Monte Carlo simulations. • The model is formally built using a bottom-up approach. • The behavior of the system is analyzed empirically and analytically. • A formally built SHA proves to be a suitable tool to approach dynamic reliability.

  9. On modeling human reliability in space flights - Redundancy and recovery operations

    Science.gov (United States)

    Aarset, M.; Wright, J. F.

    The reliability of humans is of paramount importance to the safety of space flight systems. This paper describes why 'back-up' operators might not be the best solution, and in some cases, might even degrade system reliability. The problem associated with human redundancy calls for special treatment in reliability analyses. The concept of Standby Redundancy is adopted, and psychological and mathematical models are introduced to improve the way such problems can be estimated and handled. In the past, human reliability has practically been neglected in most reliability analyses, and, when included, the humans have been modeled as a component and treated numerically the way technical components are. This approach is not wrong in itself, but it may lead to systematic errors if too simple analogies from the technical domain are used in the modeling of human behavior. In this paper redundancy in a man-machine system will be addressed. It will be shown how simplification from the technical domain, when applied to human components of a system, may give non-conservative estimates of system reliability.

  10. Phoenix – A model-based Human Reliability Analysis methodology: Qualitative Analysis Procedure

    International Nuclear Information System (INIS)

    Ekanem, Nsimah J.; Mosleh, Ali; Shen, Song-Hua

    2016-01-01

    Phoenix method is an attempt to address various issues in the field of Human Reliability Analysis (HRA). Built on a cognitive human response model, Phoenix incorporates strong elements of current HRA good practices, leverages lessons learned from empirical studies, and takes advantage of the best features of existing and emerging HRA methods. Its original framework was introduced in previous publications. This paper reports on the completed methodology, summarizing the steps and techniques of its qualitative analysis phase. The methodology introduces the “Crew Response Tree” which provides a structure for capturing the context associated with Human Failure Events (HFEs), including errors of omission and commission. It also uses a team-centered version of the Information, Decision and Action cognitive model and “macro-cognitive” abstractions of crew behavior, as well as relevant findings from cognitive psychology literature and operating experience, to identify potential causes of failures and influencing factors during procedure-driven and knowledge-supported crew-plant interactions. The result is the set of identified HFEs and likely scenarios leading to each. The methodology itself is generic in the sense that it is compatible with various quantification methods, and can be adapted for use across different environments including nuclear, oil and gas, aerospace, aviation, and healthcare. - Highlights: • Produces a detailed, consistent, traceable, reproducible and properly documented HRA. • Uses “Crew Response Tree” to capture context associated with Human Failure Events. • Models dependencies between Human Failure Events and influencing factors. • Provides a human performance model for relating context to performance. • Provides a framework for relating Crew Failure Modes to its influencing factors.

  11. Model organoids provide new research opportunities for ductal pancreatic cancer

    NARCIS (Netherlands)

    Boj, Sylvia F; Hwang, Chang-Il; Baker, Lindsey A; Engle, Dannielle D; Tuveson, David A; Clevers, Hans

    We recently established organoid models from normal and neoplastic murine and human pancreas tissues. These organoids exhibit ductal- and disease stage-specific characteristics and, after orthotopic transplantation, recapitulate the full spectrum of tumor progression. Pancreatic organoid technology

  12. Statistical and RBF NN models : providing forecasts and risk assessment

    OpenAIRE

    Marček, Milan

    2009-01-01

    Forecast accuracy of economic and financial processes is a popular measure for quantifying the risk in decision making. In this paper, we develop forecasting models based on statistical (stochastic) methods, sometimes called hard computing, and on a soft method using granular computing. We consider the accuracy of forecasting models as a measure for risk evaluation. It is found that the risk estimation process based on soft methods is simplified and less critical to the question w...

  13. Stochastic models and reliability parameter estimation applicable to nuclear power plant safety

    International Nuclear Information System (INIS)

    Mitra, S.P.

    1979-01-01

    A set of stochastic models and related estimation schemes for reliability parameters are developed. The models are applicable for evaluating reliability of nuclear power plant systems. Reliability information is extracted from model parameters which are estimated from the type and nature of failure data that is generally available or could be compiled in nuclear power plants. Principally, two aspects of nuclear power plant reliability have been investigated: (1) the statistical treatment of in-plant component and system failure data; (2) the analysis and evaluation of common mode failures. The model inputs are failure data which have been classified as either the time type of failure data or the demand type of failure data. Failures of components and systems in nuclear power plants are, in general, rare events. This gives rise to sparse failure data. Estimation schemes for treating sparse data, whenever necessary, have been considered. The following five problems have been studied: 1) Distribution of sparse failure rate component data. 2) Failure rate inference and reliability prediction from time type of failure data. 3) Analyses of demand type of failure data. 4) Common mode failure model applicable to time type of failure data. 5) Estimation of common mode failures from 'near-miss' demand type of failure data
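For the "time type" failure data mentioned above, a conjugate gamma-Poisson update is the textbook way to draw inference from sparse counts: a Gamma(a₀, b₀) prior on the failure rate combined with a Poisson likelihood gives a Gamma(a₀ + n, b₀ + T) posterior. The prior parameters and exposure below are hypothetical, not values from the thesis:

```python
# Hypothetical sketch: conjugate gamma-Poisson updating for a sparse
# "time type" failure record (n failures observed over T component-hours).
def posterior_failure_rate(a0, b0, n_failures, exposure_hours):
    # Gamma(a0, b0) prior on the failure rate; Poisson likelihood for the counts.
    a = a0 + n_failures
    b = b0 + exposure_hours
    return a / b, a / b ** 2   # posterior mean and variance of the failure rate

# One failure in 5e4 component-hours, with a diffuse Jeffreys-like prior:
mean, var = posterior_failure_rate(a0=0.5, b0=1e4, n_failures=1, exposure_hours=5e4)
```

Even with a single observed failure the posterior is proper and its spread shrinks as exposure accumulates, which is exactly what makes this scheme attractive for the rare-event data described above.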

  14. Designing the database for a reliability aware Model-Based System Engineering process

    International Nuclear Information System (INIS)

    Cressent, Robin; David, Pierre; Idasiak, Vincent; Kratz, Frederic

    2013-01-01

    This article outlines the need for a reliability database to implement model-based description of components failure modes and dysfunctional behaviors. We detail the requirements such a database should honor and describe our own solution: the Dysfunctional Behavior Database (DBD). Through the description of its meta-model, the benefits of integrating the DBD in the system design process is highlighted. The main advantages depicted are the possibility to manage feedback knowledge at various granularity and semantic levels and to ease drastically the interactions between system engineering activities and reliability studies. The compliance of the DBD with other reliability database such as FIDES is presented and illustrated. - Highlights: ► Model-Based System Engineering is more and more used in the industry. ► It results in a need for a reliability database able to deal with model-based description of dysfunctional behavior. ► The Dysfunctional Behavior Database aims to fulfill that need. ► It helps dealing with feedback management thanks to its structured meta-model. ► The DBD can profit from other reliability database such as FIDES.

  15. Value-Added Models for Teacher Preparation Programs: Validity and Reliability Threats, and a Manageable Alternative

    Science.gov (United States)

    Brady, Michael P.; Heiser, Lawrence A.; McCormick, Jazarae K.; Forgan, James

    2016-01-01

    High-stakes standardized student assessments are increasingly used in value-added evaluation models to connect teacher performance to P-12 student learning. These assessments are also being used to evaluate teacher preparation programs, despite validity and reliability threats. A more rational model linking student performance to candidates who…

  16. An adaptive neuro fuzzy model for estimating the reliability of component-based software systems

    Directory of Open Access Journals (Sweden)

    Kirti Tyagi

    2014-01-01

    Full Text Available Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs, much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable. A number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, known as an adaptive neuro fuzzy inference system (ANFIS, that is based on these two basic elements of soft computing, and we compare its performance with that of a plain FIS (fuzzy inference system for different data sets.

  17. Life cycle reliability assessment of new products—A Bayesian model updating approach

    International Nuclear Information System (INIS)

    Peng, Weiwen; Huang, Hong-Zhong; Li, Yanfeng; Zuo, Ming J.; Xie, Min

    2013-01-01

    The rapidly increasing pace and continuously evolving reliability requirements of new products have made life cycle reliability assessment of new products an imperative yet difficult work. While much work has been done to separately estimate reliability of new products in specific stages, a gap exists in carrying out life cycle reliability assessment throughout all life cycle stages. We present a Bayesian model updating approach (BMUA) for life cycle reliability assessment of new products. Novel features of this approach are the development of Bayesian information toolkits by separately including “reliability improvement factor” and “information fusion factor”, which allow the integration of subjective information in a specific life cycle stage and the transition of integrated information between adjacent life cycle stages. They lead to the unique characteristics of the BMUA in which information generated throughout life cycle stages are integrated coherently. To illustrate the approach, an application to the life cycle reliability assessment of a newly developed Gantry Machining Center is shown

  18. A multi-state reliability evaluation model for P2P networks

    International Nuclear Information System (INIS)

    Fan Hehong; Sun Xiaohan

    2010-01-01

    The appearance of new service types and the convergence tendency of the communication networks have endowed the networks more and more P2P (peer to peer) properties. These networks can be more robust and tolerant for a series of non-perfect operational states due to the non-deterministic server-client distributions. Thus a reliability model taking into account of the multi-state and non-deterministic server-client distribution properties is needed for appropriate evaluation of the networks. In this paper, two new performance measures are defined to quantify the overall and local states of the networks. A new time-evolving state-transition Monte Carlo (TEST-MC) simulation model is presented for the reliability analysis of P2P networks in multiple states. The results show that the model is not only valid for estimating the traditional binary-state network reliability parameters, but also adequate for acquiring the parameters in a series of non-perfect operational states, with good efficiencies, especially for highly reliable networks. Furthermore, the model is versatile for the reliability and maintainability analyses in that both the links and the nodes can be failure-prone with arbitrary life distributions, and various maintainability schemes can be applied.

  19. Modelling of nuclear power plant control and instrumentation elements for automatic disturbance and reliability analysis

    International Nuclear Information System (INIS)

    Hollo, E.

    1985-08-01

    The present Final Report summarizes results of R/D work done within IAEA-VEIKI (Institute for Electrical Power Research, Budapest, Hungary) Research Contract No. 3210 during the three-year period 01.08.1982 - 31.08.1985. Chapter 1 lists the main research objectives of the project. The main results obtained are summarized in Chapters 2 and 3. Outcomes from the development of failure modelling methodologies and their application to C/I components of WWER-440 units are as follows (Chapter 2): improvement of available "failure mode and effect analysis" methods and mini-fault tree structures usable for automatic disturbance (DAS) and reliability (RAS) analysis; general classification and determination of functional failure modes of WWER-440 NPP C/I components; set-up of logic models for motor-operated control valves and the rod control/drive mechanism. Results of the development of methods and their application to reliability modelling of NPP components and systems cover (Chapter 3): development of an algorithm (computer code COMPREL) for component-related failure and reliability parameter calculation; reliability analysis of the PAKS II NPP diesel system; and definition of functional requirements for a reliability data bank (RDB) in WWER-440 units, including determination of RDB input/output data structure and data manipulation services. Methods used are a priori failure mode and effect analysis, combined fault tree/event tree modelling techniques, structured computer programming, and application of probability theory to the nuclear field

  20. Modeling Optimal Scheduling for Pumping System to Minimize Operation Cost and Enhance Operation Reliability

    Directory of Open Access Journals (Sweden)

    Yin Luo

    2012-01-01

    Full Text Available Traditional pump scheduling models neglect operation reliability, which directly relates to the unscheduled maintenance cost and the wear cost during operation. On the assumption that vibration directly relates to operation reliability and to the degree of wear, operation reliability can be expressed as the normalized vibration level. The characteristic of vibration as a function of the operating point was studied: the idealized flow-versus-vibration plot has a distinct bathtub shape, with a narrow sweet spot (80 to 100 percent of BEP) in which vibration levels are low, and vibration also scales with the square of the rotation speed in the absence of resonance phenomena. Operation reliability was therefore modelled as a function of the capacity and rotation speed of the pump, and this function was added to the traditional model to form the new one. Compared with the traditional method, the results show that the new model corrects the schedules produced by the traditional model and keeps the pump operating at low vibration, so that operation reliability increases and maintenance cost decreases.
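The bathtub-plus-speed-squared assumption described above can be sketched in a few lines. The curve coefficients, the parabolic stand-in for the bathtub shape, and the normalization constant are all hypothetical, not values from the paper:

```python
import numpy as np

# Hypothetical sketch of the paper's assumption: vibration has a bathtub-shaped
# dependence on flow (minimum near 80-100 % of BEP) and scales with the square
# of the rotation speed; operation reliability is the normalized inverse of vibration.
def vibration(flow_ratio, speed_ratio, v_min=1.0):
    # flow_ratio = Q / Q_BEP, speed_ratio = n / n_rated
    bathtub = v_min + 5.0 * (flow_ratio - 0.9) ** 2   # stand-in for the bathtub curve
    return bathtub * speed_ratio ** 2                 # affinity-law-like speed scaling

def reliability(flow_ratio, speed_ratio, v_max=10.0):
    # Normalize vibration against an assumed maximum acceptable level.
    v = vibration(flow_ratio, speed_ratio)
    return float(np.clip(1.0 - v / v_max, 0.0, 1.0))
```

Operating near 90 % of BEP then yields a higher modelled reliability than running far off the sweet spot, which is the behavior the scheduling model rewards.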

  1. Do Lumped-Parameter Models Provide the Correct Geometrical Damping?

    DEFF Research Database (Denmark)

    Andersen, Lars

    response during excitation and the geometrical damping related to free vibrations of a hexagonal footing. The optimal order of a lumped-parameter model is determined for each degree of freedom, i.e. horizontal and vertical translation as well as torsion and rocking. In particular, the necessity of coupling...... between horizontal sliding and rocking is discussed....

  2. [Reliability study in the measurement of the cusp inclination angle of a chairside digital model].

    Science.gov (United States)

    Xinggang, Liu; Xiaoxian, Chen

    2018-02-01

    This study aims to evaluate the reliability of the software Picpick in the measurement of the cusp inclination angle of a digital model. Twenty-one trimmed models were used as experimental objects. The chairside digital impression was then used for the acquisition of 3D digital models, and the software Picpick was employed for the measurement of the cusp inclination of these models. The measurements were repeated three times, and the results were compared with a gold standard, which was a manually measured experimental model cusp angle. The intraclass correlation coefficient (ICC) was calculated. The paired t test value of the two measurement methods was 0.91. The ICCs between the two measurement methods and three repeated measurements were greater than 0.9. The digital model achieved a smaller coefficient of variation (9.9%). The software Picpick is reliable in measuring the cusp inclination of a digital model.

  3. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    Full Text Available This paper develops Bayesian inference in reliability of a class of scale mixtures of log-normal failure time (SMLNFT models with stochastic (or uncertain constraint in their reliability measures. The class is comprehensive and includes existing failure time (FT models (such as log-normal, log-Cauchy, and log-logistic FT models as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequency approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC sampling based approach. This paper introduces a two-stage maximum entropy (MaxEnt prior, which elicits a priori uncertain constraint and develops Bayesian hierarchical SMLNFT model by using the prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.

  4. Reliability modeling of degradation of products with multiple performance characteristics based on gamma processes

    International Nuclear Information System (INIS)

    Pan Zhengqiang; Balakrishnan, Narayanaswamy

    2011-01-01

    Many highly reliable products usually have complex structure, with their reliability being evaluated by two or more performance characteristics. In certain physical situations, the degradation of these performance characteristics would be always positive and strictly increasing. In such a case, the gamma process is usually considered as a degradation process due to its independent and non-negative increments properties. In this paper, we suppose that a product has two dependent performance characteristics and that their degradation can be modeled by gamma processes. For such a bivariate degradation involving two performance characteristics, we propose to use a bivariate Birnbaum-Saunders distribution and its marginal distributions to approximate the reliability function. Inferential method for the corresponding model parameters is then developed. Finally, for an illustration of the proposed model and method, a numerical example about fatigue cracks is discussed and some computational results are presented.
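A single-characteristic version of the gamma-process degradation model above is easy to check by Monte Carlo: increments are independent, strictly positive gamma draws, and reliability at time t is the probability that accumulated degradation stays below a failure threshold. The parameters below are hypothetical, and the paper's bivariate Birnbaum-Saunders approximation is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sketch: degradation X(t) is a gamma process with shape a*t and
# scale b, so its increments are independent and non-negative. The unit fails
# once degradation crosses a threshold D; reliability at t is P(X(t) < D).
a, b, D = 2.0, 0.5, 10.0        # shape rate per unit time, scale, failure threshold
n_steps = 10                    # unit time steps

# Simulate many sample paths by accumulating independent gamma increments.
n_paths = 100_000
increments = rng.gamma(shape=a, scale=b, size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)

# Monte Carlo estimate of the reliability function R(t) = P(X(t) < D).
reliability = (paths < D).mean(axis=0)
```

Because each path is non-decreasing, the estimated reliability curve is exactly non-increasing in time, mirroring the monotone degradation that motivates the gamma-process choice.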

  5. Inter-arch digital model vs. manual cast measurements: Accuracy and reliability.

    Science.gov (United States)

    Kiviahde, Heikki; Bukovac, Lea; Jussila, Päivi; Pesonen, Paula; Sipilä, Kirsi; Raustia, Aune; Pirttiniemi, Pertti

    2017-06-28

    The purpose of this study was to evaluate the accuracy and reliability of inter-arch measurements using digital dental models and conventional dental casts. Thirty sets of dental casts with permanent dentition were examined. Manual measurements were done with a digital caliper directly on the dental casts, and digital measurements were made on 3D models by two independent examiners. Intra-class correlation coefficients (ICC), a paired sample t-test or Wilcoxon signed-rank test, and Bland-Altman plots were used to evaluate intra- and inter-examiner error and to determine the accuracy and reliability of the measurements. The ICC values were generally good for manual and excellent for digital measurements. The Bland-Altman plots of all the measurements showed good agreement between the manual and digital methods and excellent inter-examiner agreement using the digital method. Inter-arch occlusal measurements on digital models are accurate and reliable and are superior to manual measurements.
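The intraclass correlation used for inter-examiner agreement above (in the ICC(2,1) form: two-way random effects, absolute agreement, single measurement) can be computed directly from the ANOVA mean squares. The cusp-angle values below are made up for illustration:

```python
import numpy as np

# Sketch of ICC(2,1): two-way random effects, absolute agreement, single rater.
def icc_2_1(ratings):
    # ratings: (n subjects) x (k raters)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-subject MS
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between-rater MS
    resid = ratings - row_means[:, None] - col_means[None, :] + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))          # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two examiners measuring hypothetical cusp inclination angles (degrees):
cusps = np.array([[23.1, 23.0], [30.2, 30.4], [25.5, 25.6], [28.0, 27.9], [31.3, 31.2]])
icc = icc_2_1(cusps)
```

Because the between-subject variability dwarfs the tiny examiner disagreements, the ICC comes out close to 1, the "excellent" range reported in the abstract.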

  6. Comparative analysis among deterministic and stochastic collision damage models for oil tanker and bulk carrier reliability

    Directory of Open Access Journals (Sweden)

    A. Campanile

    2018-01-01

    Full Text Available The incidence of collision damage models on oil tanker and bulk carrier reliability is investigated considering the IACS deterministic model against GOALDS/IMO database statistics for collision events, substantiating the probabilistic model. Statistical properties of hull girder residual strength are determined by Monte Carlo simulation, based on random generation of damage dimensions and a modified form of incremental-iterative method, to account for neutral axis rotation and equilibrium of horizontal bending moment, due to cross-section asymmetry after collision events. Reliability analysis is performed, to investigate the incidence of collision penetration depth and height statistical properties on hull girder sagging/hogging failure probabilities. Besides, the incidence of corrosion on hull girder residual strength and reliability is also discussed, focussing on gross, hull girder net and local net scantlings, respectively. The ISSC double hull oil tanker and single side bulk carrier, assumed as test cases in the ISSC 2012 report, are taken as reference ships.

  7. PCA as a practical indicator of OPLS-DA model reliability.

    Science.gov (United States)

    Worley, Bradley; Powers, Robert

    Principal Component Analysis (PCA) and Orthogonal Projections to Latent Structures Discriminant Analysis (OPLS-DA) are powerful statistical modeling tools that provide insights into separations between experimental groups based on high-dimensional spectral measurements from NMR, MS or other analytical instrumentation. However, when used without validation, these tools may lead investigators to statistically unreliable conclusions. This danger is especially real for Partial Least Squares (PLS) and OPLS, which aggressively force separations between experimental groups. As a result, OPLS-DA is often used as an alternative method when PCA fails to expose group separation, but this practice is highly dangerous. Without rigorous validation, OPLS-DA can easily yield statistically unreliable group separation. A Monte Carlo analysis of PCA group separations and OPLS-DA cross-validation metrics was performed on NMR datasets with statistically significant separations in scores-space. A linearly increasing amount of Gaussian noise was added to each data matrix followed by the construction and validation of PCA and OPLS-DA models. With increasing added noise, the PCA scores-space distance between groups rapidly decreased and the OPLS-DA cross-validation statistics simultaneously deteriorated. A decrease in correlation between the estimated loadings (added noise) and the true (original) loadings was also observed. While the validity of the OPLS-DA model diminished with increasing added noise, the group separation in scores-space remained basically unaffected. Supported by the results of Monte Carlo analyses of PCA group separations and OPLS-DA cross-validation metrics, we provide practical guidelines and cross-validatory recommendations for reliable inference from PCA and OPLS-DA models.
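The noise-addition experiment can be reproduced in miniature with a numpy-only PCA. Group sizes, the latent loading, and noise levels below are invented; the point illustrated is the one the abstract reports, that added Gaussian noise degrades the correlation between the estimated and true loadings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic "spectral" groups separated along one latent loading direction
n, p = 20, 50
loading = rng.normal(size=p)
loading /= np.linalg.norm(loading)
group = np.repeat([0, 1], n // 2)
X0 = np.outer(np.where(group == 0, -2.0, 2.0), loading) \
     + 0.3 * rng.normal(size=(n, p))

def pc1(X):
    """First principal-component loading via SVD of the centred matrix."""
    Xc = X - X.mean(axis=0)
    return np.linalg.svd(Xc, full_matrices=False)[2][0]

# Correlation between estimated PC1 loading and the true loading,
# before and after adding strong Gaussian noise
corr_clean = abs(pc1(X0) @ loading)
corr_noisy = abs(pc1(X0 + 4.0 * rng.normal(size=X0.shape)) @ loading)
```

Repeating this over a grid of noise levels (the Monte Carlo design in the paper) traces out the deterioration curve for both the scores-space separation and the loading recovery.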

  8. Centralised gaming models: providing optimal gambling behaviour controls

    OpenAIRE

    Griffiths, MD; Wood, RTA

    2009-01-01

    The expansion in the gaming industry and its widening attraction points to the need for ever more verifiable means of controlling problem gambling. Various strategies have been built into casino venue operations to address this, but recently, following a new focus on social responsibility, a group of experts considered the possibilities of a centralised gaming model as a more effective control mechanism for dealing with gambling behaviours.

  9. A Flexible Collaborative Innovation Model for SOA Services Providers

    OpenAIRE

Santanna-Filho, João; Rabelo, Ricardo; Pereira-Klen, Alexandra

    2015-01-01

    Part 5: Innovation Networks; International audience; Software sector plays a very relevant role in current world economy. One of its characteristics is that they are mostly composed of SMEs. SMEs have been pushed to invest in innovation to keep competitive. Service Oriented Architecture (SOA) is a recent and powerful ICT paradigm for more sustainable business models. A SOA product has many differences when compared to manufacturing sector. Besides that, SOA projects are however very complex, ...

  10. Diverse Data Sets Can Yield Reliable Information through Mechanistic Modeling: Salicylic Acid Clearance.

    Science.gov (United States)

    Raymond, G M; Bassingthwaighte, J B

This is a practical example of a powerful research strategy: putting together data from studies covering a diversity of conditions can yield a scientifically sound grasp of the phenomenon when the individual observations failed to provide definitive understanding. The rationale is that defining a realistic, quantitative, explanatory hypothesis for the whole set of studies brings about a "consilience" of the often competing hypotheses considered for individual data sets. An internally consistent conjecture linking multiple data sets simultaneously provides stronger evidence on the characteristics of a system than does analysis of individual data sets limited to narrow ranges of conditions. Our example examines three very different data sets on the clearance of salicylic acid from humans: a high-concentration set from aspirin overdoses; a set with medium concentrations from a research study on the influences of the route of administration and of sex on the clearance kinetics; and a set on low-dose aspirin for cardiovascular health. Three models were tested: (1) a first-order reaction, (2) a Michaelis-Menten (M-M) approach, and (3) an enzyme kinetic model with forward and backward reactions. The reaction rates found from model 1 were distinctly different for the three data sets, having no commonality. The M-M model 2 fitted each of the three data sets but gave reliable estimates of the Michaelis constant only for the medium-level data (Km = 24±5.4 mg/L); analyzing the three data sets together with model 2 gave Km = 18±2.6 mg/L. (Estimating parameters using larger numbers of data points in an optimization increases the degrees of freedom, constraining the range of the estimates.) Using the enzyme kinetic model (3) increased the number of free parameters but nevertheless improved the goodness of fit to the combined data sets, giving tighter constraints and a lower estimated Km = 14.6±2.9 mg/L, demonstrating that fitting diverse data sets with a single model
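The Michaelis-Menten fit at the heart of "model 2" can be sketched as a plain least-squares problem on pooled data. The concentration ranges mimic the three regimes described (low-dose, research-study, overdose), but all parameter values and the noise model below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic elimination rates pooled across three concentration regimes
Vmax_true, Km_true = 50.0, 18.0   # hypothetical values (mg/h, mg/L)
conc = np.concatenate([rng.uniform(1, 10, 20),      # low-dose range
                       rng.uniform(20, 80, 20),     # research-study range
                       rng.uniform(200, 600, 20)])  # overdose range
rate = Vmax_true * conc / (Km_true + conc) \
       * (1 + 0.05 * rng.normal(size=conc.size))    # 5% multiplicative noise

# Grid-search least squares for the Michaelis-Menten parameters
Vgrid = np.linspace(30, 70, 101)
Kgrid = np.linspace(5, 40, 141)
V, K = np.meshgrid(Vgrid, Kgrid)                    # shapes (141, 101)
resid = rate[:, None, None] - V * conc[:, None, None] / (K + conc[:, None, None])
sse = (resid ** 2).sum(axis=0)
k_idx, v_idx = np.unravel_index(sse.argmin(), sse.shape)
Vmax_hat, Km_hat = Vgrid[v_idx], Kgrid[k_idx]
```

Pooling all 60 points constrains Km far more tightly than fitting any one regime alone, which is the "consilience" effect the abstract describes.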

  11. A case study review of technical and technology issues for transition of a utility load management program to provide system reliability resources in restructured electricity markets

    Energy Technology Data Exchange (ETDEWEB)

    Weller, G.H.

    2001-07-15

    Utility load management programs--including direct load control and interruptible load programs--were employed by utilities in the past as system reliability resources. With electricity industry restructuring, the context for these programs has changed; the market that was once controlled by vertically integrated utilities has become competitive, raising the question: can existing load management programs be modified so that they can effectively participate in competitive energy markets? In the short run, modified and/or improved operation of load management programs may be the most effective form of demand-side response available to the electricity system today. However, in light of recent technological advances in metering, communication, and load control, utility load management programs must be carefully reviewed in order to determine appropriate investments to support this transition. This report investigates the feasibility of and options for modifying an existing utility load management system so that it might provide reliability services (i.e. ancillary services) in the competitive markets that have resulted from electricity industry restructuring. The report is a case study of Southern California Edison's (SCE) load management programs. SCE was chosen because it operates one of the largest load management programs in the country and it operates them within a competitive wholesale electricity market. The report describes a wide range of existing and soon-to-be-available communication, control, and metering technologies that could be used to facilitate the evolution of SCE's load management programs and systems to provision of reliability services. The fundamental finding of this report is that, with modifications, SCE's load management infrastructure could be transitioned to provide critical ancillary services in competitive electricity markets, employing currently or soon-to-be available load control technologies.

  12. Development of Markov model of emergency diesel generator for dynamic reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Young Ho; Choi, Sun Yeong; Yang, Joon Eon [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-02-01

The EDG (Emergency Diesel Generator) of a nuclear power plant is one of the most important pieces of equipment for mitigating accidents. The FT (Fault Tree) method is widely used to assess the reliability of safety systems like an EDG in a nuclear power plant. This method, however, has limitations in modeling the dynamic features of safety systems exactly. We have, hence, developed a Markov model to represent the stochastic process of dynamic systems whose states change over time. The Markov model enables us to develop a dynamic reliability model of the EDG. This model can represent all possible states of the EDG, in contrast to the FRANTIC code developed by the U.S. NRC for the reliability analysis of standby systems. To assess the regulation policy for test intervals, we performed two simulations based on generic data and plant-specific data of YGN 3, respectively, by using the developed model. We also estimate the effects of various repair rates and of the fraction of starting failures caused by demand shock on the reliability of the EDG. Finally, aging effects are analyzed. (author). 23 refs., 19 figs., 9 tabs.
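A minimal Markov sketch of a standby EDG can be written with three states and a steady-state solve. The states and all rates below are assumptions chosen for illustration, not the report's YGN 3 data; in particular, the detection rate is a crude stand-in for periodic testing.

```python
import numpy as np

# States: 0 = standby, available; 1 = failed in standby (undetected);
#         2 = under repair. Rates (/h) are illustrative assumptions.
lam = 1.0e-5      # standby failure rate
tau = 1.0 / 730.0 # crude detection rate for roughly monthly (~730 h) testing
mu = 1.0 / 24.0   # repair rate (mean repair time 24 h)
Q = np.array([[-lam,  lam,  0.0],
              [ 0.0, -tau,  tau],
              [  mu,  0.0,  -mu]])

# Steady-state probabilities: pi Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
unavailability = pi[1] + pi[2]
```

Varying `tau` (the test interval) and `mu` (the repair rate) in this loop is the kind of sensitivity study the abstract describes for regulation policy on test intervals.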

  13. Linear and evolutionary polynomial regression models to forecast coastal dynamics: Comparison and reliability assessment

    Science.gov (United States)

    Bruno, Delia Evelina; Barca, Emanuele; Goncalves, Rodrigo Mikosz; de Araujo Queiroz, Heithor Alexandre; Berardi, Luigi; Passarella, Giuseppe

    2018-01-01

In this paper, the Evolutionary Polynomial Regression data modelling strategy has been applied to study small-scale, short-term coastal morphodynamics, given its capability for treating a wide database of known information non-linearly. Simple linear and multilinear regression models were also applied, to achieve a balance between the computational load and the reliability of estimations across the three models. In fact, even though it is easy to imagine that the more complex the model, the better the prediction, sometimes a "slight" worsening of estimations can be accepted in exchange for the time saved in data organization and computational load. The models' outcomes were validated through a detailed statistical error analysis, which revealed a slightly better estimation by the polynomial model with respect to the multilinear model, as expected. On the other hand, even though the data organization was identical for the two models, the multilinear one required a simpler simulation setting and a faster run time. Finally, the most reliable evolutionary polynomial regression model was used to make some conjectures about the increase in uncertainty with the extension of the extrapolation time of the estimation. The overlapping rate between the confidence band of the mean of the known coast position and the prediction band of the estimated position can be a good index of the weakness in producing reliable estimations when the extrapolation time increases too much. The proposed models and tests have been applied to a coastal sector located near Torre Colimena in the Apulia region, south Italy.
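The linear-versus-polynomial comparison can be sketched with a synthetic shoreline series: fit both models on a training window, then score them on a held-out forecast horizon. The trend, noise level, and window lengths are invented; this is a plain polynomial fit, not the paper's evolutionary (EPR) search over model structures.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic shoreline position (m) vs time (months): mild quadratic trend
t = np.arange(48, dtype=float)
shore = 120.0 - 0.4 * t + 0.006 * t**2 + rng.normal(0.0, 0.8, t.size)

train, future = t < 36, t >= 36      # fit on 3 years, forecast 1 year
lin = np.polyfit(t[train], shore[train], 1)
quad = np.polyfit(t[train], shore[train], 2)

def rmse(coef):
    """Forecast RMSE on the held-out horizon."""
    return np.sqrt(np.mean((np.polyval(coef, t[future]) - shore[future]) ** 2))

rmse_lin, rmse_quad = rmse(lin), rmse(quad)
```

Extending the forecast horizon further widens the gap between the models and inflates the prediction band, which is the uncertainty-growth effect the abstract discusses.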

  14. ARA and ARI imperfect repair models: Estimation, goodness-of-fit and reliability prediction

    International Nuclear Information System (INIS)

    Toledo, Maria Luíza Guerra de; Freitas, Marta A.; Colosimo, Enrico A.; Gilardoni, Gustavo L.

    2015-01-01

An appropriate maintenance policy is essential to reduce expenses and risks related to equipment failures. A fundamental aspect to be considered when specifying such policies is being able to predict the reliability of the systems under study, based on a well-fitted model. In this paper, the classes of models Arithmetic Reduction of Age and Arithmetic Reduction of Intensity are explored. Likelihood functions for such models are derived, and a graphical method is proposed for model selection. A real data set involving failures in trucks used by a Brazilian mining company is analyzed considering models with different memories. Parameters, namely the shape and scale of the Power Law Process and the efficiency of repair, were estimated for the best-fitted model. Estimation of model parameters allowed us to derive reliability estimators to predict the behavior of the failure process. These results are valuable information for the mining company and can be used to support decision making regarding preventive maintenance policy. - Highlights: • Likelihood functions for imperfect repair models are derived. • A goodness-of-fit technique is proposed as a tool for model selection. • Failures in trucks owned by a Brazilian mining company are modeled. • Estimation allowed deriving reliability predictors to forecast the future failure process of the trucks
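The Power Law Process at the base of these repair models has closed-form maximum-likelihood estimators in the time-truncated case, which can be sketched on simulated failure data. The parameters below are invented; this sketch covers only the minimal-repair (NHPP) limit, not the ARA/ARI repair-efficiency estimation the paper derives.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate failure times of one repairable truck under a Power Law Process
# with intensity (beta/eta)*(t/eta)^(beta-1); parameters are illustrative.
beta_true, eta_true, T = 1.8, 400.0, 4000.0   # shape, scale (h), observation window (h)

# Given N(T)=n, the event times are iid with CDF (t/T)^beta on [0, T]
n_events = int(rng.poisson((T / eta_true) ** beta_true))
u = np.sort(rng.uniform(size=n_events))
times = T * u ** (1.0 / beta_true)

# Closed-form MLEs for a time-truncated PLP
beta_hat = n_events / np.log(T / times).sum()
eta_hat = T / n_events ** (1.0 / beta_hat)
```

With `beta_hat > 1` the intensity is increasing (deterioration), which is the regime where the choice between ARA and ARI repair models starts to matter.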

  15. The transparency, reliability and utility of tropical rainforest land-use and land-cover change models.

    Science.gov (United States)

    Rosa, Isabel M D; Ahmed, Sadia E; Ewers, Robert M

    2014-06-01

    Land-use and land-cover (LULC) change is one of the largest drivers of biodiversity loss and carbon emissions globally. We use the tropical rainforests of the Amazon, the Congo basin and South-East Asia as a case study to investigate spatial predictive models of LULC change. Current predictions differ in their modelling approaches, are highly variable and often poorly validated. We carried out a quantitative review of 48 modelling methodologies, considering model spatio-temporal scales, inputs, calibration and validation methods. In addition, we requested model outputs from each of the models reviewed and carried out a quantitative assessment of model performance for tropical LULC predictions in the Brazilian Amazon. We highlight existing shortfalls in the discipline and uncover three key points that need addressing to improve the transparency, reliability and utility of tropical LULC change models: (1) a lack of openness with regard to describing and making available the model inputs and model code; (2) the difficulties of conducting appropriate model validations; and (3) the difficulty that users of tropical LULC models face in obtaining the model predictions to help inform their own analyses and policy decisions. We further draw comparisons between tropical LULC change models in the tropics and the modelling approaches and paradigms in other disciplines, and suggest that recent changes in the climate change and species distribution modelling communities may provide a pathway that tropical LULC change modellers may emulate to further improve the discipline. Climate change models have exerted considerable influence over public perceptions of climate change and now impact policy decisions at all political levels. We suggest that tropical LULC change models have an equally high potential to influence public opinion and impact the development of land-use policies based on plausible future scenarios, but, to do that reliably may require further improvements in the

  16. An overview of erosion corrosion models and reliability assessment for corrosion defects in piping system

    International Nuclear Information System (INIS)

    Srividya, A.; Suresh, H.N.; Verma, A.K.; Gopika, V.; Santosh

    2006-01-01

    Piping systems are part of passive structural elements in power plants. The analysis of the piping systems and their quantification in terms of failure probability is of utmost importance. The piping systems may fail due to various degradation mechanisms like thermal fatigue, erosion-corrosion, stress corrosion cracking and vibration fatigue. On examination of previous results, erosion corrosion was more prevalent and wall thinning is a time dependent phenomenon. The paper is intended to consolidate the work done by various investigators on erosion corrosion in estimating the erosion corrosion rate and reliability predictions. A comparison of various erosion corrosion models is made. The reliability predictions based on remaining strength of corroded pipelines by wall thinning is also attempted. Variables in the limit state functions are modelled using normal distributions and Reliability assessment is carried out using some of the existing failure pressure models. A steady state corrosion rate is assumed to estimate the corrosion defect and First Order Reliability Method (FORM) is used to find the probability of failure associated with corrosion defects over time using the software for Component Reliability evaluation (COMREL). (author)
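For a limit state that is linear in normal variables, the FORM reliability index reduces to mean over standard deviation, which can be shown in a few lines. The numbers, the Barlow-type burst-pressure formula, and the steady corrosion rate below are illustrative assumptions, not the paper's COMREL models.

```python
import math

# Linearised limit state for a corroded pipe: g = p_burst - p_op, with
# wall thickness thinning at a steady corrosion rate (all values assumed)
t0_mean, t0_sd = 8.0, 0.4          # initial wall thickness (mm)
rate_mean, rate_sd = 0.05, 0.015   # corrosion rate (mm/yr)
years = 20.0
sigma_f, D = 450.0, 300.0          # flow stress (MPa), diameter (mm)
p_op = 10.0                        # operating pressure (MPa)

# Barlow-type burst pressure p = 2*sigma_f*t/D is linear in t, so g is a
# linear combination of the independent normal variables t0 and rate
k = 2.0 * sigma_f / D
g_mean = k * (t0_mean - rate_mean * years) - p_op
g_sd = k * math.sqrt(t0_sd**2 + (rate_sd * years) ** 2)
beta = g_mean / g_sd                              # Hasofer-Lind index
pf = 0.5 * math.erfc(beta / math.sqrt(2.0))       # Phi(-beta)
```

Re-evaluating `beta` over a range of `years` gives the time-dependent failure probability curve that the paper tracks for corrosion defects.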

  17. Efficient surrogate models for reliability analysis of systems with multiple failure modes

    International Nuclear Information System (INIS)

    Bichon, Barron J.; McFarland, John M.; Mahadevan, Sankaran

    2011-01-01

    Despite many advances in the field of computational reliability analysis, the efficient estimation of the reliability of a system with multiple failure modes remains a persistent challenge. Various sampling and analytical methods are available, but they typically require accepting a tradeoff between accuracy and computational efficiency. In this work, a surrogate-based approach is presented that simultaneously addresses the issues of accuracy, efficiency, and unimportant failure modes. The method is based on the creation of Gaussian process surrogate models that are required to be locally accurate only in the regions of the component limit states that contribute to system failure. This approach to constructing surrogate models is demonstrated to be both an efficient and accurate method for system-level reliability analysis. - Highlights: → Extends efficient global reliability analysis to systems with multiple failure modes. → Constructs locally accurate Gaussian process models of each response. → Highly efficient and accurate method for assessing system reliability. → Effectiveness is demonstrated on several test problems from the literature.
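The core idea, replacing an expensive limit-state evaluation with a cheap surrogate before sampling, can be sketched with a small numpy-only Gaussian-process interpolator. The limit state, kernel length-scale, and training design below are invented; the paper's method additionally refines the surrogate adaptively near the limit states.

```python
import numpy as np

rng = np.random.default_rng(11)

def g(X):
    """Illustrative component limit state (failure where g < 0)."""
    return 3.0 - X[:, 0] - X[:, 1]

def rbf(A, B, ell=2.0):
    """Squared-exponential kernel matrix between point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

# Fit a zero-mean GP interpolator on a modest training design
Xtr = 2.0 * rng.normal(size=(80, 2))
ytr = g(Xtr)
K = rbf(Xtr, Xtr) + 1e-6 * np.eye(len(Xtr))   # jitter for conditioning
alpha = np.linalg.solve(K, ytr)

def surrogate(X):
    return rbf(X, Xtr) @ alpha

# Monte Carlo failure probability using the cheap surrogate vs the truth
Xmc = rng.normal(size=(50_000, 2))
pf_surr = float(np.mean(surrogate(Xmc) < 0.0))
pf_true = float(np.mean(g(Xmc) < 0.0))
```

For a system with several failure modes, one surrogate per component response would be built and the system indicator (union or intersection of `g_i < 0`) evaluated on the surrogates.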

  18. Intra-observer reliability and agreement of manual and digital orthodontic model analysis.

    Science.gov (United States)

    Koretsi, Vasiliki; Tingelhoff, Linda; Proff, Peter; Kirschneck, Christian

    2018-01-23

Digital orthodontic model analysis is gaining acceptance in orthodontics, but its reliability depends on the digitalisation hardware and software used. We thus investigated the intra-observer reliability and the agreement/conformity of a particular digital model analysis workflow in relation to traditional manual plaster model analysis. Forty-eight plaster casts of the upper/lower dentition were collected. Virtual models were obtained with orthoX®scan (Dentaurum) and analysed with ivoris®analyze3D (Computer konkret). Manual model analyses were done with a dial caliper (0.1 mm). Common parameters were measured on each plaster cast and its virtual counterpart five times each by an experienced observer. We assessed intra-observer reliability within each method (ICC), agreement/conformity between methods (Bland-Altman analyses and Lin's concordance correlation), and changing bias (regression analyses). Intra-observer reliability was substantial within each method (ICC ≥ 0.7), except for five manual outcomes (12.8 per cent). Bias between methods was statistically significant, but less than 0.5 mm for 87.2 per cent of the outcomes. In general, larger tooth sizes were measured digitally. The total differences for the maxilla and mandible had wide limits of agreement (-3.25/6.15 and -2.31/4.57 mm), but bias between methods was mostly smaller than the intra-observer variation within each method, with substantial conformity of manual and digital measurements in general. No changing bias was detected. Although both workflows were reliable, the investigated digital workflow proved to be more reliable and yielded on average larger tooth sizes. Averaged differences between methods were within 0.5 mm for directly measured outcomes, but wide ranges are expected for some computed space parameters due to cumulative error.

  19. A simulation model for reliability evaluation of Space Station power systems

    Science.gov (United States)

    Singh, C.; Patton, A. D.; Kumar, Mudit; Wagner, H.

    1988-01-01

    A detailed simulation model for the hybrid Space Station power system is presented which allows photovoltaic and solar dynamic power sources to be mixed in varying proportions. The model considers the dependence of reliability and storage characteristics during the sun and eclipse periods, and makes it possible to model the charging and discharging of the energy storage modules in a relatively accurate manner on a continuous basis.

  20. Do Cochrane reviews provide a good model for social science?

    DEFF Research Database (Denmark)

    Konnerup, Merete; Kongsted, Hans Christian

    2012-01-01

Formalised research synthesis to underpin evidence-based policy and practice has become increasingly important in areas of public policy. In this paper we discuss whether the Cochrane standard for systematic reviews of healthcare interventions is appropriate for social research. We examine the formal criteria of the Cochrane Collaboration for including particular study designs and search the Cochrane Library to provide quantitative evidence on the de facto standard of actual Cochrane reviews. By identifying the sample of Cochrane reviews that consider observational designs, we are able to conclude that the majority of reviews appears limited to considering randomised controlled trials only. Because recent studies have delineated conditions for observational studies in social research to produce valid evidence, we argue that an inclusive approach is essential for truly evidence-based policy...

  1. Modeling reliability of power systems substations by using stochastic automata networks

    International Nuclear Information System (INIS)

    Šnipas, Mindaugas; Radziukynas, Virginijus; Valakevičius, Eimutis

    2017-01-01

In this paper, the stochastic automata networks (SANs) formalism is applied to model the reliability of power system substations. The proposed strategy allows reducing the size of the state space of the Markov chain model and simplifying system specification. Two case studies of standard substation configurations are considered in detail. SAN models with different assumptions were created. The SAN approach is compared with exact reliability calculation using a minimal path set method. Modeling results showed that total independence of automata can be assumed for relatively small power system substations with reliable equipment. In this case, the implementation of a Markov chain model by using a SAN method is a relatively easy task. - Highlights: • We present the methodology to apply stochastic automata network formalism to create Markov chain models of power systems. • The stochastic automata network approach is combined with minimal path sets and structural functions. • Two models of substation configurations with different model assumptions are presented to illustrate the proposed methodology. • Modeling results of systems with independent automata and functional transition rates are similar. • The conditions when total independence of automata can be assumed are addressed.
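The exact minimal-path-set calculation used as the comparison baseline can be sketched with inclusion-exclusion over component availabilities. The components, their availabilities, and the two supply paths below are invented, not the paper's case-study configurations.

```python
from itertools import combinations

# Illustrative substation: two supply paths (line + transformer) feeding one
# busbar; component availabilities are assumed values.
p = {"L1": 0.999, "T1": 0.995, "B1": 0.9999, "L2": 0.999, "T2": 0.995}
paths = [{"L1", "T1", "B1"}, {"L2", "T2", "B1"}]   # minimal path sets

def avail(components):
    """Probability that every component in the set works (independence)."""
    prob = 1.0
    for c in components:
        prob *= p[c]
    return prob

# Inclusion-exclusion over unions of minimal path sets
reliability = 0.0
for r in range(1, len(paths) + 1):
    for combo in combinations(paths, r):
        union = set().union(*combo)
        reliability += (-1) ** (r + 1) * avail(union)
```

For larger substations the number of union terms grows exponentially, which is exactly where the SAN state-space reduction the paper proposes pays off.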

  2. Reliable gain-scheduled control of discrete-time systems and its application to CSTR model

    Science.gov (United States)

    Sakthivel, R.; Selvi, S.; Mathiyalagan, K.; Shi, Y.

    2016-10-01

This paper is focused on reliable gain-scheduled controller design for a class of discrete-time systems with randomly occurring nonlinearities and actuator faults. Further, the nonlinearity in the system model is assumed to occur randomly according to a Bernoulli distribution with measurable time-varying probability in real time. The main purpose of this paper is to design a gain-scheduled controller by implementing a probability-dependent Lyapunov function and a linear matrix inequality (LMI) approach such that the closed-loop discrete-time system is stochastically stable for all admissible randomly occurring nonlinearities. The existence conditions for the reliable controller are formulated in terms of LMI constraints. Finally, the proposed reliable gain-scheduled control scheme is applied to a continuously stirred tank reactor model to demonstrate the effectiveness and applicability of the proposed design technique.

  3. Capturing cognitive causal paths in human reliability analysis with Bayesian network models

    International Nuclear Information System (INIS)

    Zwirglmaier, Kilian; Straub, Daniel; Groth, Katrina M.

    2017-01-01

In the last decade, Bayesian networks (BNs) have been identified as a powerful tool for human reliability analysis (HRA), with multiple advantages over traditional HRA methods. In this paper we illustrate how BNs can be used to include additional, qualitative causal paths to provide traceability. The proposed framework provides the foundation to resolve several needs frequently expressed by the HRA community. First, the developed extended BN structure reflects the causal paths found in the cognitive psychology literature, thereby addressing the need for causal traceability and a strong scientific basis in HRA. Secondly, the use of node reduction algorithms allows the BN to be condensed to a level of detail at which quantification is as straightforward as the techniques used in existing HRA. We illustrate the framework by developing a BN version of the critical data misperceived crew failure mode in the IDHEAS HRA method, which is currently under development at the US NRC. We illustrate how the model could be quantified with a combination of expert probabilities and information from operator performance databases such as SACADA. This paper lays the foundations necessary to expand the cognitive and quantitative foundations of HRA. - Highlights: • A framework for building traceable BNs for HRA, based on cognitive causal paths. • A qualitative BN structure, directly showing these causal paths, is developed. • Node reduction algorithms are used for making the BN structure quantifiable. • BN quantified through expert estimates and observed data (Bayesian updating). • The framework is illustrated for a crew failure mode of IDHEAS.
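The quantification step, marginalising a failure-mode node over its causal parents and updating on evidence, can be sketched by direct enumeration on a tiny network. The structure and every CPT number below are invented for illustration; they are not IDHEAS values.

```python
# Tiny Bayesian-network sketch of a causal path to a crew failure mode.
# Nodes: HighWorkload -> DataMisperceived <- PoorIndicatorDesign
# All probabilities are invented for illustration.
p_workload = 0.3
p_poor_design = 0.1
# P(misperceived | workload, design)
cpt = {(True, True): 0.40, (True, False): 0.10,
       (False, True): 0.15, (False, False): 0.01}

# Marginal P(misperceived) by enumeration over the parent states
p_mis = sum(cpt[(w, d)]
            * (p_workload if w else 1 - p_workload)
            * (p_poor_design if d else 1 - p_poor_design)
            for w in (True, False) for d in (True, False))

# Bayesian updating: P(high workload | misperception observed)
p_w_given_mis = sum(cpt[(True, d)] * p_workload
                    * (p_poor_design if d else 1 - p_poor_design)
                    for d in (True, False)) / p_mis
```

The same enumeration scales to the reduced networks the paper obtains after node reduction, and the priors could be replaced by Bayesian updates from a database such as SACADA.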

  4. Human reliability-based MC and A models for detecting insider theft

    International Nuclear Information System (INIS)

    Duran, Felicia Angelica; Wyss, Gregory Dane

    2010-01-01

    Material control and accounting (MC and A) safeguards operations that track and account for critical assets at nuclear facilities provide a key protection approach for defeating insider adversaries. These activities, however, have been difficult to characterize in ways that are compatible with the probabilistic path analysis methods that are used to systematically evaluate the effectiveness of a site's physical protection (security) system (PPS). MC and A activities have many similar characteristics to operator procedures performed in a nuclear power plant (NPP) to check for anomalous conditions. This work applies human reliability analysis (HRA) methods and models for human performance of NPP operations to develop detection probabilities for MC and A activities. This has enabled the development of an extended probabilistic path analysis methodology in which MC and A protections can be combined with traditional sensor data in the calculation of PPS effectiveness. The extended path analysis methodology provides an integrated evaluation of a safeguards and security system that addresses its effectiveness for attacks by both outside and inside adversaries.

  5. A hybrid reliability algorithm using PSO-optimized Kriging model and adaptive importance sampling

    Science.gov (United States)

    Tong, Cao; Gong, Haili

    2018-03-01

This paper aims to reduce the computational cost of reliability analysis. A new hybrid algorithm is proposed based on a PSO-optimized Kriging model and an adaptive importance sampling method. Firstly, the particle swarm optimization (PSO) algorithm is used to optimize the parameters of the Kriging model. A typical function is fitted to validate the improvement by comparing results of the PSO-optimized Kriging model with those of the original Kriging model. Secondly, a hybrid algorithm for reliability analysis combining the optimized Kriging model and adaptive importance sampling is proposed. Two cases from the literature are given to validate its efficiency and correctness. The proposed method is proved to be more efficient due to its use of a small number of sample points, according to comparison results.
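The importance-sampling half of the hybrid can be sketched on a one-dimensional problem with a known answer: shift the sampling density toward the failure region and correct with likelihood ratios. The threshold and proposal below are chosen for illustration; the paper's method additionally adapts the proposal and evaluates the limit state through the Kriging surrogate.

```python
import math
import random

random.seed(0)

# Estimate pf = P(X > 4) for X ~ N(0, 1), a rare event for crude Monte Carlo,
# by sampling from the shifted proposal N(4, 1) with likelihood-ratio weights.
N = 20_000
shift = 4.0
acc = 0.0
for _ in range(N):
    x = random.gauss(shift, 1.0)
    if x > 4.0:
        # weight = phi(x) / phi(x - shift)
        acc += math.exp(0.5 * (x - shift) ** 2 - 0.5 * x * x)
pf_is = acc / N
pf_exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))   # exact tail probability
```

Crude Monte Carlo would need on the order of 1/pf ≈ 30,000 samples per expected failure hit; the shifted proposal concentrates nearly half its samples in the failure region.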

  6. A Review of the Progress with Statistical Models of Passive Component Reliability

    Directory of Open Access Journals (Sweden)

    Bengt O.Y. Lydell

    2017-03-01

Full Text Available During the past 25 years, in the context of probabilistic safety assessment, efforts have been directed towards establishment of comprehensive pipe failure event databases as a foundation for exploratory research to better understand how to effectively organize a piping reliability analysis task. The focused pipe failure database development efforts have progressed well with the development of piping reliability analysis frameworks that utilize the full body of service experience data, fracture mechanics analysis insights, expert elicitation results that are rolled into an integrated and risk-informed approach to the estimation of piping reliability parameters with full recognition of the embedded uncertainties. The discussion in this paper builds on a major collection of operating experience data (more than 11,000 pipe failure records) and the associated lessons learned from data analysis and data applications spanning three decades. The piping reliability analysis lessons learned have been obtained from the derivation of pipe leak and rupture frequencies for corrosion resistant piping in a raw water environment, loss-of-coolant-accident frequencies given degradation mitigation, high-energy pipe break analysis, moderate-energy pipe break analysis, and numerous plant-specific applications of a statistical piping reliability model framework. Conclusions are presented regarding the feasibility of determining and incorporating aging effects into probabilistic safety assessment models.
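A standard statistical building block for turning such service-experience data into pipe failure frequencies is a conjugate gamma-Poisson Bayesian update, which can be sketched in a few lines. The counts, exposure, and prior below are invented for illustration; the paper's frameworks work from a database of more than 11,000 records.

```python
# Bayesian estimation sketch of a pipe failure frequency from service data.
# Conjugate gamma prior on the rate lambda, Poisson evidence (illustrative
# numbers; prior in pseudo-events and pseudo-system-years).
prior_a, prior_b = 0.5, 1000.0      # Jeffreys-like diffuse prior
events, exposure = 3, 25_000.0      # observed failures, system-years

post_a, post_b = prior_a + events, prior_b + exposure
rate_mean = post_a / post_b         # posterior mean failure frequency (/yr)
```

The posterior gamma(post_a, post_b) also yields the uncertainty percentiles that a risk-informed PSA application would carry forward alongside the point estimate.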

  7. A review of the progress with statistical models of passive component reliability

    Energy Technology Data Exchange (ETDEWEB)

    Lydell, Bengt O. Y. [Sigma-Phase Inc., Vail (United States)

    2017-03-15

    During the past 25 years, in the context of probabilistic safety assessment, efforts have been directed towards establishment of comprehensive pipe failure event databases as a foundation for exploratory research to better understand how to effectively organize a piping reliability analysis task. The focused pipe failure database development efforts have progressed well with the development of piping reliability analysis frameworks that utilize the full body of service experience data, fracture mechanics analysis insights, expert elicitation results that are rolled into an integrated and risk-informed approach to the estimation of piping reliability parameters with full recognition of the embedded uncertainties. The discussion in this paper builds on a major collection of operating experience data (more than 11,000 pipe failure records) and the associated lessons learned from data analysis and data applications spanning three decades. The piping reliability analysis lessons learned have been obtained from the derivation of pipe leak and rupture frequencies for corrosion resistant piping in a raw water environment, loss-of-coolant-accident frequencies given degradation mitigation, high-energy pipe break analysis, moderate-energy pipe break analysis, and numerous plant-specific applications of a statistical piping reliability model framework. Conclusions are presented regarding the feasibility of determining and incorporating aging effects into probabilistic safety assessment models.

  8. Reliability model for helicopter main gearbox lubrication system using influence diagrams

    International Nuclear Information System (INIS)

    Rashid, H.S.J.; Place, C.S.; Mba, D.; Keong, R.L.C.; Healey, A.; Kleine-Beek, W.; Romano, M.

    2015-01-01

    The loss of oil from a helicopter main gearbox (MGB) leads to increased friction between components, a rise in component surface temperatures, and subsequent mechanical failure of gearbox components. A number of significant helicopter accidents have been caused by such loss of lubrication. This paper presents a model to assess the reliability of helicopter MGB lubricating systems. Safety risk modeling was conducted for MGB oil system related accidents in order to analyse key failure mechanisms and the contributory factors. Thus, the dominant failure modes for lubrication systems and key contributing components were identified. The Influence Diagram (ID) approach was then employed to investigate reliability issues of the MGB lubrication systems at the level of primary causal factors, thus systematically investigating a complex context of events, conditions, and influences that are direct triggers of the helicopter MGB lubrication system failures. The interrelationships between MGB lubrication system failure types were thus identified, and the influence of each of these factors on the overall MGB lubrication system reliability was assessed. This paper highlights parts of the HELMGOP project, sponsored by the European Aviation Safety Agency to improve helicopter main gearbox reliability. - Highlights: • We investigated methods to optimize helicopter MGB oil system run-dry capability. • Used Influence Diagram to assess design and maintenance factors of MGB oil system. • Factors influencing overall MGB lubrication system reliability were identified. • This globally influences current and future helicopter MGB designs.

  9. A review of the progress with statistical models of passive component reliability

    International Nuclear Information System (INIS)

    Lydell, Bengt O. Y.

    2017-01-01

    During the past 25 years, in the context of probabilistic safety assessment, efforts have been directed towards establishment of comprehensive pipe failure event databases as a foundation for exploratory research to better understand how to effectively organize a piping reliability analysis task. The focused pipe failure database development efforts have progressed well with the development of piping reliability analysis frameworks that utilize the full body of service experience data, fracture mechanics analysis insights, expert elicitation results that are rolled into an integrated and risk-informed approach to the estimation of piping reliability parameters with full recognition of the embedded uncertainties. The discussion in this paper builds on a major collection of operating experience data (more than 11,000 pipe failure records) and the associated lessons learned from data analysis and data applications spanning three decades. The piping reliability analysis lessons learned have been obtained from the derivation of pipe leak and rupture frequencies for corrosion resistant piping in a raw water environment, loss-of-coolant-accident frequencies given degradation mitigation, high-energy pipe break analysis, moderate-energy pipe break analysis, and numerous plant-specific applications of a statistical piping reliability model framework. Conclusions are presented regarding the feasibility of determining and incorporating aging effects into probabilistic safety assessment models

  10. Improved radiograph measurement inter-observer reliability by use of statistical shape models

    Energy Technology Data Exchange (ETDEWEB)

    Pegg, E.C., E-mail: elise.pegg@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Mellon, S.J., E-mail: stephen.mellon@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Salmon, G. [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Alvand, A., E-mail: abtin.alvand@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Pandit, H., E-mail: hemant.pandit@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Murray, D.W., E-mail: david.murray@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom); Gill, H.S., E-mail: richie.gill@ndorms.ox.ac.uk [University of Oxford, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Nuffield Orthopaedic Centre, Windmill Road, Oxford OX3 7LD (United Kingdom)

    2012-10-15

    Pre- and post-operative radiographs of patients undergoing joint arthroplasty are often examined for a variety of purposes including preoperative planning and patient assessment. This work examines the feasibility of using active shape models (ASM) to semi-automate measurements from post-operative radiographs for the specific case of the Oxford™ Unicompartmental Knee. Measurements of the proximal tibia and the position of the tibial tray were made using the ASM model and manually. Data were obtained by four observers and one observer took four sets of measurements to allow assessment of the inter- and intra-observer reliability, respectively. The parameters measured were the tibial tray angle, the tray overhang, the tray size, the sagittal cut position, the resection level and the tibial width. Results demonstrated improved reliability (average of 27% and 11.2% increase for intra- and inter-reliability, respectively) and equivalent accuracy (p > 0.05 for compared data values) for all of the measurements using the ASM model, with the exception of the tray overhang (p = 0.0001). Less time (15 s) was required to take measurements using the ASM model compared with manual measurements, which was significant. These encouraging results indicate that semi-automated measurement techniques could improve the reliability of radiographic measurements.

  11. Improved radiograph measurement inter-observer reliability by use of statistical shape models

    International Nuclear Information System (INIS)

    Pegg, E.C.; Mellon, S.J.; Salmon, G.; Alvand, A.; Pandit, H.; Murray, D.W.; Gill, H.S.

    2012-01-01

    Pre- and post-operative radiographs of patients undergoing joint arthroplasty are often examined for a variety of purposes including preoperative planning and patient assessment. This work examines the feasibility of using active shape models (ASM) to semi-automate measurements from post-operative radiographs for the specific case of the Oxford™ Unicompartmental Knee. Measurements of the proximal tibia and the position of the tibial tray were made using the ASM model and manually. Data were obtained by four observers and one observer took four sets of measurements to allow assessment of the inter- and intra-observer reliability, respectively. The parameters measured were the tibial tray angle, the tray overhang, the tray size, the sagittal cut position, the resection level and the tibial width. Results demonstrated improved reliability (average of 27% and 11.2% increase for intra- and inter-reliability, respectively) and equivalent accuracy (p > 0.05 for compared data values) for all of the measurements using the ASM model, with the exception of the tray overhang (p = 0.0001). Less time (15 s) was required to take measurements using the ASM model compared with manual measurements, which was significant. These encouraging results indicate that semi-automated measurement techniques could improve the reliability of radiographic measurements

  12. Reliability Analysis of an Extended Shock Model and Its Optimization Application in a Production Line

    Directory of Open Access Journals (Sweden)

    Renbin Liu

    2014-01-01

    some important reliability indices are derived, such as availability, failure frequency, mean vacation period, mean renewal cycle, mean startup period, and replacement frequency. Finally, a production line controlled by two cold-standby computers is modeled to present numerical illustration and its optimal part-time job policy at a maximum profit.

  13. A reliability model for interlayer dielectric cracking during fast thermal cycling

    NARCIS (Netherlands)

    Nguyen, Van Hieu; Salm, Cora; Krabbenborg, B.H.; Krabbenborg, B.H.; Bisschop, J.; Mouthaan, A.J.; Kuper, F.G.; Ray, Gary W.; Smy, Tom; Ohta, Tomohiro; Tsujimura, Manabu

    2003-01-01

    Interlayer dielectric (ILD) cracking can result in short circuits of multilevel interconnects. This paper presents a reliability model for ILD cracking induced by fast thermal cycling (FTC) stress. FTC tests have been performed under different temperature ranges (∆T) and minimum temperatures (Tmin).

  14. Reliability modeling of safety-critical network communication in a digitalized nuclear power plant

    International Nuclear Information System (INIS)

    Lee, Sang Hun; Kim, Hee Eun; Son, Kwang Seop; Shin, Sung Min; Lee, Seung Jun; Kang, Hyun Gook

    2015-01-01

    The Engineered Safety Feature-Component Control System (ESF-CCS), which uses a network communication system for the transmission of safety-critical information from group controllers (GCs) to loop controllers (LCs), was recently developed. However, the ESF-CCS has not been applied to nuclear power plants (NPPs) because the network communication failure risk in the ESF-CCS has yet to be fully quantified. Therefore, this study was performed to identify the potential hazardous states for network communication between GCs and LCs and to develop quantification schemes for various network failure causes. To estimate the risk effects of network communication failures in the ESF-CCS, a fault-tree model of an ESF-CCS signal failure in the containment spray actuation signal condition was developed for the case study. Based on a specified range of periodic inspection periods for network modules and the baseline probability of software failure, a sensitivity study was conducted to analyze the risk effect of network failure between GCs and LCs on ESF-CCS signal failure. This study is expected to provide insight into the development of a fault-tree model for network failures in digital I&C systems and the quantification of the risk effects of network failures for safety-critical information transmission in NPPs. - Highlights: • Network reliability modeling framework for digital I&C system in NPP is proposed. • Hazardous states of network protocol between GC and LC in ESF-CCS are identified. • Fault-tree model of ESF-CCS signal failure in ESF actuation condition is developed. • Risk effect of network failure on ESF-CCS signal failure is analyzed.
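Quantification schemes of the kind described here typically combine basic-event probabilities through fault-tree gates. As a minimal sketch (the failure rate, inspection interval, and software failure probability below are invented for illustration, not values from the study), a periodically inspected network module has a mean unavailability of roughly λT/2, and independent failure causes combine under an OR gate:

```python
def standby_unavailability(lam, T):
    """Mean unavailability of a periodically inspected module with failure
    rate `lam` (per hour) and inspection interval `T` (hours):
    q ~= lam * T / 2, valid while lam * T << 1."""
    return lam * T / 2.0

def or_gate(probs):
    """Top-event probability for independent basic events under an OR gate:
    1 - product(1 - p_i)."""
    q_up = 1.0
    for p in probs:
        q_up *= 1.0 - p
    return 1.0 - q_up

# Hypothetical causes of a GC-to-LC communication failure
p_hw = standby_unavailability(1e-6, 720.0)  # hardware fault, monthly inspection
p_sw = 1e-5                                 # assumed baseline software failure
p_top = or_gate([p_hw, p_sw])
```

A sensitivity study like the one in the abstract then amounts to sweeping the inspection interval (or the software failure probability) and recomputing the top-event probability.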

  15. Can simple mobile phone applications provide reliable counts of respiratory rates in sick infants and children? An initial evaluation of three new applications.

    Science.gov (United States)

    Black, James; Gerdtz, Marie; Nicholson, Pat; Crellin, Dianne; Browning, Laura; Simpson, Julie; Bell, Lauren; Santamaria, Nick

    2015-05-01

    applications found. This study provides evidence that applications running on simple phones can be used to count respiratory rates in children. The Once-per-Breath methods are the most reliable, outperforming the 60-second count. For children with raised respiratory rates the 20-breath version of the Once-per-Breath method is faster, so it is a more suitable option where health workers are under time pressure. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Machine Learning Approach for Software Reliability Growth Modeling with Infinite Testing Effort Function

    Directory of Open Access Journals (Sweden)

    Subburaj Ramasamy

    2017-01-01

    Full Text Available Reliability is one of the quantifiable software quality attributes. Software Reliability Growth Models (SRGMs) are used to assess the reliability achieved at different times of testing. Traditional time-based SRGMs may not be accurate enough in all situations where test effort varies with time. To overcome this lacuna, test effort was used instead of time in SRGMs. In the past, finite test effort functions were proposed, which may not be realistic as, at infinite testing time, test effort will be infinite. Hence in this paper, we propose an infinite test effort function in conjunction with a classical Nonhomogeneous Poisson Process (NHPP) model. We use an Artificial Neural Network (ANN) for training the proposed model with software failure data. Here it is possible to get a large set of weights for the same model that describe the past failure data equally well. We use a machine learning approach to select the appropriate set of weights for the model which will describe both the past and the future data well. We compare the performance of the proposed model with an existing model using practical software failure data sets. The proposed log-power TEF-based SRGM describes all types of failure data equally well, improves the accuracy of parameter estimation over the existing TEF, and can be used for software release time determination as well.
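An NHPP SRGM driven by test effort replaces calendar time with cumulative effort W(t) in the mean value function. The sketch below pairs an exponential (Goel-Okumoto-style) mean value function with a log-power-shaped effort curve; the functional forms and every parameter value are illustrative assumptions, not the model fitted in the paper:

```python
import math

def log_power_effort(t, alpha=50.0, beta=1.2):
    """Illustrative 'infinite' test effort function: W(t) grows without
    bound as t -> infinity, unlike finite (e.g. logistic) effort curves."""
    return alpha * math.log(1.0 + t) ** beta

def mean_failures(t, a=100.0, b=0.05, effort=log_power_effort):
    """Effort-driven NHPP mean value function: m(t) = a * (1 - exp(-b * W(t))),
    where `a` is the total fault content."""
    return a * (1.0 - math.exp(-b * effort(t)))
```

Training such a model with an ANN, as the paper does, replaces the fixed (a, b) pair with learned weights; the mean value function above is only the classical starting point.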

  17. Reliability analysis of nuclear component cooling water system using semi-Markov process model

    International Nuclear Information System (INIS)

    Veeramany, Arun; Pandey, Mahesh D.

    2011-01-01

    Research highlights: → Semi-Markov process (SMP) model is used to evaluate system failure probability of the nuclear component cooling water (NCCW) system. → SMP is used because it can solve reliability block diagram with a mixture of redundant repairable and non-repairable components. → The primary objective is to demonstrate that SMP can consider Weibull failure time distribution for components while a Markov model cannot. → Result: the variability in component failure time is directly proportional to the NCCW system failure probability. → The result can be utilized as an initiating event probability in probabilistic safety assessment projects. - Abstract: A reliability analysis of the nuclear component cooling water (NCCW) system is carried out. A semi-Markov process model is used in the analysis because it has the potential to solve a reliability block diagram with a mixture of repairable and non-repairable components. With Markov models it is only possible to assume an exponential profile for component failure times. An advantage of the proposed model is the ability to assume a Weibull distribution for the failure time of components. In an attempt to reduce the number of states in the model, it is shown that use of the poly-Weibull distribution arises. The objective of the paper is to determine the system failure probability under these assumptions. Monte Carlo simulation is used to validate the model result. This result can be utilized as an initiating event probability in probabilistic safety assessment projects.
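The abstract's central point, a Weibull rather than exponential failure-time profile validated by Monte Carlo simulation, can be reproduced on a toy scale. The sketch below estimates the probability that both trains of a hypothetical two-train, non-repairable standby pair fail within a mission time; the shape, scale, and mission-time values are invented for illustration and are not the NCCW system model itself:

```python
import math
import random

def weibull_sample(shape, scale, rng):
    """Inverse-CDF sampling: T = scale * (-ln U)^(1/shape), U in (0, 1]."""
    return scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)

def both_fail_prob(mission=1000.0, shape=2.0, scale=5000.0,
                   n_trials=100_000, seed=42):
    """Monte Carlo estimate that two independent Weibull components
    both fail before the mission time."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n_trials)
        if weibull_sample(shape, scale, rng) < mission
        and weibull_sample(shape, scale, rng) < mission
    )
    return hits / n_trials
```

The estimate can be checked against the closed form F(t)² with F(t) = 1 − exp(−(t/scale)^shape), which is what makes Monte Carlo a convenient validation tool for the semi-Markov results.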

  18. Human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1987-01-01

    Concepts and techniques of human reliability have been developed and are used mostly in probabilistic risk assessment. For this, the major application of human reliability assessment has been to identify the human errors which have a significant effect on the overall safety of the system and to quantify the probability of their occurrence. Some of the major issues within human reliability studies are reviewed and it is shown how these are applied to the assessment of human failures in systems. This is done under the following headings: models of human performance used in human reliability assessment, the nature of human error, classification of errors in man-machine systems, practical aspects, human reliability modelling in complex situations, quantification and examination of human reliability, judgement based approaches, holistic techniques and decision analytic approaches. (UK)

  19. Reliability of a new biokinetic model of zirconium in internal dosimetry: part I, parameter uncertainty analysis.

    Science.gov (United States)

    Li, Wei Bo; Greiter, Matthias; Oeh, Uwe; Hoeschen, Christoph

    2011-12-01

    The reliability of biokinetic models is essential in internal dose assessments and radiation risk analysis for the public, occupational workers, and patients exposed to radionuclides. In this paper, a method for assessing the reliability of biokinetic models by means of uncertainty and sensitivity analysis was developed. The paper is divided into two parts. In the first part of the study published here, the uncertainty sources of the model parameters for zirconium (Zr), developed by the International Commission on Radiological Protection (ICRP), were identified and analyzed. Furthermore, the uncertainty of the biokinetic experimental measurement performed at the Helmholtz Zentrum München-German Research Center for Environmental Health (HMGU) for developing a new biokinetic model of Zr was analyzed according to the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. The confidence interval and distribution of model parameters of the ICRP and HMGU Zr biokinetic models were evaluated. As a result of computer biokinetic modelings, the mean, standard uncertainty, and confidence interval of model prediction calculated based on the model parameter uncertainty were presented and compared to the plasma clearance and urinary excretion measured after intravenous administration. It was shown that for the most important compartment, the plasma, the uncertainty evaluated for the HMGU model was much smaller than that for the ICRP model; that phenomenon was observed for other organs and tissues as well. The uncertainty of the integral of the radioactivity of Zr up to 50 y calculated by the HMGU model after ingestion by adult members of the public was shown to be smaller by a factor of two than that of the ICRP model. It was also shown that the distribution type of the model parameter strongly influences the model prediction, and the correlation of the model input parameters affects the model prediction to a

  20. A Combined Reliability Model of VSC-HVDC Connected Offshore Wind Farms Considering Wind Speed Correlation

    DEFF Research Database (Denmark)

    Guo, Yifei; Gao, Houlei; Wu, Qiuwei

    2017-01-01

    and WTGs outage. The wind speed correlation between different WFs is included in the two-dimensional multistate WF model by using an improved k-means clustering method. Then, the entire system with two WFs and a threeterminal VSC-HVDC system is modeled as a multi-state generation unit. The proposed model...... is applied to the Roy Billinton test system (RBTS) for adequacy studies. Both the probability and frequency indices are calculated. The effectiveness and accuracy of the combined model is validated by comparing results with the sequential Monte Carlo simulation (MCS) method. The effects of the outage of VSC-HVDC...... system and wind speed correlation on the system reliability were analyzed. Sensitivity analyses were conducted to investigate the impact of repair time of the offshore VSC-HVDC system on system reliability....
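The clustering step described above, reducing paired wind-speed time series to a small set of joint states, can be sketched with plain k-means (the paper uses an improved variant; the sample data and state count below are invented for illustration):

```python
import random

def kmeans_2d(points, k=3, iters=25, seed=1):
    """Plain k-means on paired wind-speed samples (WF1, WF2). Clustering the
    joint samples, instead of each farm separately, preserves the wind speed
    correlation between the two farms in the resulting multi-state model."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: (p[0] - centers[i][0]) ** 2
                                + (p[1] - centers[i][1]) ** 2)
            groups[j].append(p)
        centers = [(sum(x for x, _ in g) / len(g),
                    sum(y for _, y in g) / len(g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    state_probs = [len(g) / len(points) for g in groups]
    return centers, state_probs

# Invented correlated wind-speed samples for two nearby farms (m/s)
rng = random.Random(7)
samples = []
for _ in range(500):
    w = rng.uniform(3.0, 15.0)                     # farm 1 wind speed
    samples.append((w, w + rng.gauss(0.0, 1.5)))   # farm 2 tracks farm 1
states, probs = kmeans_2d(samples, k=4)
```

Each resulting cluster centre is one joint wind state, and the cluster occupancy fractions become the state probabilities of the multi-state wind farm model.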

  1. Reliability modelling of repairable systems using Petri nets and fuzzy Lambda-Tau methodology

    International Nuclear Information System (INIS)

    Knezevic, J.; Odoom, E.R.

    2001-01-01

    A methodology is developed which uses Petri nets instead of the fault tree methodology and solves for reliability indices utilising the fuzzy Lambda-Tau method. Fuzzy set theory is used for representing the failure rate and repair time instead of the classical (crisp) set theory because fuzzy numbers allow expert opinions, linguistic variables, operating conditions, uncertainty and imprecision in reliability information to be incorporated into the system model. Petri nets are used because, unlike the fault tree methodology, they allow efficient simultaneous generation of minimal cut and path sets
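The Lambda-Tau method propagates failure rates (λ) and repair times (τ) through the gates of the system model; the fuzzy version evaluates the same expressions on the α-cut intervals of triangular fuzzy numbers. A sketch of the standard crisp two-input gate formulas (the numeric values in the example are illustrative):

```python
def and_gate(l1, t1, l2, t2):
    """AND gate (both inputs must fail): standard Lambda-Tau expressions."""
    lam = l1 * l2 * (t1 + t2)
    tau = (t1 * t2) / (t1 + t2)
    return lam, tau

def or_gate(l1, t1, l2, t2):
    """OR gate (either input failure fails the output)."""
    lam = l1 + l2
    tau = (l1 * t1 + l2 * t2) / (l1 + l2)
    return lam, tau

# Illustrative values: failure rates per hour, repair times in hours
lam_or, tau_or = or_gate(1e-4, 2.0, 3e-4, 4.0)
lam_and, tau_and = and_gate(1e-4, 2.0, 3e-4, 4.0)
```

Replacing each crisp input with an interval at a given α level and applying interval arithmetic to these same formulas yields the fuzzy reliability indices the methodology computes.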

  2. Meeting Human Reliability Requirements through Human Factors Design, Testing, and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    R. L. Boring

    2007-06-01

    In the design of novel systems, it is important for the human factors engineer to work in parallel with the human reliability analyst to arrive at the safest achievable design that meets design team safety goals and certification or regulatory requirements. This paper introduces the System Development Safety Triptych, a checklist of considerations for the interplay of human factors and human reliability through design, testing, and modeling in product development. This paper also explores three phases of safe system development, corresponding to the conception, design, and implementation of a system.

  3. PhD study of reliability and validity: One step closer to a standardized music therapy assessment model

    DEFF Research Database (Denmark)

    Jacobsen, Stine Lindahl

    The paper will present a PhD study concerning the reliability and validity of the music therapy assessment model “Assessment of Parenting Competences” (APC) in the area of families with emotionally neglected children. This study had a multiple strategy design with a philosophical base of critical realism...... and pragmatism. The fixed design for this study was a between and within groups design in testing the APC's reliability and validity. The two different groups were parents with neglected children and parents with non-neglected children. The flexible design had a multiple case study strategy specifically...

  4. Assessment of Electronic Circuits Reliability Using Boolean Truth Table Modeling Method

    International Nuclear Information System (INIS)

    El-Shanshoury, A.I.

    2011-01-01

    This paper explores the use of the Boolean Truth Table Modeling Method (BTTM) in the analysis of qualitative data. The method is widely used in certain fields, especially electrical and electronic engineering. Our work focuses on the evaluation of power supply circuit reliability using the BTTM, which involves systematic attempts to falsify and identify hypotheses on the basis of truth tables constructed from qualitative data. Reliability parameters such as the system's failure rates for the power supply case study are estimated. All possible state combinations (operating and failed states) of the major components in the circuit were listed and their effects on the overall system were studied
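The enumeration described (list every operating/failed combination of the major components and accumulate the probability of the combinations in which the system still works) can be sketched in a few lines. The series/redundant structure and component reliabilities below are hypothetical, not the circuit analysed in the paper:

```python
from itertools import product

def truth_table_reliability(reliabilities, system_up):
    """Exhaustive Boolean truth-table evaluation: sum the probability of
    every component-state combination for which the system operates."""
    total = 0.0
    for states in product((1, 0), repeat=len(reliabilities)):
        p = 1.0
        for r, s in zip(reliabilities, states):
            p *= r if s else 1.0 - r   # component up with prob r, down with 1 - r
        if system_up(states):
            total += p
    return total

# Hypothetical supply: a rectifier in series with two redundant regulators
rel = truth_table_reliability(
    [0.99, 0.95, 0.95],
    lambda s: s[0] == 1 and (s[1] == 1 or s[2] == 1),
)
```

The truth table grows as 2^n, so the method suits systems with a modest number of major components, which matches its use here for a single power supply circuit.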

  5. Reliability of electronic systems

    International Nuclear Information System (INIS)

    Roca, Jose L.

    2001-01-01

    Reliability techniques were developed in response to the needs of the various engineering disciplines, although many practitioners had worked extensively on reliability before the word itself was used in its current sense. The military, space and nuclear industries were the first to become involved in the topic; however, this quiet revolution in improving the reliability figures of products was not confined to those environments, but spread to industry as a whole. Four decades ago, mass production, characteristic of modern industry, drove down the reliability of products, partly because of the scale of production itself and partly because of recently introduced and not yet stabilized industrial techniques. Industry had to adapt to these two new requirements, creating products of medium complexity while assuring a level of reliability appropriate to production costs and controls. Reliability became an integral part of the manufactured product. Within this philosophy, the book describes reliability techniques applied to electronic systems and provides a coherent and rigorous framework for these diverse activities, giving a unifying scientific basis for the entire subject. It consists of eight chapters plus numerous statistical tables and an extensive annotated bibliography. The chapters cover the following topics: 1- Introduction to Reliability; 2- Basic Mathematical Concepts; 3- Catastrophic Failure Models; 4- Parametric Failure Models; 5- Systems Reliability; 6- Reliability in Design and Project; 7- Reliability Tests; 8- Software Reliability. The book is in Spanish and has a potentially diverse audience as a textbook for courses from academic to industrial. (author)

  6. Applying the High Reliability Health Care Maturity Model to Assess Hospital Performance: A VA Case Study.

    Science.gov (United States)

    Sullivan, Jennifer L; Rivard, Peter E; Shin, Marlena H; Rosen, Amy K

    2016-09-01

    The lack of a tool for categorizing and differentiating hospitals according to their high reliability organization (HRO)-related characteristics has hindered progress toward implementing and sustaining evidence-based HRO practices. Hospitals would benefit both from an understanding of the organizational characteristics that support HRO practices and from knowledge about the steps necessary to achieve HRO status to reduce the risk of harm and improve outcomes. The High Reliability Health Care Maturity (HRHCM) model, a model for health care organizations' achievement of high reliability with zero patient harm, incorporates three major domains critical for promoting HROs: Leadership, Safety Culture, and Robust Process Improvement®. A study was conducted to examine the content validity of the HRHCM model and evaluate whether it can differentiate hospitals' maturity levels for each of the model's components. Staff perceptions of patient safety at six US Department of Veterans Affairs (VA) hospitals were examined to determine whether all 14 HRHCM components were present and to characterize each hospital's level of organizational maturity. Twelve of the 14 components from the HRHCM model were detected; two additional characteristics emerged that are present in the HRO literature but not represented in the model: teamwork culture and system-focused tools for learning and improvement. Each hospital's level of organizational maturity could be characterized for 9 of the 14 components. The findings suggest the HRHCM model has good content validity and that there is differentiation between hospitals on model components. Additional research is needed to understand how these components can be used to build the infrastructure necessary for reaching high reliability.

  7. Discrete Address Beacon System (DABS) Software System Reliability Modeling and Prediction.

    Science.gov (United States)

    1981-06-01

    Service (ATARS) module because of its interim status. Reliability prediction models for software modules were derived and then verified by matching...System (ATCRBS) and thus can be introduced gradually and economically without major operational or procedural change. Since DABS uses monopulse...line analysis tools or are used during maintenance or pre-initialization were not modeled because they are not part of the mission software. The ATARS

  8. The role of the American Society for Testing and Materials (ASTM) in providing standards to support reliability technology for nuclear power plants

    International Nuclear Information System (INIS)

    Steele, L.E.

    1978-01-01

    ASTM is an international society for managing the development of standards on characteristics and performance of materials, products, systems and services and the promotion of related knowledge. This paper provides an overview of ASTM, emphasizing its contribution to nuclear systems reliability. In so doing, the author, from his perspective as chairman of ASTM committee E 10 on ''Nuclear Applications and the Measurement of Radiation Effects'' and the Committee on Standards, illustrates ASTM contributions to the understanding and control of radiation embrittlement of light-water reactor pressure vessels. Four major related tasks are summarized and pertinent standards identified. These include: (1) surveillance practice (5 standards), (2) neutron dosimetry (8 standards), (3) specification for steels for nuclear service (7 standards) and (4) basic guidelines for thermal annealing to correct radiation embrittlement (1 standard). This illustration, a specific accomplishment using ASTM standards, is cited within the context of the broader nuclear-related activities of ASTM. (author)

  9. System principles, mathematical models and methods to ensure high reliability of safety systems

    Science.gov (United States)

    Zaslavskyi, V.

    2017-04-01

    Modern safety and security systems are composed of a large number of various components designed for detection, localization, tracking, collecting, and processing of information from the systems of monitoring, telemetry, control, etc. They are required to be highly reliable so that data aggregation, processing and analysis are performed correctly for subsequent decision making support. During the design and construction phases of such systems, various types of components (elements, devices, and subsystems) are considered and used to ensure reliable signal detection, noise isolation, and reduction of erroneous commands. When generating design solutions for highly reliable systems, a number of restrictions and conditions, such as the available component types and various constraints on resources, should be considered. Different component types perform identical functions; however, they are implemented using diverse principles and approaches and have distinct technical and economic indicators such as cost or power consumption. The systematic use of different component types increases the probability of task completion and mitigates common-cause failures. We consider the type-variety principle as an engineering principle of system analysis, mathematical models based on this principle, and algorithms for solving optimization problems in the design of highly reliable safety and security systems. The mathematical models are formalized as a class of two-level discrete optimization problems of large dimension. The proposed approach, models, and algorithms can be used to solve optimal redundancy problems involving a variety of methods and control devices for fault and defect detection in technical systems, telecommunication networks, and energy systems.

  10. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - I: Theory

    International Nuclear Information System (INIS)

    Cacuci, D. G.; Cacuci, D. G.; Ionescu-Bujor, M.

    2008-01-01

    The development of the adjoint sensitivity analysis procedure (ASAP) for generic dynamic reliability models based on Markov chains is presented, together with applications of this procedure to the analysis of several systems of increasing complexity. The general theory is presented in Part I of this work and is accompanied by a paradigm application to the dynamic reliability analysis of a simple binary component, namely a pump functioning on an 'up/down' cycle until it fails irreparably. This paradigm example admits a closed form analytical solution, which permits a clear illustration of the main characteristics of the ASAP for Markov chains. In particular, it is shown that the ASAP for Markov chains presents outstanding computational advantages over other procedures currently in use for sensitivity and uncertainty analysis of the dynamic reliability of large-scale systems. This conclusion is further underscored by the large-scale applications presented in Part II. (authors)

  11. Adjoint sensitivity analysis of dynamic reliability models based on Markov chains - I: Theory

    Energy Technology Data Exchange (ETDEWEB)

    Cacuci, D. G. [Commiss Energy Atom, Direct Energy Nucl, Saclay, (France); Cacuci, D. G. [Univ Karlsruhe, Inst Nucl Technol and Reactor Safety, D-76021 Karlsruhe, (Germany); Ionescu-Bujor, M. [Forschungszentrum Karlsruhe, Fus Program, D-76021 Karlsruhe, (Germany)

    2008-07-01

    The development of the adjoint sensitivity analysis procedure (ASAP) for generic dynamic reliability models based on Markov chains is presented, together with applications of this procedure to the analysis of several systems of increasing complexity. The general theory is presented in Part I of this work and is accompanied by a paradigm application to the dynamic reliability analysis of a simple binary component, namely a pump functioning on an 'up/down' cycle until it fails irreparably. This paradigm example admits a closed form analytical solution, which permits a clear illustration of the main characteristics of the ASAP for Markov chains. In particular, it is shown that the ASAP for Markov chains presents outstanding computational advantages over other procedures currently in use for sensitivity and uncertainty analysis of the dynamic reliability of large-scale systems. This conclusion is further underscored by the large-scale applications presented in Part II. (authors)

  12. Data Applicability of Heritage and New Hardware for Launch Vehicle System Reliability Models

    Science.gov (United States)

    Al Hassan Mohammad; Novack, Steven

    2015-01-01

    Many launch vehicle systems are designed and developed using heritage and new hardware. In most cases, the heritage hardware undergoes modifications to fit new functional system requirements, impacting the failure rates and, ultimately, the reliability data. New hardware, which lacks historical data, is often compared to like systems when estimating failure rates. Some qualification of applicability for the data source to the current system should be made. Accurately characterizing the reliability data applicability and quality under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This presentation will demonstrate a data-source classification method that ranks reliability data according to applicability and quality criteria to a new launch vehicle. This method accounts for similarities/dissimilarities in source and applicability, as well as operating environments like vibrations, acoustic regime, and shock. This classification approach will be followed by uncertainty-importance routines to assess the need for additional data to reduce uncertainty.

  13. A Reliability Model for Ni-BaTiO3-Based (BME) Ceramic Capacitors

    Science.gov (United States)

    Liu, Donhang

    2014-01-01

    The evaluation of multilayer ceramic capacitors (MLCCs) with base-metal electrodes (BMEs) for potential NASA space project applications requires an in-depth understanding of their reliability. The reliability of an MLCC is defined as the ability of the dielectric material to retain its insulating properties under stated environmental and operational conditions for a specified period of time t. In this presentation, a general mathematical expression of a reliability model for a BME MLCC is developed and discussed. The reliability model consists of three parts: (1) a statistical distribution that describes the individual variation of properties in a test group of samples (Weibull, lognormal, normal, etc.); (2) an acceleration function that describes how a capacitor's reliability responds to external stresses such as applied voltage and temperature (all units in the test group should follow the same acceleration function if they share the same failure mode, independent of individual units); and (3) the effect and contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size r, and capacitor chip size S. In general, a two-parameter Weibull statistical distribution model is used to describe a BME capacitor's reliability as a function of time. The acceleration function that relates a capacitor's reliability to external stresses depends on the failure mode. Two failure modes have been identified in BME MLCCs: catastrophic and slow degradation. A catastrophic failure is characterized by a time-accelerating increase in leakage current that is mainly due to existing processing defects (voids, cracks, delamination, etc.), or extrinsic defects. A slow degradation failure is characterized by a near-linear increase in leakage current over the stress time; this is caused by the electromigration of oxygen vacancies (intrinsic defects).
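    The two-parameter Weibull part of this model is compact enough to state directly; the β and η values in the usage line are placeholders, not measured MLCC parameters:

```python
import math

def weibull_reliability(t, beta, eta):
    """Two-parameter Weibull reliability: R(t) = exp(-(t/eta)**beta)."""
    return math.exp(-((t / eta) ** beta))

def weibull_hazard(t, beta, eta):
    """Hazard rate h(t) = (beta/eta) * (t/eta)**(beta - 1); beta > 1 gives a
    failure rate that grows with time (wear-out), as in slow degradation."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# Hypothetical unit: characteristic life eta = 1000 h, wear-out shape beta = 2.
r_at_eta = weibull_reliability(1000.0, 2.0, 1000.0)  # exp(-1) by definition
```

    An acceleration function for voltage and temperature would enter by rescaling η for each stress level, with all units sharing one function per failure mode as the abstract notes.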

  14. Improvement of level-1 PSA computer code package - Modeling and analysis for dynamic reliability of nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chang Hoon; Baek, Sang Yeup; Shin, In Sup; Moon, Shin Myung; Moon, Jae Phil; Koo, Hoon Young; Kim, Ju Shin [Seoul National University, Seoul (Korea, Republic of); Hong, Jung Sik [Seoul National Polytechnology University, Seoul (Korea, Republic of); Lim, Tae Jin [Soongsil University, Seoul (Korea, Republic of)

    1996-08-01

    The objective of this project is to develop a methodology for the dynamic reliability analysis of NPPs. The first year's research focused on developing a procedure for analyzing failure data of running components and a simulator for estimating the reliability of series-parallel structures. The second year's research concentrated on estimating the lifetime distribution and PM effect of a component from its failure data in various cases, and the lifetime distribution of a system with a particular structure. Computer codes for performing these jobs were also developed. The objectives of the third year's research are to develop models for analyzing special failure types (CCFs, standby redundant structures) that were not considered in the first two years, and to complete a methodology of dynamic reliability analysis for nuclear power plants. The analysis of component failure data and the related research supporting the simulator must be completed first to provide proper input to the simulator. Thus this research is divided into three major parts: 1. Analysis of the time-dependent life distribution and the PM effect. 2. Development of a simulator for system reliability analysis. 3. Related research supporting the simulator: accelerated simulation, an analytic approach using PH-type distributions, and analysis of dynamic repair effects. 154 refs., 5 tabs., 87 figs. (author)

  15. Modeling Travel Time Reliability of Road Network Considering Connected Vehicle Guidance Characteristics Indexes

    Directory of Open Access Journals (Sweden)

    Jiangfeng Wang

    2017-01-01

    Full Text Available Travel time reliability (TTR) is one of the important indexes for effectively evaluating the performance of a road network, and TTR can be effectively improved using real-time traffic guidance information. Compared with traditional traffic guidance, connected vehicle (CV) guidance can provide travelers with more timely and accurate travel information, which can further improve the travel efficiency of the road network. Five CV characteristics indexes are selected as explanatory variables, including the Congestion Level (CL), Penetration Rate (PR), Compliance Rate (CR), release Delay Time (DT), and Following Rate (FR). Based on the five explanatory variables, a TTR model is proposed using the multilogistic regression method, and the prediction accuracy and the impact of the characteristics indexes on TTR are analyzed using a CV guidance scenario. The simulation results indicate that 80% of the RMSE is concentrated within the interval of 0 to 0.0412. The correlation analysis of the characteristics indexes shows that the influence of CL, PR, CR, and DT on TTR is significant. PR and CR have a positive effect on TTR, with average improvement rates of about 77.03% and 73.20% as PR and CR increase, respectively, while CL and DT have a negative effect on TTR, with TTR decreasing by 31.21% as DT increases from 0 to 180 s.

  16. Natural circulation in water cooled nuclear power plants: Phenomena, models, and methodology for system reliability assessments

    International Nuclear Information System (INIS)

    2005-11-01

    In recent years it has been recognized that the application of passive safety systems (i.e. those whose operation takes advantage of natural forces such as convection and gravity), can contribute to simplification and potentially to improved economics of new nuclear power plant designs. Further, the IAEA Conference on The Safety of Nuclear Power: Strategy for the Future which was convened in 1991 noted that for new plants 'the use of passive safety features is a desirable method of achieving simplification and increasing the reliability of the performance of essential safety functions, and should be used wherever appropriate'. Considering the weak driving forces of passive systems based on natural circulation, careful design and analysis methods must be employed to assure that the systems perform their intended functions. To support the development of advanced water cooled reactor designs with passive systems, investigations of natural circulation are an ongoing activity in several IAEA Member States. Some new designs also utilize natural circulation as a means to remove core power during normal operation. In response to the motivating factors discussed above, and to foster international collaboration on the enabling technology of passive systems that utilize natural circulation, an IAEA Coordinated Research Project (CRP) on Natural Circulation Phenomena, Modelling and Reliability of Passive Systems that Utilize Natural Circulation was started in early 2004. Building on the shared expertise within the CRP, this publication presents extensive information on natural circulation phenomena, models, predictive tools and experiments that currently support design and analyses of natural circulation systems and highlights areas where additional research is needed. 
Therefore, this publication serves both to provide a description of the present state of knowledge on natural circulation in water cooled nuclear power plants and to guide the planning and conduct of the CRP in

  17. A reliability-risk modelling of nuclear rad-waste facilities

    International Nuclear Information System (INIS)

    Lehmann, P.H.; El-Bassioni, A.A.

    1975-01-01

    Rad-waste disposal systems of nuclear power sites are designed and operated to collect, delay, contain, and concentrate radioactive wastes from reactor plant processes such that on-site and off-site exposures to radiation are well below permissible limits. To assist the designer in achieving minimum release/exposure goals, a computerized reliability-risk model has been developed to simulate the rad-waste system. The objectives of the model are to furnish a practical tool for quantifying the effects of changes in system configuration, operation, and equipment, and for the identification of weak segments in the system design. Primarily, the model comprises a marriage of system analysis, reliability analysis, and release-risk assessment. Provisions have been included in the model to permit the optimization of the system design subject to constraints on cost and rad-releases. The system analysis phase involves the preparation of a physical and functional description of the rad-waste facility accompanied by the formation of a system tree diagram. The reliability analysis phase embodies the formulation of appropriate reliability models and the collection of model parameters. Release-risk assessment constitutes the analytical basis whereupon further system and reliability analyses may be warranted. Release-risk represents the potential for release of radioactivity and is defined as the product of an element's unreliability at time, t, and the radioactivity available for release in time interval, Δt. A computer code (RARISK) has been written to simulate the tree diagram of the rad-waste system. Reliability and release-risk results have been generated for cases which examined the process flow paths of typical rad-waste systems, the effects of repair and standby, the variations of equipment failure and repair rates, and changes in system configurations. 
The essential feature of this model is that a complex system like the rad-waste facility can be easily decomposed into its

  18. Reliability and Efficiency of Generalized Rumor Spreading Model on Complex Social Networks

    International Nuclear Information System (INIS)

    Naimi, Yaghoob; Naimi, Mohammad

    2013-01-01

    We introduce a generalized rumor spreading model and investigate some of its properties on different complex social networks. Unlike previous rumor models, in which both the spreader-spreader (SS) and the spreader-stifler (SR) interactions have the same rate α, we define α (1) for SS and α (2) for SR interactions, respectively. The effect of varying α (1) and α (2) on the final density of stiflers is investigated. Furthermore, the influence of the topological structure of the network on rumor spreading is studied by analyzing the behavior of several global parameters such as reliability and efficiency. Our results show that while networks with homogeneous connectivity patterns reach a higher reliability, scale-free topologies need less time to reach a steady state with respect to the rumor. (interdisciplinary physics and related areas of science and technology)
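    A mean-field sketch of such a generalized model on a homogeneous network of degree k can be integrated with simple Euler steps. The rates λ, α1, α2 and the degree below are illustrative values, not taken from the paper:

```python
def rumor_densities(lam=1.0, alpha1=0.5, alpha2=0.5, k=6,
                    s0=0.01, dt=0.001, t_end=50.0):
    """Euler integration of mean-field rumor equations with distinct rates
    for spreader-spreader (alpha1) and spreader-stifler (alpha2) stifling.
    Returns final densities of (ignorants, spreaders, stiflers)."""
    i, s, r = 1.0 - s0, s0, 0.0
    for _ in range(int(t_end / dt)):
        spread = lam * k * i * s * dt                     # I + S -> 2S
        stifle = k * s * (alpha1 * s + alpha2 * r) * dt   # S+S, S+R -> stifler
        i, s, r = i - spread, s + spread - stifle, r + stifle
    return i, s, r

i_end, s_end, r_end = rumor_densities()
```

    Sweeping alpha1 and alpha2 separately reproduces, in this crude homogeneous setting, the kind of final-stifler-density dependence the paper studies on heterogeneous networks.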

  19. LED Lighting System Reliability Modeling and Inference via Random Effects Gamma Process and Copula Function

    Directory of Open Access Journals (Sweden)

    Huibing Hao

    2015-01-01

    Full Text Available Light emitting diode (LED) lamps have attracted increasing interest in the field of lighting systems due to their low energy consumption and long lifetime. For different functions (i.e., illumination and color), a lamp may have two or more performance characteristics. When the multiple performance characteristics are dependent, accurately analyzing the system reliability becomes challenging. In this paper, we assume that the system has two performance characteristics, and each performance characteristic is governed by a random effects Gamma process, where the random effects capture the unit-to-unit differences. The dependency of the performance characteristics is described by a Frank copula function. Via the copula function, a reliability assessment model is proposed. Since the model is complicated and analytically intractable, the Markov chain Monte Carlo (MCMC) method is used to estimate the unknown parameters. A numerical example based on actual LED lamp data is given to demonstrate the usefulness and validity of the proposed model and method.
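    The two ingredients, gamma-process margins and a Frank copula, can be sketched as follows. The thresholds, rates, and θ are invented, and the random effects are omitted here for brevity (each margin is a plain stationary gamma process):

```python
import math
import random

def frank_copula(u, v, theta=3.0):
    """Frank copula C(u, v); theta > 0 gives positive dependence."""
    num = (math.exp(-theta * u) - 1.0) * (math.exp(-theta * v) - 1.0)
    return -math.log(1.0 + num / (math.exp(-theta) - 1.0)) / theta

def margin_reliability(t, threshold, rate, scale, n=20000, seed=3):
    """Monte Carlo estimate of P(X(t) < threshold) for a stationary gamma
    process, where X(t) ~ Gamma(shape=rate*t, scale)."""
    rng = random.Random(seed)
    hits = sum(rng.gammavariate(rate * t, scale) < threshold for _ in range(n))
    return hits / n

# Joint reliability of two degradation margins coupled by a Frank copula.
r1 = margin_reliability(10.0, threshold=15.0, rate=1.0, scale=1.0)
r2 = margin_reliability(10.0, threshold=12.0, rate=0.8, scale=1.0)
joint = frank_copula(r1, r2)
```

    Under positive dependence the joint reliability sits between the independence product r1*r2 and the upper bound min(r1, r2), which is why ignoring the copula can noticeably misstate system reliability.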

  20. Knowledge modelling and reliability processing: presentation of the Figaro language and associated tools

    International Nuclear Information System (INIS)

    Bouissou, M.; Villatte, N.; Bouhadana, H.; Bannelier, M.

    1991-12-01

    EDF has been developing for several years an integrated set of knowledge-based and algorithmic tools for the automation of reliability assessment of complex (especially sequential) systems. In this environment, the reliability expert has at his disposal powerful software tools for qualitative and quantitative processing; besides, the expert gets various means to generate the inputs for these tools automatically through the acquisition of graphical data. The development of these tools has been based on FIGARO, a specific language built to obtain homogeneous system modelling. Various compilers and interpreters translate a FIGARO model into conventional models, such as fault trees, Markov chains, and Petri nets. In this report, we introduce the main basics of the FIGARO language, illustrating them with examples.

  1. A reliability model of a warm standby configuration with two identical sets of units

    International Nuclear Information System (INIS)

    Huang, Wei; Loman, James; Song, Thomas

    2015-01-01

    This article presents a new reliability model and the development of its analytical solution for a warm standby redundant configuration with units that are originally operated in active mode and then, upon turn-on of the originally standby units, are put into warm standby mode. These units can be used later if a standby-turned-active unit fails. Numerical results of an example configuration are presented and discussed, with comparison to other warm standby configurations and to Monte Carlo simulation results obtained from BlockSim software. Results show that the Monte Carlo simulation model gives virtually identical reliability values when the simulation uses a high number of replications, confirming the developed model. - Highlights: • A new reliability model is developed for warm standby redundancy with two sets of identical units. • The units are subject to state changes from active to standby and then back to active mode. • A closed-form analytical solution is developed with the exponential distribution. • To validate the developed model, a Monte Carlo simulation of an exemplary configuration is performed
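    A Monte Carlo sketch of a simpler, classic warm-standby arrangement (one active unit, one warm standby, exponential lifetimes, perfect instantaneous switching) illustrates the kind of simulation used for such validation. The rates are invented and this is not the paper's two-set configuration:

```python
import math
import random

def warm_standby_reliability(t_miss, lam_active, lam_standby, n=20000, seed=1):
    """Monte Carlo mission reliability of one active unit with a warm standby.
    The standby fails at the reduced rate lam_standby until activated, then
    at lam_active; switching is assumed perfect and instantaneous."""
    rng = random.Random(seed)
    survive = 0
    for _ in range(n):
        t1 = rng.expovariate(lam_active)            # active unit fails at t1
        if t1 >= t_miss:
            survive += 1
            continue
        # standby must survive its warm phase [0, t1), then run to t_miss
        if rng.random() < math.exp(-lam_standby * t1):
            if t1 + rng.expovariate(lam_active) >= t_miss:
                survive += 1
    return survive / n

r = warm_standby_reliability(5.0, lam_active=0.1, lam_standby=0.02)
```

    For this simplified configuration a closed form exists, R = e^(-λt) * (1 + (λ/λs)(1 - e^(-λs t))) ≈ 0.895 at these rates, so the estimate should land close to that, mirroring the analytic-vs-simulation check the paper performs.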

  2. Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network

    Science.gov (United States)

    Li, Zhiqiang; Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu

    2018-04-01

    This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely, conditional probability tables, can be calculated by referring to the conditional degradation probabilities. Finally, the power of a control unit in a failure model is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model, and subsequently extended to a DBN. The results show the state probabilities of an element and the system without repair, with perfect and imperfect repair, and under CBM, with an absorbing set plotted by differential equations and verified. Through referring forward, the reliability value of the control unit is determined in different kinds of modes. Finally, weak nodes are noted in the control unit.

  3. A competing risk model for the reliability of cylinder liners in marine Diesel engines

    Energy Technology Data Exchange (ETDEWEB)

    Bocchetti, D. [Grimaldi Group, Naples (Italy); Giorgio, M. [Department of Aerospace and Mechanical Engineering, Second University of Naples, Aversa (Italy); Guida, M. [Department of Information Engineering and Electrical Engineering, University of Salerno, Fisciano (Italy); Pulcini, G. [Istituto Motori, National Research Council-CNR, Naples (Italy)], E-mail: g.pulcini@im.cnr.it

    2009-08-15

    In this paper, a competing risk model is proposed to describe the reliability of the cylinder liners of a marine Diesel engine. Cylinder liners present two dominant failure modes: wear degradation and thermal cracking. The wear process is described through a stochastic process, whereas the failure time due to thermal cracking is described by the Weibull distribution. The proposed model allows goodness-of-fit testing and parameter estimation on the basis of both wear and failure data. Moreover, it enables reliability estimates of the state of the liners to be obtained and the hierarchy of the failure mechanisms to be determined for any given age and wear level of the liner. The model has been applied to a real data set: 33 cylinder liners of Sulzer RTA 58 engines, which equip twin ships of the Grimaldi Group. Estimates of the liner reliability and of other quantities of interest under the competing risk model are obtained, as well as the conditional failure probability and mean residual lifetime, given the survival age and the accumulated wear. Furthermore, the model has been used to estimate the probability that a liner fails due to one of the failure modes when both modes act.
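    The structure of such a competing-risk model is easy to sketch by simulation. Here the wear process is simplified to a random linear wear rate crossing a fixed limit (the paper uses a richer stochastic process), thermal cracking is Weibull, and every parameter value is invented:

```python
import math
import random

def liner_competing_risks(t_miss, wear_limit=10.0, rate_mu=0.8, rate_sigma=0.2,
                          beta=2.0, eta=20.0, n=20000, seed=5):
    """Monte Carlo competing risks: failure time is min(wear crossing, crack).
    Returns (mission reliability, share of failures caused by wear)."""
    rng = random.Random(seed)
    survive = wear_failures = 0
    for _ in range(n):
        rate = max(1e-9, rng.gauss(rate_mu, rate_sigma))   # random wear rate
        t_wear = wear_limit / rate                          # threshold crossing
        u = 1.0 - rng.random()                              # u in (0, 1]
        t_crack = eta * (-math.log(u)) ** (1.0 / beta)      # inverse Weibull CDF
        if min(t_wear, t_crack) >= t_miss:
            survive += 1
        elif t_wear <= t_crack:
            wear_failures += 1
    failures = n - survive
    return survive / n, (wear_failures / failures) if failures else 0.0

rel, wear_share = liner_competing_risks(8.0)
```

    The second return value is the simulated analogue of the "hierarchy of failure mechanisms": which mode actually causes failure at a given mission age.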

  4. Age-dependent reliability model considering effects of maintenance and working conditions

    International Nuclear Information System (INIS)

    Martorell, Sebastian; Sanchez, Ana; Serradell, Vicente

    1999-01-01

    Nowadays, there is some doubt about building new nuclear power plants (NPPs). Instead, there is a growing interest in analyzing the possibility to extend current NPP operation, where life management programs play an important role. The evolution of the NPP safety depends on the evolution of the reliability of its safety components, which, in turn, is a function of their age along the NPP operational life. In this paper, a new age-dependent reliability model is presented, which includes parameters related to surveillance and maintenance effectiveness and working conditions of the equipment, both environmental and operational. This model may be used to support NPP life management and life extension programs, by improving or optimizing surveillance and maintenance tasks using risk and cost models based on such an age-dependent reliability model. The results of the sensitivity study in the example application show that the selection of the most appropriate maintenance strategy would directly depend on the previous parameters. Then, very important differences are expected to appear under certain circumstances, particularly, in comparison with other models that do not consider maintenance effectiveness and working conditions simultaneously
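    One common way to encode maintenance effectiveness of this kind is a proportional age reduction model: each maintenance action rescales the accumulated effective age by (1 - ε), and working conditions can be folded into the hazard parameters. This is a generic sketch, not the paper's specific model, and the Weibull parameters and effectiveness value are illustrative only:

```python
def effective_age(t, pm_interval, eps):
    """Effective age under proportional age reduction: every pm_interval the
    accumulated age is rescaled by (1 - eps); eps = 1 restores good-as-new,
    eps = 0 leaves the component good-as-old."""
    age, elapsed = 0.0, 0.0
    while elapsed + pm_interval <= t:
        age = (age + pm_interval) * (1.0 - eps)
        elapsed += pm_interval
    return age + (t - elapsed)

def weibull_hazard(age, beta=2.5, eta=8760.0):
    """Increasing hazard (beta > 1) evaluated at the effective age."""
    return (beta / eta) * (age / eta) ** (beta - 1)

# Effective maintenance (eps = 0.6) keeps the hazard below the unmaintained one.
h_maint = weibull_hazard(effective_age(4000.0, 1000.0, 0.6))
h_none = weibull_hazard(effective_age(4000.0, 1000.0, 0.0))
```

    Optimizing the PM interval and effort against cost under such an age-dependent hazard is exactly the kind of trade-off the sensitivity study in the abstract explores.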

  5. solveME: fast and reliable solution of nonlinear ME models

    DEFF Research Database (Denmark)

    Yang, Laurence; Ma, Ding; Ebrahim, Ali

    2016-01-01

    Background: Genome-scale models of metabolism and macromolecular expression (ME) significantly expand the scope and predictive capabilities of constraint-based modeling. ME models present considerable computational challenges: they are much (>30 times) larger than corresponding metabolic reconstructions (M models), are multiscale, and growth maximization is a nonlinear programming (NLP) problem, mainly due to macromolecule dilution constraints. Results: Here, we address these computational challenges. We develop a fast and numerically reliable solution method for growth maximization in ME models...

  6. An Open Modelling Approach for Availability and Reliability of Systems - OpenMARS

    CERN Document Server

    Penttinen, Jussi-Pekka; Gutleber, Johannes

    2018-01-01

    This document introduces and gives specification for OpenMARS, which is an open modelling approach for availability and reliability of systems. It supports the most common risk assessment and operation modelling techniques. Uniquely OpenMARS allows combining and connecting models defined with different techniques. This ensures that a modeller has a high degree of freedom to accurately describe the modelled system without limitations imposed by an individual technique. Here the OpenMARS model definition is specified with a tool independent tabular format, which supports managing models developed in a collaborative fashion. Origin of our research is in Future Circular Collider (FCC) study, where we developed the unique features of our concept to model the availability and luminosity production of particle colliders. We were motivated to describe our approach in detail as we see potential further applications in performance and energy efficiency analyses of large scientific infrastructures or industrial processe...

  7. Stochastic network interdiction optimization via capacitated network reliability modeling and probabilistic solution discovery

    International Nuclear Information System (INIS)

    Ramirez-Marquez, Jose Emmanuel; Rocco S, Claudio M.

    2009-01-01

    This paper introduces an evolutionary optimization approach that can be readily applied to solve stochastic network interdiction problems (SNIP). The network interdiction problem solved considers the minimization of the cost associated with an interdiction strategy such that the maximum flow that can be transmitted between a source node and a sink node for a fixed network design is greater than or equal to a given reliability requirement. Furthermore, the model assumes that the nominal capacity of each network link and the cost associated with their interdiction can change from link to link and that such interdiction has a probability of being successful. This version of the SNIP is for the first time modeled as a capacitated network reliability problem allowing for the implementation of computation and solution techniques previously unavailable. The solution process is based on an evolutionary algorithm that implements: (1) Monte-Carlo simulation, to generate potential network interdiction strategies, (2) capacitated network reliability techniques to analyze strategies' source-sink flow reliability and, (3) an evolutionary optimization technique to define, in probabilistic terms, how likely a link is to appear in the final interdiction strategy. Examples for different sizes of networks are used throughout the paper to illustrate the approach
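    The capacitated network reliability kernel of such an approach, the probability that the network remaining after random link losses still carries a required source-sink flow, can be sketched with a small Edmonds-Karp max-flow inside a Monte Carlo loop. The example network, capacities, and survival probabilities are invented:

```python
import random
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a dict-of-dicts of residual capacities.
    Note: cap is consumed (turned into the residual network)."""
    flow = 0
    while True:
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:     # BFS for a shortest augmenting path
            u = queue.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        inc = min(cap[u][v] for u, v in path)
        for u, v in path:                    # push flow, open reverse edges
            cap[u][v] -= inc
            cap.setdefault(v, {})[u] = cap.get(v, {}).get(u, 0) + inc
        flow += inc

def flow_reliability(links, demand, s, t, n=5000, seed=11):
    """P(max s-t flow >= demand) when each link (u, v, capacity, p_up) is
    independently up with probability p_up (Monte Carlo estimate)."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n):
        cap = {}
        for u, v, c, p in links:
            if rng.random() < p:
                cap.setdefault(u, {})[v] = cap.get(u, {}).get(v, 0) + c
        ok += max_flow(cap, s, t) >= demand
    return ok / n

links = [("s", "a", 5, 0.9), ("a", "t", 5, 0.9), ("s", "t", 3, 0.8)]
r = flow_reliability(links, demand=3, s="s", t="t")
```

    An interdiction strategy would be evaluated by removing (or degrading) the attacked links before this reliability estimate, which is the inner loop the evolutionary search repeatedly calls.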

  8. Development of Probabilistic Reliability Models of Photovoltaic System Topologies for System Adequacy Evaluation

    Directory of Open Access Journals (Sweden)

    Ahmad Alferidi

    2017-02-01

    Full Text Available The contribution of solar power in electric power systems has been increasing rapidly due to its environmentally friendly nature. Photovoltaic (PV) systems contain solar cell panels, power electronic converters, high-power switching and often transformers. These components collectively play an important role in shaping the reliability of PV systems. Moreover, the power output of PV systems is variable, so it cannot be controlled as easily as conventional generation due to the unpredictable nature of weather conditions. Therefore, solar power has a different influence on generating system reliability compared to conventional power sources. Recently, different PV system designs have been constructed to maximize the output power of PV systems. These different designs are commonly adopted based on the scale of a PV system. Large-scale grid-connected PV systems are generally connected in a centralized or a string structure. Central and string PV schemes differ in how the inverter is connected to the PV arrays. Micro-inverter systems are recognized as a third PV system topology. It is therefore important to evaluate the reliability contribution of PV systems under these topologies. This work utilizes a probabilistic technique to develop a power output model for a PV generation system. A reliability model is then developed for a PV-integrated power system in order to assess the reliability and energy contribution of the solar system to meeting overall system demand. The developed model is applied to a small isolated power unit to evaluate system adequacy and the capacity level of a PV system considering the three topologies.
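    A standard building block for this kind of adequacy evaluation is the capacity outage probability table, built by convolving the units' forced outage rates. The unit sizes and outage rates below are illustrative, and the variable PV output the abstract models would enter as additional multi-state "units":

```python
def capacity_outage_table(units):
    """Capacity outage probability table via discrete convolution.
    units: list of (capacity_MW, forced_outage_rate) two-state units."""
    table = {0: 1.0}
    for cap, q in units:
        nxt = {}
        for out, p in table.items():
            nxt[out] = nxt.get(out, 0.0) + p * (1.0 - q)       # unit available
            nxt[out + cap] = nxt.get(out + cap, 0.0) + p * q   # unit on outage
        table = nxt
    return table

def lolp(table, installed, load):
    """Loss-of-load probability: P(available capacity < load)."""
    return sum(p for out, p in table.items() if installed - out < load)

units = [(25, 0.02), (25, 0.02), (50, 0.02)]   # 100 MW installed, hypothetical
table = capacity_outage_table(units)
risk = lolp(table, installed=100, load=80)
```

    At an 80 MW load any single outage causes a shortfall here, so the risk collapses to the probability that at least one unit is out.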

  9. System Reliability Engineering

    International Nuclear Information System (INIS)

    Lim, Tae Jin

    2005-02-01

    This book covers reliability engineering, including quality and reliability, reliability data, the importance of reliability engineering, reliability measures, the Poisson process (goodness-of-fit tests and the Poisson arrival model), reliability estimation (e.g., for the exponential distribution), reliability of systems, availability, preventive maintenance (replacement policies, minimal repair policy, shock models, spares, group maintenance, and periodic inspection), analysis of common cause failures, and models of repair effects.
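    Several of the core quantities such a text develops reduce to one-liners; the numeric values in the usage note are generic textbook examples, not taken from the book:

```python
import math

def exp_reliability(t, lam):
    """Constant-hazard (exponential) reliability R(t) = exp(-lam*t); MTTF = 1/lam."""
    return math.exp(-lam * t)

def series_reliability(*unit_rs):
    """A series system works only if every unit works."""
    out = 1.0
    for r in unit_rs:
        out *= r
    return out

def parallel_reliability(*unit_rs):
    """A parallel (redundant) system fails only if every unit fails."""
    out = 1.0
    for r in unit_rs:
        out *= (1.0 - r)
    return 1.0 - out

def steady_state_availability(mtbf, mttr):
    """Long-run fraction of time a repairable system is up."""
    return mtbf / (mtbf + mttr)
```

    For example, two 0.9-reliable units give 0.81 in series but 0.99 in parallel, and an MTBF of 99 h with a 1 h MTTR yields 99% availability.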

  10. A holistic framework of degradation modeling for reliability analysis and maintenance optimization of nuclear safety systems

    International Nuclear Information System (INIS)

    Lin, Yanhui

    2016-01-01

    Components of nuclear safety systems are in general highly reliable, which leads to a difficulty in modeling their degradation and failure behaviors due to the limited amount of data available. Besides, the complexity of such modeling task is increased by the fact that these systems are often subject to multiple competing degradation processes and that these can be dependent under certain circumstances, and influenced by a number of external factors (e.g. temperature, stress, mechanical shocks, etc.). In this complicated problem setting, this PhD work aims to develop a holistic framework of models and computational methods for the reliability-based analysis and maintenance optimization of nuclear safety systems taking into account the available knowledge on the systems, degradation and failure behaviors, their dependencies, the external influencing factors and the associated uncertainties.The original scientific contributions of the work are: (1) For single components, we integrate random shocks into multi-state physics models for component reliability analysis, considering general dependencies between the degradation and two types of random shocks. (2) For multi-component systems (with a limited number of components):(a) a piecewise-deterministic Markov process modeling framework is developed to treat degradation dependency in a system whose degradation processes are modeled by physics-based models and multi-state models; (b) epistemic uncertainty due to incomplete or imprecise knowledge is considered and a finite-volume scheme is extended to assess the (fuzzy) system reliability; (c) the mean absolute deviation importance measures are extended for components with multiple dependent competing degradation processes and subject to maintenance; (d) the optimal maintenance policy considering epistemic uncertainty and degradation dependency is derived by combining finite-volume scheme, differential evolution and non-dominated sorting differential evolution; (e) the

  11. Research on the Reliability Analysis of the Integrated Modular Avionics System Based on the AADL Error Model

    Directory of Open Access Journals (Sweden)

    Peng Wang

    2018-01-01

    Full Text Available In recent years, the integrated modular avionics (IMA) concept has been introduced to replace traditional federated avionics. Different avionics functions are hosted on a shared IMA platform, and IMA adopts partition technologies to provide logical isolation among different functions. The IMA architecture can provide more sophisticated and powerful avionics functionality; meanwhile, the failure propagation patterns in IMA are more complex. The feature of resource sharing introduces unintended interconnections among different functions, which makes the failure propagation modes more complex. Therefore, this paper proposes an architecture analysis and design language (AADL) based method to establish the reliability model of an IMA platform. The error behavior of individual software and hardware components in the IMA system is modeled. The corresponding AADL error model of failure propagation among components and between software and hardware is given. Finally, the display function of an IMA platform is taken as an example to illustrate the effectiveness of the proposed method.

  12. Modeling Energy & Reliability of a CNT based WSN on an HPC Setup

    Directory of Open Access Journals (Sweden)

    Rohit Pathak

    2010-07-01

    Full Text Available We have analyzed the effect of innovations in nanotechnology on Wireless Sensor Networks (WSN) and have modeled Carbon Nanotube (CNT) based sensor nodes from a device perspective. A WSN model has been programmed in Simulink-MATLAB and a library has been developed. The integration of CNT in WSN for various modules such as sensors, microprocessors, batteries, etc. has been shown. The average energy consumption of the system has also been formulated and its reliability assessed holistically. A proposition has been put forward on the changes needed in the existing sensor node structure to improve its efficiency and to facilitate as well as enhance the assimilation of CNT-based devices in a WSN. Finally, we have commented on the challenges that exist in this technology and described the important factors that need to be considered for calculating reliability. This research will help in the practical implementation of CNT-based devices and the analysis of their key effects on the WSN environment. The work has been executed with Simulink and the Distributed Computing Toolbox of MATLAB, and the proposal has been compared to recent developments and past experimental results reported in this field. This attempt to derive the energy consumption and reliability implications will help in the development of real devices using CNT, which is a major hurdle in bringing this success from the lab to the commercial market. Recent research in CNT has been used to build an energy-efficient model which will also support the development of CAD tools. The library for reliability and energy consumption includes analysis of the various parts of a WSN system constructed from CNT. Nano routing in a CNT system is also implemented with its dependencies. Finally, the computations were executed on an HPC setup and the model showed remarkable speedup.

  13. Fuzzy sets as extension of probabilistic models for evaluating human reliability

    International Nuclear Information System (INIS)

    Przybylski, F.

    1996-11-01

    On the basis of a survey of established quantification methodologies for evaluating human reliability, a new computerized methodology was developed in which user uncertainties are given differentiated consideration. In this quantification method, FURTHER (FUzzy Sets Related To Human Error Rate Prediction), user uncertainties are quantified separately from model and data uncertainties. Fuzzy sets are applied as tools, but remain hidden from the method's user, who in the quantification process only chooses an action pattern, performance shaping factors and natural language expressions. The acknowledged method HEART (Human Error Assessment and Reduction Technique) serves as the foundation of the fuzzy set approach FURTHER. By means of this method, the selection of a basic task in connection with its basic error probability, the decision of how correct the basic task's selection is, the selection of a performance shaping factor, and the decision of how correct the selection is and how important the performance shaping factor is, were identified as aspects of fuzzification. This fuzzification is made on the basis of data collection and information from the literature as well as estimation by competent persons. To verify the amount of additional information gained by the use of fuzzy sets, a benchmark session was conducted in which twelve actions were assessed by five test persons. In the case of the same degree of detail in the action modelling process, the bandwidths of the interpersonal evaluations decrease in FURTHER in comparison with HEART. The uncertainties of the single results could not be reduced up to now. The benchmark sessions conducted so far showed plausible results. Further testing of the fuzzy set approach using better confirmed fuzzy sets can only be achieved in future practical application; adequate procedures, however, are provided. (orig.) [de
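As a rough sketch of the kind of fuzzy arithmetic such a method rests on (the HEART-style numbers below are illustrative assumptions, not values from FURTHER), a basic error probability and a performance-shaping-factor multiplier can each be represented as a triangular fuzzy number and combined:

```python
def tfn_mul(a, b):
    """Approximate product of two triangular fuzzy numbers (low, mode, high),
    the usual first-order approximation in fuzzy reliability calculations."""
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number."""
    return sum(tfn) / 3.0

# Hypothetical assessment: a fuzzy basic error probability scaled by a
# fuzzy performance-shaping-factor multiplier (illustrative numbers only)
hep = tfn_mul((0.001, 0.003, 0.01), (2.0, 3.0, 5.0))
point_estimate = defuzzify(hep)
```

The width of the resulting triangle carries the user uncertainty separately from the point estimate, which is the feature the record emphasizes.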

  14. Risk evaluations of aging phenomena: the linear aging reliability model and its extensions

    International Nuclear Information System (INIS)

    Vesely, W.E.

    1987-01-01

    A model for component failure rates due to aging mechanisms has been developed from basic phenomenological considerations. In the treatment, the occurrences of deterioration are modeled as following a Poisson process. The severity of damage is allowed to have any distribution; however, the damage is assumed to accumulate independently. Finally, the failure rate is modeled as being proportional to the accumulated damage. Using this treatment, the linear aging failure rate model is obtained. The applicability of the linear aging model to various mechanisms is discussed. The model can be extended to cover nonlinear and dependent aging phenomena. The implementability of the linear aging model is demonstrated by applying it to the aging data collected in NRC's Nuclear Plant Aging Research (NPAR) Program. The applications show that aging as observed in the collected data has significant effects on component failure probability and component reliability when aging is not effectively detected and controlled by testing and maintenance
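The construction in this record (Poisson deterioration occurrences, independently accumulating damage, failure rate proportional to accumulated damage) implies an expected failure rate that grows linearly in time, E[h(t)] = c·μ·E[S]·t. A minimal Monte Carlo sketch can check that linearity; all rates and the exponential severity distribution below are illustrative assumptions:

```python
import random

def expected_failure_rate(t, shock_rate, mean_severity, c, n_sims=20000):
    """Monte Carlo estimate of E[h(t)] for the linear aging model:
    deterioration events arrive as a Poisson process (rate shock_rate),
    each adds an exponentially distributed damage increment, and the
    failure rate is proportional (constant c) to accumulated damage."""
    total = 0.0
    for _ in range(n_sims):
        damage, clock = 0.0, 0.0
        while True:
            clock += random.expovariate(shock_rate)  # next deterioration event
            if clock > t:
                break
            damage += random.expovariate(1.0 / mean_severity)
        total += c * damage
    return total / n_sims

random.seed(1)
# Theory: E[h(t)] = c * shock_rate * mean_severity * t, i.e. linear in t
h1 = expected_failure_rate(1.0, shock_rate=2.0, mean_severity=0.5, c=0.1)
h2 = expected_failure_rate(2.0, shock_rate=2.0, mean_severity=0.5, c=0.1)
```

Doubling the exposure time should roughly double the estimated failure rate, consistent with the linear aging form.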

  15. Risk evaluations of aging phenomena: The linear aging reliability model and its extensions

    International Nuclear Information System (INIS)

    Vesely, W.E.

    1986-01-01

    A model for component failure rates due to aging mechanisms has been developed from basic phenomenological considerations. In the treatment, the occurrences of deterioration are modeled as following a Poisson process. The severity of damage is allowed to have any distribution; however, the damage is assumed to accumulate independently. Finally, the failure rate is modeled as being proportional to the accumulated damage. Using this treatment, the linear aging failure rate model is obtained. The applicability of the linear aging model to various mechanisms is discussed. The model can be extended to cover nonlinear and dependent aging phenomena. The implementability of the linear aging model is demonstrated by applying it to the aging data collected in NRC's Nuclear Plant Aging Research (NPAR) Program. The applications show that aging as observed in the collected data has significant effects on component failure probability and component reliability when aging is not effectively detected and controlled by testing and maintenance

  16. Development of thermal hydraulic models for the reliable regulatory auditing code

    Energy Technology Data Exchange (ETDEWEB)

    Chung, B. D.; Song, C. H.; Lee, Y. J.; Kwon, T. S. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2003-04-15

    The objective of this project is to develop thermal hydraulic models for use in improving the reliability of the regulatory auditing codes. The current year falls under the first step of the three-year project, and the main research focused on identifying candidate thermal hydraulic models for improvement and developing prototypical models. During the current year, the verification calculations submitted for the APR 1400 design certification have been reviewed, the experimental data from the MIDAS DVI experiment facility at KAERI have been analyzed and evaluated, candidate thermal hydraulic models for improvement have been identified, prototypical models for the improved thermal hydraulic models have been developed, items for experiment in connection with the model development have been identified, and a preliminary design of the experiment has been carried out.

  17. Development of thermal hydraulic models for the reliable regulatory auditing code

    International Nuclear Information System (INIS)

    Chung, B. D.; Song, C. H.; Lee, Y. J.; Kwon, T. S.

    2003-04-01

    The objective of this project is to develop thermal hydraulic models for use in improving the reliability of the regulatory auditing codes. The current year falls under the first step of the three-year project, and the main research focused on identifying candidate thermal hydraulic models for improvement and developing prototypical models. During the current year, the verification calculations submitted for the APR 1400 design certification have been reviewed, the experimental data from the MIDAS DVI experiment facility at KAERI have been analyzed and evaluated, candidate thermal hydraulic models for improvement have been identified, prototypical models for the improved thermal hydraulic models have been developed, items for experiment in connection with the model development have been identified, and a preliminary design of the experiment has been carried out

  18. Modeling and simulation for microelectronic packaging assembly manufacturing, reliability and testing

    CERN Document Server

    Liu, Sheng

    2011-01-01

    Although there is an increasing need for modeling and simulation in the IC package design phase, most assembly processes and various reliability tests are still based on the time-consuming "test and try out" method to obtain the best solution. Modeling and simulation can easily enable virtual Design of Experiments (DoE) to achieve the optimal solution. This has greatly reduced cost and production time, especially for new product development, and using modeling and simulation will become increasingly necessary for future advances in 3D package development. In this book, Liu and Liu allow people

  19. Evaluation of seismic reliability of steel moment resisting frames rehabilitated by concentric braces with probabilistic models

    Directory of Open Access Journals (Sweden)

    Fateme Rezaei

    2017-08-01

    Full Text Available The probability of failure of a structure designed by "deterministic methods" can be higher than that of one designed in a similar situation using probabilistic methods and models that consider "uncertainties". The main purpose of this research was to evaluate the seismic reliability of steel moment resisting frames rehabilitated with concentric braces by probabilistic models. To do so, three-story and nine-story steel moment resisting frames were designed based on the resistance criteria of the Iranian code and then rehabilitated with concentric braces based on controlling drift limitations. The probability of frame failure was evaluated using probabilistic models of magnitude, earthquake location, ground shaking intensity in the area of the structure, a probabilistic model of building response (based on maximum lateral roof displacement), and probabilistic methods. These frames were analyzed under a subcrustal source by the sampling probabilistic method "Risk Tools" (RT). Comparing the exceedance probability curves of building response (or selected points on them) for the three-story and nine-story model frames before and after rehabilitation, the seismic response of the rehabilitated frames was reduced and their reliability was improved. The main variables affecting the probability of frame failure were also determined using sensitivity analysis by the FORM probabilistic method; the most effective variables in reducing the probability of frame failure are those in the magnitude model, the ground shaking intensity model error and the magnitude model error

  20. Using graph models for evaluating in-core monitoring system reliability by the method of imitating simulation

    International Nuclear Information System (INIS)

    Golovanov, M.N.; Zyuzin, N.N.; Levin, G.L.; Chesnokov, A.N.

    1987-01-01

    An approach for estimating the reliability factors of complex redundant systems at early stages of development using the method of imitating simulation is considered. Different types of models, with their merits and drawbacks, are given. Features of in-core monitoring systems and the advisability of applying graph models and elements of graph theory for estimating the reliability of such systems are shown. The results of an investigation of the reliability factors of the reactor monitoring, control and core local protection subsystem are presented

  1. Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant

    Science.gov (United States)

    Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram; Garg, Tarun Kr.

    2015-12-01

    This paper deals with the Markov modeling and reliability analysis of the urea synthesis system of a fertilizer plant. This system was modeled using a Markov birth-death process with the assumption that the failure and repair rates of each subsystem follow exponential distributions. The first-order Chapman-Kolmogorov differential equations are developed with the use of the mnemonic rule, and these equations are solved with the fourth-order Runge-Kutta method. The long-run availability, reliability and mean time between failures are computed for various choices of the failure and repair rates of the subsystems. The findings of the paper were discussed with plant personnel so that suitable maintenance policies/strategies could be adopted to enhance the performance of the urea synthesis system of the fertilizer plant.
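The numerical scheme named in this record (Chapman-Kolmogorov state equations integrated by fourth-order Runge-Kutta) can be sketched for a single two-state subsystem; the failure and repair rates below are illustrative assumptions, not the plant's values:

```python
def ck_rk4(q, p0, t_end, dt=0.01):
    """Integrate the Chapman-Kolmogorov equations dP/dt = P*Q with a
    fourth-order Runge-Kutta scheme (plain Python, no libraries)."""
    def deriv(p):
        n = len(p)
        return [sum(p[i] * q[i][j] for i in range(n)) for j in range(n)]

    p, t = list(p0), 0.0
    while t < t_end - 1e-12:
        k1 = deriv(p)
        k2 = deriv([p[i] + 0.5 * dt * k1[i] for i in range(len(p))])
        k3 = deriv([p[i] + 0.5 * dt * k2[i] for i in range(len(p))])
        k4 = deriv([p[i] + dt * k3[i] for i in range(len(p))])
        p = [p[i] + dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(len(p))]
        t += dt
    return p

# Illustrative two-state subsystem: failure rate lam, repair rate mu
lam, mu = 0.02, 0.5
Q = [[-lam, lam],
     [mu, -mu]]
p = ck_rk4(Q, [1.0, 0.0], t_end=100.0)
availability = p[0]  # long-run value should approach mu / (lam + mu)
```

For this birth-death pair the transient decays as exp(-(lam+mu)t), so by t=100 the integrated probability of the "up" state matches the analytic long-run availability mu/(lam+mu).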

  2. An artificial neural network for modeling reliability, availability and maintainability of a repairable system

    International Nuclear Information System (INIS)

    Rajpal, P.S.; Shishodia, K.S.; Sekhon, G.S.

    2006-01-01

    The paper explores the application of artificial neural networks to model the behaviour of a complex, repairable system. A composite measure of reliability, availability and maintainability parameters has been proposed for measuring system performance. The artificial neural network has been trained using past data of a helicopter transportation facility and is used to simulate the behaviour of the facility under various constraints. The insights obtained from the simulation results are useful in formulating strategies for optimal operation of the system

  3. Frontiers of reliability

    CERN Document Server

    Basu, Asit P; Basu, Sujit K

    1998-01-01

    This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiment, mixture of Weibull

  4. Human reliability in non-destructive inspections of nuclear power plant components: modeling and analysis

    International Nuclear Information System (INIS)

    Vasconcelos, Vanderley de; Soares, Wellington Antonio; Marques, Raíssa Oliveira; Silva Júnior, Silvério Ferreira da; Raso, Amanda Laureano

    2017-01-01

    Non-destructive inspection (NDI) is one of the key elements in ensuring the quality of engineering systems and their safe use. NDI is a very complex task, during which inspectors have to rely on their sensory, perceptual, cognitive, and motor skills. It requires high vigilance, since it is often carried out on large components, over long periods of time, in hostile environments and with workplace restrictions. A successful NDI requires careful planning, the choice of appropriate NDI methods and inspection procedures, as well as qualified and trained inspection personnel. A failure of NDI to detect critical defects in safety-related components of nuclear power plants, for instance, may lead to catastrophic consequences for workers, the public and the environment. Therefore, ensuring that NDI methods are reliable and capable of detecting all critical defects is of utmost importance. Despite the increased use of automation in NDI, human inspectors, and thus human factors, still play an important role in NDI reliability. Human reliability is the probability of humans conducting specific tasks with satisfactory performance. Many techniques are suitable for modeling and analyzing human reliability in NDI of nuclear power plant components, among which Failure Modes and Effects Analysis (FMEA) and THERP (Technique for Human Error Rate Prediction) can be highlighted. The application of these techniques is illustrated in an example of qualitative and quantitative studies to improve typical NDI of pipe segments of a core cooling system of a nuclear power plant by acting on human factors issues. (author)

  5. Human reliability in non-destructive inspections of nuclear power plant components: modeling and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Vasconcelos, Vanderley de; Soares, Wellington Antonio; Marques, Raíssa Oliveira; Silva Júnior, Silvério Ferreira da; Raso, Amanda Laureano, E-mail: vasconv@cdtn.br, E-mail: soaresw@cdtn.br, E-mail: raissaomarques@gmail.com, E-mail: silvasf@cdtn.br, E-mail: amandaraso@hotmail.com [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2017-07-01

    Non-destructive inspection (NDI) is one of the key elements in ensuring the quality of engineering systems and their safe use. NDI is a very complex task, during which inspectors have to rely on their sensory, perceptual, cognitive, and motor skills. It requires high vigilance, since it is often carried out on large components, over long periods of time, in hostile environments and with workplace restrictions. A successful NDI requires careful planning, the choice of appropriate NDI methods and inspection procedures, as well as qualified and trained inspection personnel. A failure of NDI to detect critical defects in safety-related components of nuclear power plants, for instance, may lead to catastrophic consequences for workers, the public and the environment. Therefore, ensuring that NDI methods are reliable and capable of detecting all critical defects is of utmost importance. Despite the increased use of automation in NDI, human inspectors, and thus human factors, still play an important role in NDI reliability. Human reliability is the probability of humans conducting specific tasks with satisfactory performance. Many techniques are suitable for modeling and analyzing human reliability in NDI of nuclear power plant components, among which Failure Modes and Effects Analysis (FMEA) and THERP (Technique for Human Error Rate Prediction) can be highlighted. The application of these techniques is illustrated in an example of qualitative and quantitative studies to improve typical NDI of pipe segments of a core cooling system of a nuclear power plant by acting on human factors issues. (author)

  6. A comparison between Markovian models and Bayesian networks for treating some dependent events in reliability evaluations

    Energy Technology Data Exchange (ETDEWEB)

    Duarte, Juliana P.; Leite, Victor C.; Melo, P.F. Frutuoso e, E-mail: julianapduarte@poli.ufrj.br, E-mail: victor.coppo.leite@poli.ufrj.br, E-mail: frutuoso@nuclear.ufrj.br [Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, RJ (Brazil)

    2013-07-01

    Bayesian networks have become a very handy tool for solving problems in various application areas. This paper discusses the use of Bayesian networks to treat dependent events in reliability engineering that are typically modeled by Markovian models. Dependent events play an important role, for example, when treating load-sharing systems, bridge systems, common-cause failures, and switching systems (those for which a standby component is activated after the main one fails by means of a switching mechanism). Repair plays an important role in all these cases (as, for example, the number of repairmen). All Bayesian network calculations are performed by means of the Netica™ software, of Norsys Software Corporation, with Fortran 90 used to evaluate them over time. The discussion considers the development of time-dependent reliability figures of merit, which are easily obtained through Markovian models but not through Bayesian networks, because the latter need probability figures as input rather than failure and repair rates. Bayesian networks produced results in very good agreement with those of Markov models and pivotal decomposition. Static and discrete-time Bayesian networks (DTBN) were used in order to check their capabilities for modeling specific situations, like switching failures in cold-standby systems. The DTBN was more flexible for modeling systems where the time of occurrence of an event is important, for example, standby failure and repair. However, the static network model gave results as good as those of the DTBN with a much simpler approach. (author)

  7. A comparison between Markovian models and Bayesian networks for treating some dependent events in reliability evaluations

    International Nuclear Information System (INIS)

    Duarte, Juliana P.; Leite, Victor C.; Melo, P.F. Frutuoso e

    2013-01-01

    Bayesian networks have become a very handy tool for solving problems in various application areas. This paper discusses the use of Bayesian networks to treat dependent events in reliability engineering that are typically modeled by Markovian models. Dependent events play an important role, for example, when treating load-sharing systems, bridge systems, common-cause failures, and switching systems (those for which a standby component is activated after the main one fails by means of a switching mechanism). Repair plays an important role in all these cases (as, for example, the number of repairmen). All Bayesian network calculations are performed by means of the Netica™ software, of Norsys Software Corporation, with Fortran 90 used to evaluate them over time. The discussion considers the development of time-dependent reliability figures of merit, which are easily obtained through Markovian models but not through Bayesian networks, because the latter need probability figures as input rather than failure and repair rates. Bayesian networks produced results in very good agreement with those of Markov models and pivotal decomposition. Static and discrete-time Bayesian networks (DTBN) were used in order to check their capabilities for modeling specific situations, like switching failures in cold-standby systems. The DTBN was more flexible for modeling systems where the time of occurrence of an event is important, for example, standby failure and repair. However, the static network model gave results as good as those of the DTBN with a much simpler approach. (author)

  8. Automatic creation of Markov models for reliability assessment of safety instrumented systems

    International Nuclear Information System (INIS)

    Guo Haitao; Yang Xianhui

    2008-01-01

    After the release of new international functional safety standards like IEC 61508, people care more about the safety and availability of safety instrumented systems. Markov analysis is a powerful and flexible technique for assessing the reliability measures of safety instrumented systems, but creating Markov models manually is error-prone and time-consuming. This paper presents a new technique to automatically create Markov models for the reliability assessment of safety instrumented systems. Many safety-related factors, such as failure modes, self-diagnostics, restorations, common cause and voting, are included in the Markov models. A framework is generated first based on voting, failure modes and self-diagnostics; then, repairs and common-cause failures are incorporated into the framework to build a complete Markov model. Eventual simplification of the Markov models can be done by state merging. Examples given in this paper show how explosively the size of a Markov model increases as the system becomes even slightly more complicated, as well as the advantages of automatic creation of Markov models
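The idea of generating a Markov model programmatically rather than by hand can be sketched for the simplest case: a birth-death chain for n identical components with a single repair crew. This is a deliberately reduced illustration of automatic model creation (no self-diagnostics or common cause, which the record's method also handles); the rates and the 1oo2 voting choice below are assumptions:

```python
def build_birth_death(n, lam, mu):
    """Automatically generate the transition rates of a Markov model for
    n identical components, where state k = number of failed components:
    failure k -> k+1 at rate (n-k)*lam, repair k+1 -> k at rate mu
    (single repair crew). Returns a list of (failure_rate, repair_rate)."""
    return [((n - k) * lam, mu) for k in range(n)]

def steady_state(rates):
    """Closed-form steady-state distribution of a birth-death chain:
    pi_k is proportional to the running product of failure/repair rates."""
    weights = [1.0]
    for lam_k, mu_k in rates:
        weights.append(weights[-1] * lam_k / mu_k)
    z = sum(weights)
    return [w / z for w in weights]

# 1oo2 voting: the system is up while at least one component works
rates = build_birth_death(2, lam=0.01, mu=0.2)
pi = steady_state(rates)
availability = pi[0] + pi[1]  # states with 0 or 1 failed component
```

Even here the state count grows with n, hinting at the state explosion the record reports for systems with diagnostics, voting and common-cause failures added.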

  9. The interplay of various sources of noise on reliability of species distribution models hinges on ecological specialisation.

    Science.gov (United States)

    Soultan, Alaaeldin; Safi, Kamran

    2017-01-01

    Digitized species occurrence data provide an unprecedented source of information for ecologists and conservationists. Species distribution modelling (SDM) has become a popular method for utilising these data to understand the spatial and temporal distribution of species and to model biodiversity patterns. Our objective is to study the impact of noise in species occurrence data (namely sample size and positional accuracy) on the performance and reliability of SDM, considering the multiplicative impact of SDM algorithms, species specialisation, and grid resolution. We created a set of four 'virtual' species characterized by different specialisation levels. For each of these species, we built suitable habitat models using five algorithms at two grid resolutions, with varying sample sizes and different levels of positional accuracy. We assessed the performance and reliability of the SDM according to classic model evaluation metrics (Area Under the Curve and True Skill Statistic) and model agreement metrics (Overall Concordance Correlation Coefficient and geographic niche overlap), respectively. Our study revealed that species specialisation had by far the most dominant impact on the SDM. In contrast to previous studies, we found that for widespread species, low sample size and low positional accuracy were acceptable, and useful distribution ranges could be predicted with as few as 10 species occurrences. Range predictions for narrow-ranged species, however, were sensitive to sample size and positional accuracy, such that useful distribution ranges required at least 20 species occurrences. Against expectations, the MAXENT algorithm poorly predicted the distribution of specialist species at low sample size.
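One of the evaluation metrics named in this record, the True Skill Statistic, is simple to compute from a presence/absence confusion matrix (the counts below are made-up illustrative values):

```python
def true_skill_statistic(tp, fp, fn, tn):
    """True Skill Statistic from a presence/absence confusion matrix:
    TSS = sensitivity + specificity - 1, ranging from -1 to +1,
    where +1 is perfect agreement and 0 is no better than random."""
    sensitivity = tp / (tp + fn)  # fraction of presences correctly predicted
    specificity = tn / (tn + fp)  # fraction of absences correctly predicted
    return sensitivity + specificity - 1

# Hypothetical confusion matrix for one virtual species
tss = true_skill_statistic(tp=40, fp=5, fn=10, tn=45)
```

Unlike raw accuracy, TSS is not inflated by a large number of correctly predicted absences, which is why it is a standard choice in SDM evaluation.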

  10. Testing comparison models of DASS-12 and its reliability among adolescents in Malaysia.

    Science.gov (United States)

    Osman, Zubaidah Jamil; Mukhtar, Firdaus; Hashim, Hairul Anuar; Abdul Latiff, Latiffah; Mohd Sidik, Sherina; Awang, Hamidin; Ibrahim, Normala; Abdul Rahman, Hejar; Ismail, Siti Irma Fadhilah; Ibrahim, Faisal; Tajik, Esra; Othman, Norlijah

    2014-10-01

    The 21-item Depression, Anxiety and Stress Scale (DASS-21) is frequently used in non-clinical research to measure mental health factors among adults. However, previous studies have concluded that the 21 items are not stable for use among the adolescent population. Thus, the aims of this study are to examine the factor structure and to report on the reliability of a refined version of the DASS that consists of 12 items. A total of 2850 students (aged 13 to 17 years old) from the three major ethnic groups in Malaysia completed the DASS-21. The study was conducted at 10 randomly selected secondary schools in the northern state of Peninsular Malaysia, and the study population comprised secondary school students (Forms 1, 2 and 4) from the selected schools. Based on the results of the EFA stage, 12 items were included in a final CFA to test the fit of the model. Using maximum likelihood procedures to estimate the model, the selected fit indices indicated a close model fit (χ²=132.94, df=57, p=.000; CFI=.96; RMR=.02; RMSEA=.04). Moreover, significant loadings of all the unstandardized regression weights implied acceptable convergent validity. Besides the convergent validity of the items, discriminant validity of the subscales was also evident from the moderate latent factor inter-correlations, which ranged from .62 to .75. The subscale reliability was further estimated using Cronbach's alpha, and adequate reliability of the subscales was obtained (Total=.76; Depression=.68; Anxiety=.53; Stress=.52). The new version of the 12-item DASS for adolescents in Malaysia (DASS-12) is reliable and has a stable factor structure, and thus it is a useful instrument for distinguishing between depression, anxiety and stress. Copyright © 2014 Elsevier Inc. All rights reserved.
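The reliability coefficient used in this record, Cronbach's alpha, can be computed directly from item scores; the toy response matrix below is a made-up example, not DASS data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical responses for a 3-item subscale (rows = items, cols = subjects)
scores = [
    [2, 3, 3, 1, 0, 2],
    [2, 3, 2, 1, 1, 2],
    [3, 3, 2, 0, 1, 1],
]
alpha = cronbach_alpha(scores)
```

Higher inter-item covariance pushes alpha toward 1; short subscales (like the 4-item DASS-12 subscales) tend to yield lower alphas, which is consistent with the modest values reported in the record.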

  11. Accelerated Monte Carlo system reliability analysis through machine-learning-based surrogate models of network connectivity

    International Nuclear Information System (INIS)

    Stern, R.E.; Song, J.; Work, D.B.

    2017-01-01

    The two-terminal reliability problem in system reliability analysis is known to be computationally intractable for large infrastructure graphs. Monte Carlo techniques can estimate the probability of a disconnection between two points in a network by selecting a representative sample of network component failure realizations and determining the source-terminal connectivity of each realization. To reduce the runtime required for the Monte Carlo approximation, this article proposes an approximate framework in which the connectivity check of each sample is estimated using a machine-learning-based classifier. The framework is implemented using both a support vector machine (SVM) and a logistic regression based surrogate model. Numerical experiments are performed on the California gas distribution network using the epicenter and magnitude of the 1989 Loma Prieta earthquake as well as randomly generated earthquakes. It is shown that both the SVM and logistic regression surrogate models are able to predict network connectivity with accuracies of 99%, and are 1–2 orders of magnitude faster than using a Monte Carlo method with an exact connectivity check. - Highlights: • Surrogate models of network connectivity are developed using machine-learning algorithms. • The developed surrogate models can reduce the runtime required for Monte Carlo simulations. • Support vector machines and logistic regression are employed to develop the surrogate models. • A numerical example on the California gas distribution network demonstrates the proposed approach. • The developed models have accuracies of 99% and are 1–2 orders of magnitude faster than MCS.
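The baseline this record accelerates, Monte Carlo two-terminal reliability with an exact connectivity check, can be sketched on a toy graph. The 4-node ring and failure probability below are assumptions for illustration (not the California gas network), and the graph-traversal check here is exactly what the surrogate classifier in the record approximates:

```python
import random

def connected(n_nodes, edges, up, s, t):
    """Exact source-terminal connectivity check by traversal over the
    surviving edges; this is the step a trained classifier replaces."""
    adj = {i: [] for i in range(n_nodes)}
    for (u, v), alive in zip(edges, up):
        if alive:
            adj[u].append(v)
            adj[v].append(u)
    seen, stack = {s}, [s]
    while stack:
        node = stack.pop()
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return t in seen

def two_terminal_unreliability(n_nodes, edges, p_fail, s, t, n_samples=5000):
    """Crude Monte Carlo estimate of the s-t disconnection probability:
    sample edge-failure realizations and count the disconnected ones."""
    fails = 0
    for _ in range(n_samples):
        up = [random.random() >= p_fail for _ in edges]
        if not connected(n_nodes, edges, up, s, t):
            fails += 1
    return fails / n_samples

random.seed(7)
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]  # toy 4-node ring network
q = two_terminal_unreliability(4, ring, p_fail=0.1, s=0, t=2)
```

For this ring the exact answer is (1 - 0.9²)² = 0.0361, since both independent two-edge paths between the terminals must be broken; the Monte Carlo estimate should land close to that.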

  12. RTE - 2013 Reliability Report

    International Nuclear Information System (INIS)

    Denis, Anne-Marie

    2014-01-01

    RTE publishes a yearly reliability report based on a standard model to facilitate comparisons and highlight long-term trends. The 2013 report not only states the facts of the Significant System Events (ESS) but also underlines the main elements bearing on the reliability of the electrical power system. It highlights the various elements which contribute to present and future reliability and provides an overview of the interaction between the various stakeholders of the Electrical Power System on the scale of the European Interconnected Network. (author)

  13. High-Speed Shaft Bearing Loads Testing and Modeling in the NREL Gearbox Reliability Collaborative: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    McNiff, B.; Guo, Y.; Keller, J.; Sethuraman, L.

    2014-12-01

    Bearing failures in the high-speed output stage of the gearbox are plaguing the wind turbine industry. Accordingly, the National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) has performed an experimental and theoretical investigation of loads within these bearings. The purpose of this paper is to describe the instrumentation, calibrations, data post-processing and initial results from this testing and modeling effort. Measured HSS torque, bending, and bearing loads are related to model predictions. Of additional interest is examining whether the shaft measurements can be simply related to bearing load measurements, eliminating the need for invasive modifications of the bearing races for such instrumentation.

  14. Final Report: System Reliability Model for Solid-State Lighting (SSL) Luminaires

    Energy Technology Data Exchange (ETDEWEB)

    Davis, J. Lynn [RTI International, Research Triangle Park, NC (United States)

    2017-05-31

    The primary objective of this project was to develop and validate reliability models and accelerated stress testing (AST) methodologies for predicting the lifetime of integrated SSL luminaires. This study examined the likely failure modes for SSL luminaires, including abrupt failure, excessive lumen depreciation, unacceptable color shifts, and increased power consumption. Data on the relative distribution of these failure modes were acquired through extensive accelerated stress tests and combined with industry data and other sources of information on LED lighting. These data were compiled and utilized to build models of the aging behavior of key luminaire optical and electrical components.

  15. Human-centered modeling in human reliability analysis: some trends based on case studies

    International Nuclear Information System (INIS)

    Mosneron-Dupin, F.; Reer, B.; Heslinga, G.; Straeter, O.; Gerdes, V.; Saliou, G.; Ullwer, W.

    1997-01-01

As an informal working group of researchers from France, Germany and The Netherlands created in 1993, the EARTH association is investigating significant subjects in the field of human reliability analysis (HRA). On the one hand, our initial review of cases from nuclear operating experience showed that decision-based unrequired actions (DUA) contribute significantly to risk. On the other hand, our evaluation of current HRA methods showed that these methods do not cover such actions adequately; in particular, practice-oriented guidelines for their predictive identification are lacking. We assumed that a basic cause of these difficulties is that these methods use a limited representation of the stimulus-organism-response (SOR) paradigm. We proposed a human-centered model, which better highlights the active role of the operators and the importance of their culture, attitudes and goals. This orientation was encouraged by our review of current HRA research activities. We therefore decided to pursue progress by identifying cognitive tendencies in the context of operating and simulator experience. For this purpose, advanced approaches for retrospective event analysis were discussed, and some directions for improvement were proposed. By analyzing cases, various cognitive tendencies were identified, together with useful information about their context. Some of them match psychological findings already published in the literature; others are not covered adequately by the literature that we reviewed. Finally, this exploratory study shows that contextual, case-illustrated findings about cognitive tendencies provide useful help for the predictive identification of DUA in HRA. More research should be carried out to complement our findings and to elaborate more detailed and systematic guidelines for using them in HRA studies.

  16. Development of thermal hydraulic models for the reliable regulatory auditing code

    Energy Technology Data Exchange (ETDEWEB)

Chung, B. D.; Song, C. H.; Lee, Y. J.; Kwon, T. S.; Lee, S. W. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2004-02-15

The objective of this project is to develop thermal hydraulic models for use in improving the reliability of the regulatory auditing codes. The current year falls under the second step of the three-year project, and the main research was focused on the development of a downcomer boiling model. During the current year, the bubble stream model of the downcomer was developed and installed in the auditing code. A model sensitivity analysis was performed for the APR1400 LBLOCA scenario using the modified code. A preliminary calculation was performed for the experimental test facility using the FLUENT and MARS codes. The facility for the air bubble experiment was installed. The thermal hydraulic phenomena for the VHTR and the supercritical reactor were identified for future application and model development.

  17. Accelerated oxygen-induced retinopathy is a reliable model of ischemia-induced retinal neovascularization.

    Science.gov (United States)

    Villacampa, Pilar; Menger, Katja E; Abelleira, Laura; Ribeiro, Joana; Duran, Yanai; Smith, Alexander J; Ali, Robin R; Luhmann, Ulrich F; Bainbridge, James W B

    2017-01-01

    Retinal ischemia and pathological angiogenesis cause severe impairment of sight. Oxygen-induced retinopathy (OIR) in young mice is widely used as a model to investigate the underlying pathological mechanisms and develop therapeutic interventions. We compared directly the conventional OIR model (exposure to 75% O2 from postnatal day (P) 7 to P12) with an alternative, accelerated version (85% O2 from P8 to P11). We found that accelerated OIR induces similar pre-retinal neovascularization but greater retinal vascular regression that recovers more rapidly. The extent of retinal gliosis is similar but neuroretinal function, as measured by electroretinography, is better maintained in the accelerated model. We found no systemic or maternal morbidity in either model. Accelerated OIR offers a safe, reliable and more rapid alternative model in which pre-retinal neovascularization is similar but retinal vascular regression is greater.

  18. Incorporation of Markov reliability models for digital instrumentation and control systems into existing PRAs

    International Nuclear Information System (INIS)

    Bucci, P.; Mangan, L. A.; Kirschenbaum, J.; Mandelli, D.; Aldemir, T.; Arndt, S. A.

    2006-01-01

    Markov models have the ability to capture the statistical dependence between failure events that can arise in the presence of complex dynamic interactions between components of digital instrumentation and control systems. One obstacle to the use of such models in an existing probabilistic risk assessment (PRA) is that most of the currently available PRA software is based on the static event-tree/fault-tree methodology which often cannot represent such interactions. We present an approach to the integration of Markov reliability models into existing PRAs by describing the Markov model of a digital steam generator feedwater level control system, how dynamic event trees (DETs) can be generated from the model, and how the DETs can be incorporated into an existing PRA with the SAPHIRE software. (authors)
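The statistical dependence such Markov models capture can be illustrated with the simplest repairable case: a two-state continuous-time Markov chain whose point availability has a closed form. This is a minimal sketch, not the feedwater control system model of the paper, and the rates are hypothetical:

```python
import math

def two_state_availability(lam, mu, t):
    """Point availability of a repairable component modeled as a
    two-state continuous-time Markov chain (up -> failed at rate lam,
    failed -> up at rate mu), starting in the up state."""
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

# Hypothetical rates for illustration (per hour)
lam, mu = 1e-3, 0.1
print(two_state_availability(lam, mu, 0.0))   # ≈ 1.0 at t = 0
print(two_state_availability(lam, mu, 1e6))   # approaches mu/(lam+mu) ≈ 0.990
```

Larger models replace this closed form with a numerical solve of the Chapman-Kolmogorov equations, which is where the dynamic interactions between components enter.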

  19. Rating the raters in a mixed model: An approach to deciphering the rater reliability

    Science.gov (United States)

    Shang, Junfeng; Wang, Yougui

    2013-05-01

Rating the raters has attracted extensive attention in recent years. Ratings are quite complex in that subjective assessment and a number of criteria are involved in a rating system. Whenever human judgment is part of a rating, the inconsistency of ratings is a source of variance in the scores, and it is therefore natural to verify the trustworthiness of ratings. Accordingly, estimation of rater reliability is of great interest. To facilitate the evaluation of rater reliability in a rating system, we propose a mixed model in which the scores that a rater gives the ratees are described by fixed effects determined by the ability of the ratees and random effects produced by the disagreement of the raters. In such a mixed model, we derive the posterior distribution of the rater random effects for their prediction. To make a quantitative decision in revealing unreliable raters, the predictive influence function (PIF) serves as a criterion that compares the posterior distributions of the random effects between the full-data and rater-deleted data sets. The benchmark for this criterion is also discussed. The proposed methodology for deciphering rater reliability is investigated on multiple simulated data sets and two real data sets.

  20. The reliability of the Hendrich Fall Risk Model in a geriatric hospital.

    Science.gov (United States)

    Heinze, Cornelia; Halfens, Ruud; Dassen, Theo

    2008-12-01

Aims and objectives.  The purpose of this study was to test the interrater reliability of the Hendrich Fall Risk Model, an instrument to identify patients in a hospital setting with a high risk of falling. Background.  Falls are a serious problem in older patients. Valid and reliable fall risk assessment tools are required to identify high-risk patients and to take adequate preventive measures. Methods.  Seventy older patients were independently and simultaneously assessed by six pairs of raters made up of nursing staff members. Consensus estimates were calculated using simple percentage agreement, and consistency estimates using Spearman's rho and the intraclass correlation coefficient. Results.  Percentage agreement ranged from 0.70 to 0.92 between the six pairs of raters. Spearman's rho coefficients were between 0.54 and 0.80, and the intraclass correlation coefficients were between 0.46 and 0.92. Conclusions.  Whereas some pairs of raters obtained considerable interobserver agreement and internal consistency, the others did not. It is therefore concluded that the Hendrich Fall Risk Model is not a reliable instrument; the use of more unambiguously operationalized items is preferred. Relevance to clinical practice.  In practice, well-operationalized fall risk assessment tools are necessary, and observer agreement should always be investigated after introducing a standardized measurement tool. © 2008 The Authors. Journal compilation © 2008 Blackwell Publishing Ltd.
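The consensus and consistency statistics used in such studies are straightforward to compute. A minimal pure-Python sketch of percentage agreement and Spearman's rho (with tie-aware ranking) on hypothetical ordinal rating data:

```python
def percent_agreement(a, b):
    """Fraction of items on which two raters give identical scores."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def ranks(xs):
    """Average ranks (1-based), with tied values sharing the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(a, b):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    ra, rb = ranks(a), ranks(b)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra)
    vb = sum((y - mb) ** 2 for y in rb)
    return cov / (va * vb) ** 0.5

# Hypothetical scores from one pair of raters on seven patients
rater1 = [0, 1, 2, 2, 3, 1, 0]
rater2 = [0, 1, 2, 3, 3, 1, 1]
print(percent_agreement(rater1, rater2))  # ≈ 0.71 (5 of 7 identical)
print(spearman_rho(rater1, rater2))
```

The intraclass correlation coefficient additionally partitions variance between and within subjects and is usually obtained from an ANOVA table rather than a rank-based formula.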

  1. Reliability modeling of a hard real-time system using the path-space approach

    International Nuclear Information System (INIS)

    Kim, Hagbae

    2000-01-01

A hard real-time system, such as a fly-by-wire system, fails catastrophically (e.g. by losing stability) if its control inputs are not updated by its digital controller computer within a certain timing constraint called the hard deadline. To assess and validate such systems' reliabilities using a semi-Markov model that explicitly contains the deadline information, we propose a path-space approach for deriving upper and lower bounds on the probability of system failure. These bounds are derived using only simple parameters, and they are especially suitable for highly reliable systems which should recover quickly. Analytical bounds are derived for both exponential and Weibull failure distributions, which are encountered commonly and have proven effective in numerical examples, while considering three repair strategies: repair-as-good-as-new, repair-as-good-as-old, and repair-better-than-old.

  2. An integrated model for reliability estimation of digital nuclear protection system based on fault tree and software control flow methodologies

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Seong, Poong Hyun

    2000-01-01

In the nuclear industry, the difficulty of proving the reliabilities of digital systems prohibits their widespread use in various nuclear applications such as plant protection systems. Even though a few models exist for estimating the reliabilities of digital systems, we develop a new integrated model which is more realistic than the existing ones. We divide the process of estimating the reliability of a digital system into two phases, a high-level phase and a low-level phase, with the boundary between the two being the reliabilities of the subsystems. We apply the software control flow method to the low-level phase and fault tree analysis to the high-level phase. The application of the model to the Dynamic Safety System (DSS) shows that the estimated reliability of the system is quite reasonable and realistic.

  4. An enhanced reliability-oriented workforce planning model for process industry using combined fuzzy goal programming and differential evolution approach

    Science.gov (United States)

    Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.

    2018-03-01

This paper draws on the "human reliability" concept as a structure for gaining insight into maintenance workforce assessment in a process industry. Human reliability hinges on developing the reliability of humans to a threshold that guides the maintenance workforce to execute accurate decisions within the limits of resource and time allocations. This concept offers a worthwhile point of departure for three adjustments to an existing literature model, in terms of maintenance time, workforce performance and return on workforce investments. The presented structure breaks new ground in maintenance workforce theory and practice from a number of perspectives. First, we have implemented fuzzy goal programming (FGP) and differential evolution (DE) techniques for the solution of an optimisation problem in the maintenance of a process plant for the first time. The results obtained in this work showed better solution quality from the DE algorithm compared with the genetic algorithm and the particle swarm optimisation algorithm, demonstrating the superiority of the proposed procedure. Second, the analytical discourse, which was framed on stochastic theory and focuses on a specific application to a process plant in Nigeria, is a novelty. The work provides more insight into maintenance workforce planning during overhaul rework and overtime maintenance activities in manufacturing systems, and demonstrates the capacity to generate substantially helpful information for practice.

  5. Reliability of a Novel Model for Drug Release from 2D HPMC-Matrices

    Directory of Open Access Journals (Sweden)

    Rumiana Blagoeva

    2010-04-01

A novel model of drug release from 2D HPMC matrices is considered. A detailed mathematical description of matrix swelling and of the effect of the initial drug loading is introduced. A numerical approach to the solution of the posed nonlinear 2D problem is used, on the basis of finite element domain approximation and a time difference method. The reliability of the model is investigated in two steps: numerical evaluation of the water uptake parameters, and evaluation of the drug release parameters against available experimental data. The proposed numerical procedure for fitting the model is validated by performing different numerical examples of drug release in two cases (with and without taking into account the initial drug loading). The goodness of fit, evaluated by the coefficient of determination, is shown to be very good, with few exceptions. The obtained results show better model fitting when the effect of the initial drug loading is taken into account (especially for larger values).

  6. Reliable and efficient solution of genome-scale models of Metabolism and macromolecular Expression

    DEFF Research Database (Denmark)

    Ma, Ding; Yang, Laurence; Fleming, Ronan M. T.

    2017-01-01

Constraint-Based Reconstruction and Analysis (COBRA) is currently the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Data values also have greatly varying magnitudes. Standard double-precision solvers may return inaccurate solutions or report that no solution exists. Exact simplex solvers based on rational arithmetic require a near-optimal warm start to be practical on large problems (current ME models have 70,000 constraints and variables and will grow larger). We have developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves reliability and efficiency for ME models and other challenging problems.

  7. Using Evidence Credibility Decay Model for dependence assessment in human reliability analysis

    International Nuclear Information System (INIS)

    Guo, Xingfeng; Zhou, Yanhui; Qian, Jin; Deng, Yong

    2017-01-01

Highlights: • A new computational model is proposed for dependence assessment in HRA. • We combine three factors, "CT", "TR" and "SP", within Dempster–Shafer theory. • The BBA of "SP" is reconstructed with a discounting rate based on the ECDM. • Simulation experiments illustrate the efficiency of the proposed method. - Abstract: Dependence assessment among human errors plays an important role in human reliability analysis. When dependence exists between two sequential tasks in human reliability analysis, the failure probability of the following task is higher if the preceding task fails than if it succeeds. Typically, three major factors are considered: "Closeness in Time" (CT), "Task Relatedness" (TR) and "Similarity of Performers" (SP). Assuming TR is unchanged, both SP and CT influence the degree of dependence; in this paper, SP is discounted over time as a result of combining the two factors. A new computational model is proposed, based on the Dempster–Shafer Evidence Theory (DSET) and the Evidence Credibility Decay Model (ECDM), to assess the dependence between tasks in human reliability analysis. First, the influencing factors among human tasks are identified and the basic belief assignments (BBAs) of each factor are constructed based on expert evaluation. Then, the BBA of SP is discounted as a result of combining the two factors and reconstructed using the ECDM, and the factors are integrated into a fused BBA. Finally, the dependence level is calculated from the fused BBA. Experimental results demonstrate that the proposed model not only quantitatively describes how the input factors influence the dependence level, but also shows exactly how the dependence level changes in different situations of the input factors.
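The fusion step described above rests on Dempster's rule of combination. A minimal sketch of the rule over a two-hypothesis frame; the frame and the masses are hypothetical illustrations, not the paper's data, and the ECDM discounting step is omitted:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic belief assignments.
    Each BBA maps frozenset focal elements to masses summing to 1."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:  # non-empty intersection: mass supports it
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:      # empty intersection: conflicting mass
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict; BBAs cannot be combined")
    k = 1.0 - conflict  # normalization constant
    return {s: m / k for s, m in combined.items()}

# Illustrative frame {dependent, independent}; masses are hypothetical.
D, I = frozenset({"dep"}), frozenset({"indep"})
theta = D | I  # the whole frame (ignorance)
m_sp = {D: 0.6, theta: 0.4}          # evidence from "Similarity of Performers"
m_ct = {D: 0.5, I: 0.2, theta: 0.3}  # evidence from "Closeness in Time"
fused = dempster_combine(m_sp, m_ct)
print({tuple(sorted(s)): round(v, 3) for s, v in fused.items()})
```

The fused mass on "dep" exceeds either input mass, reflecting how two partially agreeing pieces of evidence reinforce each other after the conflicting mass is normalized out.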

  8. Demands placed on waste package performance testing and modeling by some general results on reliability analysis

    International Nuclear Information System (INIS)

    Chesnut, D.A.

    1991-09-01

Waste packages for a US nuclear waste repository are required to provide reasonable assurance of maintaining substantially complete containment of radionuclides for 300 to 1000 years after closure. The waiting time to failure for complex failure processes affecting engineered or manufactured systems is often found to be an exponentially distributed random variable. Assuming that this simple distribution can be used to describe the behavior of a hypothetical single-barrier waste package, calculations presented in this paper show that the mean time to failure (the only parameter needed to completely specify an exponential distribution) would have to be more than 10^7 years in order to provide reasonable assurance of meeting this requirement. With two independent barriers, each would need a mean time to failure of only 10^5 years to provide the same reliability. Other examples illustrate how multiple barriers can provide a strategy for not only achieving but also demonstrating regulatory compliance.
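The barrier arithmetic above can be checked directly under the stated exponential assumption. A minimal sketch comparing one barrier with MTTF 10^7 years against two independent barriers with MTTF 10^5 years each:

```python
import math

def fail_prob(t, mttf):
    """P(failure by time t) for an exponentially distributed lifetime."""
    return 1.0 - math.exp(-t / mttf)

t = 1000.0  # years, the upper end of the containment requirement
single = fail_prob(t, 1e7)       # one barrier, MTTF 10^7 years
double = fail_prob(t, 1e5) ** 2  # two independent barriers, MTTF 10^5 each
print(f"single barrier: {single:.2e}")  # ≈ 1.0e-04
print(f"two barriers:   {double:.2e}")  # ≈ 9.9e-05
```

At 1000 years the two-barrier configuration is slightly more reliable than the single barrier with a hundredfold longer MTTF, which is the abstract's point: redundancy substitutes for extreme single-component longevity.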

  9. Reliability engineering

    International Nuclear Information System (INIS)

    Lee, Chi Woo; Kim, Sun Jin; Lee, Seung Woo; Jeong, Sang Yeong

    1993-08-01

This book begins by asking what reliability is, covering the origin of reliability problems and the definition and uses of reliability. It also deals with probability and the calculation of reliability, the reliability function and failure rate, probability distributions in reliability, estimation of MTBF, stochastic processes, downtime, maintainability and availability, breakdown maintenance and preventive maintenance, reliability design, reliability prediction and statistics, reliability testing, reliability data, and the design and management of reliability.
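For the constant-failure-rate case treated in such texts, the core quantities relate through simple closed forms. A minimal sketch with illustrative numbers:

```python
import math

MTBF = 5000.0               # mean time between failures, hours (illustrative)
FAILURE_RATE = 1.0 / MTBF   # constant failure rate lambda, per hour
MTTR = 8.0                  # mean time to repair, hours (illustrative)

def reliability(t, lam=FAILURE_RATE):
    """R(t) = exp(-lam * t): survival probability under a constant failure rate."""
    return math.exp(-lam * t)

# Steady-state availability = uptime fraction over a failure/repair cycle
availability = MTBF / (MTBF + MTTR)
print(MTBF)                    # 5000.0
print(reliability(MTBF))       # ≈ 0.368, i.e. 1/e at t = MTBF
print(round(availability, 4))  # 0.9984
```

The 1/e point is a useful rule of thumb: a constant-failure-rate item has only about a 37% chance of surviving to its MTBF.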

  10. Algorithms for Bayesian network modeling and reliability assessment of infrastructure systems

    International Nuclear Information System (INIS)

    Tien, Iris; Der Kiureghian, Armen

    2016-01-01

    Novel algorithms are developed to enable the modeling of large, complex infrastructure systems as Bayesian networks (BNs). These include a compression algorithm that significantly reduces the memory storage required to construct the BN model, and an updating algorithm that performs inference on compressed matrices. These algorithms address one of the major obstacles to widespread use of BNs for system reliability assessment, namely the exponentially increasing amount of information that needs to be stored as the number of components in the system increases. The proposed compression and inference algorithms are described and applied to example systems to investigate their performance compared to that of existing algorithms. Orders of magnitude savings in memory storage requirement are demonstrated using the new algorithms, enabling BN modeling and reliability analysis of larger infrastructure systems. - Highlights: • Novel algorithms developed for Bayesian network modeling of infrastructure systems. • Algorithm presented to compress information in conditional probability tables. • Updating algorithm presented to perform inference on compressed matrices. • Algorithms applied to example systems to investigate their performance. • Orders of magnitude savings in memory storage requirement demonstrated.
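The storage obstacle, and the benefit of compressing repetitive conditional probability tables, can be illustrated with a simple run-length scheme. This is an illustrative sketch in the spirit of the approach, not the authors' actual algorithm:

```python
def rle_compress(column):
    """Run-length encode one column of a conditional probability table."""
    runs = []
    for v in column:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def rle_lookup(runs, index):
    """Recover column[index] without decompressing the whole column."""
    for value, length in runs:
        if index < length:
            return value
        index -= length
    raise IndexError(index)

# A series-system node fails only when all n parents fail: its CPT column
# over the 2**n parent states is one long run of 0s followed by a single 1.
n = 20
column_size = 2 ** n  # 1,048,576 entries if stored naively
runs = rle_compress([0] * (column_size - 1) + [1])
print(len(runs))                          # 2 runs replace ~10^6 entries
print(rle_lookup(runs, column_size - 1))  # 1
```

Inference on the compressed form (as in the paper's updating algorithm) then works run-by-run instead of entry-by-entry, which is where the memory and time savings come from.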

  11. A reliability-based maintenance technicians' workloads optimisation model with stochastic consideration

    Science.gov (United States)

    Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.

    2016-06-01

The growing interest in research on technicians' workloads is probably associated with the recent surge in competition, prompted by unprecedented technological development that triggers changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this worldwide intense competition has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model which considers technicians' reliability as a complement to factory information obtained from technicians' productivity and earned values, using a multi-objective modelling approach. Since technicians are expected to carry out routine and stochastic maintenance work, we treat these workloads as constraints. The influence of training, fatigue and experiential knowledge of technicians on workload management is considered. These workloads are combined with maintenance policy in optimising reliability, productivity and earned values using the goal programming approach. Practical datasets were used to study the applicability of the proposed model in practice. It was observed that our model was able to generate information that practising maintenance engineers can apply in making more informed decisions on technicians' management.

  12. Reliability of a new biokinetic model of zirconium in internal dosimetry: part II, parameter sensitivity analysis.

    Science.gov (United States)

    Li, Wei Bo; Greiter, Matthias; Oeh, Uwe; Hoeschen, Christoph

    2011-12-01

The reliability of biokinetic models is essential for the assessment of internal doses and for radiation risk analysis for the public and occupational workers exposed to radionuclides. In the present study, a method for assessing the reliability of biokinetic models by means of uncertainty and sensitivity analysis was developed. In the first part of the paper, the parameter uncertainty was analyzed for two biokinetic models of zirconium (Zr): one reported by the International Commission on Radiological Protection (ICRP), and one developed at the Helmholtz Zentrum München-German Research Center for Environmental Health (HMGU). In the second part of the paper, the parameter uncertainties and distributions of the Zr biokinetic models evaluated in Part I are used as the model inputs for identifying the most influential parameters in the models. Furthermore, the model parameter with the most influence on the integral of the radioactivity of Zr over 50 y in source organs after ingestion was identified. The results of the systemic HMGU Zr model showed that over the first 10 d, the transfer rates between blood and other soft tissues have the largest influence on the content of Zr in the blood and on the daily urinary excretion; after day 1,000, however, the transfer rate from bone to blood becomes dominant. For retention in bone, the transfer rate from blood to bone surfaces has the most influence out to the endpoint of the simulation, while the transfer rate from blood to the upper large intestine contributes substantially at later times, i.e., after day 300. The alimentary tract absorption factor (fA) mostly influences the integral of radioactivity of Zr in most source organs after ingestion.

  13. Reliability Engineering

    CERN Document Server

    Lazzaroni, Massimo

    2012-01-01

This book gives a practical guide for designers and users in the Information and Communication Technology context. In the first section, definitions of the fundamental terms according to the international standards are given. Some theoretical concepts and reliability models are then presented in Chapters 2 and 3, the aim being to evaluate the performance of components and systems and reliability growth. Chapter 4, by introducing laboratory tests, highlights the reliability concept from the experimental point of view. In the ICT context, the failure rate for a given system can be

  14. Quantified Risk Ranking Model for Condition-Based Risk and Reliability Centered Maintenance

    Science.gov (United States)

    Chattopadhyaya, Pradip Kumar; Basu, Sushil Kumar; Majumdar, Manik Chandra

    2017-06-01

In the recent past, the risk and reliability centered maintenance (RRCM) framework was introduced with a shift in methodological focus from reliability and probabilities (expected values) to reliability, uncertainty and risk. In this paper the authors explain a novel methodology for quantifying risk and ranking the critical items to prioritize maintenance actions on the basis of condition-based risk and reliability centered maintenance (CBRRCM). The critical items are identified through criticality analysis of the RPN values of the items of a system, and the maintenance significant precipitating factors (MSPF) of the items are evaluated. The criticality of risk is assessed using three risk coefficients. The likelihood risk coefficient treats the probability as a fuzzy number. The abstract risk coefficient deduces risk influenced by uncertainty and sensitivity, besides other factors. The third risk coefficient, called the hazardous risk coefficient, covers anticipated hazards which may occur in the future; here the risk is deduced from criteria of consequences on safety, environment, maintenance and economic risks, with corresponding costs for the consequences. The characteristic values of all three risk coefficients are obtained with a particular test. With a few more tests on the system, the values may change significantly within the controlling range of each coefficient; hence random number simulation is resorted to in order to obtain one distinctive value for each coefficient. The risk coefficients are statistically added to obtain the final risk coefficient of each critical item, and the final rankings of the critical items are then estimated. The prioritization in ranking of critical items using the developed mathematical model for risk assessment should be useful in limiting financial losses and in timing maintenance actions.

  15. Reliability Analysis of a Composite Wind Turbine Blade Section Using the Model Correction Factor Method: Numerical Study and Validation

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimirov; Friis-Hansen, Peter; Berggreen, Christian

    2013-01-01

    by the composite failure criteria. Each failure mode has been considered in a separate component reliability analysis, followed by a system analysis which gives the total probability of failure of the structure. The Model Correction Factor method used in connection with FORM (First-Order Reliability Method) proved...

  16. The constant failure rate model for fault tree evaluation as a tool for unit protection reliability assessment

    International Nuclear Information System (INIS)

    Vichev, S.; Bogdanov, D.

    2000-01-01

The purpose of this paper is to introduce the fault tree analysis method as a tool for unit protection reliability estimation. The constant failure rate model is applied for reliability assessment, and especially for availability assessment. For that purpose, an example of a unit primary-equipment structure and a fault tree for a simplified unit protection system are presented. (author)
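Under the constant failure rate model, gate probabilities follow directly from component unavailabilities. A minimal sketch with a hypothetical simplified protection structure; the rates and layout are illustrative, not the paper's example:

```python
import math

def unavail(lam, t):
    """Unavailability of a non-repaired component with constant failure rate lam."""
    return 1.0 - math.exp(-lam * t)

def or_gate(*p):
    """Top event occurs if any input fails (independent inputs)."""
    q = 1.0
    for x in p:
        q *= 1.0 - x
    return 1.0 - q

def and_gate(*p):
    """Top event occurs only if all inputs fail (independent inputs)."""
    q = 1.0
    for x in p:
        q *= x
    return q

# Hypothetical simplified protection scheme: loss of either measuring
# channel, or simultaneous loss of both redundant trip relays.
t = 8760.0  # one year, in hours
ch1 = unavail(1e-5, t)
ch2 = unavail(1e-5, t)
relays = and_gate(unavail(5e-5, t), unavail(5e-5, t))
top = or_gate(ch1, ch2, relays)
print(f"top-event probability over one year: {top:.4f}")
```

Real fault tree codes additionally handle shared (repeated) basic events, for which these independence formulas no longer apply and minimal cut sets are needed.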

  17. Bayesian and Classical Estimation of Stress-Strength Reliability for Inverse Weibull Lifetime Models

    Directory of Open Access Journals (Sweden)

    Qixuan Bi

    2017-06-01

In this paper, we consider the problem of estimating stress-strength reliability for inverse Weibull lifetime models having the same shape parameter but different scale parameters. We obtain the maximum likelihood estimator and its asymptotic distribution. Since the classical estimator does not have an explicit form, we propose an approximate maximum likelihood estimator. The asymptotic confidence interval and two bootstrap intervals are obtained. Using the Gibbs sampling technique, a Bayesian estimator and the corresponding credible interval are obtained. The Metropolis-Hastings algorithm is used to generate random variates. Monte Carlo simulations are conducted to compare the proposed methods. Analysis of a real dataset is performed.
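Under the common-shape assumption, R = P(Y < X) has a closed form that can sanity-check any simulation-based estimator. A minimal Monte Carlo sketch; the parameter values are illustrative, and this is not the paper's estimator:

```python
import math
import random

def rinvweibull(beta, sigma, rng):
    """Sample from the inverse Weibull CDF F(x) = exp(-(x/sigma)**(-beta))
    by inversion: x = sigma * (-ln U)**(-1/beta)."""
    u = rng.random()
    return sigma * (-math.log(u)) ** (-1.0 / beta)

def stress_strength_mc(beta, sigma_x, sigma_y, n=100_000, seed=1):
    """Monte Carlo estimate of R = P(Y < X), X strength, Y stress."""
    rng = random.Random(seed)
    hits = sum(rinvweibull(beta, sigma_y, rng) < rinvweibull(beta, sigma_x, rng)
               for _ in range(n))
    return hits / n

beta, sigma_x, sigma_y = 2.0, 3.0, 2.0
# With a common shape beta, R = sigma_x**beta / (sigma_x**beta + sigma_y**beta).
exact = sigma_x**beta / (sigma_x**beta + sigma_y**beta)  # 9/13 ≈ 0.692
print(exact, stress_strength_mc(beta, sigma_x, sigma_y))
```

The closed form follows by writing R as an integral of F_Y against dF_X and substituting u = F_X(x), which reduces it to the integral of u to a constant power.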

  18. Validated Loads Prediction Models for Offshore Wind Turbines for Enhanced Component Reliability

    DEFF Research Database (Denmark)

    Koukoura, Christina

To improve the reliability of offshore wind turbines, accurate prediction of their response is required; therefore, validation of models against site measurements is imperative. In the present thesis a 3.6 MW pitch-regulated, variable-speed offshore wind turbine on a monopile foundation is built ... are used for the modification of the sub-structure/foundation design for possible material savings. First, the background of offshore wind engineering, including wind-wave conditions, support structure, blade loading and wind turbine dynamics, is presented. Second, a detailed description of the site...

  19. A new lifetime estimation model for a quicker LED reliability prediction

    Science.gov (United States)

    Hamon, B. H.; Mendizabal, L.; Feuillet, G.; Gasse, A.; Bataillou, B.

    2014-09-01

LED reliability and lifetime prediction is a key point for solid-state lighting adoption. For this purpose, one hundred and fifty LEDs were aged for a reliability analysis. The LEDs were grouped into nine current-temperature stress conditions, with stress driving currents between 350 mA and 1 A and ambient temperatures between 85°C and 120°C. Using integrating-sphere and I(V) measurements, a cross-study of the evolution of the electrical and optical characteristics was carried out. The results show two main failure mechanisms with respect to lumen maintenance. The first is the typically observed lumen depreciation; the second is a much quicker depreciation related to an increase of the leakage and non-radiative currents. Models of the typical lumen depreciation and of the leakage resistance depreciation have been built using the electrical and optical measurements taken during the aging tests. The combination of these models enables a new method for quicker LED lifetime prediction, and the two models have been used for LED lifetime predictions.
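A simple exponential lumen-maintenance model of the kind commonly fitted in such studies extrapolates to an L70 lifetime in closed form. This is an illustrative sketch; the depreciation rate is hypothetical, not fitted to the paper's data:

```python
import math

def l70_lifetime(alpha, B=1.0):
    """Hours until lumen maintenance L(t) = B * exp(-alpha * t) drops to 70%
    of the initial output (the common L70 end-of-life criterion)."""
    return math.log(B / 0.7) / alpha

# Hypothetical depreciation rate from an accelerated aging fit, per hour
alpha = 5e-6
print(round(l70_lifetime(alpha)))  # ≈ 71335 hours
```

A quicker second failure mechanism, such as the leakage-driven depreciation found in the paper, would shorten this figure; that is precisely why combining both models changes the prediction.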

  20. Systems reliability/structural reliability

    International Nuclear Information System (INIS)

    Green, A.E.

    1980-01-01

    The question of reliability technology using quantified techniques is considered for systems and structures. Systems reliability analysis has progressed to a viable and proven methodology, whereas this has yet to be fully achieved for large-scale structures. Structural loading variations over the lifetime of the plant are considered more difficult to analyse than those for systems, even though a relatively crude model may be a necessary starting point. Various reliability characteristics and environmental conditions which enter this problem are considered. The rare-event situation is briefly mentioned, together with aspects of proof testing and of normal and upset loading conditions. (orig.)

  1. Reliably Modeling the Mechanical Stability of Rigid and Flexible Metal-Organic Frameworks.

    Science.gov (United States)

    Rogge, Sven M J; Waroquier, Michel; Van Speybroeck, Veronique

    2018-01-16

    Over the past two decades, metal-organic frameworks (MOFs) have matured from interesting academic peculiarities toward a continuously expanding class of hybrid, nanoporous materials tuned for targeted technological applications such as gas storage and heterogeneous catalysis. These often crystalline materials, composed of inorganic moieties interconnected by organic ligands, can be endowed with desired structural and chemical features by judiciously functionalizing or substituting these building blocks. As a result of this reticular synthesis, MOF research is situated at the intriguing intersection between chemistry and physics, and the building-block approach could pave the way toward the construction of an almost infinite number of possible crystalline structures, provided that they exhibit stability under the desired operational conditions. However, this enormous potential is largely untapped to date, as MOFs have not yet achieved a major breakthrough in technological applications. One of the remaining challenges for this scale-up is the densification of MOF powders, which is generally achieved by subjecting the material to a pressurization step. However, application of an external pressure may substantially alter the chemical and physical properties of the material. Reliable theoretical guidance that can presynthetically identify the most stable materials could help overcome this technological challenge. In this Account, we describe recent research progress on the computational characterization of the mechanical stability of MOFs. So far, three complementary approaches have been proposed, focusing on different aspects of mechanical stability: (i) the Born stability criteria, (ii) the anisotropy in mechanical moduli such as the Young and shear moduli, and (iii) the pressure-versus-volume equations of state. As these three methods are grounded in distinct computational approaches, it is expected that their accuracy and efficiency will vary. To date

  2. [Reliability of three dimensional resin model by rapid prototyping manufacturing and digital modeling].

    Science.gov (United States)

    Zeng, Fei-huang; Xu, Yuan-zhi; Fang, Li; Tang, Xiao-shan

    2012-02-01

    To describe a new technique for fabricating a 3D resin model by 3D reconstruction and rapid prototyping, and to analyze the precision of this method. An optical grating scanner was used to acquire the data of a silastic cavity block, and a digital dental cast was reconstructed from the data with Geomagic Studio image-processing software. The final 3D reconstruction was saved in STL format. The 3D resin model was fabricated by fused deposition modeling and compared with the digital model and the gypsum model. The data of the three groups were statistically analyzed using the SPSS 16.0 software package. No significant difference was found among the gypsum model, digital dental cast and 3D resin model (P>0.05). Rapid prototyping manufacturing and digital modeling would be helpful for dental information acquisition, treatment design and appliance manufacturing, and can improve communication between patients and doctors.

  3. Predicting climate-induced range shifts: model differences and model reliability.

    Science.gov (United States)

    Joshua J. Lawler; Denis White; Ronald P. Neilson; Andrew R. Blaustein

    2006-01-01

    Predicted changes in the global climate are likely to cause large shifts in the geographic ranges of many plant and animal species. To date, predictions of future range shifts have relied on a variety of modeling approaches with different levels of model accuracy. Using a common data set, we investigated the potential implications of alternative modeling approaches for...

  4. Reliability and validity of Champion's Health Belief Model Scale for breast cancer screening among Malaysian women.

    Science.gov (United States)

    Parsa, P; Kandiah, M; Mohd Nasir, M T; Hejar, A R; Nor Afiah, M Z

    2008-11-01

    Breast cancer is the leading cause of cancer deaths in Malaysian women, and the use of breast self-examination (BSE), clinical breast examination (CBE) and mammography remains low in Malaysia. Therefore, there is a need to develop a valid and reliable tool to measure the beliefs that influence breast cancer screening practices. The Champion's Health Belief Model Scale (CHBMS) is a valid and reliable tool for measuring beliefs about breast cancer and screening methods in Western culture. The purpose of this study was to translate the CHBMS for use in the Malaysian context and to validate the scale among Malaysian women. A random sample of 425 women teachers was taken from 24 secondary schools in Selangor state, Malaysia. The CHBMS was translated into the Malay language, validated by an expert panel, back-translated, and pretested. Analyses included descriptive statistics of all the study variables, reliability estimates, and construct validity using factor analysis. The mean age of the respondents was 37.2 (standard deviation 7.1) years. Factor analysis yielded ten factors for BSE with eigenvalues greater than 1 (four factors more than the original): confidence 1 (ability to differentiate normal and abnormal changes in the breasts), barriers to BSE, susceptibility to breast cancer, benefits of BSE, health motivation 1 (general health), seriousness 1 (fear of breast cancer), confidence 2 (ability to detect size of lumps), seriousness 2 (fear of long-term effects of breast cancer), health motivation 2 (preventive health practice), and confidence 3 (ability to perform BSE correctly). For the CBE and mammography scales, seven factors each were identified. Factors for the CBE scale include susceptibility, health motivation 1, benefits of CBE, seriousness 1, barriers to CBE, seriousness 2 and health motivation 2. For mammography the scale includes benefits of mammography, susceptibility, health motivation 1, seriousness 1, barriers to mammography, seriousness 2 and health
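The "reliability estimates" reported for scales like the CHBMS are typically internal-consistency coefficients such as Cronbach's alpha, which can be computed directly from item scores. The item data below are invented purely for illustration, not taken from the study.

```python
def cronbach_alpha(items):
    # items: one score list per scale item, same respondents in same order
    k, n = len(items), len(items[0])

    def var(xs):
        # Sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1.0 - sum(var(it) for it in items) / var(totals))

# Tiny invented example: 3 items, 5 respondents (5-point Likert scores)
items = [[4, 3, 5, 2, 4],
         [5, 3, 4, 2, 5],
         [4, 2, 5, 3, 4]]
print(round(cronbach_alpha(items), 2))   # 0.89
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency for a subscale.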

  5. Identifying the effects of parameter uncertainty on the reliability of riverbank stability modelling

    Science.gov (United States)

    Samadi, A.; Amiri-Tokaldany, E.; Darby, S. E.

    2009-05-01

    Bank retreat is a key process in fluvial dynamics affecting a wide range of physical, ecological and socioeconomic issues in the fluvial environment. To predict the undesirable effects of bank retreat and to inform effective measures to prevent it, a wide range of bank stability models has been presented in the literature. These models typically express bank stability by defining a factor of safety as the ratio of the driving and resisting forces acting on the incipient failure block. These forces are affected by a range of controlling factors that include the bank profile (bank height and angle), the geotechnical properties of the bank materials, and the hydrological status of the riverbanks. In this paper we evaluate the extent to which uncertainties in the parameterization of these controlling factors feed through to influence the reliability of the resulting bank stability estimate. This is achieved by employing a simple model of riverbank stability with respect to planar failure (the most common type of bank stability model) in a series of sensitivity tests and Monte Carlo analyses to identify, for each model parameter, the range of values that induce significant changes in the simulated factor of safety. These identified parameter value ranges are compared to empirically derived parameter uncertainties to determine whether they are likely to confound the reliability of the resulting bank stability calculations. Our results show that parameter uncertainties are typically high enough that the likelihood of generating unreliable predictions is very high (greater than about 80% for predictions requiring a precision of better than ±15%). Because parameter uncertainties derive primarily from the natural variability of the parameters, rather than from measurement errors, much more careful attention should be paid to field sampling strategies, such that the parameter uncertainties and consequent prediction unreliabilities can be quantified more
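The kind of Monte Carlo analysis described can be sketched with a generic planar (Culmann-type) failure geometry: sample the geotechnical parameters from their uncertainty distributions, compute the factor of safety for each draw, and count how often it drops below 1. The geometry, parameter means and standard deviations below are invented for illustration, not the paper's values.

```python
import math, random

random.seed(1)

def factor_of_safety(c, phi_deg, gamma, H=4.0, beta_deg=60.0):
    # Planar (Culmann-type) failure block: FoS = resisting / driving forces
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    L = H / math.sin(beta)                      # length of the failure plane
    W = 0.5 * gamma * H * H / math.tan(beta)    # block weight per unit width
    resisting = c * L + W * math.cos(beta) * math.tan(phi)
    driving = W * math.sin(beta)
    return resisting / driving

# Monte Carlo propagation of parameter uncertainty (invented distributions):
# cohesion c [kPa], friction angle phi [deg], unit weight gamma [kN/m^3]
samples = [factor_of_safety(c=random.gauss(15.0, 3.0),
                            phi_deg=random.gauss(25.0, 3.0),
                            gamma=random.gauss(18.0, 0.5))
           for _ in range(20000)]
p_fail = sum(fs < 1.0 for fs in samples) / len(samples)
print(round(p_fail, 3))   # probability that FoS < 1 given the uncertainties
```

Even though the mean-parameter bank is stable here (FoS about 1.23), the spread of the inputs produces a non-trivial failure probability, which is the paper's central point about parameter uncertainty.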

  6. Reliability of some ageing nuclear power plant systems: a simple stochastic model

    International Nuclear Information System (INIS)

    Suarez Antola, R.

    2007-01-01

    The random number of failure-related events in certain repairable ageing systems, such as certain nuclear power plant components, during a given time interval may often be modelled by a compound Poisson distribution. One of these is the Polya-Aeppli distribution. The derivation of a stationary Polya-Aeppli distribution as a limiting distribution of rare events for stationary Bernoulli trials with first-order Markov dependence is considered. But if the parameters of the Polya-Aeppli distribution are suitable functions of time, we could expect the resulting distribution to account for the distribution of failure-related events in an ageing system. Assuming that a critical number of damages produces an emergent failure, the above-mentioned results can be applied in a reliability analysis. It is natural to ask under what conditions a Polya-Aeppli distribution could be a limiting distribution for non-homogeneous Bernoulli trials with first-order Markov dependence. In this paper this problem is analyzed, and possible applications of the obtained results to ageing or deteriorating nuclear power plant components are considered. The two traditional ways of modelling repairable systems in reliability theory, the "as bad as old" concept, which assumes that the replaced component is in exactly the same condition as the aged component before failure, and the "as good as new" concept, which assumes that the new component is in the same condition as the replaced component when it was new, are briefly discussed in relation to the findings of the present work
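A Polya-Aeppli variable can be simulated as a compound Poisson sum of shifted-geometric cluster sizes, which also illustrates the "critical number of damages" failure criterion mentioned above. The rate, geometric parameter and critical damage count below are invented for illustration.

```python
import math, random

random.seed(7)

def poisson(lam, rng):
    # Knuth's multiplication method; adequate for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

def polya_aeppli(lam, theta, rng):
    # Compound Poisson: N ~ Poisson(lam) damage clusters, each cluster of
    # shifted-geometric size with P(k) = (1 - theta) * theta**(k - 1), k >= 1
    total = 0
    for _ in range(poisson(lam, rng)):
        k = 1
        while rng.random() < theta:
            k += 1
        total += k
    return total

lam, theta, critical = 2.0, 0.3, 8          # invented parameters
draws = [polya_aeppli(lam, theta, random) for _ in range(50000)]
mean = sum(draws) / len(draws)
p_emergent = sum(d >= critical for d in draws) / len(draws)
print(round(mean, 2))        # theory: lam / (1 - theta) = 2.86
print(round(p_emergent, 3))  # P(accumulated damage reaches the critical count)
```

Making `lam` and `theta` functions of time, as the abstract suggests, would turn this into a model of an ageing component whose emergent-failure probability grows with operating history.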

  7. Physics-Based Stress Corrosion Cracking Component Reliability Model cast in an R7-Compatible Cumulative Damage Framework

    Energy Technology Data Exchange (ETDEWEB)

    Unwin, Stephen D.; Lowry, Peter P.; Layton, Robert F.; Toloczko, Mychailo B.; Johnson, Kenneth I.; Sanborn, Scott E.

    2011-07-01

    This is a working report drafted under the Risk-Informed Safety Margin Characterization pathway of the Light Water Reactor Sustainability Program, describing statistical models of passive component reliabilities.

  8. Characterization of System Level Single Event Upset (SEU) Responses using SEU Data, Classical Reliability Models, and Space Environment Data

    Science.gov (United States)

    Berg, Melanie; Label, Kenneth; Campola, Michael; Xapsos, Michael

    2017-01-01

    We propose a method for the application of single event upset (SEU) data towards the analysis of complex systems using transformed reliability models (from the time domain to the particle fluence domain) and space environment data.
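The core transformation, moving a classical exponential reliability model from the time domain to the particle-fluence domain, can be sketched in a few lines. The cross-section and mission-fluence values below are hypothetical, chosen only to make the arithmetic visible.

```python
import math

def seu_reliability(fluence, cross_section):
    # Exponential reliability model transformed from the time domain
    # (lambda * t) to the particle-fluence domain (sigma * Phi):
    # R(Phi) = exp(-sigma * Phi)
    return math.exp(-cross_section * fluence)

# Hypothetical device: upset cross-section 1e-10 cm^2,
# mission fluence 1e9 particles/cm^2
print(round(seu_reliability(1e9, 1e-10), 3))   # exp(-0.1) = 0.905
```

The fluence plays the role time plays in classical reliability models, so standard results (mean fluence to failure as 1/sigma, series/parallel combinations, and so on) carry over directly.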

  9. Halo Models of Large Scale Structure and Reliability of Cosmological N-Body Simulations

    Directory of Open Access Journals (Sweden)

    José Gaite

    2013-05-01

    Full Text Available Halo models of the large scale structure of the Universe are critically examined, focusing on the definition of halos as smooth distributions of cold dark matter. This definition is essentially based on the results of cosmological N-body simulations. By a careful analysis of the standard assumptions of halo models and N-body simulations, and by taking into account previous studies of the self-similarity of the cosmic web structure, we conclude that N-body cosmological simulations are not fully reliable in the range of scales where halos appear. Therefore, to have a consistent definition of halos it is necessary either to define them as entities of arbitrary size with a grainy rather than smooth structure or to define their size in terms of small-scale baryonic physics.

  10. Random effects model for the reliability management of modules of a fighter aircraft

    Energy Technology Data Exchange (ETDEWEB)

    Sohn, So Young [Department of Computer Science and Industrial Systems Engineering, Yonsei University, Shinchondong 134, Seoul 120-749 (Korea, Republic of)]. E-mail: sohns@yonsei.ac.kr; Yoon, Kyung Bok [Department of Computer Science and Industrial Systems Engineering, Yonsei University, Shinchondong 134, Seoul 120-749 (Korea, Republic of)]. E-mail: ykb@yonsei.ac.kr; Chang, In Sang [Department of Computer Science and Industrial Systems Engineering, Yonsei University, Shinchondong 134, Seoul 120-749 (Korea, Republic of)]. E-mail: isjang@yonsei.ac.kr

    2006-04-15

    The operational availability of fighter aircraft plays an important role in national defense. Low operational availability of fighter aircraft can cause many problems, and the ROKAF (Republic of Korea Air Force) needs proper strategies to improve the current practice of reliability management by accurately forecasting both MTBF (mean time between failures) and MTTR (mean time to repair). In this paper, we develop a random effects model to forecast both the MTBF and the MTTR of installed modules of fighter aircraft based on their characteristics and operational conditions. An advantage of using such a random effects model is its ability to accommodate not only the individual characteristics of each module and its operational conditions but also the uncertainty caused by random error that cannot be explained by them. Our study is expected to help the ROKAF improve the operational availability of fighter aircraft and establish effective logistics management.
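The paper's random effects model would be fitted with proper mixed-model machinery; as a rough illustration of the underlying idea, each module's observed MTBF is shrunk toward the fleet mean in proportion to how much information the module provides. The empirical-Bayes sketch below, with all names, variances and MTBF figures invented, is only meant to show that mechanism.

```python
def shrink_mtbf(module_mean, n_obs, fleet_mean, var_between, var_within):
    # Empirical-Bayes (random-intercept) estimate: the observed module MTBF
    # is pulled toward the fleet mean, more strongly for modules with few
    # failure records relative to the between-module variance.
    w = var_between / (var_between + var_within / n_obs)
    return w * module_mean + (1 - w) * fleet_mean

# Hypothetical modules: (observed MTBF in hours, number of failure records)
fleet_mean = 500.0
modules = [(420.0, 3), (640.0, 40), (510.0, 12)]
estimates = [shrink_mtbf(m, n, fleet_mean,
                         var_between=90.0 ** 2, var_within=200.0 ** 2)
             for m, n in modules]
print([round(e, 1) for e in estimates])   # [469.8, 624.6, 507.1]
```

The module with only three failure records is pulled strongly toward the fleet mean, while the well-observed module keeps almost all of its own estimate, which is exactly the uncertainty-accommodating behavior the abstract credits to the random effects formulation.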

  11. The Development of Marine Accidents Human Reliability Assessment Approach: HEART Methodology and MOP Model

    Directory of Open Access Journals (Sweden)

    Ludfi Pratiwi Bowo

    2017-06-01

    Full Text Available Humans are one of the important factors in the assessment of accidents, particularly marine accidents. Hence, studies are conducted to assess the contribution of human factors to accidents. Two generations of Human Reliability Assessment (HRA) methodologies have been developed, classified as first or second generation by their differing viewpoints on problem-solving. Accident analysis can be performed with three techniques: sequential techniques, epidemiological techniques and systemic techniques, with marine accidents falling under the epidemiological technique. This study compares the Human Error Assessment and Reduction Technique (HEART) methodology and the 4M Overturned Pyramid (MOP) model, both applied to the assessment of marine accidents. The MOP model can effectively describe the relationships among the other factors which affect accidents, whereas the HEART methodology focuses only on human factors.

  12. Alternative approaches to reliability modeling of a multiple engineered barrier system

    International Nuclear Information System (INIS)

    Ananda, M.M.A.; Singh, A.K.

    1994-01-01

    The lifetime of the engineered barrier system used for the containment of high-level radioactive waste will significantly impact the total performance of a geological repository facility. Currently two types of designs are under consideration for an engineered barrier system: a single engineered barrier system and a multiple engineered barrier system. A multiple engineered barrier system consists of several metal barriers and the waste form (cladding). Some recent work shows that a significant improvement in performance can be achieved by utilizing multiple engineered barrier systems. Considering sequential failures of each barrier, we model the reliability of the multiple engineered barrier system. Weibull and exponential lifetime distributions are used throughout the analysis. Furthermore, the number of failed engineered barrier systems in a repository at a given time is modeled using a Poisson approximation
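On one reading of "sequential failures", each barrier is only challenged after the previous one has failed, so the containment life is the sum of the individual barrier lifetimes. That structure is easy to explore with Monte Carlo sampling of Weibull lifetimes; the barrier scale/shape parameters below are invented, not values from the paper.

```python
import math, random

random.seed(3)

def weibull_sample(scale, shape, rng):
    # Inverse-CDF sampling: T = scale * (-ln U)**(1/shape), U in (0, 1]
    return scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)

# Sequential failure: barrier i+1 is only exposed after barrier i fails,
# so system (containment) life = sum of the barrier lifetimes.
barriers = [(300.0, 1.5), (500.0, 2.0), (200.0, 1.0)]   # (scale [yr], shape)
lives = [sum(weibull_sample(sc, sh, random) for sc, sh in barriers)
         for _ in range(40000)]
mc_mean = sum(lives) / len(lives)

# Check against the analytic mean: E[T] = scale * Gamma(1 + 1/shape) per barrier
analytic_mean = sum(sc * math.gamma(1.0 + 1.0 / sh) for sc, sh in barriers)
print(round(mc_mean, 1), round(analytic_mean, 1))   # analytic mean = 913.9
```

The shape parameter 1.0 reduces to the exponential case used in the paper, so both distribution families mentioned in the abstract fit in the same sampler.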

  13. A Simplified Network Model for Travel Time Reliability Analysis in a Road Network

    Directory of Open Access Journals (Sweden)

    Kenetsu Uchida

    2017-01-01

    Full Text Available This paper proposes a simplified network model for analyzing travel time reliability in a road network. A risk-averse driver is assumed in the simplified model; the risk-averse driver chooses a path by taking into account both the path travel time variance and the mean path travel time. The uncertainty addressed in this model is that of traffic flows (i.e., stochastic demand flows). In the simplified network model, the path travel time variance is not calculated from the travel time covariances between every pair of links in the network; instead, it is calculated from the travel time covariances between adjacent links only. Numerical experiments are carried out to illustrate the applicability and validity of the proposed model. The experiments introduce the path choice behavior of a risk-neutral driver and of several types of risk-averse drivers. It is shown that the mean link flows calculated by introducing the risk-neutral driver differ as a whole from those calculated by introducing the risk-averse drivers. It is also shown that the mean link flows calculated by the simplified network model are almost the same as those calculated using the exact path travel time variance.
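The simplification described above can be written down directly: the path variance keeps only the per-link variances and the covariances of adjacent link pairs, and the risk-averse path cost adds a multiple of the resulting standard deviation to the mean travel time. The 4-link path and all numbers below are hypothetical.

```python
import math

def path_variance_adjacent(link_vars, adj_covs):
    # Simplified variance: sum of link variances plus twice the covariance of
    # each *adjacent* link pair; non-adjacent pairs are treated as
    # uncorrelated, avoiding the full link-covariance matrix.
    return sum(link_vars) + 2.0 * sum(adj_covs)

def risk_averse_cost(mean_time, variance, risk_weight=1.0):
    # Risk-averse path cost: mean travel time plus a multiple of its s.d.
    return mean_time + risk_weight * math.sqrt(variance)

# Hypothetical 4-link path
link_vars = [4.0, 9.0, 1.0, 16.0]   # per-link travel time variances
adj_covs = [1.5, -0.5, 2.0]         # Cov(1,2), Cov(2,3), Cov(3,4)
var_path = path_variance_adjacent(link_vars, adj_covs)
print(var_path)                          # 30 + 2*3 = 36.0
print(risk_averse_cost(25.0, var_path))  # 25 + 6.0 = 31.0
```

A risk-neutral driver corresponds to `risk_weight=0`, which is how the paper's experiments contrast the two behavior types on the same network.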

  14. Two-terminal reliability of a mobile ad hoc network under the asymptotic spatial distribution of the random waypoint model

    International Nuclear Information System (INIS)

    Chen, Binchao; Phillips, Aaron; Matis, Timothy I.

    2012-01-01

    The random waypoint (RWP) mobility model is frequently used in describing the movement pattern of mobile users in a mobile ad hoc network (MANET). As the asymptotic spatial distribution of nodes under a RWP model exhibits central tendency, the two-terminal reliability of the MANET is investigated as a function of the source node location. In particular, analytical expressions for one and two hop connectivities are developed as well as an efficient simulation methodology for two-terminal reliability. A study is then performed to assess the effect of nodal density and network topology on network reliability.
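The simulation side of this can be sketched as a Monte Carlo two-terminal connectivity estimate over a geometric graph, with relay positions drawn from a centrally biased distribution standing in for the RWP stationary density. Everything below (the density stand-in, radio range, node count, terminal positions) is an invented illustration, not the paper's methodology.

```python
import math, random
from collections import deque

random.seed(11)

def rwp_point(rng):
    # Crude stand-in for the RWP stationary density: averaging two uniform
    # draws per coordinate biases relay positions toward the center.
    return ((rng.random() + rng.random()) / 2.0,
            (rng.random() + rng.random()) / 2.0)

def connected(src, dst, relays, radius):
    # Two-terminal check: BFS over the geometric graph whose links join
    # node pairs within radio range.
    nodes = [src, dst] + relays
    seen, queue = {0}, deque([0])
    while queue:
        i = queue.popleft()
        if i == 1:
            return True
        for j in range(len(nodes)):
            if j not in seen and math.dist(nodes[i], nodes[j]) <= radius:
                seen.add(j)
                queue.append(j)
    return False

def reliability(src, dst=(0.9, 0.9), trials=2000, n_relays=30, radius=0.2):
    hits = sum(connected(src, dst,
                         [rwp_point(random) for _ in range(n_relays)], radius)
               for _ in range(trials))
    return hits / trials

# Two-terminal reliability as a function of source location in the density
corner, center = reliability((0.05, 0.05)), reliability((0.5, 0.5))
print(round(corner, 2), round(center, 2))
```

Because the relay density thins out toward the edges, a source placed in a corner sees markedly lower two-terminal reliability than one at the center, which is the central-tendency effect the abstract investigates.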

  15. Reliability of Degree-Day Models to Predict the Development Time of Plutella xylostella (L.) under Field Conditions.

    Science.gov (United States)

    Marchioro, C A; Krechemer, F S; de Moraes, C P; Foerster, L A

    2015-12-01

    The diamondback moth, Plutella xylostella (L.), is a cosmopolitan pest of brassicaceous crops occurring in regions with highly distinct climate conditions. Several studies have investigated the relationship between temperature and the development rate of P. xylostella, providing degree-day models for populations from different geographical regions. However, no data are available to date demonstrating the suitability of such models for making reliable projections of the development time of this species under field conditions. In the present study, 19 models available in the literature were tested for their ability to accurately predict the development time of two cohorts of P. xylostella under field conditions. Eleven of the 19 models accurately predicted the development time for the first cohort, but only seven did so for the second cohort; five models correctly predicted the development time for both cohorts. Our data demonstrate that the accuracy of the models available for P. xylostella varies widely, so they should be used with caution for pest management purposes.
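A degree-day model of the kind tested here accumulates daily thermal units above a lower development threshold until a species-specific thermal constant is reached. The threshold and thermal constant below are illustrative placeholders, not values from the paper.

```python
def degree_days(daily_mean_temps, t_lower, t_upper=None):
    # Daily thermal units above the lower development threshold, optionally
    # capped at an upper threshold.
    total = 0.0
    for t in daily_mean_temps:
        if t_upper is not None:
            t = min(t, t_upper)
        total += max(0.0, t - t_lower)
    return total

def predicted_development_day(daily_mean_temps, t_lower, thermal_constant):
    # Development completes on the first day the accumulated degree-days
    # reach the thermal constant K; returns None if K is never reached.
    total = 0.0
    for day, t in enumerate(daily_mean_temps, start=1):
        total += max(0.0, t - t_lower)
        if total >= thermal_constant:
            return day
    return None

# Illustrative parameters (not from the paper): lower threshold 7 degC,
# thermal constant 280 degree-days, constant 15 degC field season
temps = [15.0] * 60
print(predicted_development_day(temps, 7.0, 280.0))   # 280 / 8 = day 35
```

The field test in the paper amounts to running this accumulation on observed temperatures for each published (threshold, constant) pair and checking the predicted day against the observed emergence of each cohort.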

  16. A mid-layer model for human reliability analysis: understanding the cognitive causes of human failure events

    International Nuclear Information System (INIS)

    Shen, Song-Hua; Chang, James Y.H.; Boring, Ronald L.; Whaley, April M.; Lois, Erasmia; Langfitt Hendrickson, Stacey M.; Oxstrand, Johanna H.; Forester, John Alan; Kelly, Dana L.; Mosleh, Ali

    2010-01-01

    The Office of Nuclear Regulatory Research (RES) at the US Nuclear Regulatory Commission (USNRC) is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency or guidance for the use of multiple methods. As part of this effort an attempt to develop a comprehensive HRA qualitative approach is being pursued. This paper presents a draft of the method's middle layer, a part of the qualitative analysis phase that links failure mechanisms to performance shaping factors. Starting with a Crew Response Tree (CRT) that has identified human failure events, analysts identify potential failure mechanisms using the mid-layer model. The mid-layer model presented in this paper traces the identification of the failure mechanisms using the Information-Diagnosis/Decision-Action (IDA) model and cognitive models from the psychological literature. Each failure mechanism is grouped according to a phase of IDA. Under each phase of IDA, the cognitive models help identify the relevant performance shaping factors for the failure mechanism. The use of IDA and cognitive models can be traced through fault trees, which provide a detailed complement to the CRT.

  17. A Mid-Layer Model for Human Reliability Analysis: Understanding the Cognitive Causes of Human Failure Events

    Energy Technology Data Exchange (ETDEWEB)

    Stacey M. L. Hendrickson; April M. Whaley; Ronald L. Boring; James Y. H. Chang; Song-Hua Shen; Ali Mosleh; Johanna H. Oxstrand; John A. Forester; Dana L. Kelly; Erasmia L. Lois

    2010-06-01

    The Office of Nuclear Regulatory Research (RES) is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency or guidance for the use of multiple methods. As part of this effort an attempt to develop a comprehensive HRA qualitative approach is being pursued. This paper presents a draft of the method’s middle layer, a part of the qualitative analysis phase that links failure mechanisms to performance shaping factors. Starting with a Crew Response Tree (CRT) that has identified human failure events, analysts identify potential failure mechanisms using the mid-layer model. The mid-layer model presented in this paper traces the identification of the failure mechanisms using the Information-Diagnosis/Decision-Action (IDA) model and cognitive models from the psychological literature. Each failure mechanism is grouped according to a phase of IDA. Under each phase of IDA, the cognitive models help identify the relevant performance shaping factors for the failure mechanism. The use of IDA and cognitive models can be traced through fault trees, which provide a detailed complement to the CRT.

  18. Microgrid Design Analysis Using Technology Management Optimization and the Performance Reliability Model

    Energy Technology Data Exchange (ETDEWEB)

    Stamp, Jason E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jensen, Richard P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Munoz-Ramos, Karina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-01-01

    Microgrids are a focus of localized energy production that supports resiliency, security, local control, and increased access to renewable resources (among other potential benefits). The Smart Power Infrastructure Demonstration for Energy Reliability and Security (SPIDERS) Joint Capability Technology Demonstration (JCTD) program between the Department of Defense (DOD), Department of Energy (DOE), and Department of Homeland Security (DHS) resulted in the preliminary design and deployment of three microgrids at military installations. This paper is focused on the analysis process and supporting software used to determine optimal designs for energy surety microgrids (ESMs) in the SPIDERS project. There are two key pieces of software: an existing software application developed by Sandia National Laboratories (SNL) called Technology Management Optimization (TMO) and a new simulation developed for SPIDERS called the performance reliability model (PRM). TMO is a decision support tool that performs multi-objective optimization over a mixed discrete/continuous search space for which the performance measures are unrestricted in form. The PRM is able to statistically quantify the performance and reliability of a microgrid operating in islanded mode (disconnected from any utility power source). Together, these two software applications were used as part of the ESM process to generate the preliminary designs presented by the SNL-led DOE team to the DOD. Acknowledgements: Sandia National Laboratories and the SPIDERS technical team would like to acknowledge the following for help in the project: * Mike Hightower, who has been the key driving force for Energy Surety Microgrids * Juan Torres and Abbas Akhil, who developed the concept of microgrids for military installations * Merrill Smith, U.S. Department of Energy SPIDERS Program Manager * Ross Roley and Rich Trundy from U.S. Pacific Command * Bill Waugaman and Bill Beary from U.S. Northern Command * Tarek Abdallah, Melanie

  19. Validation and selection of ODE based systems biology models: how to arrive at more reliable decisions.

    Science.gov (United States)

    Hasdemir, Dicle; Hoefsloot, Huub C J; Smilde, Age K

    2015-07-08

    Most ordinary differential equation (ODE) based modeling studies in systems biology involve a hold-out validation step for model validation. In this framework a pre-determined part of the data is used as validation data and is therefore not used for estimating the parameters of the model. The model is assumed to be validated if the model predictions on the validation dataset show good agreement with the data. Model selection between alternative model structures can also be performed in the same setting, based on the predictive power of the model structures on the validation dataset. However, the drawbacks associated with this approach are usually underestimated. We have carried out simulations using a recently published High Osmolarity Glycerol (HOG) pathway model from S. cerevisiae to demonstrate these drawbacks. We have shown that it is very important how the data are partitioned and which part of the data is used for validation purposes. The hold-out validation strategy leads to biased conclusions, since it can lead to different validation and selection decisions when different partitioning schemes are used. Furthermore, finding sensible partitioning schemes that would lead to reliable decisions depends heavily on the biology and on the unknown model parameters, which turns the problem into a paradox. This brings the need for alternative validation approaches that offer flexible partitioning of the data. For this purpose, we have introduced a stratified random cross-validation (SRCV) approach that successfully overcomes these limitations. SRCV leads to more stable decisions for both validation and selection, which are not biased by underlying biological phenomena. Furthermore, it is less dependent on the specific noise realization in the data. Therefore, it proves to be a promising alternative to the standard hold-out validation strategy.
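The SRCV partitioning idea can be sketched as follows. This is one interpretation of the approach (group consecutive time points into strata and scatter each stratum across the folds), not the authors' code, so treat the details as an assumption.

```python
import random

def stratified_random_folds(n_timepoints, k, rng):
    # SRCV-style partitioning sketch: consecutive time points form strata of
    # size k; within each stratum the points are randomly scattered over the
    # k folds, so every fold samples the whole time course instead of one
    # contiguous hold-out block.
    folds = [[] for _ in range(k)]
    for start in range(0, n_timepoints, k):
        stratum = list(range(start, min(start + k, n_timepoints)))
        rng.shuffle(stratum)
        for fold, idx in zip(folds, stratum):
            fold.append(idx)
    return folds

folds = stratified_random_folds(12, 3, random.Random(0))
print(folds)   # 3 folds x 4 time points, one point from each stratum per fold
```

Because every fold covers early, middle and late time points, no validation set is systematically blind to one phase of the dynamics, which is the bias the paper attributes to contiguous hold-out blocks.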

  20. The EZ diffusion model provides a powerful test of simple empirical effects.

    Science.gov (United States)

    van Ravenzwaaij, Don; Donkin, Chris; Vandekerckhove, Joachim

    2017-04-01

    Over the last four decades, sequential accumulation models for choice response times have spread through cognitive psychology like wildfire. The most popular style of accumulator model is the diffusion model (Ratcliff, Psychological Review, 85, 59-108, 1978), which has been shown to account for data from a wide range of paradigms, including perceptual discrimination, letter identification, lexical decision, recognition memory, and signal detection. Since its inception, the model has become increasingly complex in order to account for subtle but reliable data patterns, and this additional complexity renders the diffusion model a tool only for experts. In response, Wagenmakers et al. (Psychonomic Bulletin & Review, 14, 3-22, 2007) proposed that researchers could use a more basic version of the diffusion model, the EZ diffusion. Here, we simulate experimental effects on data generated from the full diffusion model and compare the power of the full diffusion model and EZ diffusion to detect those effects. We show that the EZ diffusion model, by virtue of its relative simplicity, will sometimes be better able to detect experimental effects than the data-generating full diffusion model.
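Part of the EZ diffusion's appeal is that its three parameters come from closed-form equations rather than numerical fitting. The sketch below follows the equations in Wagenmakers et al. (2007) as reconstructed from memory, so verify against the paper before relying on it; the summary statistics fed in are their style of worked example.

```python
import math

def ez_diffusion(prop_correct, rt_variance, mean_rt, s=0.1):
    # Closed-form EZ estimates of drift rate v, boundary separation a, and
    # non-decision time Ter from accuracy, RT variance, and mean RT
    # (s is the conventional scaling parameter).
    L = math.log(prop_correct / (1.0 - prop_correct))    # logit of accuracy
    x = L * (L * prop_correct ** 2 - L * prop_correct
             + prop_correct - 0.5) / rt_variance
    v = math.copysign(1.0, prop_correct - 0.5) * s * x ** 0.25
    a = s ** 2 * L / v
    y = -v * a / s ** 2
    mdt = (a / (2.0 * v)) * (1.0 - math.exp(y)) / (1.0 + math.exp(y))
    return v, a, mean_rt - mdt                           # Ter = MRT - MDT

v, a, ter = ez_diffusion(0.802, 0.112, 0.723)
print(round(v, 3), round(a, 3), round(ter, 3))

# Round trip: the accuracy implied by (v, a) must reproduce the input accuracy
implied_pc = 1.0 / (1.0 + math.exp(-a * v / 0.1 ** 2))
print(round(implied_pc, 3))   # 0.802
```

The round-trip check works because `a` is defined from the logit `L`, so the implied accuracy recovers the input exactly; it is a cheap sanity test for any reimplementation.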

  1. Science-Based Simulation Model of Human Performance for Human Reliability Analysis

    International Nuclear Information System (INIS)

    Kelly, Dana L.; Boring, Ronald L.; Mosleh, Ali; Smidts, Carol

    2011-01-01

    Human reliability analysis (HRA), a component of an integrated probabilistic risk assessment (PRA), is the means by which the human contribution to risk is assessed, both qualitatively and quantitatively. However, among the literally dozens of HRA methods that have been developed, most cannot fully model and quantify the types of errors that occurred at Three Mile Island. Furthermore, all of the methods lack a solid empirical basis, relying heavily on expert judgment or empirical results derived in non-reactor domains. Finally, all of the methods are essentially static, and are thus unable to capture the dynamics of an accident in progress. The objective of this work is to begin exploring a dynamic simulation approach to HRA, one whose models have a basis in psychological theories of human performance, and whose quantitative estimates have an empirical basis. This paper highlights a plan to formalize collaboration among the Idaho National Laboratory (INL), the University of Maryland, and The Ohio State University (OSU) to continue development of a simulation model initially formulated at the University of Maryland. Initial work will focus on enhancing the underlying human performance models with the most recent psychological research, and on planning follow-on studies to establish an empirical basis for the model, based on simulator experiments to be carried out at the INL and at the OSU.

  2. A consistent modelling methodology for secondary settling tanks: a reliable numerical method.

    Science.gov (United States)

    Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena

    2013-01-01

    The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position modelling hindered settling and bulk flows, a singular source term describing the feed mechanism, a degenerating term accounting for sediment compressibility, and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation whereas calibration and validation are not pursued.

  3. Development of corrosion models to ensure reliable performance of nuclear power plants

    International Nuclear Information System (INIS)

    Kritzky, V.G.; Stjazhkin, P.S.

    1993-01-01

    The safety and reliability of the coolant circuits in nuclear power plants depend largely on corrosion and corrosion-product transfer processes. Various empirical models have been developed that are applicable to particular sets of operational conditions. In our laboratory a corrosion model has been worked out which is based on the thermodynamic properties of the compounds participating in the corrosion process and on the assumption that the corrosion process is controlled by the solubilities of the corrosion products forming the surface oxide layer. The validity of the model has been verified against retrospective experimental data obtained for a series of structural materials such as carbon and stainless steels and Cu, Al and Zr alloys. When hydriding is taken into account, the model satisfactorily describes the stress corrosion cracking process in water-salt environments.

  4. Energy supplies in the Federal Republic of Germany. The reliability of supplies and strategies to provide for it. Die Energieversorgung der Bundesrepublik. Sicherungsbedarf und Sicherungsstrategien

    Energy Technology Data Exchange (ETDEWEB)

    Matthies, K

    1985-06-01

    The IEA crisis management system has so far not been required to demonstrate its full capacity and power. In spite of a number of crises in the Middle East, oil has always been sufficiently available in the Western world. Even with the Gulf at war there have been abundant oil supplies and, for several years now, once again slackening petroleum prices on the world market. This situation has inevitably shaped the differing opinions on the reliability of supplies. While the West German petroleum industry, suffocating under accumulating idle refinery stocks, protests against excessive provision-making by means of restrictive policies, the mining industry still meets with political support for a policy of stockpiling for the sake of reliable energy supplies. In view of the inflationary use of the concepts of reliability and security of supply in political discussion, the publication abstracted here tries to throw light on their actual meaning, on the development of the supply situation in the Federal Republic of Germany since 1973, and on the applicability and costs of possible precautionary strategies that would take effect if energy imports were suspended.

  5. Event and fault tree model for reliability analysis of the greek research reactor

    International Nuclear Information System (INIS)

    Albuquerque, Tob R.; Guimaraes, Antonio C.F.; Moreira, Maria de Lourdes

    2013-01-01

    Fault trees and event trees are widely used in industry to model and evaluate the reliability of safety systems. Detailed analyses of nuclear installations require the combination of these two techniques. This work uses fault tree (FT) and event tree (ET) methods to perform a Probabilistic Safety Assessment (PSA) of research reactors. According to the IAEA (International Atomic Energy Agency), a PSA is divided into Level 1, Level 2 and Level 3. At Level 1, safety systems act to prevent the accident; at Level 2, the accident has occurred and the aim is to minimize its consequences, a stage known as accident management; at Level 3, the consequences are determined. This paper focuses on Level 1 studies and seeks, through the knowledge acquired, to consolidate methodologies for future reliability studies. The Greek Research Reactor, GRR-1, was used as a case example. A LOCA (Loss of Coolant Accident) was chosen as the initiating event, and from it the possible accident sequences that could lead to core damage were developed using an event tree. Furthermore, for each of the affected systems in these sequences, a fault tree was constructed and the probability of its top event evaluated. The studies were conducted using the commercial computational tool SAPHIRE. The results thus obtained, concerning the performance or failure of the analyzed systems, were considered satisfactory. This work is directed to the Greek Research Reactor due to data availability. (author)
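    The quantification step of such a Level 1 analysis reduces, for independent basic events, to combining basic-event probabilities through AND/OR gates up to the top event. A minimal sketch with a hypothetical fault tree (redundant pumps plus a single power supply; all probabilities are invented for illustration, not GRR-1 data):

```python
def and_gate(*p):
    # AND gate, independent basic events: P(all fail) = product of p_i
    out = 1.0
    for x in p:
        out *= x
    return out

def or_gate(*p):
    # OR gate: P(at least one fails) = 1 - product of (1 - p_i)
    out = 1.0
    for x in p:
        out *= (1.0 - x)
    return 1.0 - out

# hypothetical top event "loss of cooling": redundant pumps A and B
# must both fail (AND), or the shared power supply fails (OR)
p_pump_a, p_pump_b, p_power = 1e-2, 1e-2, 1e-3
p_top = or_gate(and_gate(p_pump_a, p_pump_b), p_power)
```

Tools such as SAPHIRE additionally handle minimal cut sets and dependent events; the closed-form gate algebra above is only the independent-event special case.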

  6. Reliability of pulse oximetry during cardiopulmonary resuscitation in a piglet model of neonatal cardiac arrest.

    Science.gov (United States)

    Hassan, Mohammad Ahmad; Mendler, Marc; Maurer, Miriam; Waitz, Markus; Huang, Li; Hummler, Helmut D

    2015-01-01

    Pulse oximetry is widely used in intensive care and emergency conditions to monitor arterial oxygenation and to guide oxygen therapy. To study the reliability of pulse oximetry in comparison with CO-oximetry in newborn piglets during cardiopulmonary resuscitation (CPR). In a prospective cohort study of 30 healthy newborn piglets, cardiac arrest was induced, and thereafter each piglet received CPR for 20 min. Arterial oxygen saturation was monitored continuously by pulse oximetry (SpO2). Arterial blood was analyzed for functional oxygenation (SaO2) every 2 min. SpO2 was compared with coinciding SaO2 values, and bias was considered present whenever the difference (SpO2 - SaO2) was beyond ±5%. Bias values were low at the baseline measurements (mean: 2.5 ± 4.6%), with higher precision and accuracy than the values across the rest of the experiment. Two minutes after cardiac arrest, there was a marked decrease in precision and accuracy as well as an increase in bias up to 13 ± 34%, reaching a maximum of 45.6 ± 28.3% after 10 min over a mean SaO2 range of 29-58%. Pulse oximetry showed increased bias and decreased accuracy and precision during CPR in a model of neonatal cardiac arrest. We recommend further studies to clarify the exact mechanisms of these false readings in order to improve the reliability of pulse oximetry during the marked desaturation and hypoperfusion found during CPR. © 2014 S. Karger AG, Basel.
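    The agreement statistics used in studies of this kind can be computed from paired SpO2/SaO2 readings: bias as the mean of the differences, precision as their standard deviation, and accuracy as their root-mean-square. A minimal sketch (the sample values and the exact definitions are illustrative, not taken from the paper):

```python
import statistics

def oximetry_agreement(spo2, sao2, threshold=5.0):
    # paired differences SpO2 - SaO2, in percentage points
    d = [s - a for s, a in zip(spo2, sao2)]
    bias = statistics.mean(d)                           # systematic error
    precision = statistics.stdev(d)                     # spread of differences
    accuracy = (sum(x * x for x in d) / len(d)) ** 0.5  # root-mean-square error
    outside = sum(1 for x in d if abs(x) > threshold)   # readings beyond ±5%
    return bias, precision, accuracy, outside
```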

  7. Reliable four-point flexion test and model for die-to-wafer direct bonding

    Energy Technology Data Exchange (ETDEWEB)

    Tabata, T., E-mail: toshiyuki.tabata@cea.fr; Sanchez, L.; Fournel, F.; Moriceau, H. [Univ. Grenoble Alpes, F-38000 Grenoble, France and CEA, LETI, MINATEC Campus, F-38054 Grenoble (France)

    2015-07-07

    For many years, wafer-to-wafer (W2W) direct bonding has been extensively developed, particularly in terms of bonding energy measurement and bonding mechanism comprehension. Nowadays, die-to-wafer (D2W) direct bonding has gained significant attention, for instance in photonics and micro-electromechanics, which presupposes controlled and reliable fabrication processes. Whatever the bonded materials may be, it is not obvious whether bonded D2W structures have the same bonding strength as bonded W2W ones, because of possible edge effects of the dies. For that reason, there is a strong need for a bonding energy measurement technique suitable for D2W structures. In this paper, both D2W- and W2W-type standard SiO{sub 2}-to-SiO{sub 2} direct bonding samples are fabricated from the same full-wafer bonding. Modifications of the four-point flexion test (4PT) technique and its application to measuring D2W direct bonding energies are reported. A comparison between the modified 4PT and double-cantilever beam techniques is then drawn, also considering possible impacts of the measurement conditions, such as water stress corrosion at the debonding interface and friction error at the loading contact points. Finally, the reliability of the modified technique and of a new model established for measuring D2W direct bonding energies is demonstrated.

  8. Event and fault tree model for reliability analysis of the greek research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Albuquerque, Tob R.; Guimaraes, Antonio C.F.; Moreira, Maria de Lourdes, E-mail: atalbuquerque@ien.gov.br, E-mail: btony@ien.gov.br, E-mail: malu@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2013-07-01

    Fault trees and event trees are widely used in industry to model and evaluate the reliability of safety systems. Detailed analyses of nuclear installations require the combination of these two techniques. This work uses fault tree (FT) and event tree (ET) methods to perform a Probabilistic Safety Assessment (PSA) of research reactors. According to the IAEA (International Atomic Energy Agency), a PSA is divided into Level 1, Level 2 and Level 3. At Level 1, safety systems act to prevent the accident; at Level 2, the accident has occurred and the aim is to minimize its consequences, a stage known as accident management; at Level 3, the consequences are determined. This paper focuses on Level 1 studies and seeks, through the knowledge acquired, to consolidate methodologies for future reliability studies. The Greek Research Reactor, GRR-1, was used as a case example. A LOCA (Loss of Coolant Accident) was chosen as the initiating event, and from it the possible accident sequences that could lead to core damage were developed using an event tree. Furthermore, for each of the affected systems in these sequences, a fault tree was constructed and the probability of its top event evaluated. The studies were conducted using the commercial computational tool SAPHIRE. The results thus obtained, concerning the performance or failure of the analyzed systems, were considered satisfactory. This work is directed to the Greek Research Reactor due to data availability. (author)

  9. Improving Metrological Reliability of Information-Measuring Systems Using Mathematical Modeling of Their Metrological Characteristics

    Science.gov (United States)

    Kurnosov, R. Yu; Chernyshova, T. I.; Chernyshov, V. N.

    2018-05-01

    The algorithms for improving the metrological reliability of analogue blocks of measuring channels and information-measuring systems are developed. The proposed algorithms ensure the optimum values of their metrological reliability indices for a given analogue circuit block solution.

  10. Pipeline integrity model-a formative approach towards reliability and life assessment

    International Nuclear Information System (INIS)

    Sayed, A.M.; Jaffery, M.A.

    2005-01-01

    Pipe forms an integral part of the transmission medium in the oil and gas industry. This holds true for both the upstream and downstream segments of this global energy business. With the aging of this asset base, its operational aspects have received immense consideration from operators and regulators. Moreover, the information era and the enhancement of global trade have lifted barriers and forged a path toward better utilization of resources. This has made optimized solutions a priority for business and technical managers worldwide. There is a paradigm shift from the mere development of 'smart materials' to 'low life-cycle-cost materials'. The force inducing this change is a rational one: the recovery of development costs is no longer a problem in a global community; rather, it is the pay-off time which matters most to the end users of materials. This means that decision makers are not evaluating just the price offered but are keen to judge the entire life cycle cost of a product. The integrity of pipe is affected by factors such as corrosion, fatigue-crack growth, stress-corrosion cracking, and mechanical damage. Extensive research in the area of reliability and life assessment has been carried out. A number of models concerned with the reliability of pipes have been developed and are being used by pipeline operators worldwide. Yet it must be emphasised that there is no substitute for sound engineering judgment and allowance for factors of safety. The ability of a laid pipe network to transport the intended fluid under pre-defined conditions for the entire envisaged project life is referred to as the reliability of the system. Reliability is built into the product through extensive benchmarking against industry-standard codes. The construction of pipes for oil and gas service is regulated through the American Petroleum Institute's Specification for Line Pipe. Subsequently, specific programs have been

  11. Probabilistic physics-of-failure models for component reliabilities using Monte Carlo simulation and Weibull analysis: a parametric study

    International Nuclear Information System (INIS)

    Hall, P.L.; Strutt, J.E.

    2003-01-01

    In reliability engineering, component failures are generally classified in one of three ways: (1) early life failures; (2) failures having random onset times; and (3) late life or 'wear out' failures. When the time-distribution of failures of a population of components is analysed in terms of a Weibull distribution, these failure types may be associated with shape parameters β having values β < 1, β ≈ 1 and β > 1, respectively. Early life failures are frequently attributed to poor design (e.g. poor materials selection) or problems associated with manufacturing or assembly processes. We describe a methodology for the implementation of physics-of-failure models of component lifetimes in the presence of parameter and model uncertainties. This treats uncertain parameters as random variables described by some appropriate statistical distribution, which may be sampled using Monte Carlo methods. The number of simulations required depends upon the desired accuracy of the predicted lifetime. Provided that the number of sampled variables is relatively small, an accuracy of 1-2% can be obtained using typically 1000 simulations. The resulting collection of times-to-failure is then sorted into ascending order and fitted to a Weibull distribution to obtain a shape factor β and a characteristic lifetime η. Examples are given of the results obtained using three different models: (1) the Eyring-Peck (EP) model for corrosion of printed circuit boards; (2) a power-law corrosion growth (PCG) model which represents the progressive deterioration of oil and gas pipelines; and (3) a random shock-loading model of mechanical failure. It is shown that for any specific model the values of the Weibull shape parameters obtained may be strongly dependent on the degree of uncertainty of the underlying input parameters. Both the EP and PCG models can yield a wide range of values of β, from β>1, characteristic of wear-out behaviour, to β<1, characteristic of early-life failure, depending on the degree of
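    The methodology above (sample uncertain parameters, compute a time to failure for each draw, sort, then fit a Weibull distribution) can be sketched as follows. Median-rank regression is used here as one common fitting choice, and the toy corrosion-growth model and its parameters are invented for illustration; they are not the paper's EP or PCG models:

```python
import math
import random

def simulate_lifetimes(n, draw_params, life_model):
    # propagate parameter uncertainty through a physics-of-failure model
    return sorted(life_model(*draw_params()) for _ in range(n))

def fit_weibull(times):
    # median-rank regression: ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta),
    # with Bernard's approximation F_i = (i - 0.3) / (n + 0.4)
    n = len(times)
    xs = [math.log(t) for t in times]
    ys = [math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4))) for i in range(1, n + 1)]
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    eta = math.exp(mx - my / beta)
    return beta, eta

random.seed(1)
# toy model: time to reach a critical corrosion depth with uncertain
# rate k ~ lognormal; t_f = d_c / k, scaled so d_c / k_median = 10
draw = lambda: (random.lognormvariate(0.0, 0.25),)
life = lambda k: 10.0 / k
beta, eta = fit_weibull(simulate_lifetimes(1000, draw, life))
```

Note how a modest parameter uncertainty (here a lognormal rate) already determines the fitted shape factor, echoing the paper's observation that β depends strongly on the input uncertainty.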

  12. The precision and reliability evaluation of 3-dimensional printed damaged bone and prosthesis models by stereo lithography appearance.

    Science.gov (United States)

    Zou, Yun; Han, Qing; Weng, Xisheng; Zou, Yongwei; Yang, Yingying; Zhang, Kesong; Yang, Kerong; Xu, Xiaolin; Wang, Chenyu; Qin, Yanguo; Wang, Jincheng

    2018-02-01

    Recently, the clinical application of 3D printed models has been increasing. However, there has been no systematic study confirming the precision and reliability of 3D printed models, and some senior clinical doctors mistrust their reliability in clinical application. The purpose of this study was to evaluate the precision and reliability of stereolithography appearance (SLA) 3D printed models. Several related parameters were selected to assess the reliability of the SLA 3D printed model. The computed tomography (CT) data of bone/prosthesis and model were collected and 3D reconstructed. Anatomical parameters were measured and statistical analysis was performed; the intraclass correlation coefficient (ICC) was used to evaluate the similarity between the model and the real bone/prosthesis, and the absolute difference (mm) and relative difference (%) were computed. For the prosthesis model, the 3-dimensional error was measured. There was no significant difference in the anatomical parameters except the max height (MH) of the long bone. All the ICCs were greater than 0.990. The maximum absolute and relative differences were 0.45 mm and 1.10%. The 3-dimensional error analysis showed that the positive/negative deviations were 0.273 mm/0.237 mm. The application of SLA 3D printed models in the diagnosis and treatment of complex orthopedic disease is reliable and precise.
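    The ICC used above to compare model and real bone can be computed, in its simplest one-way random-effects form (often written ICC(1,1)), from the between-target and within-target mean squares of repeated measurements. A minimal sketch (the data and the choice of ICC form are illustrative; the paper does not state which ICC variant it used):

```python
def icc_oneway(data):
    # data: one row per target (e.g. anatomical landmark), each row holding
    # k repeated measurements (e.g. real bone vs printed model)
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    means = [sum(row) / k for row in data]
    ss_between = k * sum((m - grand) ** 2 for m in means)
    ss_within = sum((x - m) ** 2 for row, m in zip(data, means) for x in row)
    msb = ss_between / (n - 1)          # between-target mean square
    msw = ss_within / (n * (k - 1))     # within-target mean square
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect agreement between the two measurements gives ICC = 1; values above 0.990, as reported, indicate near-identical geometry.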

  13. Assessment of the human factor in the quantification of technical system reliability taking into consideration cognitive-causal aspects. Partial project 2. Modeling of the human behavior for reliability considerations. Final report

    International Nuclear Information System (INIS)

    Jennerich, Marco; Imbsweiler, Jonas; Straeter, Oliver; Arenius, Marcus

    2015-03-01

    This report presents the findings of the project on the consideration of the human factor in the quantification of the reliability of technical systems, taking into account cognitive-causal aspects in the modeling of human behavior for reliability issues (funded by the Federal Ministry of Economics and Technology; grant number 15014328). This project is part of a joint project with the University of Applied Sciences Zittau/Goerlitz for assessing the human factor in the quantification of the reliability of technical systems. The concern of the University of Applied Sciences Zittau/Goerlitz is the mathematical modeling of human reliability by means of a fuzzy-set approach (grant number 1501432A). The part of the project presented here provides the necessary data basis for the evaluation of that mathematical modeling. At the appropriate places in this report, the interfaces and shared data bases between the two projects are outlined accordingly. HRA (Human Reliability Analysis) methods are an essential component in analyzing the reliability of socio-technical systems. Various methods have been established and are used in different areas of application. The established HRA methods were checked for congruence; in particular, their underlying models and parameters, such as performance-influencing factors and situational influences, were investigated. The elaborated parameters were combined into a hierarchical class structure. Cross-domain incidents were studied, the specific performance-influencing factors were worked out, and they were integrated into a cross-domain database. The dominant (critical) situational factors and their interactions within the event data were identified using the CAHR method (Connectionism Assessment of Human Reliability). Task-dependent cognitive load profiles were defined, and within these profiles qualitative and quantitative data on the possible emergence of errors were acquired.
This

  14. A review of the models for evaluating organizational factors in human reliability analysis

    International Nuclear Information System (INIS)

    Alvarenga, Marco Antonio Bayout; Fonseca, Renato Alves da; Melo, Paulo Fernando Ferreira Frutuoso e

    2009-01-01

    Human factors should be evaluated at three hierarchical levels. The first level concerns the cognitive behavior of human beings during the control of processes that occur through the man-machine interface. Here, one evaluates human errors through human reliability models of the first and second generation, like THERP, ASEP and HCR (first generation) and ATHEANA and CREAM (second generation). At the second level, the focus is on the cognitive behavior of human beings when they work in groups, as in nuclear power plants; the emphasis here is on the anthropological aspects that govern the interaction among human beings. At the third level, one is interested in the influence that the organizational culture exerts on human beings as well as on the tasks being performed; here, one adds to the factors of the second level the economic and political aspects that shape the company's organizational culture. Nowadays, HRA methodologies incorporate organizational factors at the group and organization levels through performance shaping factors. This work makes a critical evaluation of the remaining deficiencies concerning human factors and evaluates the potential of quantitative techniques proposed in the last decade to model organizational factors, including the interaction among groups, with the intention of eliminating this chronic deficiency of HRA models. Two important techniques are discussed in this context: STAMP, based on system theory, and FRAM, which aims at modeling the nonlinearities of socio-technical systems. (author)

  15. A review of the models for evaluating organizational factors in human reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Alvarenga, Marco Antonio Bayout; Fonseca, Renato Alves da [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil)], e-mail: bayout@cnen.gov.br, e-mail: rfonseca@cnen.gov.br; Melo, Paulo Fernando Ferreira Frutuoso e [Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear], e-mail: frutuoso@con.ufrj.br

    2009-07-01

    Human factors should be evaluated at three hierarchical levels. The first level concerns the cognitive behavior of human beings during the control of processes that occur through the man-machine interface. Here, one evaluates human errors through human reliability models of the first and second generation, like THERP, ASEP and HCR (first generation) and ATHEANA and CREAM (second generation). At the second level, the focus is on the cognitive behavior of human beings when they work in groups, as in nuclear power plants; the emphasis here is on the anthropological aspects that govern the interaction among human beings. At the third level, one is interested in the influence that the organizational culture exerts on human beings as well as on the tasks being performed; here, one adds to the factors of the second level the economic and political aspects that shape the company's organizational culture. Nowadays, HRA methodologies incorporate organizational factors at the group and organization levels through performance shaping factors. This work makes a critical evaluation of the remaining deficiencies concerning human factors and evaluates the potential of quantitative techniques proposed in the last decade to model organizational factors, including the interaction among groups, with the intention of eliminating this chronic deficiency of HRA models. Two important techniques are discussed in this context: STAMP, based on system theory, and FRAM, which aims at modeling the nonlinearities of socio-technical systems. (author)

  16. Factor structure and internal reliability of an exercise health belief model scale in a Mexican population

    Directory of Open Access Journals (Sweden)

    Oscar Armando Esparza-Del Villar

    2017-03-01

    Abstract Background Mexico is one of the countries with the highest rates of overweight and obesity around the world, with 68.8% of men and 73% of women being overweight or obese. This is a public health problem, since there are several health-related consequences of not exercising, like cardiovascular diseases or some types of cancer. All of these problems can be prevented by promoting exercise, so it is important to evaluate models of health behaviors to achieve this goal. Among several models, the Health Belief Model (HBM) is one of the most studied for promoting health-related behaviors. This study validates the first exercise scale based on the HBM in Mexicans, with the objective of studying and analyzing this model in Mexico. Methods Items for the scale, called the Exercise Health Belief Model Scale (EHBMS), were developed by a health research team, and then the items were administered to a sample of 746 participants, male and female, from five cities in Mexico. The factor structure of the items was analyzed with an exploratory factor analysis and the internal reliability with Cronbach's alpha. Results The exploratory factor analysis reported the expected factor structure based on the HBM. The KMO index (0.92) and Bartlett's sphericity test (p < 0.01) indicated an adequate and normally distributed sample. Items had adequate factor loadings, ranging from 0.31 to 0.92, and the internal consistencies of the factors were also acceptable, with alpha values ranging from 0.67 to 0.91. Conclusions The EHBMS is a validated scale that can be used to measure exercise based on the HBM in Mexican populations.
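    The internal-consistency statistic used above, Cronbach's alpha, is α = k/(k−1) · (1 − Σ var(item_i) / var(total score)) for k items scored over the same respondents. A minimal sketch (the toy data are invented, not EHBMS responses):

```python
def cronbach_alpha(items):
    # items: one list of scores per scale item, aligned across respondents
    k = len(items)
    n = len(items[0])

    def var(xs):
        # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # each respondent's total score across all items
    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))
```

Values of 0.67-0.91, as reported for the EHBMS factors, fall in the range conventionally read as acceptable to excellent internal consistency.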

  17. Testing effort dependent software reliability model for imperfect debugging process considering both detection and correction

    International Nuclear Information System (INIS)

    Peng, R.; Li, Y.F.; Zhang, W.J.; Hu, Q.P.

    2014-01-01

    This paper studies the fault detection process (FDP) and fault correction process (FCP) with the incorporation of a testing effort function and imperfect debugging. In order to ensure high reliability, it is essential for software to undergo a testing phase, during which faults can be detected and corrected by debuggers. The testing resource allocation during this phase, which is usually depicted by the testing effort function, considerably influences not only the fault detection rate but also the time to correct a detected fault. In addition, testing is usually far from perfect, such that new faults may be introduced. In this paper, we first show how to incorporate the testing effort function and fault introduction into the FDP and then develop the FCP as a delayed FDP with a correction effort. Various specific paired FDP and FCP models are obtained based on different assumptions about fault introduction and correction effort. An illustrative example is presented. The optimal release policy under different criteria is also discussed.
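    A common concrete instance of a testing-effort-dependent detection process is an NHPP whose mean value function is driven by a cumulative testing-effort function W(t), m(t) = a·(1 − e^{−b·W(t)}). The sketch below uses a logistic W(t) with invented parameters; it illustrates the family of models discussed, not the paper's exact formulation (which also covers fault introduction and correction delay):

```python
import math

def cumulative_effort(t, N=100.0, A=9.0, alpha=0.5):
    # logistic testing-effort function W(t); N, A, alpha are illustrative
    return N / (1.0 + A * math.exp(-alpha * t))

def expected_detected(t, a=150.0, b=0.05):
    # NHPP mean value function driven by testing effort:
    # a = total fault content, b = detection rate per unit effort
    return a * (1.0 - math.exp(-b * cumulative_effort(t)))
```

As t grows, W(t) saturates at N, so the expected number of detected faults approaches a·(1 − e^{−bN}) rather than the full fault content a, which is one way limited testing effort shows up in release-policy decisions.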

  18. Modelling a reliability system governed by discrete phase-type distributions

    International Nuclear Information System (INIS)

    Ruiz-Castro, Juan Eloy; Perez-Ocon, Rafael; Fernandez-Villodre, Gemma

    2008-01-01

    We present an n-system with one online unit and the others in cold standby. There is a repairman. When the online unit fails it goes to repair, and a standby unit instantaneously becomes the online one. The operational and repair times follow discrete phase-type distributions. Given that any discrete distribution defined on the positive integers is a discrete phase-type distribution, the system can be considered a general one. A model with an unlimited number of units is considered for approximating a system with a great number of units. We show that the process that governs the system is a quasi-birth-and-death process. For this system, performance reliability measures, the up and down periods, and the involved costs are calculated in matrix and algorithmic form. We show that the discrete case is not a trivial case of the continuous one. The results given in this paper have been implemented computationally with Matlab.
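    The building block of such models, the discrete phase-type lifetime, has a survival function that is easy to evaluate: P(T > n) = α·Tⁿ·1, where α is the initial probability vector over the transient states and T the sub-stochastic transition matrix among them. A minimal sketch in Python rather than the paper's Matlab (example vectors and matrices are illustrative):

```python
def ph_survival(alpha, T, n):
    # P(lifetime > n) = alpha * T^n * 1 for a discrete phase-type distribution
    dim = len(alpha)
    v = list(alpha)  # row vector, updated in place as v := v T
    for _ in range(n):
        v = [sum(v[i] * T[i][j] for i in range(dim)) for j in range(dim)]
    return sum(v)
```

With a single phase, α = [1] and T = [[q]], this reduces to the geometric distribution P(T > n) = qⁿ, illustrating the remark that any such discrete law fits the phase-type framework.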

  19. Modelling a reliability system governed by discrete phase-type distributions

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz-Castro, Juan Eloy [Departamento de Estadistica e Investigacion Operativa, Universidad de Granada, 18071 Granada (Spain)], E-mail: jeloy@ugr.es; Perez-Ocon, Rafael [Departamento de Estadistica e Investigacion Operativa, Universidad de Granada, 18071 Granada (Spain)], E-mail: rperezo@ugr.es; Fernandez-Villodre, Gemma [Departamento de Estadistica e Investigacion Operativa, Universidad de Granada, 18071 Granada (Spain)

    2008-11-15

    We present an n-system with one online unit and the others in cold standby. There is a repairman. When the online unit fails it goes to repair, and a standby unit instantaneously becomes the online one. The operational and repair times follow discrete phase-type distributions. Given that any discrete distribution defined on the positive integers is a discrete phase-type distribution, the system can be considered a general one. A model with an unlimited number of units is considered for approximating a system with a great number of units. We show that the process that governs the system is a quasi-birth-and-death process. For this system, performance reliability measures, the up and down periods, and the involved costs are calculated in matrix and algorithmic form. We show that the discrete case is not a trivial case of the continuous one. The results given in this paper have been implemented computationally with Matlab.

  20. A multi-objective reliable programming model for disruption in supply chain

    Directory of Open Access Journals (Sweden)

    Emran Mohammadi

    2013-05-01

    One of the primary concerns in supply chain management is to handle risk components properly. There are various sources of risk in a supply chain, such as natural disasters and other unexpected incidents. When a series of facilities is built and deployed, one or more of them may fail at any time due to bad weather conditions, labor strikes, economic crises, sabotage or terrorist attacks, or changes in ownership of the system. The objective of risk management is to reduce the effects of disruptions in these different domains to an acceptable level. To cope with such risks, we propose a reliable capacitated supply chain network design (RSCND) model that considers random disruption risks at both distribution centers and suppliers. The proposed model considers three objective functions, and the implementation is verified using some instances.

  1. Extended object-oriented Petri net model for mission reliability simulation of repairable PMS with common cause failures

    International Nuclear Information System (INIS)

    Wu, Xin-yang; Wu, Xiao-Yue

    2015-01-01

    Phased Mission Systems (PMS) have several phases with different success criteria. Generally, traditional analytical methods need to make some assumptions when they are applied for reliability evaluation and analysis of complex PMS, for example, the components are non-repairable or components are not subjected to common cause failures (CCF). However, the evaluation and analysis results may be inapplicable when the assumptions do not agree with practical situation. In this article, we propose an extended object-oriented Petri net (EOOPN) model for mission reliability simulation of repairable PMS with CCFs. Based on object-oriented Petri net (OOPN), EOOPN defines four reusable sub-models to depict PMS at system, phase, or component levels respectively, logic transitions to depict complex components reliability logics in a more readable form, and broadcast place to transmit shared information among components synchronously. After extension, EOOPN could deal with repairable PMS with both external and internal CCFs conveniently. The mission reliability modelling, simulation and analysis using EOOPN are illustrated by a PMS example. The results demonstrate that the proposed EOOPN model is effective. - Highlights: • EOOPN model was effective in reliability simulation for repairable PMS with CCFs. • EOOPN had modular and hierarchical structure. • New elements of EOOPN made the modelling process more convenient and friendlier. • EOOPN had better model reusability and readability than other PNs
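    As a complement to the Petri-net approach, the mission reliability of a simple non-repairable PMS can be estimated by direct Monte Carlo simulation, drawing component failure times once and checking each phase's success criterion in turn. The sketch below deliberately omits repair and common cause failures, which are exactly what EOOPN adds; all rates, phase durations and k-out-of-n criteria are invented for illustration:

```python
import random

def mission_success(rates, phase_lens, criteria, rng):
    # one mission: draw an exponential failure time per component; phase p
    # succeeds if at least criteria[p] components are still alive at its end
    fail = [rng.expovariate(r) for r in rates]
    t = 0.0
    for dur, need in zip(phase_lens, criteria):
        t += dur
        if sum(1 for f in fail if f > t) < need:
            return False
    return True

rng = random.Random(42)
# 3 identical units; phase 1 needs 2-of-3, phase 2 needs only 1-of-3
trials = 20000
hits = sum(mission_success([0.01] * 3, [10.0, 20.0], [2, 1], rng)
           for _ in range(trials))
reliability = hits / trials
```

Different success criteria per phase are what make PMS reliability non-trivial: a component set that satisfies one phase may fail the next, so phases cannot be evaluated independently.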

  2. Reliability of some ageing nuclear power plant system: a simple stochastic model

    Energy Technology Data Exchange (ETDEWEB)

    Suarez-Antola, Roberto [Catholic University of Uruguay, Montevideo (Uruguay). School of Engineering and Technologies; Ministerio de Industria, Energia y Mineria, Montevideo (Uruguay). Direccion Nacional de Energia y Tecnologia Nuclear; E-mail: rsuarez@ucu.edu.uy

    2007-07-01

    The random number of failure-related events in certain repairable ageing systems, like certain nuclear power plant components, during a given time interval may often be modelled by a compound Poisson distribution. One of these is the Polya-Aeppli distribution. The derivation of a stationary Polya-Aeppli distribution as a limiting distribution of rare events for stationary Bernoulli trials with first-order Markov dependence is considered. If the parameters of the Polya-Aeppli distribution are suitable functions of time, we could expect the resulting distribution to account for the distribution of failure-related events in an ageing system. Assuming that a critical number of damages produces an emergent failure, the above-mentioned results can be applied in a reliability analysis. It is natural to ask under what conditions a Polya-Aeppli distribution could be a limiting distribution for non-homogeneous Bernoulli trials with first-order Markov dependence. In this paper this problem is analyzed and possible applications of the obtained results to ageing or deteriorating nuclear power plant components are considered. The two traditional ways of modelling repairable systems in reliability theory, the 'as bad as old' concept, which assumes that the replaced component is in exactly the same condition as the aged component was before failure, and the 'as good as new' concept, which assumes that the new component is in the same condition as the replaced component was when new, are briefly discussed in relation to the findings of the present work. (author)
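    The Polya-Aeppli law is a compound Poisson distribution with (shifted) geometric batch sizes, so its probability mass function can be evaluated numerically with Panjer's recursion for compound Poisson sums. A minimal sketch (parameter values are illustrative, not taken from the paper):

```python
import math

def polya_aeppli_pmf(lam, rho, kmax):
    # compound Poisson: Poisson(lam) number of damage clusters, each cluster
    # of geometric size f_j = (1-rho)*rho**(j-1), j >= 1 (Panjer recursion)
    f = [0.0] + [(1.0 - rho) * rho ** (j - 1) for j in range(1, kmax + 1)]
    p = [math.exp(-lam)]  # P(N = 0)
    for k in range(1, kmax + 1):
        p.append(lam / k * sum(j * f[j] * p[k - j] for j in range(1, k + 1)))
    return p
```

The mean is λ/(1−ρ), so letting λ and ρ vary with time, as suggested above, directly yields a time-dependent expected damage count for an ageing component.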

  3. Human reliability data, human error and accident models--illustration through the Three Mile Island accident analysis

    International Nuclear Information System (INIS)

    Le Bot, Pierre

    2004-01-01

    Our first objective is to provide a panorama of Human Reliability data used in EDF's Safety Probabilistic Studies, and then, since these concepts are at the heart of Human Reliability and its methods, to go over the notion of human error and the understanding of accidents. We are not sure today that it is actually possible to provide in this field a foolproof and productive theoretical framework. Consequently, the aim of this article is to suggest potential paths of action and to provide information on EDF's progress along those paths which enables us to produce the most potentially useful Human Reliability analyses while taking into account current knowledge in Human Sciences. The second part of this article illustrates our point of view as EDF researchers through the analysis of the most famous civil nuclear accident, the Three Mile Island unit accident in 1979. Analysis of this accident allowed us to validate our positions regarding the need to move, in the case of an accident, from the concept of human error to that of systemic failure in the operation of systems such as a nuclear power plant. These concepts rely heavily on the notion of distributed cognition and we will explain how we applied it. These concepts were implemented in the MERMOS Human Reliability Probabilistic Assessment methods used in the latest EDF Probabilistic Human Reliability Assessment. Besides the fact that it is not very productive to focus exclusively on individual psychological error, the design of the MERMOS method and its implementation have confirmed two things: the significance of qualitative data collection for Human Reliability, and the central role held by Human Reliability experts in building knowledge about emergency operation, which in effect consists of Human Reliability data collection. The latest conclusion derived from the implementation of MERMOS is that, considering the difficulty in building 'generic' Human Reliability data in the field we are involved in, the best

  4. Reliable Dual Tensor Model Estimation in Single and Crossing Fibers Based on Jeffreys Prior

    Science.gov (United States)

    Yang, Jianfei; Poot, Dirk H. J.; Caan, Matthan W. A.; Su, Tanja; Majoie, Charles B. L. M.; van Vliet, Lucas J.; Vos, Frans M.

    2016-01-01

Purpose: This paper presents and studies a framework for reliable modeling of diffusion MRI using a data-acquisition adaptive prior. Methods: Automated relevance determination estimates the mean of the posterior distribution of a rank-2 dual tensor model exploiting Jeffreys prior (JARD). This data-acquisition prior is based on the Fisher information matrix and enables assessing whether two tensors are mandatory to describe the data. The method is compared to maximum likelihood estimation (MLE) of the dual tensor model and to FSL's ball-and-stick approach. Results: Monte Carlo experiments demonstrated that JARD's volume fractions correlated well with the ground truth for single and crossing fiber configurations. In single fiber configurations JARD automatically reduced the volume fraction of one compartment to (almost) zero. The variance in fractional anisotropy (FA) of the main tensor component was thereby reduced compared to MLE. JARD and MLE gave a comparable outcome in data simulating crossing fibers. On brain data, JARD yielded a smaller spread in FA along the corpus callosum compared to MLE. Tract-based spatial statistics demonstrated a higher sensitivity in detecting age-related white matter atrophy using JARD compared to both MLE and the ball-and-stick approach. Conclusions: The proposed framework offers accurate and precise estimation of diffusion properties in single and dual fiber regions. PMID:27760166

  5. A computational model for reliability calculation of steam generators from defects in its tubes

    International Nuclear Information System (INIS)

    Rivero, Paulo C.M.; Melo, P.F. Frutuoso e

    2000-01-01

Nowadays, probability approaches are employed for calculating the reliability of steam generators as a function of defects in their tubes, without any deterministic association with warranty assurance. Unfortunately, probability models produce large failure values, contrary to the recommendation of the U.S. Code of Federal Regulations that failure probabilities must be as small as possible. In this paper, we propose the association of the deterministic methodology with the probabilistic one. At first, the failure probability evaluation of steam generators follows a probabilistic methodology: to find the failure probability, critical cracks - obtained from Monte Carlo simulations - are limited to lengths in the interval defined by their lower value and the plugging limit, so as to obtain a failure probability of at most 1%. The distribution employed for modeling the observed (measured) cracks considers the same interval. Any length outside the mentioned interval is not considered for the probability evaluation: it is handled by the deterministic model. The deterministic approach is to plug the tube when any anomalous crack is detected in it. Such a crack is an observed one placed in the third region of the plot of the logarithmic time derivative of crack length versus the mode I stress intensity factor, while for normal cracks the plugging of tubes occurs in the second region of that plot - if they are dangerous, of course, considering their random evolution. A methodology for identifying anomalous cracks is also presented. (author)
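The Monte Carlo step described in this record can be sketched as follows. The lognormal crack-length distribution, parameter names and values below are illustrative assumptions for the sketch only; the paper's actual crack-growth and detection model is more elaborate.

```python
import random

def crack_failure_probability(n_trials, mu, sigma, critical_length, seed=0):
    """Monte Carlo estimate of P(crack length >= critical length).
    Crack lengths are drawn lognormal(mu, sigma) purely for illustration."""
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(n_trials)
        if rng.lognormvariate(mu, sigma) >= critical_length
    )
    return failures / n_trials
```

With a fixed seed the draws are reproducible, so raising the critical (plugging-limit) length can only lower the estimated failure probability.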

  6. A Correlated Model for Evaluating Performance and Energy of Cloud System Given System Reliability

    Directory of Open Access Journals (Sweden)

    Hongli Zhang

    2015-01-01

Full Text Available The serious issue of energy consumption for high performance computing systems has attracted much attention. Performance and energy-saving have become important measures of a computing system. In the cloud computing environment, systems usually allocate various resources (such as CPU, memory, storage, etc.) to multiple virtual machines (VMs) for executing tasks. Therefore, the resource allocation for running VMs has a significant influence on both system performance and energy consumption. For different processor utilizations assigned to a VM, there exists a tradeoff between energy consumption and task completion time when a given task is executed. Moreover, hardware failure, software failure and restoration characteristics also have obvious influences on overall performance and energy. In this paper, a correlated model is built to analyze both performance and energy in the VM execution environment under a reliability restriction, and an optimization model is presented to derive the most effective processor utilization for the VM. Then, the tradeoff between energy-saving and task completion time is studied and balanced when the VMs execute given tasks. Numerical examples are illustrated to build the performance-energy correlated model and evaluate the expected values of task completion time and consumed energy.
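The utilization tradeoff this record describes can be illustrated with a toy search. The functional forms below (completion time T(u) = work/u, power P(u) = p_idle + c*u**3) and all parameter values are assumptions for the sketch, not the paper's model.

```python
def best_utilization(work=100.0, p_idle=50.0, c=200.0, w_time=1.0, w_energy=0.0):
    """Grid-search the processor utilization u in (0, 1] minimizing a weighted
    time/energy cost. Illustrative model forms:
    completion time T(u) = work / u, power draw P(u) = p_idle + c * u**3."""
    best_u, best_cost = None, float("inf")
    for i in range(1, 101):
        u = i / 100.0
        t = work / u                   # completion time shrinks with utilization
        e = (p_idle + c * u ** 3) * t  # energy = power * time
        cost = w_time * t + w_energy * e
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u
```

With no energy weight the optimum is full utilization; weighting energy pulls the optimum below 1.0, which is the tradeoff the paper balances.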

  7. Modeling and Quantification of Team Performance in Human Reliability Analysis for Probabilistic Risk Assessment

    Energy Technology Data Exchange (ETDEWEB)

Jeffrey C. Joe; Ronald L. Boring

    2014-06-01

Probabilistic Risk Assessment (PRA) and Human Reliability Assessment (HRA) are important technical contributors to the United States (U.S.) Nuclear Regulatory Commission's (NRC) risk-informed and performance-based approach to regulating U.S. commercial nuclear activities. Furthermore, all currently operating commercial NPPs in the U.S. are required by federal regulation to be staffed with crews of operators. Yet, aspects of team performance are underspecified in most HRA methods that are widely used in the nuclear industry. There are a variety of "emergent" team cognition and teamwork errors (e.g., communication errors) that are 1) distinct from individual human errors, and 2) important to understand from a PRA perspective. The lack of robust models or quantification of team performance is an issue that affects the accuracy and validity of HRA methods and models, leading to significant uncertainty in estimating human error probabilities (HEPs). This paper describes research whose objective is to model and quantify team dynamics and teamwork within NPP control room crews for risk-informed applications, thereby improving the technical basis of HRA and, in turn, the risk-informed approach the NRC uses to regulate the U.S. commercial nuclear industry.

  8. Integrating prospect theory and Stackelberg games to model strategic dyad behavior of information providers and travelers: theory and numerical simulations

    NARCIS (Netherlands)

    Han, Q.; Dellaert, B.G.C.; Raaij, van W.F.; Timmermans, H.J.P.

    2005-01-01

    Existing policy models of optimal guidance strategies are typically concerned with single-objective optimization based on reliable forecasts in terms of the consistency between predicted and observed aggregate activity-travel patterns. The interaction and interdependencies between policy objective

  9. The reliable solution and computation time of variable parameters Logistic model

    OpenAIRE

    Pengfei, Wang; Xinnong, Pan

    2016-01-01

    The reliable computation time (RCT, marked as Tc) when applying a double precision computation of a variable parameters logistic map (VPLM) is studied. First, using the method proposed, the reliable solutions for the logistic map are obtained. Second, for a time-dependent non-stationary parameters VPLM, 10000 samples of reliable experiments are constructed, and the mean Tc is then computed. The results indicate that for each different initial value, the Tcs of the VPLM are generally different...

  10. A Model to Partly but Reliably Distinguish DDOS Flood Traffic from Aggregated One

    Directory of Open Access Journals (Sweden)

    Ming Li

    2012-01-01

Full Text Available Reliably distinguishing DDOS flood traffic from aggregated traffic is essential for reliable prevention of DDOS attacks. By reliable distinguishing, we mean that flood traffic can be distinguished from aggregated traffic with a predetermined probability. The basis for reliably distinguishing flood traffic from aggregated traffic is reliable detection of signs of DDOS flood attacks. As is known, reliably distinguishing DDOS flood traffic from aggregated traffic is a tough task, mainly due to the effects of flash-crowd traffic. For this reason, this paper studies reliable detection in an underlying DiffServ network that uses static-priority schedulers. In this network environment, we present a method for reliable detection of signs of DDOS flood attacks for a given class with a given priority. Two assumptions are introduced in this study. One is that flash-crowd traffic does not have all priorities but only some. The other is that attack traffic has all priorities in all classes; otherwise an attacker cannot completely achieve its DDOS goal. Further, we suppose that the protected site is equipped with a sensor that has a signature library of the legitimate traffic with the priorities flash-crowd traffic does not have. Based on these, we are able to reliably distinguish attack traffic from aggregated traffic with the priorities that flash-crowd traffic does not have, according to a given detection probability.

  11. New Provider Models for Sweden and Spain: Public, Private or Non-profit? Comment on "Governance, Government, and the Search for New Provider Models".

    Science.gov (United States)

    Jeurissen, Patrick P T; Maarse, Hans

    2016-06-29

    Sweden and Spain experiment with different provider models to reform healthcare provision. Both models have in common that they extend the role of the for-profit sector in healthcare. As the analysis of Saltman and Duran demonstrates, privatisation is an ambiguous and contested strategy that is used for quite different purposes. In our comment, we emphasize that their analysis leaves questions open on the consequences of privatisation for the performance of healthcare and the role of the public sector in healthcare provision. Furthermore, we briefly address the absence of the option of healthcare provision by not-for-profit providers in the privatisation strategy of Sweden and Spain. © 2016 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

  12. A simulation model for reliability-based appraisal of an energy policy: The case of Lebanon

    International Nuclear Information System (INIS)

    Hamdan, H.A.; Ghajar, R.F.; Chedid, R.B.

    2012-01-01

The Lebanese Electric Power System (LEPS) has suffered from technical and financial deficiencies for decades. It mirrors the problems encountered in many developing countries where inadequate or absent power system planning has resulted in incomplete and ill-operating infrastructure, compounded by the effects of political instability, huge debts, unavailability of financing for desired projects and inefficiency in operation. The upgrade and development of the system necessitate the adoption of a comprehensive energy policy that introduces solutions to a diversity of problems addressing the technical, financial, administrative and governance aspects of the system. In this paper, an energy policy for Lebanon is proposed and evaluated based on integration between energy modeling and financial modeling. The paper utilizes the Load Modification Technique (LMT) as a probabilistic tool to assess the impact of policy implementation on energy production, overall cost, technical/commercial losses and reliability. Scenarios reflecting implementation of policy projects are assessed and their impacts are compared with business-as-usual scenarios which assume no new investment is to take place in the sector. Conclusions are drawn on the usefulness of the proposed evaluation methodology and the effectiveness of the adopted energy policy for Lebanon and other developing countries suffering from similar power system problems. - Highlights: ► Evaluation methodology based on a probabilistic simulation tool is proposed. ► A business-as-usual scenario for a given study period of the LEPS was modeled. ► Mitigation scenarios reflecting implementation of the energy policy are modeled. ► Policy simulated and compared with business-as-usual scenarios of the LEPS. ► Results reflect usefulness of proposed methodology and the adopted energy policy.

  13. An improved mixing model providing joint statistics of scalar and scalar dissipation

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Daniel W. [Department of Energy Resources Engineering, Stanford University, Stanford, CA (United States); Jenny, Patrick [Institute of Fluid Dynamics, ETH Zurich (Switzerland)

    2008-11-15

    For the calculation of nonpremixed turbulent flames with thin reaction zones the joint probability density function (PDF) of the mixture fraction and its dissipation rate plays an important role. The corresponding PDF transport equation involves a mixing model for the closure of the molecular mixing term. Here, the parameterized scalar profile (PSP) mixing model is extended to provide the required joint statistics. Model predictions are validated using direct numerical simulation (DNS) data of a passive scalar mixing in a statistically homogeneous turbulent flow. Comparisons between the DNS and the model predictions are provided, which involve different initial scalar-field lengthscales. (author)

  14. Forecasting consequences of accidental release: how reliable are current assessment models

    International Nuclear Information System (INIS)

    Rohwer, P.S.; Hoffman, F.O.; Miller, C.W.

    1983-01-01

This paper focuses on uncertainties in model output used to assess accidents. We begin by reviewing the historical development of assessment models and the associated interest in uncertainties as these evolutionary processes occurred in the United States. This is followed by a description of the sources of uncertainties in assessment calculations. Types of models appropriate for assessment of accidents are identified. A summary is provided of results from our analysis of uncertainty in results obtained with current methodology for assessing routine and accidental radionuclide releases to the environment. We conclude with a discussion of preferred procedures and suggested future directions to improve the state-of-the-art of radiological assessments.

  15. Reliability Engineering

    International Nuclear Information System (INIS)

    Lee, Sang Yong

    1992-07-01

This book is about reliability engineering. It covers the definition and importance of reliability; the development of reliability engineering; the failure rate and the failure probability density function and their types; CFR and the exponential distribution; IFR and the normal and Weibull distributions; maintainability and availability; reliability testing and reliability estimation for the exponential, normal and Weibull distribution types; reliability sampling tests; system reliability; reliability design; and failure analysis by FTA.
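The CFR/IFR distinction this record lists can be made concrete with the Weibull reliability and hazard functions. This is the standard textbook formulation, not code from the book itself:

```python
import math

def weibull_reliability(t, beta, eta):
    """Survival probability R(t) = exp(-(t / eta)**beta)."""
    return math.exp(-((t / eta) ** beta))

def weibull_hazard(t, beta, eta):
    """Instantaneous failure rate h(t) = (beta / eta) * (t / eta)**(beta - 1).
    beta == 1 gives a constant failure rate (CFR, the exponential case);
    beta > 1 gives an increasing failure rate (IFR, wear-out/ageing)."""
    return (beta / eta) * (t / eta) ** (beta - 1)
```

Setting beta = 1 recovers the exponential distribution, which is why CFR and the exponential case are treated together.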

  16. Are mixed explicit/implicit solvation models reliable for studying phosphate hydrolysis? A comparative study of continuum, explicit and mixed solvation models.

    Energy Technology Data Exchange (ETDEWEB)

    Kamerlin, Shina C. L.; Haranczyk, Maciej; Warshel, Arieh

    2009-05-01

Phosphate hydrolysis is ubiquitous in biology. However, despite intensive research on this class of reactions, the precise nature of the reaction mechanism remains controversial. In this work, we have examined the hydrolysis of three homologous phosphate diesters. The solvation free energy was simulated by means of either an implicit solvation model (COSMO), hybrid quantum mechanical/molecular mechanical free energy perturbation (QM/MM-FEP) or a mixed solvation model in which N water molecules were explicitly included in the ab initio description of the reacting system (where N = 1-3), with the remainder of the solvent being implicitly modelled as a continuum. Here, both COSMO and QM/MM-FEP reproduce ΔGobs within an error of about 2 kcal/mol. However, we demonstrate that in order to obtain any form of reliable results from a mixed model, it is essential to carefully select the explicit water molecules from short QM/MM runs that act as a model for the true infinite system. Additionally, the mixed models tend to be increasingly inaccurate the more explicit water molecules are placed into the system. Thus, our analysis indicates that this approach provides an unreliable way for modelling phosphate hydrolysis in solution.

  17. JUPITER: Joint Universal Parameter IdenTification and Evaluation of Reliability - An Application Programming Interface (API) for Model Analysis

    Science.gov (United States)

    Banta, Edward R.; Poeter, Eileen P.; Doherty, John E.; Hill, Mary C.

    2006-01-01

The Joint Universal Parameter IdenTification and Evaluation of Reliability Application Programming Interface (JUPITER API) improves the computer programming resources available to those developing applications (computer programs) for model analysis. The JUPITER API consists of eleven Fortran-90 modules that provide for encapsulation of data and operations on that data. Each module contains one or more entities: data, data types, subroutines, functions, and generic interfaces. The modules do not constitute computer programs themselves; instead, they are used to construct computer programs. Such computer programs are called applications of the API. The API provides common modeling operations for use by a variety of computer applications. The models being analyzed are referred to here as process models, and may, for example, represent the physics, chemistry, and(or) biology of a field or laboratory system. Process models commonly are constructed using published models such as MODFLOW (Harbaugh et al., 2000; Harbaugh, 2005), MT3DMS (Zheng and Wang, 1996), HSPF (Bicknell et al., 1997), PRMS (Leavesley and Stannard, 1995), and many others. The process model may be accessed by a JUPITER API application as an external program, or it may be implemented as a subroutine within a JUPITER API application. In either case, execution of the model takes place in a framework designed by the application programmer. This framework can be designed to take advantage of any parallel-processing capabilities possessed by the process model, as well as the parallel-processing capabilities of the JUPITER API. Model analyses for which the JUPITER API could be useful include, for example: compare model results to observed values to determine how well the model reproduces system processes and characteristics; use sensitivity analysis to determine the information provided by observations to parameters and predictions of interest; determine the additional data needed to improve selected model

  18. 75 FR 50853 - Special Conditions: Cirrus Design Corporation Model SF50 Airplane; Function and Reliability Testing

    Science.gov (United States)

    2010-08-18

    ... airplanes. 1. Function and Reliability Testing. Flight tests: In place of 14 CFR 21.35(b)(2), the following...; Function and Reliability Testing AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Final special... to CAR part 3 became effective January 15, 1951, and deleted the service test requirements in Section...

  19. 75 FR 29962 - Special Conditions: Cirrus Design Corporation Model SF50 Airplane; Function and Reliability Testing

    Science.gov (United States)

    2010-05-28

    ... SF50 airplanes. 1. Function and Reliability Testing Flight tests: In place of 14 CFR part 21.35(b)(2... Reliability Testing AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Notice of proposed special..., 1951, and deleted the service test requirements in Section 3.19 for airplanes of 6,000 pounds maximum...

  20. An Appropriate Wind Model for Wind Integrated Power Systems Reliability Evaluation Considering Wind Speed Correlations

    Directory of Open Access Journals (Sweden)

    Rajesh Karki

    2013-02-01

Full Text Available Adverse environmental impacts of carbon emissions are causing increasing concerns to the general public throughout the world. Electric energy generation from conventional energy sources is considered to be a major contributor to these harmful emissions. High emphasis is therefore being given to green alternatives of energy, such as wind and solar. Wind energy is being perceived as a promising alternative. This energy technology and its applications have undergone significant research and development over the past decade. As a result, many modern power systems include a significant portion of power generation from wind energy sources. The impact of wind generation on the overall system performance increases substantially as wind penetration in power systems continues to increase to relatively high levels. It becomes increasingly important to accurately model the wind behavior, the interaction with other wind sources and conventional sources, and incorporate the characteristics of the energy demand in order to carry out a realistic evaluation of system reliability. Power systems with high wind penetrations are often connected to multiple wind farms at different geographic locations. Wind speed correlations between the different wind farms largely affect the total wind power generation characteristics of such systems, and therefore should be an important parameter in the wind modeling process. This paper evaluates the effect of the correlation between multiple wind farms on the adequacy indices of wind-integrated systems. The paper also proposes a simple and appropriate probabilistic analytical model that incorporates wind correlations, and can be used for adequacy evaluation of multiple wind-integrated systems.
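One common way to impose wind-speed correlations between two farms, sketched here under the assumption of a Gaussian dependence structure with Weibull marginals (a simplification, not the paper's analytical model; all parameter values are illustrative):

```python
import math
import random

def correlated_wind_speeds(n, rho, beta=2.0, eta=8.0, seed=0):
    """Generate n pairs of wind speeds for two farms whose underlying Gaussian
    factors have correlation rho, mapped through the inverse Weibull CDF
    (shape beta, scale eta). Illustrative marginals and parameters only."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
        # standard normal CDF via erf, clamped away from 0 and 1 for safety
        u1 = min(max(0.5 * (1.0 + math.erf(z1 / math.sqrt(2.0))), 1e-12), 1 - 1e-12)
        u2 = min(max(0.5 * (1.0 + math.erf(z2 / math.sqrt(2.0))), 1e-12), 1 - 1e-12)
        # Weibull quantile transform gives nonnegative speeds
        v1 = eta * (-math.log(1.0 - u1)) ** (1.0 / beta)
        v2 = eta * (-math.log(1.0 - u2)) ** (1.0 / beta)
        pairs.append((v1, v2))
    return pairs
```

Because the quantile transform is monotone, the rank correlation of the underlying Gaussians carries over to the generated speeds, so the sample correlation of the speeds lands close to rho.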