WorldWideScience

Sample records for software reliability modeling

  1. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program studies existing software reliability models and proposes a state-of-the-art software reliability model relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault-tolerant software reliability models and their related issues, (2) a proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of the proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  3. Multinomial-exponential reliability function: a software reliability model

    International Nuclear Information System (INIS)

    Saiz de Bustamante, Amalio; Saiz de Bustamante, Barbara

    2003-01-01

    The multinomial-exponential reliability function (MERF) was developed during a detailed study of software failure/correction processes. Later on, MERF was approximated by a much simpler exponential reliability function (EARF), which keeps most of MERF's mathematical properties, so the two functions together make up a single reliability model. The reliability model MERF/EARF considers the software failure process as a non-homogeneous Poisson process (NHPP) and the repair (correction) process as a multinomial distribution. The model supposes that both processes are statistically independent. The paper discusses the model's theoretical basis, its mathematical properties and its application to software reliability. Applications of the model to the inspection and maintenance of physical systems are also foreseen. The paper includes a complete numerical example of the application of the model to a software reliability analysis.
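
    Both MERF and its EARF approximation build on standard NHPP machinery. For reference, the basic NHPP relations assumed by this abstract (and by several others in this listing) are the following textbook results, not specific to this paper:

    ```latex
    \[
    P\{N(t)=n\} \;=\; \frac{\Lambda(t)^n}{n!}\, e^{-\Lambda(t)},
    \qquad
    \Lambda(t) \;=\; \int_0^t \lambda(s)\,\mathrm{d}s ,
    \]
    ```

    where N(t) is the number of failures observed up to time t, λ(t) the failure intensity, and Λ(t) the mean value function.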

  4. Possibilities and limitations of applying software reliability growth models to safety-critical software

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Jang, Seung Cheol; Ha, Jae Joo

    2007-01-01

    It is generally known that software reliability growth models such as the Jelinski-Moranda model and the Goel-Okumoto Non-Homogeneous Poisson Process (NHPP) model cannot be applied to safety-critical software due to a lack of software failure data. In this paper, by applying two of the most widely known software reliability growth models to sample software failure data, we demonstrate the possibility of using software reliability growth models to prove the high reliability of safety-critical software. The high sensitivity of the software's estimated reliability to the software failure data, as well as the lack of sufficient software failure data, is also identified as a possible limitation when applying software reliability growth models to safety-critical software.
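
    As a concrete illustration of fitting such a growth model to sample failure data, the sketch below fits the Goel-Okumoto mean value function to an invented failure history. The data, and the use of simple least squares rather than the paper's estimation procedure, are assumptions for demonstration only:

    ```python
    # Hypothetical illustration: fitting the Goel-Okumoto NHPP mean value
    # function m(t) = a * (1 - exp(-b t)) to cumulative failure-count data.
    # The failure data below are invented for demonstration only.
    import numpy as np
    from scipy.optimize import curve_fit

    def mean_value(t, a, b):
        """Expected cumulative number of failures by time t (Goel-Okumoto)."""
        return a * (1.0 - np.exp(-b * t))

    # Invented test data: week index vs. cumulative failures observed.
    weeks = np.arange(1, 11, dtype=float)
    cum_failures = np.array([4, 7, 10, 12, 13, 15, 16, 16, 17, 17], dtype=float)

    # Least-squares estimates of a (total expected faults) and b (detection rate).
    (a_hat, b_hat), _ = curve_fit(mean_value, weeks, cum_failures, p0=(20.0, 0.3))

    remaining = a_hat - cum_failures[-1]          # expected faults still latent
    rel_next_week = np.exp(-(mean_value(11.0, a_hat, b_hat)
                             - mean_value(10.0, a_hat, b_hat)))
    print(f"a = {a_hat:.1f}, b = {b_hat:.2f}, "
          f"remaining faults ~ {remaining:.1f}, "
          f"P(no failure in next week) ~ {rel_next_week:.3f}")
    ```

    The very flat tail of such data is what drives the sensitivity the authors mention: moving one late failure changes the estimate of a, and hence the predicted reliability, substantially.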

  5. Software reliability

    CERN Document Server

    Bendell, A

    1986-01-01

    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth models.

  6. Possibilities and Limitations of Applying Software Reliability Growth Models to Safety-Critical Software

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Jang, Seung Cheol; Ha, Jae Joo

    2006-01-01

    As digital systems are gradually introduced to nuclear power plants (NPPs), the need to quantitatively analyze the reliability of digital systems is also increasing. Kang and Sung identified (1) software reliability, (2) common-cause failures (CCFs), and (3) fault coverage as the three most critical factors in the reliability analysis of digital systems. For the reliability estimation of safety-critical software (the software that is used in safety-critical digital systems), Bayesian Belief Networks (BBNs) seem to be most widely used. The use of BBNs in the reliability estimation of safety-critical software is basically a process of indirectly assigning a reliability based on various observed information and experts' opinions. When software testing results or software failure histories are available, we can directly estimate the reliability of the software using software reliability growth models such as the Jelinski-Moranda model and the Goel-Okumoto nonhomogeneous Poisson process (NHPP) model. Even though it is generally known that software reliability growth models cannot be applied to safety-critical software due to the small number of failures expected from the testing of safety-critical software, we try to find the possibilities and corresponding limitations of applying software reliability growth models to safety-critical software.

  7. NHPP-Based Software Reliability Models Using Equilibrium Distribution

    Science.gov (United States)

    Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi

    Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
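
    As we read the abstract, the key step is to replace the fault-detection time distribution F by its equilibrium distribution inside the usual NHPP mean value function. Under that reading (an interpretation of the abstract, not the paper's exact notation):

    ```latex
    \[
    F_e(t) \;=\; \frac{1}{\mu}\int_0^t \bigl(1 - F(x)\bigr)\,\mathrm{d}x,
    \qquad
    \mu \;=\; \int_0^\infty \bigl(1 - F(x)\bigr)\,\mathrm{d}x,
    \qquad
    \Lambda(t) \;=\; \omega\, F_e(t),
    \]
    ```

    where ω is the expected total number of detectable faults and μ the mean fault-detection time.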

  8. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    Science.gov (United States)

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows of better handling uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.

  9. Stochastic Differential Equation-Based Flexible Software Reliability Growth Model

    Directory of Open Access Journals (Sweden)

    P. K. Kapur

    2009-01-01

    Several software reliability growth models (SRGMs) have been developed for tracking and measuring the growth of reliability. When the size of a software system is large and the number of faults detected during the testing phase becomes large, the change in the number of faults detected and removed through each debugging becomes sufficiently small compared with the initial fault content at the beginning of the testing phase. In such a situation, we can model the software fault-detection process as a stochastic process with a continuous state space. In this paper, we propose a new software reliability growth model based on an Itô-type stochastic differential equation. We consider an SDE-based generalized Erlang model with a logistic error-detection function. The model is estimated and validated on real-life data sets cited in the literature to show its flexibility. The proposed model, integrated with the concept of the stochastic differential equation, performs comparatively better than the existing NHPP-based models.
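
    A common Itô form in this line of work treats the number of detected faults N(t) as follows; this is a standard formulation from the SDE-based SRGM literature, and the paper's exact error-detection function may differ:

    ```latex
    \[
    \mathrm{d}N(t) \;=\; b(t)\,\bigl\{a - N(t)\bigr\}\,\mathrm{d}t
                   \;+\; \sigma\,\bigl\{a - N(t)\bigr\}\,\mathrm{d}W(t),
    \qquad
    b(t) \;=\; \frac{b}{1 + c\, e^{-bt}} ,
    \]
    ```

    where a is the initial fault content, b(t) a logistic fault-detection rate, σ the magnitude of irregular fluctuations, and W(t) standard Brownian motion.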

  10. Conceptual Software Reliability Prediction Models for Nuclear Power Plant Safety Systems

    International Nuclear Information System (INIS)

    Johnson, G.; Lawrence, D.; Yu, H.

    2000-01-01

    The objective of this project is to develop a method to predict the potential reliability of software to be used in a digital instrumentation and control system. The reliability prediction is to make use of existing measures of software reliability such as those described in IEEE Std 982 and 982.2. This prediction must be of sufficient accuracy to provide a value for the uncertainty that could be used in a nuclear power plant probabilistic risk assessment (PRA). For the purposes of the project, reliability was defined to be the probability that the digital system will successfully perform its intended safety function (for the distribution of conditions under which it is expected to respond) upon demand with no unintended functions that might affect system safety. The ultimate objective is to use the identified measures to develop a method for predicting the potential quantitative reliability of a digital system. The reliability prediction models proposed in this report are conceptual in nature. That is, possible prediction techniques are proposed and trial models are built, but in order to become a useful tool for predicting reliability, the models must be tested, modified according to the results, and validated. Using methods outlined by this project, models could be constructed to develop reliability estimates for elements of software systems. This would require careful review and refinement of the models, development of model parameters from actual experience data or expert elicitation, and careful validation. By combining these reliability estimates (generated from the validated models for the constituent parts) in structural software models, the reliability of the software system could then be predicted. Modeling digital system reliability will also require that methods be developed for combining reliability estimates for hardware and software. System structural models must also be developed in order to predict system reliability based upon the reliability

  11. Procedure for Application of Software Reliability Growth Models to NPP PSA

    International Nuclear Information System (INIS)

    Son, Han Seong; Kang, Hyun Gook; Chang, Seung Cheol

    2009-01-01

    As the use of software increases at nuclear power plants (NPPs), the necessity for including software reliability and/or safety into the NPP Probabilistic Safety Assessment (PSA) rises. This work proposes an application procedure of software reliability growth models (RGMs), which are most widely used to quantify software reliability, to NPP PSA. Through the proposed procedure, it can be determined if a software reliability growth model can be applied to the NPP PSA before its real application. The procedure proposed in this work is expected to be very helpful for incorporating software into NPP PSA

  12. A SOFTWARE RELIABILITY ESTIMATION METHOD TO NUCLEAR SAFETY SOFTWARE

    Directory of Open Access Journals (Sweden)

    GEE-YONG PARK

    2014-02-01

    A method for estimating software reliability for nuclear safety software is proposed in this paper. This method is based on the software reliability growth model (SRGM), where the behavior of software failure is assumed to follow a non-homogeneous Poisson process. Two types of modeling schemes based on a particular underlying method are proposed in order to more precisely estimate and predict the number of software defects from very rare software failure data. Bayesian statistical inference is employed to estimate the model parameters by incorporating software test cases as a covariate into the model. It was identified that these models are capable of reasonably estimating the remaining number of software defects, which directly affects the reactor trip functions. The software reliability might be estimated from these modeling equations, and one approach to obtaining a software reliability value is proposed in this paper.
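
    The abstract does not give the estimation details. As a minimal sketch of Bayesian inference for an NHPP-type SRGM from sparse failure counts, the code below samples Goel-Okumoto parameters with a random-walk Metropolis step; the plain likelihood (no test-case covariate) and the data are assumptions for illustration, not the paper's model:

    ```python
    # Minimal Bayesian sketch: random-walk Metropolis sampling of
    # Goel-Okumoto parameters from interval failure counts, with vague
    # log-normal priors. All data here are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(0, 11, dtype=float)                  # interval boundaries
    counts = np.array([4, 3, 3, 2, 1, 2, 1, 0, 1, 0])  # failures per interval

    def log_post(log_a, log_b):
        a, b = np.exp(log_a), np.exp(log_b)
        m = a * (1.0 - np.exp(-b * t))                 # GO mean value function
        lam = np.maximum(np.diff(m), 1e-12)            # expected count/interval
        loglik = np.sum(counts * np.log(lam) - lam)    # Poisson log-likelihood
        logprior = -0.5 * (log_a - 3.0) ** 2 - 0.5 * log_b ** 2  # vague priors
        return loglik + logprior

    theta = np.array([np.log(20.0), np.log(0.3)])
    cur, samples = log_post(*theta), []
    for _ in range(20000):
        prop = theta + 0.1 * rng.standard_normal(2)
        cand = log_post(*prop)
        if np.log(rng.random()) < cand - cur:          # Metropolis acceptance
            theta, cur = prop, cand
        samples.append(np.exp(theta))
    a_s, b_s = np.array(samples[5000:]).T              # drop burn-in
    print(f"posterior mean a ~ {a_s.mean():.1f}, b ~ {b_s.mean():.2f}")
    print(f"P(remaining faults < 2) ~ {np.mean(a_s - counts.sum() < 2):.2f}")
    ```

    The posterior on the remaining fault count is the quantity of interest here, since it is what "directly affects the reactor trip functions" in the abstract's phrasing.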

  13. Understanding software faults and their role in software reliability modeling

    Science.gov (United States)

    Munson, John C.

    1994-01-01

    This study is a direct result of an on-going project to model the reliability of a large real-time control avionics system. In previous modeling efforts with this system, hardware reliability models were applied to model the reliability behavior of this system. In an attempt to enhance the performance of the adapted reliability models, certain software attributes were introduced in these models to control for differences between programs and also between sequential executions of the same program. As the basic nature of the software attributes that affect software reliability becomes better understood in the modeling process, this information begins to have important implications for the software development process. A significant problem arises when raw attribute measures are to be used in statistical models as predictors, for example, of measures of software quality. This is because many of the metrics are highly correlated. Consider the two attributes: lines of code, LOC, and number of program statements, Stmts. In this case, it is quite obvious that a program with a high value of LOC will probably also have a relatively high value of Stmts. In the case of low-level languages, such as assembly language programs, there might be a one-to-one relationship between the statement count and the lines of code. When there is a complete absence of linear relationship among the metrics, they are said to be orthogonal or uncorrelated. Usually the lack of orthogonality is not serious enough to affect a statistical analysis. However, for the purposes of some statistical analyses, such as multiple regression, the software metrics are so strongly interrelated that the regression results may be ambiguous and possibly even misleading. Typically, it is difficult to estimate the unique effects of individual software metrics in the regression equation. The estimated values of the coefficients are very sensitive to slight changes in the data and to the addition or deletion of variables in the
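
    A small sketch of the collinearity problem described above, with invented metric data; principal components analysis is one standard way (used in Munson's line of work) to obtain orthogonal predictors:

    ```python
    # Demonstration of metric collinearity: LOC and statement counts are
    # nearly linearly related, so PCA is applied to obtain orthogonal
    # components. All metric data below are synthetic.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    loc = rng.integers(100, 2000, size=50).astype(float)   # lines of code
    stmts = 0.8 * loc + rng.normal(0, 20, size=50)          # statements ~ LOC
    cyclo = rng.uniform(1, 40, size=50)                     # cyclomatic complexity

    X = np.column_stack([loc, stmts, cyclo])
    print("corr(LOC, Stmts) =", np.corrcoef(loc, stmts)[0, 1])   # close to 1

    # Orthogonal components suitable as regression predictors.
    Z = PCA().fit_transform(StandardScaler().fit_transform(X))
    print("corr of first two components =", np.corrcoef(Z[:, 0], Z[:, 1])[0, 1])
    ```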

  14. An architectural model for software reliability quantification: sources of data

    International Nuclear Information System (INIS)

    Smidts, C.; Sova, D.

    1999-01-01

    Software reliability assessment models in use today treat software as a monolithic block. An aversion towards 'atomic' models seems to exist. These models appear to add complexity to the modeling and to the data collection, and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements, which captures both functional and nonfunctional requirements, and on a generic classification of functions, attributes and failure modes. The model focuses on the evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information, such as results of developers' testing or historical information on a specific functionality and its attributes, and is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at a functional level as well as at the system level. In order to quantify the probability of failure (or the probability of success) of a specific element of our architecture, data are needed. The term element of the architecture is used here in its broadest sense to mean a single failure mode or a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development. Next, the mechanisms for incorporating these sources of relevant data into the FASRE model are identified.

  15. Software reliability growth model for safety systems of nuclear reactor

    International Nuclear Information System (INIS)

    Thirugnana Murthy, D.; Murali, N.; Sridevi, T.; Satya Murty, S.A.V.; Velusamy, K.

    2014-01-01

    The demand for complex software systems has increased more rapidly than the ability to design, implement, test, and maintain them, and the reliability of software systems has become a major concern for our modern society. Software failures have impaired several high-visibility programs in the space, telecommunications, defense and health industries; besides the costs involved, they set the projects back. This paper discusses the need for systematic approaches to measuring and assuring software reliability, the ways of quantifying it, and its use for the improvement and control of the software development and maintenance process, which consumes a major share of project development resources. It covers reliability models with a focus on 'reliability growth'. It includes data collection on reliability, statistical estimation and prediction, metrics and attributes of product architecture, design, software development, and the operational environment. Besides its use for operational decisions like deployment, it includes guiding software architecture, development, testing, and verification and validation. (author)

  16. Software reliability growth models with normal failure time distributions

    International Nuclear Information System (INIS)

    Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji

    2013-01-01

    This paper proposes software reliability growth models (SRGMs) in which the software failure time follows a normal distribution. The proposed model is mathematically tractable and fits software failure data well. In particular, we consider the parameter estimation algorithm for the SRGM with normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is quite simple to implement as a software application. A numerical experiment investigates the fitting ability of the SRGMs with normal distribution using 16 sets of failure time data collected from real software projects.
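
    Read in standard NHPP terms, an SRGM whose failure time follows a normal distribution has the mean value function below; this is our reconstruction from the abstract, not the paper's stated formula:

    ```latex
    \[
    \Lambda(t) \;=\; \omega\,\Phi\!\left(\frac{t-\mu}{\sigma}\right),
    \]
    ```

    where Φ is the standard normal cumulative distribution function, ω the expected total number of faults, and (μ, σ) the normal parameters estimated, e.g., by the EM algorithm.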

  17. A Method of Nuclear Software Reliability Estimation

    International Nuclear Information System (INIS)

    Park, Gee Yong; Eom, Heung Seop; Cheon, Se Woo; Jang, Seung Cheol

    2011-01-01

    A method for estimating software reliability for nuclear safety software is proposed. This method is based on the software reliability growth model (SRGM), where the behavior of software failure is assumed to follow a non-homogeneous Poisson process. Several modeling schemes are presented in order to more precisely estimate and predict the number of software defects from a small amount of software failure data. Bayesian statistical inference is employed to estimate the model parameters by incorporating the software test cases into the model. It is identified that this method is capable of accurately estimating the remaining number of software defects of the on-demand type that directly affect the safety trip functions. The software reliability can be estimated from a model equation, and one method of obtaining the software reliability is proposed.

  18. Reliability analysis of software based safety functions

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1993-05-01

    The methods applicable to the reliability analysis of software-based safety functions are described in the report. Although the safety functions also include other components, the main emphasis in the report is on the reliability analysis of software. Checklist-type qualitative reliability analysis methods, such as failure mode and effects analysis (FMEA), are described, as well as software fault tree analysis. Safety analysis based on Petri nets is discussed. The most essential concepts and models of quantitative software reliability analysis are described. The most common software metrics and their combined use with software reliability models are discussed. The application of software reliability models in PSA is evaluated; it is observed that recent software reliability models do not directly produce the estimates needed in PSA. Some recommendations and conclusions are drawn from the study, among them the need for formal methods in the analysis and development of software-based systems, the applicability of qualitative reliability engineering methods in connection with PSA, and the need to make the requirements for software-based systems and their analyses in the regulatory guides more precise. (orig.). (46 refs., 13 figs., 1 tab.)

  19. Reliability modeling of digital RPS with consideration of undetected software faults

    Energy Technology Data Exchange (ETDEWEB)

    Khalaquzzaman, M.; Lee, Seung Jun; Jung, Won Dea [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Man Cheol [Chung Ang Univ., Seoul (Korea, Republic of)

    2013-10-15

    This paper provides an overview of different software reliability methodologies and proposes a technique for estimating the reliability of an RPS with consideration of undetected software faults. Software reliability analysis of safety-critical software has been challenging despite a huge effort spent on developing a large number of software reliability models, and no consensus has yet been reached on an appropriate modeling methodology. However, it is realized that the combined application of a BBN-based SDLC fault prediction method and random black-box testing of software would provide better grounds for the reliability estimation of safety-critical software. Digitalization of the reactor protection systems of nuclear power plants was initiated several decades ago, and full digitalization has now been adopted in the new generation of NPPs around the world because digital I and C systems have many better technical features, such as easier configurability and maintainability, than analog I and C systems. Digital I and C systems are also drift-free, and the incorporation of new features is much easier. Rules and regulations for the safe operation of NPPs are established and practiced by the operators as well as the regulators of NPPs to ensure safety. The failure mechanisms of hardware and analog systems are well understood, and the risk analysis methods for these components and systems are well established. However, the digitalization of I and C systems in NPPs introduces difficulties and uncertainty into the reliability analysis methods for the digital systems/components, because software failure mechanisms are still unclear.

  20. Some remarks on software reliability

    International Nuclear Information System (INIS)

    Gonzalez Hernando, J.; Sanchez Izquierdo, J.

    1978-01-01

    The trend in modern NPPCI is toward a broad use of programmable elements. Some aspects concerning the present status of programmable digital system reliability are reported. Basic differences between the software and hardware concepts require a specific approach in all reliability topics concerning software systems. Software reliability theory was initially developed upon analogies with hardware models. At present this approach is changing and specific models are being developed. The growing use of programmable systems makes it necessary to emphasize the importance of more adequate regulatory requirements for including this technology in NPPCI. (author)

  1. A Survey of Software Reliability Modeling and Estimation

    Science.gov (United States)

    1983-09-01

    considered include: the Jelinski-Moranda Model, the Geometric Model, and Musa's Model. A Monte Carlo study of the behavior of the least squares...ceedings Number 261, 1979, pp. 34-1, 34-11. ... Sukert, Alan and Goel, Amrit, "A Guidebook for Software Reliability Assessment", 1980

  2. Estimating the Parameters of Software Reliability Growth Models Using the Grey Wolf Optimization Algorithm

    OpenAIRE

    Alaa F. Sheta; Amal Abdel-Raouf

    2016-01-01

    In this age of technology, building quality software is essential to competing in the business market. One of the major principles required for any quality and business software product for value fulfillment is reliability. Estimating software reliability early during the software development life cycle saves time and money as it prevents spending larger sums fixing a defective software product after deployment. The Software Reliability Growth Model (SRGM) can be used to predict the number of...
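
    The abstract is truncated, so the exact formulation is not shown. The sketch below is a generic grey wolf optimization loop minimizing the sum of squared errors of a Goel-Okumoto model on invented data; it illustrates the technique rather than reproducing the paper's setup:

    ```python
    # Compact grey wolf optimization (GWO) sketch applied to SRGM parameter
    # estimation. Data, bounds, and hyperparameters are invented.
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(1, 11, dtype=float)
    y = np.array([4, 7, 10, 12, 13, 15, 16, 16, 17, 17], dtype=float)

    def sse(p):
        a, b = p
        return np.sum((y - a * (1.0 - np.exp(-b * t))) ** 2)

    lo, hi = np.array([1.0, 0.01]), np.array([100.0, 2.0])   # search bounds
    wolves = lo + rng.random((20, 2)) * (hi - lo)            # 20 candidates

    for it in range(200):
        fitness = np.array([sse(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]  # three leaders
        a_coef = 2.0 - 2.0 * it / 200                         # decays 2 -> 0
        for i, w in enumerate(wolves):
            x_new = np.zeros(2)
            for leader in (alpha, beta, delta):
                A = a_coef * (2.0 * rng.random(2) - 1.0)
                C = 2.0 * rng.random(2)
                x_new += leader - A * np.abs(C * leader - w)  # encircling step
            wolves[i] = np.clip(x_new / 3.0, lo, hi)

    best = min(wolves, key=sse)
    print("best (a, b):", best, "SSE:", sse(best))
    ```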

  3. A study of operational and testing reliability in software reliability analysis

    International Nuclear Information System (INIS)

    Yang, B.; Xie, M.

    2000-01-01

    Software reliability is an important aspect of any complex equipment today. Software reliability is usually estimated based on reliability models such as nonhomogeneous Poisson process (NHPP) models. A software system improves during the testing phase, while it normally does not change in the operational phase. Depending on whether the reliability is to be predicted for the testing phase or the operational phase, different measures should be used. In this paper, two different reliability concepts, namely the operational reliability and the testing reliability, are clarified and studied in detail. These concepts have been mixed up or even misused in some of the existing literature. Using different reliability concepts leads to different estimated reliability values, which in turn lead to different reliability-based decisions. The difference between the estimated reliabilities is studied and the effect on the optimal release time is investigated.
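
    The distinction can be stated compactly for an NHPP with mean value function Λ(t); this is a standard formulation consistent with the abstract rather than the paper's exact notation. During testing the intensity keeps decreasing, while in operation the code is frozen, so the intensity stays at its release value:

    ```latex
    \[
    R_{\text{test}}(x \mid t) \;=\; \exp\!\bigl[-\bigl(\Lambda(t+x) - \Lambda(t)\bigr)\bigr],
    \qquad
    R_{\text{op}}(x \mid t) \;=\; \exp\!\bigl[-\lambda(t)\,x\bigr],
    \]
    ```

    where t is the total testing time at release and x the mission time.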

  4. Prediction of software operational reliability using testing environment factors

    International Nuclear Information System (INIS)

    Jung, Hoan Sung; Seong, Poong Hyun

    1995-01-01

    A number of software reliability models have been developed to estimate and predict software reliability. However, there are no established standard models to quantify software reliability. Most models estimate the quality of software in reliability figures such as remaining faults, failure rate, or mean time to next failure at the testing phase, and they consider these to be the ultimate indicators of software reliability. Experience shows that there is a large gap between the reliability predicted during development and the reliability measured during operation, which means that the predicted reliability, or so-called test reliability, is not the operational reliability. Customers prefer operational reliability to test reliability. In this study, we propose a method that predicts operational reliability rather than test reliability by introducing testing environment factors that quantify the changes in environments.

  5. Software reliability studies

    Science.gov (United States)

    Hoppa, Mary Ann; Wilson, Larry W.

    1994-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.

  6. Reliability of software

    International Nuclear Information System (INIS)

    Kopetz, H.

    1980-01-01

    Common factors and differences in the reliability of hardware and software; reliability increase by means of methods of software redundancy. Maintenance of software for long term operating behavior. (HP) [de]

  7. Prediction of software operational reliability using testing environment factor

    International Nuclear Information System (INIS)

    Jung, Hoan Sung

    1995-02-01

    Software reliability is especially important to customers these days. The need to quantify the software reliability of safety-critical systems has received special attention, and reliability is rated as one of software's most important attributes. Since software is an intellectual product of human activity and is logically complex, failures are inevitable. No standard models have been established to prove the correctness and to estimate the reliability of software systems by analysis and/or testing. For many years, research has focused on the quantification of software reliability, and many models have been developed to quantify it. Most software reliability models estimate the reliability with the failure data collected during the test, assuming that the test environments represent the operational profile well. The user's interest is in the operational reliability rather than in the test reliability, however. Experience shows that the operational reliability is higher than the test reliability. With the assumption that the difference in reliability results from the change of environment, testing environment factors comprising the aging factor and the coverage factor are defined in this work to predict the ultimate operational reliability from the failure data. This is done by incorporating test environments applied beyond the operational profile into the testing environment factors. Test reliability can also be estimated with this approach without any model change. The application results are close to the actual data. The approach used in this thesis is expected to be applicable to ultra-high-reliability software systems that are used in nuclear power plants, airplanes, and other safety-critical applications.

  8. Machine Learning Approach for Software Reliability Growth Modeling with Infinite Testing Effort Function

    Directory of Open Access Journals (Sweden)

    Subburaj Ramasamy

    2017-01-01

    Reliability is one of the quantifiable software quality attributes. Software Reliability Growth Models (SRGMs) are used to assess the reliability achieved at different times during testing. Traditional time-based SRGMs may not be accurate enough in situations where test effort varies with time. To overcome this lacuna, test effort is used instead of time in SRGMs. In the past, finite test effort functions were proposed, which may not be realistic, since at infinite testing time the test effort should also be infinite. Hence in this paper, we propose an infinite test effort function in conjunction with a classical Nonhomogeneous Poisson Process (NHPP) model. We use an Artificial Neural Network (ANN) to train the proposed model with software failure data. Here it is possible to obtain a large set of weights for the same model that describe the past failure data equally well. We use a machine learning approach to select the appropriate set of weights for the model that will describe both the past and the future data well. We compare the performance of the proposed model with an existing model using practical software failure data sets. The proposed log-power TEF-based SRGM describes all types of failure data equally well, improves the accuracy of parameter estimation over existing TEFs, and can be used for software release time determination as well.
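
    One way to read "infinite log-power test effort function" (an assumption on our part; the paper's exact form may differ) is an NHPP mean value function driven by an unbounded effort curve:

    ```latex
    \[
    m(t) \;=\; a\,\bigl(1 - e^{-r\,W(t)}\bigr),
    \qquad
    W(t) \;=\; \alpha\,\bigl[\ln(1+t)\bigr]^{\beta}
    \;\longrightarrow\; \infty \quad (t \to \infty),
    \]
    ```

    so that, unlike finite TEFs, the cumulative test effort W(t) does not saturate as testing continues.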

  9. Software reliability prediction using SPN | Abbasabadee | Journal of ...

    African Journals Online (AJOL)

    Software reliability prediction using SPN. ... In this research, for the computation of software reliability, a component reliability model based on SPN is proposed. An isomorphic Markov ...

  10. Discrete Address Beacon System (DABS) Software System Reliability Modeling and Prediction.

    Science.gov (United States)

    1981-06-01

    Service (ATARS) module because of its interim status. Reliability prediction models for software modules were derived and then verified by matching... System (ATCRBS) and thus can be introduced gradually and economically without major operational or procedural change. Since DABS uses monopulse... line analysis tools or are used during maintenance or pre-initialization were not modeled because they are not part of the mission software. The ATARS

  11. Prediction of safety critical software operational reliability from test reliability using testing environment factors

    International Nuclear Information System (INIS)

    Jung, Hoan Sung; Seong, Poong Hyun

    1999-01-01

    It has been a critical issue to predict the reliability of safety-critical software in the nuclear engineering area. For many years, research has focused on the quantification of software reliability, and many models have been developed to quantify it. Most software reliability models estimate the reliability with the failure data collected during the test, assuming that the test environments represent the operational profile well. The user's interest is, however, in the operational reliability rather than in the test reliability. Experience shows that the operational reliability is higher than the test reliability. With the assumption that the difference in reliability results from the change of environment from testing to operation, testing environment factors comprising the aging factor and the coverage factor are developed in this paper and used to predict the ultimate operational reliability from the failure data of the testing phase. This is done by incorporating test environments applied beyond the operational profile into the testing environment factors. The application results show that the proposed method can estimate the operational reliability accurately. (Author). 14 refs., 1 tab., 1 fig

  12. Evaluation for nuclear safety-critical software reliability of DCS

    International Nuclear Information System (INIS)

    Liu Ying

    2015-01-01

    With the development of control and information technology at NPPs, software reliability is important because software failure is usually considered one form of common-cause failure in Digital I and C Systems (DCS). The reliability analysis of DCS, particularly the qualitative and quantitative evaluation of nuclear safety-critical software reliability, is a great challenge. To solve this problem, a comprehensive evaluation model and stage evaluation models are built in this paper, and prediction and sensitivity analyses are given for the models. This can provide a basis for evaluating the reliability and safety of DCS. (author)

  13. Quantitative reliability assessment for safety critical system software

    International Nuclear Information System (INIS)

    Chung, Dae Won; Kwon, Soon Man

    2005-01-01

    An essential issue in the replacement of old analogue I and C with computer-based digital systems in nuclear power plants is the quantitative software reliability assessment. Software reliability models have been successfully applied to many industrial applications, but have the unfortunate drawback of requiring data from which one can formulate a model. Software which is developed for safety-critical applications is frequently unable to produce such data for at least two reasons. First, the software is frequently one-of-a-kind, and second, it rarely fails. Safety-critical software is normally expected to pass every unit test, producing precious little failure data. The basic premise of the rare-events approach is that well-tested software does not fail under normal routine and input signals, which means that failures must be triggered by unusual input data and computer states. The failure data found under such testing cases and the testing time for these conditions should be considered for the quantitative reliability assessment. We present the quantitative reliability assessment methodology of safety-critical software for rare failure cases in this paper.
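
    The rare-events argument has a standard quantitative counterpart (a textbook bound, not taken from this paper): if the software survives n independent demands drawn from the operational profile without failure, then with confidence 1−α the per-demand failure probability p satisfies

    ```latex
    \[
    p \;\le\; 1 - \alpha^{1/n},
    \qquad\text{equivalently}\qquad
    n \;\ge\; \frac{\ln \alpha}{\ln(1-p)} ,
    \]
    ```

    so demonstrating p ≤ 10⁻³ at 99% confidence requires roughly 4600 failure-free demands.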

  14. Reliable software systems via chains of object models with provably correct behavior

    International Nuclear Information System (INIS)

    Yakhnis, A.; Yakhnis, V.

    1996-01-01

    This work addresses the specification and design of reliable safety-critical systems, such as nuclear reactor control systems. Reliability concerns are addressed in complementary fashion by different fields. Reliability engineers build software reliability models, etc. Safety engineers focus on the prevention of potential harmful effects of systems on the environment. Software/hardware correctness engineers focus on the production of reliable systems on the basis of mathematical proofs. The authors think that correctness may be a crucial guiding issue in the development of reliable safety-critical systems. However, purely formal approaches are not adequate for the task, because they neglect the connection with the informal customer requirements. They alleviate this as follows. First, on the basis of the requirements, they build a model of the system's interactions with the environment, where the system is viewed as a black box. They will provide foundations for automated tools which will (a) demonstrate to the customer that all of the scenarios of system behavior are present in the model, (b) uncover scenarios not present in the requirements, and (c) uncover inconsistent scenarios. The developers will work with the customer until the black-box model no longer possesses scenarios (b) and (c) above. Second, the authors will build a chain of several increasingly detailed models, where the first model is the black-box model and the last model serves to automatically generate proved executable code. The behavior of each model will be proved to conform to the behavior of the previous one. They build each model as a cluster of interactive concurrent objects, thus allowing both top-down and bottom-up development.

  15. Reliable software for unreliable hardware a cross layer perspective

    CERN Document Server

    Rehman, Semeen; Henkel, Jörg

    2016-01-01

    This book describes novel software concepts to increase reliability under user-defined constraints. The authors' approach bridges, for the first time, the reliability gap between hardware and software. Readers will learn how to achieve increased soft-error resilience on unreliable hardware, while exploiting the inherent error-masking characteristics and the error-mitigation potential (for errors stemming from soft errors, aging, and process variations) at different software layers. · Provides a comprehensive overview of reliability modeling and optimization techniques at different hardware and software levels; · Describes novel optimization techniques for software cross-layer reliability, targeting unreliable hardware.

  17. Software engineering practices for control system reliability

    International Nuclear Information System (INIS)

    S. K. Schaffner; K. S. White

    1999-01-01

    This paper discusses software engineering practices used to improve control system reliability. The authors begin with a brief discussion of the Software Engineering Institute's Capability Maturity Model (CMM), which is a framework for evaluating and improving key practices used to enhance software development and maintenance capabilities. The software engineering processes developed and used by the Controls Group at the Thomas Jefferson National Accelerator Facility (Jefferson Lab), using the Experimental Physics and Industrial Control System (EPICS) for accelerator control, are described. Examples are given of how their procedures have been used to minimize control system downtime and improve reliability. While the examples are primarily drawn from their experience with EPICS, these practices are equally applicable to any control system. Specific issues addressed include resource allocation, developing reliable software lifecycle processes, and risk management.

  18. Integrating software reliability concepts into risk and reliability modeling of digital instrumentation and control systems used in nuclear power plants

    International Nuclear Information System (INIS)

    Arndt, S. A.

    2006-01-01

    As software-based digital systems are becoming more and more common in all aspects of industrial process control, including the nuclear power industry, it is vital that the current state of the art in quality, reliability, and safety analysis be advanced to support the quantitative review of these systems. Several research groups throughout the world are working on the development and assessment of software-based digital system reliability methods and their applications in the nuclear power, aerospace, transportation, and defense industries. However, these groups are hampered by the fact that software experts and probabilistic safety assessment experts view reliability engineering very differently. This paper discusses the characteristics of a common vocabulary and modeling framework. (authors)

  19. Using software metrics and software reliability models to attain acceptable quality software for flight and ground support software for avionic systems

    Science.gov (United States)

    Lawrence, Stella

    1992-01-01

    This paper is concerned with methods of measuring and developing quality software. Reliable flight and ground support software is a highly important factor in the successful operation of the space shuttle program. Reliability is probably the most important of the characteristics inherent in the concept of 'software quality'. It is the probability of failure free operation of a computer program for a specified time and environment.

  20. Application of Metric-based Software Reliability Analysis to Example Software

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Smidts, Carol

    2008-07-01

    The software reliability of the TELLERFAST ATM software is analyzed using two metric-based software reliability analysis methods: a state-transition-diagram-based method and a test-coverage-based method. The procedures for the software reliability analysis using the two methods, and the analysis results, are provided in this report. It is found that the two methods are complementary, and therefore further research on combining the two methods, so that the software reliability analysis benefits from this complementary effect, is recommended.

  1. An Intuitionistic Fuzzy Methodology for Component-Based Software Reliability Optimization

    DEFF Research Database (Denmark)

    Madsen, Henrik; Grigore, Albeanu; Popenţiuvlǎdicescu, Florin

    2012-01-01

    Component-based software development is the current methodology facilitating agility in project management, software reuse in design and implementation, promoting quality and productivity, and increasing the reliability and performability. This paper illustrates the usage of intuitionistic fuzzy...... degree approach in modelling the quality of entities in imprecise software reliability computing in order to optimize management results. Intuitionistic fuzzy optimization algorithms are proposed to be used for complex software systems reliability optimization under various constraints....

  2. Usage models in reliability assessment of software-based systems

    Energy Technology Data Exchange (ETDEWEB)

    Haapanen, P.; Pulkkinen, U. [VTT Automation, Espoo (Finland); Korhonen, J. [VTT Electronics, Espoo (Finland)

    1997-04-01

    This volume in the OHA-project report series deals with the statistical reliability assessment of software based systems on the basis of dynamic test results and qualitative evidence from the system design process. Other reports to be published later on in the OHA-project report series will handle the diversity requirements in safety critical software-based systems, generation of test data from operational profiles and handling of programmable automation in plant PSA-studies. In this report the issues related to the statistical testing and especially automated test case generation are considered. The goal is to find an efficient method for building usage models for the generation of statistically significant set of test cases and to gather practical experiences from this method by applying it in a case study. The scope of the study also includes the tool support for the method, as the models may grow quite large and complex. (32 refs., 30 figs.).
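
    A minimal Markov-chain usage model in the spirit described above: the states and transition probabilities below are invented for illustration, and random walks through the model yield statistically generated test cases:

    ```python
    # Toy usage model for statistical test-case generation. States and
    # probabilities are hypothetical, not taken from the OHA report.
    import random

    # Usage model: state -> list of (next_state, probability) pairs.
    usage_model = {
        "Idle":         [("Monitor", 0.9), ("Shutdown", 0.1)],
        "Monitor":      [("Monitor", 0.7), ("Alarm", 0.2), ("Idle", 0.1)],
        "Alarm":        [("Operator_Ack", 0.8), ("Trip", 0.2)],
        "Operator_Ack": [("Monitor", 1.0)],
        "Trip":         [("Shutdown", 1.0)],
        "Shutdown":     [],                       # terminal state
    }

    def generate_test_case(rng):
        """One random walk from 'Idle' to the terminal state."""
        state, path = "Idle", ["Idle"]
        while usage_model[state]:
            nxt, probs = zip(*usage_model[state])
            state = rng.choices(nxt, weights=probs)[0]
            path.append(state)
        return path

    rng = random.Random(3)
    for case in (generate_test_case(rng) for _ in range(3)):
        print(" -> ".join(case))
    ```

    Sampling test cases from such a chain makes the test profile statistically representative of the modeled usage, which is what allows the dynamic test results to feed a reliability estimate.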

  4. Statistical reliability assessment of software-based systems

    International Nuclear Information System (INIS)

    Korhonen, J.; Pulkkinen, U.; Haapanen, P.

    1997-01-01

    Plant vendors nowadays propose software-based systems even for the most critical safety functions. The reliability estimation of safety critical software-based systems is difficult since the conventional modeling techniques do not necessarily apply to the analysis of these systems, and the quantification seems to be impossible. Due to lack of operational experience and due to the nature of software faults, the conventional reliability estimation methods can not be applied. New methods are therefore needed for the safety assessment of software-based systems. In the research project Programmable automation systems in nuclear power plants (OHA), financed together by the Finnish Centre for Radiation and Nuclear Safety (STUK), the Ministry of Trade and Industry and the Technical Research Centre of Finland (VTT), various safety assessment methods and tools for software based systems are developed and evaluated. This volume in the OHA-report series deals with the statistical reliability assessment of software based systems on the basis of dynamic test results and qualitative evidence from the system design process. Other reports to be published later on in OHA-report series will handle the diversity requirements in safety critical software-based systems, generation of test data from operational profiles and handling of programmable automation in plant PSA-studies. (orig.) (25 refs.)

  5. Model reliability and software quality assurance in simulation of nuclear fuel waste management systems

    International Nuclear Information System (INIS)

    Oeren, T.I.; Elzas, M.S.; Sheng, G.; Wageningen Agricultural Univ., Netherlands; McMaster Univ., Hamilton, Ontario)

    1985-01-01

    As is the case with all scientific simulation studies, computerized simulation of nuclear fuel waste management systems can introduce and hide various types of errors. Frameworks to clarify issues of model reliability and software quality assurance are offered. Potential problems in the main areas of concern for reliability and quality are discussed; for example, experimental issues, decomposition, scope, fidelity, verification, requirements, testing, correctness, and robustness are treated with reference to the experience gained in the past. A list of over 80 of the most common computerization errors is provided. Software tools and techniques used to detect and correct computerization errors are discussed.

  6. software reliability: failures, consequences and improvement

    African Journals Online (AJOL)

    BARTH EKWUEME

    2009-07-16

    Jul 16, 2009 ... function of time, but it is believed that some modeling technique for software reliability is reaching propensity, by ..... February 25, 1991 during the Gulf war, the chopping ... Let us consider a few key concepts that apply to both.

  7. Review of Quantitative Software Reliability Methods

    Energy Technology Data Exchange (ETDEWEB)

    Chu, T.L.; Yue, M.; Martinez-Guridi, M.; Lehner, J.

    2010-09-17

    The current U.S. Nuclear Regulatory Commission (NRC) licensing process for digital systems rests on deterministic engineering criteria. In its 1995 probabilistic risk assessment (PRA) policy statement, the Commission encouraged the use of PRA technology in all regulatory matters to the extent supported by the state-of-the-art in PRA methods and data. Although many activities have been completed in the area of risk-informed regulation, the risk-informed analysis process for digital systems has not yet been satisfactorily developed. Since digital instrumentation and control (I&C) systems are expected to play an increasingly important role in nuclear power plant (NPP) safety, the NRC established a digital system research plan that defines a coherent set of research programs to support its regulatory needs. One of the research programs included in the NRC's digital system research plan addresses risk assessment methods and data for digital systems. Digital I&C systems have some unique characteristics, such as using software, and may have different failure causes and/or modes than analog I&C systems; hence, their incorporation into NPP PRAs entails special challenges. The objective of the NRC's digital system risk research is to identify and develop methods, analytical tools, and regulatory guidance for (1) including models of digital systems into NPP PRAs, and (2) using information on the risks of digital systems to support the NRC's risk-informed licensing and oversight activities. For several years, Brookhaven National Laboratory (BNL) has worked on NRC projects to investigate methods and tools for the probabilistic modeling of digital systems, as documented mainly in NUREG/CR-6962 and NUREG/CR-6997. However, the scope of this research principally focused on hardware failures, with limited reviews of software failure experience and software reliability methods. NRC also sponsored research at the Ohio State University investigating the modeling of

  8. Software reliability for safety-critical applications

    International Nuclear Information System (INIS)

    Everett, B.; Musa, J.

    1994-01-01

    In this talk, the authors address the question "Can Software Reliability Engineering measurement and modeling techniques be applied to safety-critical applications?" Quantitative techniques have long been applied in engineering hardware components of safety-critical applications. The authors have seen a growing acceptance and use of quantitative techniques in engineering software systems, but a continuing reluctance to use such techniques in safety-critical applications. The general case posed against using quantitative techniques for software components runs along the following lines: safety-critical applications should be engineered such that catastrophic failures occur less frequently than one in a billion hours of operation; current software measurement/modeling techniques rely on failure history data collected during testing; one would therefore have to accumulate over a billion operational hours to verify a failure rate objective of about one per billion hours.
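
    The "billion hours" objection can be made precise with the usual zero-failure bound (a standard result, not from the talk itself): if no failures are observed in T operating hours, the (1−α) upper confidence limit on the failure rate λ is

    ```latex
    \[
    \lambda \;\le\; \frac{\ln(1/\alpha)}{T},
    \]
    ```

    so verifying λ ≤ 10⁻⁹ per hour at 95% confidence (ln(1/0.05) ≈ 3) would require about 3 × 10⁹ failure-free operating hours.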

  9. Results of the EC research project REQUEST on software quality and reliability

    International Nuclear Information System (INIS)

    Kersken, M.; Saglietti, F.

    1990-01-01

    GRS work in software safety was mainly concerned with the qualitative assessment of software reliability and quality. As a supplement to these activities the work within the REQUEST project emphasized the quantitative determination of the respective parameters. The three-level quality model COQUAMO serves for the computation - and partly for the prediction - of quality factors during the software life cycle. PERFIDE controls the application of software reliability models during the test phase and in early operational life. Specific attention was paid to the assessment of fault-tolerant diverse software systems. (orig.) [de

  10. Software reliability through fault-avoidance and fault-tolerance

    Science.gov (United States)

    Vouk, Mladen A.; Mcallister, David F.

    1992-01-01

    Accomplishments in the following research areas are summarized: structure-based testing, reliability growth, and design testability with risk evaluation; reliability growth models and software risk management; and evaluation of consensus voting, consensus recovery block, and acceptance voting. Four papers generated during the reporting period are included as appendices.

  11. The problem of software reliability

    International Nuclear Information System (INIS)

    Ballard, G.M.

    1989-01-01

    The state of the art in safety and reliability assessment of the software of industrial computer systems is reviewed, and likely progress over the next few years is identified and compared with the perceived needs of the user. Some of the current projects contributing to the development of new techniques for assessing software reliability are described. One is the software test and evaluation method, which examined the faults within and between two manufacturers' specifications, faults in the code, and inconsistencies between the code and the specifications. The results are given. (author)

  12. A study of software reliability growth from the perspective of learning effects

    International Nuclear Information System (INIS)

    Chiu, K.-C.; Huang, Y.-S.; Lee, T.-Z.

    2008-01-01

    For the last three decades, reliability growth has been studied to predict software reliability in the testing/debugging phase. Most of the models developed are based on the non-homogeneous Poisson process (NHPP), and S-shaped or exponential-shaped behavior is usually assumed. Unfortunately, such models may be suitable only for particular software failure data, thus narrowing the scope of applications. Therefore, from the perspective of learning effects that can influence the process of software reliability growth, we consider that testing/debugging efficiency depends not only on the ability of the testing staff but also on the learning effect that comes from inspecting the testing/debugging code. The proposed approach can describe the S-shaped and exponential-shaped types of behavior simultaneously, and the results of the experiment show a good fit. A comparative analysis to evaluate the effectiveness of the proposed model against other software failure models was also performed. Finally, an optimal software release policy is suggested.
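
    For reference, the two behaviors contrasted above correspond to two textbook NHPP mean value functions; a minimal sketch (the standard Goel-Okumoto and delayed S-shaped forms, not the paper's own model):

        import math

        def m_exponential(t, a, b):
            # Goel-Okumoto NHPP: expected cumulative faults detected by time t.
            return a * (1.0 - math.exp(-b * t))

        def m_s_shaped(t, a, b):
            # Delayed S-shaped NHPP (Yamada): slow start, then rapid growth.
            return a * (1.0 - (1.0 + b * t) * math.exp(-b * t))

        for t in (10, 50, 100):  # hypothetical parameters: a=100 faults, b=0.05/hour
            print(t, round(m_exponential(t, 100, 0.05), 1), round(m_s_shaped(t, 100, 0.05), 1))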

  13. Reliability estimation of safety-critical software-based systems using Bayesian networks

    International Nuclear Information System (INIS)

    Helminen, A.

    2001-06-01

    Due to the nature of software faults and the way they cause system failures, new methods are needed for the safety and reliability evaluation of software-based safety-critical automation systems in nuclear power plants. In the research project 'Programmable automation system safety integrity assessment (PASSI)', belonging to the Finnish Nuclear Safety Research Programme (FINNUS, 1999-2002), various safety assessment methods and tools for software-based systems are developed and evaluated. The project is financed jointly by the Radiation and Nuclear Safety Authority (STUK), the Ministry of Trade and Industry (KTM) and the Technical Research Centre of Finland (VTT). In this report the applicability of Bayesian networks to the reliability estimation of software-based systems is studied. The applicability is evaluated by building Bayesian network models for the systems of interest and performing simulations with these models. In the simulations, hypothetical evidence is used for defining the parameter relations and for determining the models' ability to compensate for disparate evidence. Based on the experience from modelling and simulations, we conclude that Bayesian networks provide a good method for the reliability estimation of software-based systems. (orig.)
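
    As an illustration of the kind of inference such a network performs, here is a minimal two-node sketch with hypothetical numbers (my own, not the report's): updating the belief that the software meets its reliability target after observing that it passed independent V and V.

        # Node "OK" = software meets its reliability target; node "PASS" = V&V passed.
        p_ok = 0.90           # hypothetical prior belief that the software is adequate
        p_pass_ok = 0.99      # P(V&V passed | adequate)
        p_pass_not_ok = 0.30  # P(V&V passed | not adequate)

        p_pass = p_pass_ok * p_ok + p_pass_not_ok * (1.0 - p_ok)
        p_ok_given_pass = p_pass_ok * p_ok / p_pass  # Bayes' rule
        print(round(p_ok_given_pass, 4))  # ~0.9674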

  14. High level issues in reliability quantification of safety-critical software

    International Nuclear Information System (INIS)

    Kim, Man Cheol

    2012-01-01

    For the purpose of developing a consensus method for the reliability assessment of safety-critical digital instrumentation and control systems in nuclear power plants, several high-level issues in the reliability assessment of safety-critical software based on Bayesian belief network modeling and statistical testing are discussed. Related to Bayesian belief network modeling, the relation between the assessment approach and the sources of evidence, the relation between qualitative and quantitative evidence, how to consider qualitative evidence, and the cause-consequence relation are discussed. Related to statistical testing, the need to consider context-specific software failure probabilities and the infeasibility of performing a huge number of tests in the real world are discussed. The discussions in this paper are expected to provide a common basis for future discussions on the reliability assessment of safety-critical software. (author)

  15. Optimal Release Time and Sensitivity Analysis Using a New NHPP Software Reliability Model with Probability of Fault Removal Subject to Operating Environments

    Directory of Open Access Journals (Sweden)

    Kwang Yoon Song

    2018-05-01

    With the latest technological developments, the software industry is at the center of the fourth industrial revolution. In today's complex and rapidly changing environment, where software applications must be developed quickly and easily, software must keep pace with rapidly changing information technology. The basic goal of software engineering is to produce high-quality software at low cost. However, because of the complexity of software systems, software development can be time consuming and expensive. Software reliability models (SRMs) are used to estimate and predict the reliability, number of remaining faults, failure intensity, total and development cost, etc., of software. Additionally, it is very important to decide when, how, and at what cost to release the software to users. In this study, we propose a new nonhomogeneous Poisson process (NHPP) SRM with a fault detection rate function affected by the probability of fault removal on failure, subject to operating environments, and discuss the optimal release time and software reliability with the new NHPP SRM. The example results show a good fit to the proposed model, and we propose an optimal release time for a given change in the proposed model.
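
    Release-time decisions in this model family are typically made by minimising a total cost function over the release time T; a minimal sketch with the classic Goel-Okumoto form and hypothetical cost coefficients (my own illustration, not the paper's specific model):

        import math

        def m(t, a=100.0, b=0.05):
            # Goel-Okumoto mean value function: expected faults detected by time t.
            return a * (1.0 - math.exp(-b * t))

        def total_cost(T, c_test=1.0, c_op=50.0, c_time=0.5, t_life=1000.0):
            # Cost of fixing faults found in test, plus the (higher) cost of faults
            # escaping to operation, plus the cost of testing time itself.
            return c_test * m(T) + c_op * (m(t_life) - m(T)) + c_time * T

        T_opt = min(range(1, 500), key=total_cost)
        print(T_opt, round(total_cost(T_opt), 2))  # optimal release around T ~ 124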

  16. Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.

    Science.gov (United States)

    Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander

    2018-04-10

    Many models have been developed for predicting software reliability, but each is restricted to particular methodologies and a limited number of parameters. A number of techniques and methodologies may be used for reliability prediction, and careful attention must be paid to the parameters considered when estimating reliability, since the reliability of a system may increase or decrease depending on the parameters selected. There is therefore a need to identify the factors that most heavily affect the reliability of the system. Nowadays, reusability is widely used across research areas; it is the basis of Component-Based Systems (CBS), and cost, time, and human effort can be saved using Component-Based Software Engineering (CBSE) concepts. CBSE metrics may be used to assess which techniques are more suitable for estimating system reliability. Soft computing is used for small- as well as large-scale problems where accurate results are difficult to obtain due to uncertainty or randomness. Several possibilities exist for applying soft computing techniques to medicine-related problems: clinical medicine uses fuzzy logic and neural network methodology significantly, while basic medical science uses neural-network genetic algorithms most frequently and preferably, and there is considerable interest among medical scientists in applying soft computing methodologies in the genetics, physiology, radiology, cardiology, and neurology disciplines. CBSE encourages users to reuse past and existing software when making new products, providing quality with a saving of time, memory space, and money. This paper focuses on the assessment of commonly used soft computing techniques: Genetic Algorithm (GA), Neural Network (NN), Fuzzy Logic, Support Vector Machine (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC), and presents the working of these soft computing techniques.
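
    As a concrete illustration of one such technique, a minimal sketch (my own, not the paper's method) that fits Goel-Okumoto reliability-model parameters to hypothetical cumulative failure counts with differential evolution, an evolutionary optimizer:

        import numpy as np
        from scipy.optimize import differential_evolution

        t = np.array([10, 20, 30, 40, 50], dtype=float)       # test times (hypothetical)
        faults = np.array([22, 38, 52, 61, 69], dtype=float)  # cumulative faults observed

        def sse(params):
            a, b = params
            m = a * (1.0 - np.exp(-b * t))  # Goel-Okumoto mean value function
            return float(np.sum((faults - m) ** 2))

        result = differential_evolution(sse, bounds=[(1.0, 500.0), (1e-4, 1.0)], seed=1)
        a_hat, b_hat = result.x
        print(round(a_hat, 1), round(b_hat, 4))  # estimated total faults and detection rate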

  17. Safety and reliability of automatization software

    Energy Technology Data Exchange (ETDEWEB)

    Kapp, K; Daum, R [Karlsruhe Univ. (TH) (Germany, F.R.). Lehrstuhl fuer Angewandte Informatik, Transport- und Verkehrssysteme

    1979-02-01

    Automated technical systems have to meet very high requirements concerning safety, security and reliability. Today, modern computers, especially microcomputers, are used as integral parts of those systems. In consequence, computer programs must work in a safe and reliable manner. Methods are discussed which allow the construction of safe and reliable software for automatic systems, such as reactor protection systems, and which prove that the safety requirements are met. As a result it is shown that only the method of total software diversification can satisfy all safety requirements at tolerable cost. In order to achieve a high degree of reliability, structured and modular programming in conjunction with high-level programming languages are recommended.

  18. An adaptive neuro fuzzy model for estimating the reliability of component-based software systems

    Directory of Open Access Journals (Sweden)

    Kirti Tyagi

    2014-01-01

    Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs), much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable. A number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, known as an adaptive neuro fuzzy inference system (ANFIS), that is based on these two basic elements of soft computing, and we compare its performance with that of a plain fuzzy inference system (FIS) for different data sets.

  19. Method for assessing software reliability of the document management system using the RFID technology

    Directory of Open Access Journals (Sweden)

    Kiedrowicz Maciej

    2016-01-01

    The deliberations presented in this study refer to a method for assessing the software reliability of a document management system using RFID technology. A method for determining the reliability structure of the discussed software, understood as the index vector for assessing the reliability of its components, is proposed. The model of the analyzed software is the control transfer graph, in which the probability of activating individual components during the system's operation results from the so-called operational profile, which characterizes the actual working environment. The reliability structure is established as a result of the solution of a specific mathematical software task. Knowledge of the reliability structure of the software makes it possible to properly plan the time and financial expenses necessary to build software which meets the reliability requirements. The application of the presented method is illustrated by a numerical example corresponding to the software reality of the RFID document management system.

  20. Building fast, reliable, and adaptive software for computational science

    International Nuclear Information System (INIS)

    Rendell, A P; Antony, J; Armstrong, W; Janes, P; Yang, R

    2008-01-01

    Building fast, reliable, and adaptive software is a constant challenge for computational science, especially given recent developments in computer architecture. This paper outlines some of our efforts to address these three issues in the context of computational chemistry. First, a simple linear performance model that can be used to model and predict the performance of Hartree-Fock calculations is discussed. Second, the use of interval arithmetic to assess the numerical reliability of the sort of integrals used in electronic structure methods is presented. Third, the use of dynamic code modification as part of a framework to support adaptive software is outlined.

  1. Reliability Assessment Method of Reactor Protection System Software by Using V and V based Bayesian Nets

    International Nuclear Information System (INIS)

    Eom, H. S.; Park, G. Y.; Kang, H. G.; Son, H. S.

    2010-07-01

    We developed a methodology which can be practically used in the quantitative reliability assessment of safety-critical software for a protection system of nuclear power plants. The basis of the proposed methodology is the V and V process used in the nuclear industry, which means that it is not affected by specific software development environments or parameters that are necessary for the reliability calculation. The modular and formal sub-BNs in the proposed methodology are a useful tool to constitute the whole BN model for the reliability assessment of a target software. The proposed V and V based BN model estimates the defects in the software according to the performance of the V and V results and then calculates the reliability of the software. A case study was carried out to validate the proposed methodology. The target software is the RPS software which was developed by the KNICS project.

  2. A Bayesian belief nets based quantitative software reliability assessment for PSA: COTS case study

    International Nuclear Information System (INIS)

    Eom, H. S.; Sung, T. Y.; Jeong, H. S.; Park, J. H.; Kang, H. G.; Lee, K. Y.; Park, J. K

    2002-03-01

    Current reliability assessments of safety-critical software embedded in the digital systems in nuclear power plants are based on rule-based qualitative assessment methods. Recently, however, practical needs have required quantitative features of software reliability for Probabilistic Safety Assessment (PSA), which is one of the important methods used in assessing the overall safety of nuclear power plants. Conventional quantitative software reliability assessment methods are not sufficient to obtain the necessary results when assessing the safety-critical software used in nuclear power plants. Thus, current reliability assessment methods for these digital systems either exclude the software part or use arbitrary values for the software reliability in the assessment. This report discusses a Bayesian Belief Nets (BBN) based quantification method that models the current qualitative software assessment in a formal way and produces the quantitative results required for PSA. The Commercial Off-The-Shelf (COTS) software dedication process that KAERI developed was applied to the discussed BBN based method to evaluate the plausibility of the proposed method in PSA.

  3. A general software reliability process simulation technique

    Science.gov (United States)

    Tausworthe, Robert C.

    1991-01-01

    The structure and rationale of the generalized software reliability process, together with the design and implementation of a computer program that simulates this process, are described. Given assumed parameters of a particular project, the users of this program are able to generate simulated status timelines of work products, numbers of injected anomalies, and the progress of testing, fault isolation, repair, validation, and retest. Such timelines are useful in comparison with actual timeline data, for validating the project input parameters, and for providing data for researchers in reliability prediction modeling.

  4. A reliability evaluation method for NPP safety DCS application software

    International Nuclear Information System (INIS)

    Li Yunjian; Zhang Lei; Liu Yuan

    2014-01-01

    In the field of nuclear power plant (NPP) digital I and C applications, reliability evaluation for safety DCS application software is a key obstacle to be removed. In order to quantitatively evaluate the reliability of NPP safety DCS application software, this paper proposes a reliability evaluation method based on the V and V defect density characteristics of every stage of the software development life cycle, by which the operating reliability level of the software can be predicted before its delivery. This helps to improve the reliability of NPP safety-important software. (authors)

  5. The contribution of instrumentation and control software to system reliability

    International Nuclear Information System (INIS)

    Fryer, M.O.

    1984-01-01

    Advanced instrumentation and control systems are usually implemented using computers that monitor the instrumentation and issue commands to control elements. The control commands are based on instrument readings and software control logic. The reliability of the total system will be affected by the software design. When comparing software designs, an evaluation of how each design can contribute to the reliability of the system is desirable. Unfortunately, the science of reliability assessment of combined hardware and software systems is in its infancy. Reliability assessment of combined hardware/software systems is often based on over-simplified assumptions about software behavior. A new method of reliability assessment of combined software/hardware systems is presented. The method is based on a procedure called fault tree analysis which determines how component failures can contribute to system failure. Fault tree analysis is a well developed method for reliability assessment of hardware systems and produces quantitative estimates of failure probability based on component failure rates. It is shown how software control logic can be mapped into a fault tree that depicts both software and hardware contributions to system failure. The new method is important because it provides a way for quantitatively evaluating the reliability contribution of software designs. In many applications, this can help guide designers in producing safer and more reliable systems. An application to the nuclear power research industry is discussed
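
    A minimal sketch of the idea (hypothetical gates and probabilities, not the report's example): software control logic mapped onto a fault tree whose top-event probability is evaluated from basic-event probabilities, assuming independent events.

        # Hypothetical fault tree: the system fails if the sensor fails OR both the
        # software check and the hardware interlock fail (independent basic events).
        def and_gate(*probs):
            p = 1.0
            for q in probs:
                p *= q
            return p

        def or_gate(*probs):
            p = 1.0
            for q in probs:
                p *= (1.0 - q)
            return 1.0 - p

        p_sensor = 1e-4     # sensor failure probability (hypothetical)
        p_sw_check = 1e-3   # software logic fails to issue the trip (hypothetical)
        p_interlock = 1e-2  # hardware interlock failure (hypothetical)

        p_top = or_gate(p_sensor, and_gate(p_sw_check, p_interlock))
        print(f"{p_top:.3e}")  # ~1.1e-04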

  6. A quantitative calculation for software reliability evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young-Jun; Lee, Jang-Soo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    To meet these regulatory requirements, the software used in the nuclear safety field has been ensured through development, validation, safety analysis, and quality assurance activities throughout the entire life cycle, from the planning phase to the installation phase. A variety of activities, such as quality assurance activities, are also required to improve the quality of the software. However, there are limits to how far quality can be ensured in this way. Therefore, efforts to calculate the reliability of software continue, aiming at a quantitative instead of a qualitative evaluation. In this paper, we propose a quantitative calculation method for the software used for a specific operation of a digital controller in an NPP. After injecting random faults into the internal space of a developed controller and calculating the ability to detect the injected faults using diagnostic software, we can evaluate the software reliability of a digital controller in an NPP. We calculated the software reliability of the controller using a new method that differs from the traditional approach: it calculates the fault detection coverage after injecting faults into the software memory space, rather than assessing activities throughout the life cycle process. We attempt differentiation by creating a new definition of the fault, imitating software faults using the hardware, and assigning considerations and weights to the injected faults.
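
    A minimal sketch of the coverage calculation described above, with hypothetical counts and weights (the record mentions weighting the injected faults but gives no numbers): the weighted fraction of injected faults that the diagnostic software detects.

        # Hypothetical fault-injection tallies per fault class:
        # (injected, detected, weight) -- weights reflect assumed class importance.
        results = {
            "bit_flip":   (1000, 987, 0.5),
            "stuck_at":   (500, 498, 0.3),
            "addr_error": (200, 190, 0.2),
        }

        coverage = sum(w * detected / injected
                       for injected, detected, w in results.values())
        print(round(coverage, 4))  # weighted fault detection coverage, ~0.9823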

  7. STARS software tool for analysis of reliability and safety

    International Nuclear Information System (INIS)

    Poucet, A.; Guagnini, E.

    1989-01-01

    This paper reports on the STARS (Software Tool for the Analysis of Reliability and Safety) project, which aims at developing an integrated set of computer-aided reliability analysis tools for the various tasks involved in systems safety and reliability analysis, including hazard identification, qualitative analysis, and logic model construction and evaluation. Expert system technology offers the most promising perspective for developing a computer-aided reliability analysis tool. Combined with graphics and analysis capabilities, it can provide a natural, engineering-oriented environment for computer-assisted reliability and safety modelling and analysis. For hazard identification and fault tree construction, a frame/rule based expert system is used, in which the deductive (goal-driven) reasoning and the heuristics applied during manual fault tree construction are modelled. Expert systems can explain their reasoning, so that the analyst can become aware of why and how results are being obtained. Hence, the learning aspect involved in manual reliability and safety analysis can be maintained and improved.

  8. Research on the evaluation model of the software reliability in nuclear safety class digital instrumentation and control system

    International Nuclear Information System (INIS)

    Liu Ying; Yang Ming; Li Fengjun; Ma Zhanguo; Zeng Hai

    2014-01-01

    In order to analyze the software reliability (SR) of a nuclear safety class digital instrumentation and control system (D-I and C), the international software design standards were first analyzed and a framework for the standards was built; we found that D-I and C software standards should follow NUREG-0800 BTP7-14, according to the NRC NUREG-0800 review requirements. Secondly, a quantitative evaluation model of SR using a Bayesian Belief Network and thirteen sub-model frameworks were established. Thirdly, each sub-model and the weight of the corresponding indexes in the evaluation model were analyzed. Finally, a safety case was introduced. The models lay a foundation for the review and quantitative evaluation of SR in nuclear safety class D-I and C. (authors)

  9. Reliability improvement of multiversion software by exchanging modules

    International Nuclear Information System (INIS)

    Shima, Kazuyuki; Matsumoto, Ken-ichi; Torii, Koji

    1996-01-01

    In this paper, we propose a method to improve the reliability of multiversion software. In the previously proposed CER approach, checkpoints are put in the versions of a program, and errors in the versions are detected and recovered at the checkpoints. This prevents versions from failing and improves the reliability of the multiversion software. However, it has been pointed out that CER decreases the reliability of the multiversion software if the detection and recovery of errors can themselves fail. In the method proposed in this paper, versions of a program are developed following the same module specifications. When failures of versions are detected, the faulty modules are identified and replaced with other modules. This creates versions without faulty modules and improves the reliability of the multiversion software. With the proposed method, the failure probability of the multiversion software is estimated to fall to about one hundredth of the original failure probability in a case where the failure probability of each version is 0.000698, the number of versions is 5, and the number of modules is 20. (author)

  10. A hybrid approach to quantify software reliability in nuclear safety systems

    International Nuclear Information System (INIS)

    Arun Babu, P.; Senthil Kumar, C.; Murali, N.

    2012-01-01

    Highlights: ► A novel method to quantify software reliability using software verification and mutation testing in nuclear safety systems. ► Contributing factors that influence the software reliability estimate. ► An approach to help regulators verify the reliability of safety-critical software systems during the software licensing process. -- Abstract: Technological advancements have led to the use of computer-based systems in safety-critical applications. As computer-based systems are being introduced in nuclear power plants, effective and efficient methods are needed to ensure dependability and compliance with the high reliability requirements of systems important to safety. Even after several years of research, quantification of software reliability remains a controversial and unresolved issue. Also, existing approaches have assumptions and limitations which are not acceptable for safety applications. This paper proposes a theoretical approach combining software verification and mutation testing to quantify software reliability in nuclear safety systems. The theoretical results obtained suggest that software reliability depends on three factors: the test adequacy, the amount of software verification carried out, and the reusability of verified code in the software. The proposed approach may help regulators in licensing computer-based safety systems in nuclear reactors.
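
    Of the three factors above, test adequacy is commonly measured by mutation testing; a minimal sketch with hypothetical counts (not the paper's formula): the mutation score is the fraction of non-equivalent seeded faults (mutants) that the test suite detects.

        # Hypothetical mutation-testing tally: a mutant is "killed" if at least
        # one test fails on it; the score is a standard test-adequacy measure.
        mutants_total = 400
        mutants_killed = 372
        mutants_equivalent = 8  # mutants semantically identical to the original

        mutation_score = mutants_killed / (mutants_total - mutants_equivalent)
        print(round(mutation_score, 4))  # ~0.949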

  11. Development of an Environment for Software Reliability Model Selection

    Science.gov (United States)

    1992-09-01

    Research attention is now directed to other related problems such as tools for model selection, multiversion programming, and software fault tolerance modeling. Hardware can be repaired by spare modules, which is not the case for software. Preventive maintenance is very important

  12. Review of Software Reliability Assessment Methodologies for Digital I and C Software of Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jae Hyun; Lee, Seung Jun; Jung, Won Dea [KAERI, Daejeon (Korea, Republic of)

    2014-08-15

    Digital instrumentation and control (I and C) systems are increasingly being applied to current nuclear power plants (NPPs) due to their advantages: zero drift, advanced data calculation capacity, and design flexibility. Accordingly, safety issues have been raised for the software that is the main part of a digital I and C system. As with hardware components, a software failure in an NPP could lead to a large disaster; therefore, failure rate testing and reliability assessment of the software should be properly performed before it is adopted in NPPs. However, the reliability assessment of software is quite different from that of hardware, owing to the difference in nature between software and hardware. One of the most significant differences is that software failures arise from design faults, as 'error crystals', whereas hardware failures are caused by deficiencies in design, production, and maintenance. For this reason, software reliability assessment has focused on the optimal release time considering the economics. However, the safety goal and public acceptance of NPPs are so distinctive from other industries that software in NPPs depends on a quantitative reliability value rather than economics. The safety goal of NPPs compared to other industries is exceptionally high, so conventional software reliability assessment methodologies already used in other industries cannot be adjusted to the safety goal of NPPs. Thus, a new reliability assessment methodology for the software of digital I and C in NPPs needs to be developed. In this paper, existing software reliability assessment methodologies are reviewed to obtain their pros and cons, and then the usefulness of each method for the software of NPPs is assessed.

  13. Review of Software Reliability Assessment Methodologies for Digital I and C Software of Nuclear Power Plants

    International Nuclear Information System (INIS)

    Cho, Jae Hyun; Lee, Seung Jun; Jung, Won Dea

    2014-01-01

    Digital instrumentation and control (I and C) systems are increasingly being applied to current nuclear power plants (NPPs) due to their advantages: zero drift, advanced data calculation capacity, and design flexibility. Accordingly, safety issues have been raised for the software that is the main part of a digital I and C system. As with hardware components, a software failure in an NPP could lead to a large disaster; therefore, failure rate testing and reliability assessment of the software should be properly performed before it is adopted in NPPs. However, the reliability assessment of software is quite different from that of hardware, owing to the difference in nature between software and hardware. One of the most significant differences is that software failures arise from design faults, as 'error crystals', whereas hardware failures are caused by deficiencies in design, production, and maintenance. For this reason, software reliability assessment has focused on the optimal release time considering the economics. However, the safety goal and public acceptance of NPPs are so distinctive from other industries that software in NPPs depends on a quantitative reliability value rather than economics. The safety goal of NPPs compared to other industries is exceptionally high, so conventional software reliability assessment methodologies already used in other industries cannot be adjusted to the safety goal of NPPs. Thus, a new reliability assessment methodology for the software of digital I and C in NPPs needs to be developed. In this paper, existing software reliability assessment methodologies are reviewed to obtain their pros and cons, and then the usefulness of each method for the software of NPPs is assessed.

  14. Towards early software reliability prediction for computer forensic tools (case study).

    Science.gov (United States)

    Abu Talib, Manar

    2016-01-01

    Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component-based system. It is used, for instance, to analyze the reliability of the state machines of real-time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete-time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
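
    A minimal sketch of such an architecture-based computation, under assumptions of my own (three hypothetical components, independent per-visit failures, in the style of Cheung's classic model): component reliabilities scale the control-transfer probabilities, and the system reliability is the probability of reaching successful termination.

        import numpy as np

        # Hypothetical control-transfer probabilities between components 0..2
        # (component 2 terminates the run on successful completion).
        P = np.array([[0.0, 0.7, 0.3],
                      [0.2, 0.0, 0.8],
                      [0.0, 0.0, 0.0]])
        R = np.array([0.999, 0.995, 0.990])  # per-visit component reliabilities

        # M[i, j] = R[i] * P[i, j]: control moves i -> j only if i runs correctly.
        M = R[:, None] * P
        N = np.linalg.inv(np.eye(3) - M)     # sums the contributions of all paths
        system_reliability = N[0, 2] * R[2]  # reach the final component, then run it
        print(round(system_reliability, 4))  # ~0.9847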

  15. Survey of industry methods for producing highly reliable software

    International Nuclear Information System (INIS)

    Lawrence, J.D.; Persons, W.L.

    1994-11-01

    The Nuclear Reactor Regulation Office of the US Nuclear Regulatory Commission is charged with assessing the safety of new instrument and control designs for nuclear power plants which may use computer-based reactor protection systems. Lawrence Livermore National Laboratory has evaluated the latest techniques in software reliability for measurement, estimation, error detection, and prediction that can be used during the software life cycle as a means of risk assessment for reactor protection systems. One aspect of this task has been a survey of the software industry to collect information to help identify the design factors used to improve the reliability and safety of software. The intent was to discover what practices really work in industry and what design factors are used by industry to achieve highly reliable software. The results of the survey are documented in this report. Three companies participated in the survey: Computer Sciences Corporation, International Business Machines (Federal Systems Company), and TRW. Discussions were also held with NASA Software Engineering Lab/University of Maryland/CSC, and the AIAA Software Reliability Project

  16. Assessing the performance of commercial Agisoft PhotoScan software to deliver reliable data for accurate 3D modelling

    Directory of Open Access Journals (Sweden)

    Jebur Ahmed

    2018-01-01

    3D models delivered by digital photogrammetric techniques have increased massively and developed to meet the requirements of many applications. The reliability of these models basically depends on the data processing cycle and the adopted tool solution, in addition to data quality. Agisoft PhotoScan is a professional image-based 3D modelling software package, which seeks to create orderly, precise 3D content from still images. It works with arbitrary images acquired in both controlled and uncontrolled conditions. Following the recommendations of many users around the globe, Agisoft PhotoScan has become an important source for generating precise 3D data for different applications. How reliable these data are for accurate 3D modelling applications is the current question that needs an answer. Therefore, in this paper, the performance of the Agisoft PhotoScan software was assessed and analyzed to show the potential of the software for accurate 3D modelling applications. To investigate this, a study was carried out in the University of Baghdad / Al-Jaderia campus using data collected from an airborne metric camera with a 457 m flying height. The Agisoft results show potential with respect to the research objective and the dataset quality, following statistical and validation shape analysis.

  17. Software reliability and safety in nuclear reactor protection systems

    Energy Technology Data Exchange (ETDEWEB)

    Lawrence, J.D. [Lawrence Livermore National Lab., CA (United States)

    1993-11-01

    Planning the development, use and regulation of computer systems in nuclear reactor protection systems in such a way as to enhance reliability and safety is a complex issue. This report is one of a series of reports from the Computer Safety and Reliability Group, Lawrence Livermore National Laboratory, that investigates different aspects of computer software in reactor protection systems. There are two central themes in the report. First, software considerations cannot be fully understood in isolation from computer hardware and application considerations. Second, the process of engineering reliability and safety into a computer system requires activities to be carried out throughout the software life cycle. The report discusses the many activities that can be carried out during the software life cycle to improve the safety and reliability of the resulting product. The viewpoint is primarily that of the assessor, or auditor.

  18. Software reliability and safety in nuclear reactor protection systems

    International Nuclear Information System (INIS)

    Lawrence, J.D.

    1993-11-01

    Planning the development, use and regulation of computer systems in nuclear reactor protection systems in such a way as to enhance reliability and safety is a complex issue. This report is one of a series of reports from the Computer Safety and Reliability Group, Lawrence Livermore National Laboratory, that investigates different aspects of computer software in reactor protection systems. There are two central themes in the report. First, software considerations cannot be fully understood in isolation from computer hardware and application considerations. Second, the process of engineering reliability and safety into a computer system requires activities to be carried out throughout the software life cycle. The report discusses the many activities that can be carried out during the software life cycle to improve the safety and reliability of the resulting product. The viewpoint is primarily that of the assessor, or auditor.

  19. Infusing Reliability Techniques into Software Safety Analysis

    Science.gov (United States)

    Shi, Ying

    2015-01-01

    Software safety analysis for a large software-intensive system is always a challenge. Software safety practitioners need to ensure that software-related hazards are completely identified, controlled, and tracked. This paper discusses in detail how to incorporate traditional reliability techniques into the entire software safety analysis process. In addition, this paper addresses how information can be effectively shared between the various practitioners involved in the software safety analyses. The author has successfully applied the approach to several aerospace applications. Examples are provided to illustrate the key steps of the proposed approach.

  20. An integrated model for reliability estimation of digital nuclear protection system based on fault tree and software control flow methodologies

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Seong, Poong Hyun

    2000-01-01

    In the nuclear industry, the difficulty of proving the reliabilities of digital systems prohibits the widespread use of digital systems in various nuclear applications such as plant protection systems. Even though there exist a few models which are used to estimate the reliabilities of digital systems, we develop a new integrated model which is more realistic than the existing models. We divide the process of estimating the reliability of a digital system into two phases, a high-level phase and a low-level phase, with the boundary between the two phases being the reliabilities of subsystems. We apply the software control flow method to the low-level phase and fault tree analysis to the high-level phase. The application of the model to the Dynamic Safety System (DSS) shows that the estimated reliability of the system is quite reasonable and realistic.

  1. An integrated model for reliability estimation of digital nuclear protection system based on fault tree and software control flow methodologies

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Seong, Poong Hyun

    2000-01-01

    In the nuclear industry, the difficulty of proving the reliabilities of digital systems prohibits the widespread use of digital systems in various nuclear applications such as plant protection systems. Even though there exist a few models which are used to estimate the reliabilities of digital systems, we develop a new integrated model which is more realistic than the existing models. We divide the process of estimating the reliability of a digital system into two phases, a high-level phase and a low-level phase, with the boundary between the two phases being the reliabilities of subsystems. We apply the software control flow method to the low-level phase and fault tree analysis to the high-level phase. The application of the model to the dynamic safety system (DSS) shows that the estimated reliability of the system is quite reasonable and realistic. (author)

  2. Software vulnerability: Definition, modelling, and practical evaluation for E-mail transfer software

    International Nuclear Information System (INIS)

    Kimura, Mitsuhiro

    2006-01-01

    This paper proposes a method of assessing software vulnerability quantitatively. By expanding the concept of the IPO (input-program-output) model, we first define software vulnerability and construct a stochastic model. Then we evaluate the software vulnerability of the sendmail system by analyzing actual security-hole data, which were collected from its release notes. We also show the relationship between the estimated software reliability and the vulnerability of the analyzed system.

  3. RAVONSICS-challenging for assuring software reliability of nuclear I and C system

    International Nuclear Information System (INIS)

    Hai Zeng; Ming Yang; Yoshikawa, Hidekazu

    2015-01-01

    As the "central nervous system" of an NPP, highly reliable Instrumentation and Control (I and C) systems, which provide the right functions and function correctly, are always desirable not only for the end users of NPPs but also for the suppliers of I and C systems. The digitalization of nuclear I and C systems in recent years has brought many new features. On one side, digital technology provides more functionality and should be more reliable and robust; on the other side, digital technology brings new challenges for nuclear I and C systems, especially for the software running on the hardware components. The software provides flexible functionality for a nuclear I and C system, but it also brings difficulties in evaluating its reliability and safety because of the complexity of software. The reliability of software, an indispensable part of an I and C system, has an essential impact on the reliability of the whole system, and people definitely want to know what the reliability of this intangible part is. The methods used for the reliability evaluation of systems and hardware hardly work for software, because of the inherent difference in failure mechanisms between software and hardware: failure in software is systematically induced by design errors, whereas failure in hardware is randomly induced by material and production. To continue the effort on this hot topic and to try to achieve consensus on a potential methodology for software reliability evaluation, a cooperative research project called RAVONSICS (Reliability and Verification and Validation of Nuclear Safety I and C Software) is being carried out by seven Chinese partners, including universities, research institutes, a utility, a vendor, and the safety regulatory body. The objective of RAVONSICS is to bring forward methodology for software reliability evaluation and software verification techniques. RAVONSICS works cooperatively with its European sister project

  4. Theory and state-of-the-art technology of software reliability

    International Nuclear Information System (INIS)

    Suzudo, Tomoaki; Watanabe, Norio

    1999-11-01

    Since FY 1997, the Japan Atomic Energy Research Institute has been conducting a project, Study on Reliability of Digital I and C Systems. As part of the project, the methodologies and tools to improve software reliability were reviewed in order to examine the theory and the state-of-the-art technology in this field. It is surmised from the review that computerized software design and implementation tools (CASE tools), algebraic analysis to ensure consistency between the software requirement framework and its detailed design specification, and efficient test methods using the internal information of the software (white-box testing) at the validation phase, just before the completion of development, will play a key role in enhancing software reliability in the future. (author)

  5. Increasing the reliability of ecological models using modern software engineering techniques

    Science.gov (United States)

    Robert M. Scheller; Brian R. Sturtevant; Eric J. Gustafson; Brendan C. Ward; David J. Mladenoff

    2009-01-01

    Modern software development techniques are largely unknown to ecologists. Typically, ecological models and other software tools are developed for limited research purposes, and additional capabilities are added later, usually in an ad hoc manner. Modern software engineering techniques can substantially increase scientific rigor and confidence in ecological models and...

  6. The architecture of a reliable software monitoring system for embedded software systems

    International Nuclear Information System (INIS)

    Munson, J.; Krings, A.; Hiromoto, R.

    2006-01-01

    We develop the notion of a measurement-based methodology for embedded software systems to ensure the properties of reliability, survivability and security, not only under benign faults but under malicious and hazardous conditions as well. The driving force is the need to develop a dynamic run-time monitoring system for use in these embedded mission-critical systems. These systems must run reliably, must be secure, and must fail gracefully. That is, they must continue operating in the face of departures from their nominal operating scenarios, the failure of one or more system components due to normal hardware and software faults, as well as malicious acts. To ensure the integrity of embedded software systems, the activity of these systems must be monitored as they operate. For each of these systems, it is possible to establish a very succinct representation of nominal system activity. Furthermore, it is possible to detect departures from the nominal operating scenario in a timely fashion. Such departures may be due to various circumstances, e.g., an assault from an outside agent, forcing the system to operate in an off-nominal environment for which it was neither tested nor certified, or a hardware/software component that has ceased to operate in a nominal fashion. A well-designed system will have the property of graceful degradation: it must continue to run even though some of the functionality may have been lost. This involves the intelligent re-mapping of system functions; those functions that are impacted by the failure of a system component must be identified and isolated. Thus, a system must be designed so that its basic operations may be re-mapped onto the system components still operational. That is, the mission objectives of the software must be reassessed in terms of the current operational capabilities of the software system. By integrating the mechanisms to support observation and detection directly into the design methodology, we propose to shift

  7. A technical survey on issues of the quantitative evaluation of software reliability

    International Nuclear Information System (INIS)

    Park, J. K; Sung, T. Y.; Eom, H. S.; Jeong, H. S.; Park, J. H.; Kang, H. G.; Lee, K. Y.; Park, J. K.

    2000-04-01

    To develop a methodology for evaluating the reliability of the software included in digital instrumentation and control (I and C) systems, many kinds of methodologies/techniques that have been proposed in the software reliability engineering field are analyzed to identify their strong and weak points. According to the analysis results, no methodologies/techniques exist that can be directly applied to the evaluation of software reliability. Thus, additional research to combine the most appropriate methodologies/techniques from existing ones would be needed to evaluate the software reliability. (author)

  8. Model-driven software migration a methodology

    CERN Document Server

    Wagner, Christian

    2014-01-01

    Today, reliable software systems are the basis of any business or company. The continuous further development of those systems is the central component of software evolution. It requires a huge amount of time and manpower, as well as financial resources. The challenges are the size, seniority and heterogeneity of those software systems. Christian Wagner addresses software evolution: the inherent problems and uncertainties in the process. He presents a model-driven method which leads to a synchronization between source code and design. As a result, the model layer will be the central part in further evolution.

  9. Techniques to maximize software reliability in radiation fields

    International Nuclear Information System (INIS)

    Eichhorn, G.; Piercey, R.B.

    1986-01-01

    Microprocessor system failures due to memory corruption by single-event upsets (SEUs) and/or latch-up in RAM or ROM memory are common in environments where there is a high radiation flux. Traditional methods to harden microcomputer systems against SEUs and memory latch-up have usually involved expensive, large-scale hardware redundancy. Such systems offer higher reliability, but they tend to be more complex and non-standard. At the Space Astronomy Laboratory the authors have developed general programming techniques for producing software which is resistant to such memory failures. These techniques, which may be applied to standard off-the-shelf hardware as well as custom designs, include an implementation of the Maximally Redundant Software (MRS) model, error detection algorithms, and memory verification and management.
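
    A minimal sketch of this style of software redundancy (my own illustration, not the paper's MRS implementation): each critical value is stored in three independent copies and read back through a majority vote, so a single corrupted copy is outvoted and scrubbed.

        class RedundantValue:
            def __init__(self, value):
                self.copies = [value, value, value]

            def write(self, value):
                self.copies = [value, value, value]

            def read(self):
                a, b, c = self.copies
                value = a if a == b or a == c else b  # two-of-three majority vote
                self.write(value)  # scrub: restore all copies to the voted value
                return value

        v = RedundantValue(42)
        v.copies[1] = 13  # simulate a single-event upset corrupting one copy
        print(v.read())   # 42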

  10. Integrated modeling of software cost and quality

    International Nuclear Information System (INIS)

    Rone, K.Y.; Olson, K.M.

    1994-01-01

    In modeling the cost and quality of software systems, the relationship between cost and quality must be considered. This explicit relationship is dictated by the criticality of the software being developed. The balance between cost and quality is a viable software engineering trade-off throughout the life cycle. Therefore, the ability to accurately estimate the cost and quality of software systems is essential to providing reliable software on time and within budget. Software cost models relate the product error rate to the percent of the project labor that is required for independent verification and validation. The criticality of the software determines which cost model is used to estimate the labor required to develop the software. Software quality models yield an expected error discovery rate based on the software size, criticality, software development environment, and the level of competence of the project and the developers with respect to the processes being employed

  11. A dependability modeling of software under hardware faults digitized system in nuclear power plants

    International Nuclear Information System (INIS)

    Choi, Jong Gyun

    1996-02-01

    An analytic approach to the dependability evaluation of software in the operational phase is suggested in this work, with special attention to the effects of physical faults on software dependability: the physical faults considered are memory faults, and the dependability measure in question is reliability. The model is based on simple reliability theory and graph theory with the path decomposition micro model. The model represents an application software with a graph consisting of nodes and arcs that probabilistically determine the flow from node to node. Through proper transformation of nodes and arcs, the graph can be reduced to a simple two-node graph, and the software failure probability is derived from this graph. This model can be extended without modification to a software system which consists of several complete modules. The derived model is validated by computer simulation, where the software is transformed into a probabilistic control flow graph. Simulation also shows a different viewpoint of software failure behavior. Using this model, we predict the reliability of an application software and a software system in a digitized system (ILS system) in the nuclear power plant and show the sensitivity of the software reliability to the major physical parameters which affect software failure in the normal operation phase. This modeling method is particularly attractive for medium-size programs such as software used in digitized systems of

  12. Quantitative software-reliability analysis of computer codes relevant to nuclear safety

    International Nuclear Information System (INIS)

    Mueller, C.J.

    1981-12-01

    This report presents the results of the first year of an ongoing research program to determine the probability of failure characteristics of computer codes relevant to nuclear safety. An introduction to both qualitative and quantitative aspects of nuclear software is given. A mathematical framework is presented which will enable the a priori prediction of the probability of failure characteristics of a code given the proper specification of its properties. The framework consists of four parts: (1) a classification system for software errors and code failures; (2) probabilistic modeling for selected reliability characteristics; (3) multivariate regression analyses to establish predictive relationships among reliability characteristics and generic code property and development parameters; and (4) the associated information base. Preliminary data of the type needed to support the modeling and the predictions of this program are described. Illustrations of the use of the modeling are given but the results so obtained, as well as all results of code failure probabilities presented herein, are based on data which at this point are preliminary, incomplete, and possibly non-representative of codes relevant to nuclear safety

  13. The reliability and usability of district health information software ...

    African Journals Online (AJOL)

    The reliability and usability of district health information software: case studies from Tanzania. The District Health Information System (DHIS) software from the Health Information System ...

  14. Techniques, processes, and measures for software safety and reliability

    International Nuclear Information System (INIS)

    Sparkman, D.

    1992-01-01

    The purpose of this report is to provide a detailed survey of current recommended practices and measurement techniques for the development of reliable and safe software-based systems. This report is intended to assist the United States Nuclear Reactor Regulation (NRR) office in determining the importance and maturity of the available techniques and in assessing the relevance of individual standards for application to instrumentation and control systems in nuclear power generating stations. Lawrence Livermore National Laboratory (LLNL) provides technical support for the Instrumentation and Control System Branch (ICSB) of NRR in advanced instrumentation and control systems, distributed digital systems, software reliability, and the application of verification and validation for the development of software.

  15. Reliability Analysis and Optimal Release Problem Considering Maintenance Time of Software Components for an Embedded OSS Porting Phase

    Science.gov (United States)

    Tamura, Yoshinobu; Yamada, Shigeru

    OSS (open source software) systems, which serve as key components of critical infrastructures in our social life, are still ever-expanding. In particular, embedded OSS systems have been gaining a lot of attention in the embedded system area, e.g., Android, BusyBox, TRON. However, poor handling of quality problems and customer support inhibits the progress of embedded OSS. Also, it is difficult for developers to assess the reliability and portability of embedded OSS on a single-board computer. In this paper, we propose a method of software reliability assessment based on flexible hazard rates for embedded OSS. We also analyze actual data of software failure-occurrence time intervals to show numerical examples of software reliability assessment for embedded OSS. Moreover, we compare the proposed hazard rate model for embedded OSS with typical conventional hazard rate models using goodness-of-fit comparison criteria. Furthermore, we discuss the optimal software release problem for the porting phase based on the total expected software maintenance cost.
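
    A minimal sketch of such a goodness-of-fit comparison, under assumptions of my own (hypothetical inter-failure times; a constant-rate exponential hazard versus a Weibull hazard, both fitted by maximum likelihood and ranked by AIC):

        import numpy as np
        from scipy import stats

        # Hypothetical inter-failure times (hours) collected during the porting phase.
        t = np.array([3.1, 7.4, 12.9, 5.6, 21.3, 9.8, 15.2, 4.4])

        # Exponential hazard (constant failure rate): the MLE of the scale is the mean.
        ll_exp = np.sum(stats.expon.logpdf(t, scale=t.mean()))
        aic_exp = 2 * 1 - 2 * ll_exp

        # Weibull hazard (monotone increasing or decreasing), location fixed at zero.
        c, loc, scale = stats.weibull_min.fit(t, floc=0)
        ll_wei = np.sum(stats.weibull_min.logpdf(t, c, loc=loc, scale=scale))
        aic_wei = 2 * 2 - 2 * ll_wei

        print("AIC exponential:", round(aic_exp, 2), "AIC Weibull:", round(aic_wei, 2))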

  16. A study on the quantitative evaluation of the reliability for safety critical software using Bayesian belief nets

    International Nuclear Information System (INIS)

    Eom, H. S.; Jang, S. C.; Ha, J. J.

    2003-01-01

    Despite efforts to avoid undesirable risks, or at least to bring them under control, new risks that are highly difficult to manage continue to emerge from the use of new technologies, such as the use of digital instrumentation and control (I and C) components in nuclear power plants. Whenever new risk issues have arisen, we have endeavored to find the most effective ways to reduce the risks, or to allocate limited resources to do so. One of the major challenges is the reliability analysis of safety-critical software associated with digital safety systems. Although many activities such as testing and verification and validation (V and V) are carried out in the design stage of software, a process for quantitatively evaluating the reliability of safety-critical software has not yet been developed, because conventional software reliability techniques are not directly applicable to digital safety systems. This paper focuses on the applicability of Bayesian Belief Net (BBN) techniques to quantitatively estimate the reliability of safety-critical software adopted in a digital safety system. In this paper, a typical BBN model was constructed using the dedication process of the Commercial-Off-The-Shelf (COTS) software installed by KAERI. In conclusion, the adoption of the BBN technique can facilitate the process of evaluating safety-critical software reliability in nuclear power plants, and can also provide very useful information (e.g., 'what if' analysis) on software reliability from a practical viewpoint
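
    The arithmetic such a BBN performs can be shown with a deliberately tiny, hand-rolled two-node net. The structure, node names, and probabilities below are hypothetical, not KAERI's actual model.

    ```python
    # Two-node BBN sketch: belief about development/dedication quality is
    # combined with a conditional probability table to yield a probability
    # that the software meets its reliability target. Numbers are invented.

    p_process_good = 0.8                     # prior from V and V evidence
    p_ok_given = {True: 0.99, False: 0.70}   # P(software ok | process quality)

    # Marginalization: P(ok) = sum over q of P(ok | q) * P(q)
    p_ok = (p_ok_given[True] * p_process_good
            + p_ok_given[False] * (1.0 - p_process_good))

    # 'What if' analysis via Bayes' rule: belief in a good process after a failed test
    p_good_given_fail = ((1.0 - p_ok_given[True]) * p_process_good) / (1.0 - p_ok)

    print(f"P(software ok) = {p_ok:.3f}")
    print(f"P(process good | test failed) = {p_good_given_fail:.3f}")
    ```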

  17. Software Reliability Issues Concerning Large and Safety Critical Software Systems

    Science.gov (United States)

    Kamel, Khaled; Brown, Barbara

    1996-01-01

    This research was undertaken to provide NASA with a survey of state-of-the-art techniques used in industry and academia to provide safe, reliable, and maintainable software to drive large systems. Such systems must match the complexity and strict safety requirements of NASA's shuttle system. In particular, the Launch Processing System (LPS) is being considered for replacement. The LPS is responsible for monitoring and commanding the shuttle during test, repair, and launch phases. NASA built this system in the 1970s using mostly hardware techniques to provide for increased reliability, but it did so often using custom-built equipment, which has not been able to keep up with current technologies. This report surveys the major techniques used in industry and academia to ensure reliability in large and critical computer systems.

  18. Integrated Reliability Estimation of a Nuclear Maintenance Robot including a Software

    Energy Technology Data Exchange (ETDEWEB)

    Eom, Heung Seop; Kim, Jae Hee; Jeong, Kyung Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-10-15

    Conventional reliability estimation techniques such as Fault Tree Analysis (FTA), Reliability Block Diagram (RBD), Markov Model, and Event Tree Analysis (ETA) have been widely used and approved in some industries. However, there are limitations when applying them to complicated robot systems that include software, such as intelligent reactor inspection robots. Therefore, an expert's judgment plays an important role in estimating the reliability of a complex system in practice, because experts can deal with diverse evidence related to the reliability and perform an inference based on it. The method proposed in this paper combines qualitative and quantitative evidence and performs an inference like an expert. Furthermore, it does the work in a formal and quantitative way, unlike human experts, by the benefits of Bayesian Nets (BNs)

  19. Reliability assessment using Bayesian networks. Case study on quantative reliability estimation of a software-based motor protection relay

    International Nuclear Information System (INIS)

    Helminen, A.; Pulkkinen, U.

    2003-06-01

    In this report, a quantitative reliability assessment of the motor protection relay SPAM 150 C has been carried out. The assessment focuses on the methodological analysis of quantitative reliability assessment, using the software-based motor protection relay as a case study. The assessment method is based on Bayesian networks and takes full advantage of the previous work done in a project called Programmable Automation System Safety Integrity assessment (PASSI). From the results and experience gained during the work, it is justified to claim that the assessment method presented enables a flexible use of qualitative and quantitative elements of reliability-related evidence in a single reliability assessment. At the same time, the assessment method is a coherent way of reasoning about one's beliefs concerning the reliability of the system. Full advantage of the assessment method is taken when using it as a way to cultivate information related to the reliability of software-based systems. The method can also be used as a communication instrument in a licensing process for software-based systems. (orig.)

  20. Study of evaluation techniques of software configuration management and reliability

    Energy Technology Data Exchange (ETDEWEB)

    Youn, Cheong; Baek, Y. W.; Kim, H. C.; Han, H. C.; Choi, C. R. [Chungnam National Univ., Taejon (Korea, Republic of)

    2001-03-15

    Activities to assure software safety and quality must be carried out on the basis of an established software development process for digitalized nuclear plants. In particular, software testing and verification and validation (V and V) must be studied. For this purpose, methodologies and tools which can improve software quality are evaluated, and software testing, V and V, and configuration management techniques which can be applied over the software life cycle are investigated. This study establishes a guideline that can be used to assure software safety and reliability requirements in digitalized nuclear plant systems.

  1. A dependability modeling of software under memory faults for digital system in nuclear power plants

    International Nuclear Information System (INIS)

    Choi, J. G.; Seong, P. H.

    1997-01-01

    In this work, an analytic approach to the dependability of software in the operational phase is suggested, with special attention to the effects of hardware faults on software behavior. The hardware faults considered are memory faults, and the dependability measure in question is reliability. The model is based on simple reliability theory and on graph theory, which represents the software as a graph composed of nodes and arcs. Through proper transformation, the graph can be reduced to a simple two-node graph, and the software reliability is derived from this graph. Using this model, we predict the reliability of an application software in a digital system (ILS) in a nuclear power plant and show the sensitivity of the software reliability to the major physical parameters which affect software failure in the normal operation phase. We also found that the effects of hardware faults on software failure should be considered to predict software dependability accurately in the operational phase, especially for software which is executed frequently. This modeling method is particularly attractive for medium-sized programs such as microprocessor-based nuclear safety logic programs. (author)
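
    The flavor of the graph-reduction calculation can be sketched as a series reduction over node reliabilities weighted by visit counts. The node names, per-visit reliabilities, and visit counts below are invented; note how the frequently executed nodes dominate the product, which is exactly the sensitivity the abstract points out.

    ```python
    # Series reduction of a program graph to a single source-sink pair:
    # R = product over nodes of (per-visit reliability ** expected visits).
    # All values are hypothetical.

    nodes = {
        "init":    (0.99999, 1),
        "io_scan": (0.99995, 100),   # executed every cycle, dominates
        "logic":   (0.99990, 100),
        "output":  (0.99999, 1),
    }

    reliability = 1.0
    for r, visits in nodes.values():
        reliability *= r ** visits
    print(f"per-run software reliability ~ {reliability:.5f}")
    ```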

  2. Flexible, reliable software using patterns and agile development

    CERN Document Server

    Christensen, Henrik B

    2010-01-01

    …This book brings together a careful selection of topics that are relevant, indeed crucial, for developing good quality software with a carefully designed pedagogy that leads the reader through an experience of active learning. The emphasis in the content is on practical goals-how to construct reliable and flexible software systems-covering many topics that every software engineer should have studied. The emphasis in the method is on providing a practical context, hands-on projects, and guidance on process. … The text discusses not only what the end product should be like, but also how to get

  3. Study of evaluation techniques of software safety and reliability in nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Youn, Cheong; Baek, Y. W.; Kim, H. C.; Park, N. J.; Shin, C. Y. [Chungnam National Univ., Taejon (Korea, Republic of)

    1999-04-15

    The software system development process and software quality assurance activities are examined in this study. In particular, software safety and reliability requirements in nuclear power plants are investigated. For this purpose, methodologies and tools which can be applied to the software analysis, design, implementation, testing, and maintenance steps are evaluated. Necessary tasks for each step are investigated, and the duty, input, and detailed activity for each task are defined to establish a development process for high-quality software systems. This means applying the basic concepts of software engineering and the principles of system development. This study establishes a guideline that can assure software safety and reliability requirements in digitalized nuclear plant systems and that can be used by software development organizations as a guidebook for a software development process that assures software quality.

  4. Modeling reliability measurement of interface on information system: Towards the forensic of rules

    Science.gov (United States)

    Nasution, M. K. M.; Sitompul, Darwin; Harahap, Marwan

    2018-02-01

    Today almost all machines depend on software, and a software and hardware system also depends on rules, i.e., the procedures for its use. If the procedure or program can be reliably characterized by involving the concepts of graphs, logic, and probability, then regulatory strength can also be measured accordingly. Therefore, this paper initiates an enumeration model to measure the reliability of interfaces, based on the case of information systems supported by rules of use issued by the relevant agencies. The enumeration model is obtained from software reliability calculation.

  5. Software Quality Assessment Tool Based on Meta-Models

    OpenAIRE

    Doneva Rositsa; Gaftandzhieva Silvia; Doneva Zhelyana; Staevsky Nevena

    2015-01-01

    In the software industry it is indisputably essential to control the quality of produced software systems in terms of capabilities for easy maintenance, reuse, portability and others, in order to ensure reliability in software development. But it is also clear that it is very difficult to achieve such control through a ‘manual’ management of quality. There are a number of approaches for software quality assurance based typically on software quality models (e.g. ISO 9126, McCall’s, Boehm’s...

  6. From napkin sketches to reliable software

    NARCIS (Netherlands)

    Engelen, L.J.P.

    2012-01-01

    In the past few years, model-driven software engineering (MDSE) and domain-specific modeling languages (DSMLs) have received a lot of attention from both research and industry. The main goal of MDSE is generating software from models that describe systems on a high level of abstraction. DSMLs are

  7. A New Method to Detect and Correct the Critical Errors and Determine the Software-Reliability in Critical Software-System

    International Nuclear Information System (INIS)

    Krini, Ossmane; Börcsök, Josef

    2012-01-01

    In order to use electronic systems comprising software and hardware components in safety-related and highly safety-related applications, it is necessary to meet the marginal risk numbers required by standards and legislative provisions. Existing processes and mathematical models are used to verify the risk numbers. On the hardware side, various accepted mathematical models, processes, and methods exist to provide the required proof. To this day, however, there are no closed models or mathematical procedures known that allow for a dependable prediction of software reliability. This work presents a method that makes a prognosis of the number of residual critical errors in software. Conventional models lack this ability, and right now there are no methods that forecast critical errors. The new method shows that an estimate of the number of residual critical errors in software systems is possible by using a combination of prediction models, a ratio of critical errors, and the total error number. Subsequently, the critical expected-value function at any point in time can be derived from the new solution method, provided the detection rate has been calculated using an appropriate estimation method. The presented method also makes it possible to estimate the critical failure rate. The approach is modelled on a real process and therefore describes the two essential processes: detection and correction.
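
    A minimal numeric sketch of the paper's central combination, with all numbers invented: a fitted growth model supplies the expected total number of faults, and the observed ratio of critical errors scales the expected residue.

    ```python
    # Residual critical errors ~ critical-error ratio * (predicted total - detected).
    # All values are hypothetical stand-ins for a fitted prediction model.

    a_hat = 120.0            # predicted total faults from a growth model
    detected = 100           # faults detected so far
    critical_found = 8       # detected faults classified as critical

    rho = critical_found / detected           # observed critical-error ratio
    residual_total = a_hat - detected         # expected remaining faults
    residual_critical = rho * residual_total  # expected remaining critical faults
    print(f"expected residual critical errors ~ {residual_critical:.1f}")
    ```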

  8. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    Science.gov (United States)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

    The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DOD and NASA, especially as the software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of the software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development, based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.

  9. Dependability modeling and assessment in UML-based software development.

    Science.gov (United States)

    Bernardi, Simona; Merseguer, José; Petriu, Dorina C

    2012-01-01

    Assessment of software nonfunctional properties (NFP) is an important problem in software development. In the context of model-driven development, an emerging approach for the analysis of different NFPs consists of the following steps: (a) to extend the software models with annotations describing the NFP of interest; (b) to transform automatically the annotated software model to the formalism chosen for NFP analysis; (c) to analyze the formal model using existing solvers; (d) to assess the software based on the results and give feedback to designers. Such a modeling→analysis→assessment approach can be applied to any software modeling language, be it general purpose or domain specific. In this paper, we focus on UML-based development and on the dependability NFP, which encompasses reliability, availability, safety, integrity, and maintainability. The paper presents the profile used to extend UML with dependability information, the model transformation to generate a DSPN formal model, and the assessment of the system properties based on the DSPN results.

  10. Usage of Modified Heuristic Model for Determination of Software Stability

    Directory of Open Access Journals (Sweden)

    Sergey Konstantinovich Marfenko

    2013-02-01

    The subject of this paper is an analysis method for determining the stability of software against attacks on its integrity. It is suggested to use a modified heuristic model of software reliability as the mathematical basis of this method. This model is based on the classic approach, but it takes into account the impact levels of different software errors on system integrity. It allows critical characteristics of the software to be defined: the percentage of time in stable operation and the probability of failure.

  11. Optimal reliability allocation for large software projects through soft computing techniques

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albeanu, Grigore; Popentiu-Vladicescu, Florin

    2012-01-01

    or maximizing the system reliability subject to budget constraints. These kinds of optimization problems have been considered in both deterministic and stochastic frameworks in the literature. Recently, the intuitionistic-fuzzy optimization approach was considered as a successful soft computing modelling approach. Firstly, a review of existing soft computing approaches to optimization is given. The main section extends the results, considering self-organizing migrating algorithms for solving intuitionistic-fuzzy optimization problems attached to complex fault-tolerant software architectures, which proved

  12. Early experiences building a software quality prediction model

    Science.gov (United States)

    Agresti, W. W.; Evanco, W. M.; Smith, M. C.

    1990-01-01

    Early experiences building a software quality prediction model are discussed. The overall research objective is to establish a capability to project a software system's quality from an analysis of its design. The technical approach is to build multivariate models for estimating reliability and maintainability. Data from 21 Ada subsystems were analyzed to test hypotheses about various design structures leading to failure-prone or unmaintainable systems. Current design variables highlight the interconnectivity and visibility of compilation units. Other model variables provide for the effects of reusability and software changes. Reported results are preliminary because additional project data is being obtained and new hypotheses are being developed and tested. Current multivariate regression models are encouraging, explaining 60 to 80 percent of the variation in error density of the subsystems.
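
    A minimal sketch of such a multivariate regression, assuming hypothetical design measures (interconnectivity, visibility, reuse) as predictors of error density; the data and the R-squared read-out are illustrative, not the study's.

    ```python
    # Ordinary least squares: error density regressed on design measures.
    import numpy as np

    # columns: [imports (interconnectivity), visible units, fraction reused]
    X = np.array([[12, 5, 0.1], [30, 9, 0.0], [7, 3, 0.6],
                  [22, 8, 0.2], [15, 6, 0.5], [40, 12, 0.0]], dtype=float)
    y = np.array([3.1, 6.8, 1.2, 4.9, 2.4, 8.5])  # errors per KSLOC

    A = np.hstack([np.ones((len(X), 1)), X])      # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    resid = y - A @ coef
    r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    print(f"coefficients = {np.round(coef, 3)}, R^2 = {r2:.2f}")  # cf. the 60-80% reported
    ```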

  13. Software coding for reliable data communication in a reactor safety system

    International Nuclear Information System (INIS)

    Maghsoodi, R.

    1978-01-01

    A software coding method is proposed to improve the communication reliability of a microprocessor-based fast-reactor safety system. This method, which replaces the conventional coding circuitry, applies a program to code the data which is communicated between the processors via their data memories. The system requirements are studied and suitable codes are suggested. The problems associated with hardware coders and the advantages of software coding methods are discussed. The product code, which provides a faster coding time than the cyclic code, is chosen as the final code. The improvement in communication reliability is then derived for a processor and its data memory, and the result is used to calculate the reliability improvement of the processing channel as the basic unit of the safety system. (author)

  14. Space Shuttle Program Primary Avionics Software System (PASS) Success Legacy - Quality and Reliability Data

    Science.gov (United States)

    Orr, James K.; Peltier, Daryl

    2010-01-01

    This slide presentation reviews the avionics software system on board the space shuttle, with particular emphasis on quality and reliability. The Primary Avionics Software System (PASS) provides automatic and fly-by-wire control of critical shuttle systems and executes in redundant computers. Charts show the number of space shuttle flights over time, PASS's development history, and other data that point to the reliability of the system's development. The reliability of the system is also compared to its predicted reliability.

  15. Software Technology for Adaptable, Reliable Systems (STARS)

    Science.gov (United States)

    1994-03-25

    Software cost estimation models cited (with number of mentions): Timeline (3), SECOMO (3), SEER (3), GSFC Software Engineering Lab Model (1), SLIM (4), SEER-SEM (1), SPQR (2), PRICE-S (2), internally-developed models (3), APMSS (1), SASET (Software Architecture Sizing Estimating Tool) (2), MicroMan II (2), LCM (Logistics Cost Model) (2)

  16. Reliability and accuracy analysis of a new semiautomatic radiographic measurement software in adult scoliosis.

    Science.gov (United States)

    Aubin, Carl-Eric; Bellefleur, Christian; Joncas, Julie; de Lanauze, Dominic; Kadoury, Samuel; Blanke, Kathy; Parent, Stefan; Labelle, Hubert

    2011-05-20

    Radiographic software measurement analysis in adult scoliosis. To assess the accuracy as well as the intra- and interobserver reliability of measuring different indices on preoperative adult scoliosis radiographs using a novel measurement software that includes a calibration procedure and semiautomatic features to facilitate the measurement process. Scoliosis requires a careful radiographic evaluation to assess the deformity. Manual and computer radiographic measurement processes have been studied extensively to determine their reliability and reproducibility in adolescent idiopathic scoliosis. Most studies rely on comparing given measurements, which are repeated by the same user or by an expert user. A given measure with a small intra- or interobserver error might be deemed to have good repeatability, yet all measurements might not be truly accurate because the ground-truth value is often unknown. A thorough accuracy assessment of radiographic measures is necessary to assess scoliotic deformities, to compare these measures at different stages, and to permit valid multicenter studies. Thirty-four sets of adult scoliosis digital radiographs were measured twice by three independent observers using a novel radiographic measurement software that includes semiautomatic features to facilitate the measurement process. Twenty different measures taken from the Spinal Deformity Study Group radiographic measurement manual were performed on the coronal and sagittal images. Intra- and intermeasurer reliability for each measure was assessed. The accuracy of the measurement software was also assessed using a physical spine model in six different scoliotic configurations as a true reference. The majority of the measures demonstrated good to excellent intra- and intermeasurer reliability, except for sacral obliquity. The standard deviation of all the measures was very small: ≤ 4.2° for Cobb angles, ≤ 4.2° for the kyphosis, ≤ 5.7° for the lordosis, ≤ 3.9° for the pelvic angles, and

  17. Analysis and recommendations for a reliable programming of software based safety systems

    International Nuclear Information System (INIS)

    Nunez McLeod, J.; Nunez McLeod, J.E.; Rivera, S.S.

    1997-01-01

    The present paper summarizes the results of several studies performed for the development of highly reliable software on i486 microprocessors, for use in control and safety systems for nuclear power plants. The work is based on software programmed in the C language. Several recommendations oriented to high-reliability software are analyzed, relating the requirements on the high-level language to their influence at the assembler level. Several metrics are implemented that allow the quantification of the results achieved. New metrics were developed and others were adapted in order to obtain more efficient indexes for the software description. Such metrics are helpful to visualize how well the software under development conforms to the quality rules in use. A specific program developed to assist the reliability analyst in this quantification is also presented in the paper. It analyzes an executable program written in the C language, disassembling it and evaluating its internal structures. (author)

  18. Reliability and validity of the AutoCAD software method in lumbar lordosis measurement.

    Science.gov (United States)

    Letafatkar, Amir; Amirsasan, Ramin; Abdolvahabi, Zahra; Hadadnezhad, Malihe

    2011-12-01

    The aim of this study was to determine the reliability and validity of the AutoCAD software method in lumbar lordosis measurement. Fifty healthy volunteers with a mean age of 23 ± 1.80 years were enrolled. A lumbar lateral radiograph was taken of all participants, and the lordosis was measured according to the Cobb method. Afterward, the lumbar lordosis angle was measured via the AutoCAD software and flexible ruler methods. The study comprises two parts: intratester and intertester evaluations of reliability, as well as the validity of the flexible ruler and software methods. Based on the intraclass correlation coefficient, AutoCAD's reliability and validity in measuring lumbar lordosis were 0.984 and 0.962, respectively. AutoCAD was shown to be a reliable and valid method to measure lordosis. It is suggested that this method may replace those that are costly and involve health risks, such as radiography, in evaluating lumbar lordosis.
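
    For readers who want the statistics spelled out, below is a hand-rolled sketch of ICC(2,1) (two-way random effects, single measures, a common choice for such rater studies; the abstract does not state which ICC form was used) together with the standard error of measurement. The ratings matrix is hypothetical.

    ```python
    # ICC(2,1) from two-way ANOVA mean squares, plus SEM = SD * sqrt(1 - ICC).
    import numpy as np

    # rows = radiographs (subjects), columns = raters; hypothetical angles (deg)
    x = np.array([[40.1, 40.9, 39.8],
                  [25.3, 25.0, 26.1],
                  [55.2, 54.6, 55.9],
                  [33.0, 32.4, 33.8]])
    n, k = x.shape
    gm = x.mean()
    msr = k * ((x.mean(axis=1) - gm) ** 2).sum() / (n - 1)   # between subjects
    msc = n * ((x.mean(axis=0) - gm) ** 2).sum() / (k - 1)   # between raters
    sse = ((x - gm) ** 2).sum() - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))

    icc = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    sem = x.std(ddof=1) * np.sqrt(1.0 - icc)
    print(f"ICC(2,1) = {icc:.3f}, SEM = {sem:.2f} degrees")
    ```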

  19. Development of the software for the component reliability database system of Korean nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Han, Sang Hoon; Kim, Seung Hwan; Choi, Sun Young [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2002-03-01

    A study was performed to develop a system for a component reliability database, consisting of a database to store the reliability data and software to analyze the data. This system is a part of KIND (Korea Information System for Nuclear Reliability Database). An MS-SQL database is used to store the component population data, component maintenance history, and the results of reliability analysis. Two software tools were developed for the component reliability system: KIND-InfoView for data storing, retrieving and searching, and KIND-CompRel for the statistical analysis of component reliability. 4 refs., 13 figs., 7 tabs. (Author)

  20. Reliability design of continuous position radiation monitoring system software

    International Nuclear Information System (INIS)

    Kang Yuebing; Li Tiantuo; Di Yuming; Zhang Yanhong

    2004-01-01

    Reliability and stability are important technical targets for a continuous monitoring system. After analyzing the environment of the position and the structure of the system, we put forward some methods for the reliability design of the software and put them into application. Practice shows that these methods are important for improving the system's stability and reliability. (authors)

  1. OSS reliability measurement and assessment

    CERN Document Server

    Yamada, Shigeru

    2016-01-01

    This book analyses quantitative open source software (OSS) reliability assessment and its applications, focusing on three major topic areas: the Fundamentals of OSS Quality/Reliability Measurement and Assessment; the Practical Applications of OSS Reliability Modelling; and Recent Developments in OSS Reliability Modelling. Offering an ideal reference guide for graduate students and researchers in reliability for open source software (OSS) and modelling, the book introduces several methods of reliability assessment for OSS including component-oriented reliability analysis based on analytic hierarchy process (AHP), analytic network process (ANP), and non-homogeneous Poisson process (NHPP) models, the stochastic differential equation models and hazard rate models. These measurement and management technologies are essential to producing and maintaining quality/reliable systems using OSS.

  2. Automated Improvement of Software Architecture Models for Performance and Other Quality Attributes

    OpenAIRE

    Koziolek, Anne

    2013-01-01

    Quality attributes, such as performance or reliability, are crucial for the success of a software system and largely influenced by the software architecture. Their quantitative prediction supports systematic, goal-oriented software design and forms a base of an engineering approach to software design. This thesis proposes a method and tool to automatically improve component-based software architecture (CBA) models based on such quantitative quality prediction techniques.

  3. Verification and validation--The key to operating plant software reliability

    International Nuclear Information System (INIS)

    Daughtrey, H.T.; Daggett, P.W.; Schamp, C.A.

    1983-01-01

    This paper discusses the design and implementation of a verification and validation (V and V) plan for reviewing the microcomputer software developed for a Safety Parameter Display System (SPDS). Topics considered include a historical perspective on V and V, the function and significance of SPDS software, and testing. An SPDS provides information to nuclear power plant operators about the status of the plant under all operating conditions. It is determined that by implementing V and V activities throughout the development cycle, problems are less expensive to locate in the early phases of software development, problems are less expensive to fix in the early phases of software development, and a parallel V and V activity is more cost effective than a similar effort performed only at the end of software development. It is concluded that V and V is a proven tool for improving power plant software reliability

  4. Reliability model analysis and primary experimental evaluation of laser triggered pulse trigger

    International Nuclear Information System (INIS)

    Chen Debiao; Yang Xinglin; Li Yuan; Li Jin

    2012-01-01

    A high-performance pulse trigger can enhance the performance and stability of the PPS. It is necessary to evaluate the reliability of the LTGS pulse trigger, so we establish a reliability analysis model of this pulse trigger based on the CARMES software; the reliability evaluation accords with the statistical results. (authors)

  5. Reliability of infarct volumetry: Its relevance and the improvement by a software-assisted approach.

    Science.gov (United States)

    Friedländer, Felix; Bohmann, Ferdinand; Brunkhorst, Max; Chae, Ju-Hee; Devraj, Kavi; Köhler, Yvette; Kraft, Peter; Kuhn, Hannah; Lucaciu, Alexandra; Luger, Sebastian; Pfeilschifter, Waltraud; Sadler, Rebecca; Liesz, Arthur; Scholtyschik, Karolina; Stolz, Leonie; Vutukuri, Rajkumar; Brunkhorst, Robert

    2017-08-01

    Despite the efficacy of neuroprotective approaches in animal models of stroke, their translation from bench to bedside has so far failed. One reason is presumed to be the low quality of preclinical study design, leading to bias and low a priori power. In this study, we propose that the key read-out of experimental stroke studies, the volume of the ischemic damage as commonly measured by free-hand planimetry of TTC-stained brain sections, is subject to an unrecognized low inter-rater and test-retest reliability, with strong implications for statistical power and bias. As an alternative approach, we suggest a simple, open-source, software-assisted method taking advantage of automatic-thresholding techniques. The validity of, and the improvement in reliability from, the automated method for tMCAO infarct volumetry are demonstrated. In addition, we show the probable consequences of increased reliability for precision, p-values, effect inflation, and power calculation, exemplified by a systematic analysis of experimental stroke studies published in the year 2015. Our study reveals an underappreciated quality problem in translational stroke research and suggests that software-assisted infarct volumetry might help to improve reproducibility and therefore the robustness of bench-to-bedside translation.
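
    A sketch of the software-assisted idea under stated assumptions: scikit-image's Otsu threshold stands in for the automatic-thresholding step, the section image is synthetic, and the pixel calibration is invented. A real pipeline would load a scanned TTC-stained section instead.

    ```python
    # Rater-independent infarct area: threshold the pale infarct automatically
    # instead of free-hand planimetry. Synthetic image for illustration.
    import numpy as np
    from skimage.filters import threshold_otsu

    rng = np.random.default_rng(0)
    section = rng.normal(0.35, 0.05, (256, 256))  # dark, healthy tissue
    section[64:128, 64:160] += 0.4                # pale (unstained) infarct

    thr = threshold_otsu(section)                 # automatic cutoff
    infarct_mask = section > thr
    pixel_area_mm2 = 0.01                         # assumed calibration
    print(f"infarct area ~ {infarct_mask.sum() * pixel_area_mm2:.1f} mm^2")
    ```

    Because the cutoff is computed from the image histogram rather than drawn by hand, repeated runs on the same image give identical results, which is exactly the test-retest property at issue.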

  6. Application of fuzzy-MOORA method: Ranking of components for reliability estimation of component-based software systems

    Directory of Open Access Journals (Sweden)

    Zeeshan Ali Siddiqui

    2016-01-01

    The component-based software system (CBSS) development technique is an emerging discipline that promises to take software development into a new era. As hardware systems are presently being constructed from kits of parts, software systems may also be assembled from components. It is more reliable to reuse software than to create it. It is the glue code and the reliability of the individual components that contribute to the reliability of the overall system. Every component contributes to overall system reliability according to the number of times it is used; some components are of critical usage, known as the usage frequency of the component. The usage frequency determines the weight of each component, and according to these weights each component contributes to the overall reliability of the system. Therefore, a ranking of components may be obtained by analyzing their reliability impacts on the overall application. In this paper, we propose the application of fuzzy multi-objective optimization on the basis of ratio analysis (Fuzzy-MOORA). The method helps us find the most suitable alternative (software component) from a set of available feasible alternatives. It is an accurate and easy-to-understand tool for solving multi-criteria decision-making problems that have imprecise and vague evaluation data. By the use of ratio analysis, the proposed method determines the most suitable alternative among all possible alternatives, and a dimensionless measurement accomplishes the ranking of components for estimating CBSS reliability in a non-subjective way. Finally, three case studies are shown to illustrate the use of the proposed technique.
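
    The ratio-analysis core of MOORA is easy to show in crisp form; the paper's fuzzy extension replaces the crisp entries with fuzzy numbers before ranking. Components, criteria, weights, and the benefit/cost split below are all hypothetical.

    ```python
    # Crisp MOORA: vector-normalize the decision matrix, then score each
    # alternative as weighted benefits minus weighted costs, and rank.
    import numpy as np

    # rows: components C1..C4; cols: [usage frequency, reliability, coupling]
    X = np.array([[120, 0.98, 5],
                  [ 40, 0.95, 2],
                  [200, 0.90, 9],
                  [ 80, 0.99, 4]], dtype=float)
    weights = np.array([0.5, 0.3, 0.2])
    benefit = np.array([True, True, False])   # coupling counts as a cost

    N = X / np.linalg.norm(X, axis=0)         # ratio (vector) normalization
    scores = (N[:, benefit] * weights[benefit]).sum(axis=1) \
           - (N[:, ~benefit] * weights[~benefit]).sum(axis=1)
    ranking = np.argsort(-scores)             # best component first
    print("component ranking:", [f"C{i + 1}" for i in ranking])
    ```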

  7. Software reliability evaluation of digital plant protection system development process using V and V

    International Nuclear Information System (INIS)

    Lee, Na Young; Hwang, Il Soon; Seong, Seung Hwan; Oh, Seung Rok

    2001-01-01

    In the nuclear power industry, digital technology has recently been introduced for the instrumentation and control (I and C) of reactor systems. For its application to safety-critical systems such as the Reactor Protection System (RPS), a reliability assessment is indispensable. Unlike that of traditional systems, software reliability is hard to evaluate and should be evaluated throughout the development lifecycle. In the development process of the Digital Plant Protection System (DPPS), the concept of verification and validation (V and V) was introduced to assure the quality of the product; testing should also be performed to assure reliability. The verification procedure with model checking is relatively well defined; testing, however, is labor-intensive and not well organized. In this paper, we develop a methodological process that combines verification with validation test case generation. For this, we used PVS for the table specification and for the theorem proving. As a result, we could not only save time in designing test cases but also obtain a more effective and complete set of verification-related test cases. In addition, we could extract some meaningful factors useful for reliability evaluation, both from the V and V and from the verification-combined tests

  8. Testing effort dependent software reliability model for imperfect debugging process considering both detection and correction

    International Nuclear Information System (INIS)

    Peng, R.; Li, Y.F.; Zhang, W.J.; Hu, Q.P.

    2014-01-01

    This paper studies the fault detection process (FDP) and fault correction process (FCP) with the incorporation of testing effort function and imperfect debugging. In order to ensure high reliability, it is essential for software to undergo a testing phase, during which faults can be detected and corrected by debuggers. The testing resource allocation during this phase, which is usually depicted by the testing effort function, considerably influences not only the fault detection rate but also the time to correct a detected fault. In addition, testing is usually far from perfect such that new faults may be introduced. In this paper, we first show how to incorporate testing effort function and fault introduction into FDP and then develop FCP as delayed FDP with a correction effort. Various specific paired FDP and FCP models are obtained based on different assumptions of fault introduction and correction effort. An illustrative example is presented. The optimal release policy under different criteria is also discussed
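
    A hedged sketch of the paired-model structure: detection follows an NHPP whose argument is cumulative testing effort, m_d(t) = a(1 - exp(-b W(t))), and correction is modeled as detection delayed by a lag. The logistic testing-effort function, the fixed lag (standing in for the paper's more general correction-effort treatment), and all parameter values are assumptions.

    ```python
    # Testing-effort-dependent fault detection with delayed correction.
    import numpy as np

    a, b, delay = 100.0, 0.05, 2.0   # total faults, detection rate, correction lag

    def W(t, alpha=500.0, beta=0.3, t0=10.0):
        """Cumulative testing effort: a logistic testing-effort function."""
        return alpha / (1.0 + np.exp(-beta * (t - t0)))

    def m_detect(t):
        """Mean detected faults: m_d(t) = a * (1 - exp(-b (W(t) - W(0))))."""
        return a * (1.0 - np.exp(-b * (W(t) - W(0.0))))

    def m_correct(t):
        """Mean corrected faults: detection shifted by the correction lag."""
        return m_detect(max(t - delay, 0.0))

    for t in (5.0, 10.0, 20.0):
        print(f"t={t:4.1f}  detected={m_detect(t):6.2f}  corrected={m_correct(t):6.2f}")
    ```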

  9. Reliability modelling - PETROBRAS 2010 integrated gas supply chain

    Energy Technology Data Exchange (ETDEWEB)

    Faertes, Denise; Heil, Luciana; Saker, Leonardo; Vieira, Flavia; Risi, Francisco; Domingues, Joaquim; Alvarenga, Tobias; Carvalho, Eduardo; Mussel, Patricia

    2010-09-15

    The purpose of this paper is to present the innovative reliability modeling of the Petrobras 2010 integrated gas supply chain. The model represents a challenge in terms of complexity and software robustness. It was jointly developed by the PETROBRAS Gas and Power Department and Det Norske Veritas, with the objective of evaluating the security of supply of the 2010 gas network design conceived to connect the Brazilian Northeast and Southeast regions. To provide best-in-class analysis, state-of-the-art software was used to quantify the availability and efficiency of the overall network and its individual components.

  10. Test rig overview for validation and reliability testing of shutdown system software

    International Nuclear Information System (INIS)

    Zhao, M.; McDonald, A.; Dick, P.

    2007-01-01

    The test rig for Validation and Reliability Testing of shutdown system software has been upgraded from the AECL Windows-based test rig previously used for CANDU6 stations. It includes a Virtual Trip Computer, which is a software simulation of the functional specification of the trip computer, and a real-time trip computer simulator in a separate chassis, which is used during the preparation of trip computer test cases before the actual trip computers are available. This allows preparation work for Validation and Reliability Testing to be performed in advance of delivery of actual trip computers to maintain a project schedule. (author)

  11. Nurturing reliable and robust open-source scientific software

    Science.gov (United States)

    Uieda, L.; Wessel, P.

    2017-12-01

    (zenodo.org). However, citations to these sources are not always recognized when computing citation metrics. In summary, the widespread development of reliable and robust open-source software relies on the creation of formal training programs in software development best practices and the recognition of software as a valid form of scholarship.

  12. Assuring Software Reliability

    Science.gov (United States)

    2014-08-01

    technologies and processes to achieve a required level of confidence that software systems and services function in the intended manner. A security example describes a failure that took three high-voltage lines out of service and a software failure (a race condition) that disabled the computing service that notified the ... service had failed. Instead of analyzing the details of the alarm server failure, the reviewers asked why the following software assurance claim had

  13. Transparent reliability model for fault-tolerant safety systems

    International Nuclear Information System (INIS)

    Bodsberg, Lars; Hokstad, Per

    1997-01-01

    A reliability model is presented which may serve as a tool for the identification of cost-effective configurations and operating philosophies of computer-based process safety systems. The main merit of the model is the explicit relationship in the mathematical formulas between failure cause and the means used to improve system reliability, such as self-test, redundancy, preventive maintenance and corrective maintenance. A component failure taxonomy has been developed which allows the analyst to treat hardware failures, human failures, and software failures of automatic systems in an integrated manner. Furthermore, the taxonomy distinguishes between failures due to excessive environmental stresses and failures initiated by humans during engineering and operation. Attention has been given to developing a transparent model which provides predictions in good agreement with observed system performance, and which is applicable by non-experts in the field of reliability

  14. Automatically generated acceptance test: A software reliability experiment

    Science.gov (United States)

    Protzel, Peter W.

    1988-01-01

    This study presents results of a software reliability experiment investigating the feasibility of a new error detection method. The method can be used as an acceptance test and is solely based on empirical data about the behavior of internal states of a program. The experimental design uses the existing environment of a multi-version experiment previously conducted at the NASA Langley Research Center, in which the launch interceptor problem is used as a model. This allows the controlled experimental investigation of versions with well-known single and multiple faults, and the availability of an oracle permits the determination of the error detection performance of the test. Fault interaction phenomena are observed that have an amplifying effect on the number of error occurrences. Preliminary results indicate that all faults examined so far are detected by the acceptance test. This shows promise for further investigations, and for the employment of this test method on other applications.

  15. Software reliability assessment

    International Nuclear Information System (INIS)

    Barnes, M.; Bradley, P.A.; Brewer, M.A.

    1994-01-01

    The increased usage and sophistication of computers applied to real time safety-related systems in the United Kingdom has spurred on the desire to provide a standard framework within which to assess dependable computing systems. Recent accidents and ensuing legislation have acted as a catalyst in this area. One particular aspect of dependable computing systems is that of software, which is usually designed to reduce risk at the system level, but which can increase risk if it is unreliable. Various organizations have recognized the problem of assessing the risk imposed to the system by unreliable software, and have taken initial steps to develop and use such assessment frameworks. This paper relates the approach of Consultancy Services of AEA Technology in developing a framework to assess the risk imposed by unreliable software. In addition, the paper discusses the experiences gained by Consultancy Services in applying the assessment framework to commercial and research projects. The framework is applicable to software used in safety applications, including proprietary software. Although the paper is written with Nuclear Reactor Safety applications in mind, the principles discussed can be applied to safety applications in all industries

  16. Application of neural networks to software quality modeling of a very large telecommunications system.

    Science.gov (United States)

    Khoshgoftaar, T M; Allen, E B; Hudepohl, J P; Aud, S J

    1997-01-01

    Society relies on telecommunications to such an extent that telecommunications software must have high reliability. Enhanced measurement for early risk assessment of latent defects (EMERALD) is a joint project of Nortel and Bell Canada for improving the reliability of telecommunications software products. This paper reports a case study of neural-network modeling techniques developed for the EMERALD system. The resulting neural network is currently in the prototype testing phase at Nortel. Neural-network models can be used to identify fault-prone modules for extra attention early in development, and thus reduce the risk of operational problems with those modules. We modeled a subset of modules representing over seven million lines of code from a very large telecommunications software system. The set consisted of those modules reused with changes from the previous release. The dependent variable was membership in the class of fault-prone modules. The independent variables were principal components of nine measures of software design attributes. We compared the neural-network model with a nonparametric discriminant model and found the neural-network model had better predictive accuracy.
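
    A minimal sketch of the modeling setup, with synthetic data and scikit-learn standing in for the EMERALD model: design attributes are reduced to principal components, and a small neural network classifies modules as fault-prone. For brevity, the PCA is fit on all modules rather than on the training split only.

    ```python
    # Fault-proneness classification from principal components of design measures.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    design = rng.normal(size=(300, 9))                  # nine design attributes
    fault_prone = (design[:, :3].sum(axis=1)
                   + rng.normal(0, 0.5, 300)) > 1.0     # synthetic ground truth

    pcs = PCA(n_components=4).fit_transform(design)     # principal components
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(pcs[:200], fault_prone[:200])               # train on 200 modules
    print(f"holdout accuracy = {clf.score(pcs[200:], fault_prone[200:]):.2f}")
    ```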

  17. Bayesian methods in reliability

    Science.gov (United States)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompass Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.

  18. Dependability Analysis Methods For Configurable Software

    International Nuclear Information System (INIS)

    Dahll, Gustav; Pulkkinen, Urho

    1996-01-01

    Configurable software systems are systems which are built up by standard software components in the same way as a hardware system is built up by standard hardware components. Such systems are often used in the control of NPPs, also in safety related applications. A reliability analysis of such systems is therefore necessary. This report discusses what configurable software is, and what is particular with respect to reliability assessment of such software. Two very commonly used techniques in traditional reliability analysis, viz. failure mode, effect and criticality analysis (FMECA) and fault tree analysis are investigated. A real example is used to illustrate the discussed methods. Various aspects relevant to the assessment of the software reliability in such systems are discussed. Finally some models for quantitative software reliability assessment applicable on configurable software systems are described. (author)

  19. [Reliability study in the measurement of the cusp inclination angle of a chairside digital model].

    Science.gov (United States)

    Xinggang, Liu; Xiaoxian, Chen

    2018-02-01

    This study aims to evaluate the reliability of the software Picpick in measuring the cusp inclination angle of a digital model. Twenty-one trimmed models were used as experimental objects. A chairside digital impression was used to acquire 3D digital models, and the software Picpick was employed to measure the cusp inclination of these models. The measurements were repeated three times, and the results were compared with a gold standard, the manually measured cusp angle of the experimental models. The intraclass correlation coefficient (ICC) was calculated. The paired t test value of the two measurement methods was 0.91. The ICCs between the two measurement methods and across the three repeated measurements were greater than 0.9. The digital model achieved a smaller coefficient of variation (9.9%). The software Picpick is reliable for measuring the cusp inclination of a digital model.

  20. V and V-based remaining fault estimation model for safety–critical software of a nuclear power plant

    International Nuclear Information System (INIS)

    Eom, Heung-seop; Park, Gee-yong; Jang, Seung-cheol; Son, Han Seong; Kang, Hyun Gook

    2013-01-01

    Highlights: ► A software fault estimation model based on Bayesian Nets and V and V. ► Use of quantified data derived from qualitative V and V results. ► The fault insertion and elimination processes were modeled in the context of probability. ► Systematically estimates the expected number of remaining faults. -- Abstract: Quantitative software reliability measurement approaches have some limitations in demonstrating the proper level of reliability for safety-critical software. One of the more promising alternatives is the use of software development quality information. Particularly in the nuclear industry, regulatory bodies in most countries use both probabilistic and deterministic measures for ensuring the reliability of safety-grade digital computers in NPPs. The point of the deterministic criteria is to assess the whole development process and its related activities during the software development life cycle for the acceptance of safety-critical software, and software verification and validation (V and V) plays an important role in this process. In this light, we propose a V and V-based fault estimation method using Bayesian Nets to estimate the faults remaining in safety-critical software after the software development life cycle is completed. By modeling the fault insertion and elimination processes during the whole development phase, the proposed method systematically estimates the expected number of remaining faults.

  1. Path generation algorithm for UML graphic modeling of aerospace test software

    Science.gov (United States)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Chen, Chao

    2018-03-01

    Traditional aerospace software testing engineers rely on their own work experience and on communication with software development personnel to describe the software under test and to write test cases manually, which is time-consuming, inefficient, and prone to loopholes. Using the high-reliability MBT (model-based testing) tool developed by our company, a single modeling pass can automatically generate test case documents, which is efficient and accurate. Describing a process accurately with a UML model depends on the paths that can be reached, but existing path generation algorithms are either too simple, unable to combine branch paths with loop paths, or too cumbersome, generating overly complicated path arrangements that are meaningless and superfluous for aerospace software testing. Drawing on our years of aerospace experience, we developed a tailored path generation algorithm for UML graphic models of aerospace test software.
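
    The abstract does not give the algorithm itself; as an illustration of the problem it addresses, the sketch below enumerates paths in a toy control-flow graph by depth-first search with bounded loop unrolling, so that branch paths and loop paths combine without the blow-up described above.

    ```python
    # Path enumeration with bounded node revisits (illustrative, not the
    # paper's algorithm). The control-flow graph is made up.
    def paths(graph, node, goal, visits=None, path=None, max_visits=2):
        visits = dict(visits or {})
        visits[node] = visits.get(node, 0) + 1
        path = (path or []) + [node]
        if node == goal:
            yield path
            return
        for nxt in graph.get(node, []):
            if visits.get(nxt, 0) < max_visits:   # unroll each loop at most once
                yield from paths(graph, nxt, goal, visits, path, max_visits)

    # start -> check -> (body -> back to check) or -> end
    cfg = {"start": ["check"], "check": ["body", "end"], "body": ["check"]}
    for p in paths(cfg, "start", "end"):
        print(" -> ".join(p))   # prints the loop taken once, then skipped
    ```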

  2. Key attributes of the SAPHIRE risk and reliability analysis software for risk-informed probabilistic applications

    International Nuclear Information System (INIS)

    Smith, Curtis; Knudsen, James; Kvarfordt, Kellie; Wood, Ted

    2008-01-01

    The Idaho National Laboratory is a primary developer of probabilistic risk and reliability analysis (PRRA) tools, dating back over 35 years. Evolving from mainframe-based software, the current state of the practice has led to the creation of the SAPHIRE software. Currently, agencies such as the Nuclear Regulatory Commission, the National Aeronautics and Space Administration, the Department of Energy, and the Department of Defense use version 7 of the SAPHIRE software for many of their risk-informed activities. In order to better understand and appreciate the power of software as part of risk-informed applications, we need to recall that our current analysis and solution methods have built upon pioneering work done 30-40 years ago. We contrast this work with the current capabilities of the SAPHIRE analysis package. As part of this discussion, we provide information on both the typical features and the special analysis capabilities which are available. We also present the applications and results typically found with state-of-the-practice PRRA models. By providing both a high-level and a detailed look at the SAPHIRE software, we give a snapshot in time of the current use of software tools in a risk-informed decision arena

  3. Validity and reliability of balance assessment software using the Nintendo Wii balance board: usability and validation.

    Science.gov (United States)

    Park, Dae-Sung; Lee, GyuChang

    2014-06-10

    A balance test provides important information, such as a standard for judging an individual's functional recovery or predicting falls. The development of a balance-test tool that is inexpensive and widely available is needed, especially in clinical settings. The Wii Balance Board (WBB) is designed to test balance, but there is little software used in balance tests, and there are few studies on its reliability and validity. Thus, we developed balance assessment software using the Nintendo Wii Balance Board, investigated its reliability and validity, and compared it with a laboratory-grade force platform. Twenty healthy adults participated in our study, in tests of inter-rater reliability, intra-rater reliability, and concurrent validity. The tests were performed with the balance assessment software using the Nintendo Wii Balance Board and with a laboratory-grade force platform. Data such as Center of Pressure (COP) path length and COP velocity were acquired from the assessment systems. Inter-rater reliability, intra-rater reliability, and concurrent validity were analyzed by an intraclass correlation coefficient (ICC) value and a standard error of measurement (SEM). The inter-rater reliability (ICC: 0.89-0.79, SEM in path length: 7.14-1.90, SEM in velocity: 0.74-0.07), intra-rater reliability (ICC: 0.92-0.70, SEM in path length: 7.59-2.04, SEM in velocity: 0.80-0.07), and concurrent validity (ICC: 0.87-0.73, SEM in path length: 5.94-0.32, SEM in velocity: 0.62-0.08) were high in terms of COP path length and COP velocity. The balance assessment software incorporating the Nintendo Wii Balance Board was found to be a reliable assessment device. In clinical settings, the device can be remarkably inexpensive, portable, and convenient for balance assessment.
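
    The two read-outs named above are straightforward to compute from raw COP samples; in this sketch the sampling rate, the random trace, and the units are assumptions.

    ```python
    # COP path length and mean COP velocity from (x, y) samples.
    import numpy as np

    rate_hz = 100.0                                   # assumed sampling rate
    rng = np.random.default_rng(2)
    cop = np.cumsum(rng.normal(0, 0.05, size=(3000, 2)), axis=0)  # cm, 30 s

    steps = np.diff(cop, axis=0)                      # per-sample displacement
    path_length = np.hypot(steps[:, 0], steps[:, 1]).sum()        # cm
    mean_velocity = path_length / (len(cop) / rate_hz)            # cm/s
    print(f"path length = {path_length:.1f} cm, velocity = {mean_velocity:.2f} cm/s")
    ```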

  4. A Study on the Quantitative Assessment Method of Software Requirement Documents Using Software Engineering Measures and Bayesian Belief Networks

    International Nuclear Information System (INIS)

    Eom, Heung Seop; Kang, Hyun Gook; Park, Ki Hong; Kwon, Kee Choon; Chang, Seung Cheol

    2005-01-01

    One of the major challenges in using digital systems in a NPP is the reliability estimation of the safety-critical software embedded in the digital safety systems. A precise quantitative assessment of the reliability of safety-critical software is nearly impossible, since many of the aspects to be considered are of a qualitative nature and not directly measurable, yet they have to be estimated for practical use. Therefore, an expert's judgment plays an important role in estimating the reliability of the software embedded in safety-critical systems in practice, because experts can deal with all the diverse evidence relevant to the reliability and can perform an inference based on that evidence. In general, however, the experts' way of combining the diverse evidence and performing an inference is informal and qualitative, which is hard to discuss and will eventually lead to a debate about the conclusion. We have been carrying out research on a quantitative assessment of the reliability of safety-critical software using Bayesian Belief Networks (BBN). BBN has proven to be a useful modeling formalism because a user can represent a complex set of events and relationships in a fashion that can easily be interpreted by others. In previous work we assessed a software requirement specification of a reactor protection system using our BBN-based assessment model, which mainly employed an expert's subjective probabilities as inputs. In the process of assessing the software requirement documents, we found that the BBN model was excessively dependent on experts' subjective judgments in large part. Therefore, to overcome this weakness, we employed conventional software engineering measures in the BBN model, as shown in this paper. The quantitative relationship between conventional software measures and software reliability was not well identified in the past, but recently a few studies have appeared on a ranking of

  5. Presenting an evaluation model of the trauma registry software.

    Science.gov (United States)

    Asadi, Farkhondeh; Paydar, Somayeh

    2018-04-01

    Trauma accounts for roughly 10% of deaths worldwide and is considered a global concern. This problem has led healthcare policy makers and managers to adopt a basic strategy in this context. A trauma registry plays a basic and important role in decreasing mortality and disability due to traumatic injuries. Today, various software systems are designed for trauma registries. Evaluating this software improves management and increases the efficiency and effectiveness of these systems. Therefore, the aim of this study is to present an evaluation model for trauma registry software. The present study is applied research. In this study, general and specific criteria of trauma registry software were identified by reviewing the literature, including books, articles, scientific documents, authoritative websites, and related software in this domain. Based on the general and specific criteria and the related software, a model for evaluating trauma registry software was proposed. Based on the proposed model, a checklist was designed and its validity and reliability were evaluated. The model was presented, using the Delphi technique, to 12 experts and specialists. To analyze the results, an agreement coefficient of 75% was set as the threshold for applying changes. Finally, when the model was approved by the experts and professionals, the final version of the evaluation model for trauma registry software was presented. The evaluation criteria fell into two groups: 1- general criteria, 2- specific criteria. General criteria of trauma registry software were classified into four main categories: 1- usability, 2- security, 3- maintainability, and 4- interoperability. Specific criteria were divided into four main categories: 1- data submission and entry, 2- reporting, 3- quality control, 4- decision and research support. The model presented in this research introduces important general and specific criteria of trauma registry software.

  6. Survey of Bayesian belief nets for quantitative reliability assessment of safety critical software used in nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Eom, H.S.; Sung, T.Y.; Jeong, H.S.; Park, J.H.; Kang, H.G.; Lee, K.

    2001-03-01

    As part of research on the probabilistic safety assessment of safety-grade digital systems used in nuclear power plants, measures and methodologies applicable to the quantitative reliability assessment of safety-critical software were surveyed. Among the techniques proposed in the literature, we selected those that are widely used and investigated their limitations in quantitative software reliability assessment. One promising methodology from the survey is Bayesian Belief Nets (BBN), a formalism that can combine various disparate pieces of evidence relevant to reliability into a final decision under uncertainty. We therefore analyzed BBN and its application cases in the digital systems assessment area, and finally studied the possibility of applying it to the quantitative reliability assessment of safety-critical software.

  7. Survey of Bayesian belief nets for quantitative reliability assessment of safety critical software used in nuclear power plants

    International Nuclear Information System (INIS)

    Eom, H. S.; Sung, T. Y.; Jeong, H. S.; Park, J. H.; Kang, H. G.; Lee, K.

    2001-03-01

    As part of research on the probabilistic safety assessment of safety-grade digital systems used in nuclear power plants, measures and methodologies applicable to the quantitative reliability assessment of safety-critical software were surveyed. Among the techniques proposed in the literature, we selected those that are widely used and investigated their limitations in quantitative software reliability assessment. One promising methodology from the survey is Bayesian Belief Nets (BBN), a formalism that can combine various disparate pieces of evidence relevant to reliability into a final decision under uncertainty. We therefore analyzed BBN and its application cases in the digital systems assessment area, and finally studied the possibility of applying it to the quantitative reliability assessment of safety-critical software.

  8. Reliability and Validity of the Footprint Assessment Method Using Photoshop CS5 Software.

    Science.gov (United States)

    Gutiérrez-Vilahú, Lourdes; Massó-Ortigosa, Núria; Costa-Tutusaus, Lluís; Guerra-Balic, Myriam

    2015-05-01

    Several sophisticated methods of footprint analysis currently exist. However, it is sometimes useful to apply standard, well-recognized measurement methods that are quick and easy to use. We sought to assess the reliability and validity of a new method of footprint assessment in a healthy population using Photoshop CS5 software (Adobe Systems Inc, San Jose, California). Forty-two footprints, corresponding to 21 healthy individuals (11 men with a mean ± SD age of 20.45 ± 2.16 years and 10 women with a mean ± SD age of 20.00 ± 1.70 years), were analyzed. Footprints were recorded in a static bipedal standing position using optical podography and digital photography. Three trials were performed for each participant. The Hernández-Corvo, Chippaux-Smirak, and Staheli indices and the Clarke angle were calculated manually and by a computerized method using Photoshop CS5 software. Test-retest analysis was used to determine reliability. Validity was assessed with the intraclass correlation coefficient (ICC). The reliability test for all of the indices showed high values (ICC, 0.98-0.99). Moreover, the validity test clearly showed no difference between techniques (ICC, 0.99-1). The reliability and validity of a method to measure, assess, and record podometric indices using Photoshop CS5 software have been demonstrated. This provides a quick and accurate tool useful for the digital recording of morphostatic foot study parameters and their monitoring.

  9. Formal model-based development for safety-critical embedded software

    International Nuclear Information System (INIS)

    Kim, Jin Hyun; Choi, Jin Young

    2005-01-01

    Safety-critical embedded software for nuclear I and C systems is developed under safety and reliability regulation. A programmable logic controller (PLC) is a computer system for the instrumentation and control (I and C) systems of nuclear power plants. A PLC consists of various I and C logics in software, including a real-time operating system (RTOS). Hence, errors related to the RTOS should be detected and eliminated during development. In practice, the verification and validation of the RTOS is performed by testing, in which many test tasks are embedded in the RTOS and run under a test environment. But the test process alone cannot guarantee the safety and reliability of the RTOS. Therefore, in this paper, we introduce the application of formal methods to the development of software for the PLC. In particular, we apply formal methods to the development of the RTOS for the PLC, which is safety critical. In this development, we use the statecharts of I-Logix for specification, and model checking to verify the specification.
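
    To make the verification step concrete, here is a minimal sketch of explicit-state model checking in the spirit described, applied to a toy RTOS task model. The state space, transition rules, and mutual-exclusion property are all assumptions for illustration; the paper itself used I-Logix statecharts and model checking tools rather than hand-rolled search.

```python
# A minimal sketch of explicit-state model checking on a toy RTOS task
# model (assumed, not the paper's statecharts): verify that no reachable
# state ever has two tasks running at once.

def successors(state):
    """Yield successor states: a task may start, yield, block, or unblock."""
    for i, s in enumerate(state):
        if s == "ready" and "running" not in state:  # scheduler starts a task
            yield state[:i] + ("running",) + state[i + 1:]
        elif s == "running":
            yield state[:i] + ("ready",) + state[i + 1:]
            yield state[:i] + ("blocked",) + state[i + 1:]
        elif s == "blocked":
            yield state[:i] + ("ready",) + state[i + 1:]

def check_mutual_exclusion(initial):
    """Exhaustively search the reachable state space for a violation."""
    seen, frontier = {initial}, [initial]
    while frontier:
        state = frontier.pop()
        if sum(s == "running" for s in state) > 1:
            return False, state                      # counterexample found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, None

ok, cex = check_mutual_exclusion(("ready", "ready", "ready"))
print("property holds" if ok else f"violated in {cex}")
```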

  10. Formal model-based development for safety-critical embedded software

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Hyun; Choi, Jin Young [Korea University, Seoul (Korea, Republic of)]

    2005-11-15

    Safety-critical embedded software for nuclear I and C systems is developed under safety and reliability regulation. A programmable logic controller (PLC) is a computer system for the instrumentation and control (I and C) systems of nuclear power plants. A PLC consists of various I and C logics in software, including a real-time operating system (RTOS). Hence, errors related to the RTOS should be detected and eliminated during development. In practice, the verification and validation of the RTOS is performed by testing, in which many test tasks are embedded in the RTOS and run under a test environment. But the test process alone cannot guarantee the safety and reliability of the RTOS. Therefore, in this paper, we introduce the application of formal methods to the development of software for the PLC. In particular, we apply formal methods to the development of the RTOS for the PLC, which is safety critical. In this development, we use the statecharts of I-Logix for specification, and model checking to verify the specification.

  11. Inter- and Intrarater Reliability Using Different Software Versions of E4D Compare in Dental Education.

    Science.gov (United States)

    Callan, Richard S; Cooper, Jeril R; Young, Nancy B; Mollica, Anthony G; Furness, Alan R; Looney, Stephen W

    2015-06-01

    The problems associated with intra- and interexaminer reliability when assessing preclinical performance continue to hinder dental educators' ability to provide accurate and meaningful feedback to students. Many studies have been conducted to evaluate the validity of utilizing various technologies to assist educators in achieving that goal. The purpose of this study was to compare two different versions of E4D Compare software to determine if either could be expected to deliver consistent and reliable comparative results, independent of the individual utilizing the technology. Five faculty members obtained E4D digital images of students' attempts (sample model) at ideal gold crown preparations for tooth #30 performed on typodont teeth. These images were compared to an ideal (master model) preparation utilizing two versions of E4D Compare software. The percent correlations between and within these faculty members were recorded and averaged. The intraclass correlation coefficient was used to measure both inter- and intrarater agreement among the examiners. The study found that using the older version of E4D Compare did not result in acceptable intra- or interrater agreement among the examiners. However, the newer version of E4D Compare, when combined with the Nevo scanner, resulted in a remarkable degree of agreement both between and within the examiners. These results suggest that consistent and reliable results can be expected when utilizing this technology under the protocol described in this study.

  12. Application of a methodology for the development and validation of reliable process control software

    International Nuclear Information System (INIS)

    Ramamoorthy, C.V.; Mok, Y.R.; Bastani, F.B.; Chin, G.

    1980-01-01

    The necessity of a good methodology for the development of reliable software, especially with respect to the final software validation and testing activities, is discussed. A formal specification development and validation methodology is proposed. This methodology has been applied to the development and validation of pilot software incorporating typical features of critical software for nuclear power plant safety protection. The main features of the approach include the use of a formal specification language and the independent development of two sets of specifications. 1 ref

  13. BBN based Quantitative Assessment of Software Design Specification

    International Nuclear Information System (INIS)

    Eom, Heung-Seop; Park, Gee-Yong; Kang, Hyun-Gook; Kwon, Kee-Choon; Chang, Seung-Cheol

    2007-01-01

    Probabilistic Safety Assessment (PSA), one of the important methods for assessing the overall safety of a nuclear power plant (NPP), requires quantitative reliability information for safety-critical software, but conventional reliability assessment methods cannot provide enough information for the PSA of an NPP. Therefore, current PSAs that include safety-critical software usually either do not consider the reliability of the software or use arbitrary values for it. To resolve this situation, this paper proposes a method that can produce quantitative reliability information on safety-critical software for PSA by making use of Bayesian Belief Networks (BBN). BBN has generally been used to model uncertain systems in many research fields, including the safety assessment of software. The proposed method was constructed by utilizing BBN, which can combine the qualitative and quantitative evidence relevant to the reliability of safety-critical software. The constructed BBN model can infer a conclusion in a formal and quantitative way. A case study was carried out with the proposed method to assess the quality of the software design specification (SDS) of safety-critical software that will be embedded in a reactor protection system. The intermediate V and V results of the software design specification were used as inputs to the BBN model.

  14. Dynamic Software Testing Models with Probabilistic Parameters for Fault Detection and Erlang Distribution for Fault Resolution Duration

    Directory of Open Access Journals (Sweden)

    A. D. Khomonenko

    2016-07-01

    Subject of Research. Software reliability and test planning models are studied, taking into account the probabilistic nature of error detection and discovery. Modeling of software testing enables planning of resources and final quality at early stages of project execution. Methods. Two dynamic models of testing processes (strategies) are suggested, using an error detection probability for each software module. The Erlang distribution is used to approximate arbitrary distributions of fault resolution duration, and the exponential distribution is used to approximate fault discovery. For each strategy, modified labeled graphs are built, along with differential equation systems and their numerical solutions. The latter make it possible to compute probabilistic characteristics of the test processes and states: state probabilities, distribution functions for fault detection and elimination, mathematical expectations of random variables, and the number of detected or fixed errors. Results. Probabilistic characteristics for software development projects were calculated using the suggested models. The strategies were compared by their quality indexes, and the debugging time required to achieve specified quality goals was calculated. The calculation results are used for time and resource planning in new projects. Practical Relevance. The proposed models make it possible to use reliability estimates for each individual module. The Erlang approximation removes restrictions on the use of arbitrary time distributions for fault resolution duration. It improves the accuracy of software test process modeling and helps to take into account the viability (power) of the tests. With these models one can search for ways to improve software reliability by generating tests that detect errors with the highest probability.
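
    A minimal sampling sketch of the two timing assumptions named in this record: exponentially distributed fault-detection times and Erlang-distributed fault-resolution durations. All rates and counts are invented, and the paper's actual models solve differential equation systems over labeled graphs rather than sampling.

```python
# A minimal sampling sketch: exponential fault-detection times plus
# Erlang fault-resolution durations. Rates and counts are assumed.
import numpy as np

rng = np.random.default_rng(0)

n_faults = 1000            # hypothetical latent faults
detect_rate = 0.5          # detections per day (assumed)
k, resolve_rate = 3, 1.2   # Erlang shape and rate for resolution (assumed)

detect_times = rng.exponential(1.0 / detect_rate, n_faults)
# Erlang(k, rate) is a Gamma distribution with integer shape k.
resolve_done = detect_times + rng.gamma(k, 1.0 / resolve_rate, n_faults)

horizon = 10.0             # days of testing
print(f"detected by day {horizon:.0f}: {(detect_times <= horizon).sum()}")
print(f"fixed    by day {horizon:.0f}: {(resolve_done <= horizon).sum()}")
```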

  15. Modeling High-Power Accelerators Reliability - SNS LINAC (SNS-ORNL); MAX LINAC (MYRRHA)

    International Nuclear Information System (INIS)

    Pitigoi, A. E.; Fernandez Ramos, P.

    2013-01-01

    Improving reliability has recently become a very important objective in the field of particle accelerators. The particle accelerators in operation are constantly undergoing modifications, and improvements are implemented using new technologies, more reliable components, or redundant schemes (to obtain more reliability, robustness, power, etc.). A reliability model of the SNS (Spallation Neutron Source) LINAC has been developed within the MAX project, and an analysis of the accelerator systems' reliability has been performed using the Risk Spectrum reliability analysis software. The analysis results have been evaluated by comparison with SNS operational data. Results and conclusions are presented in this paper, oriented toward identifying design weaknesses and providing recommendations for improving the reliability of the MYRRHA linear accelerator. The SNS reliability model developed for the MAX preliminary design phase indicates possible avenues for further investigation that could be needed to improve the reliability of high-power accelerators, in view of the future reliability targets of ADS accelerators.

  16. Composing, Analyzing and Validating Software Models

    Science.gov (United States)

    Sheldon, Frederick T.

    1998-10-01

    This research has been conducted at the Computational Sciences Division of the Information Sciences Directorate at Ames Research Center (Automated Software Engineering Group). The principal work this summer has been to review and refine the agenda carried forward from last summer. Formal specifications provide good support for designing a functionally correct system; however, they are weak at incorporating non-functional performance requirements (like reliability). Techniques which utilize stochastic Petri nets (SPNs) are good for evaluating the performance and reliability of a system, but they may be too abstract and cumbersome from the standpoint of specifying and evaluating functional behavior. Therefore, one major objective of this research is to provide an integrated approach to assist the user in specifying both functionality (qualitative: mutual exclusion and synchronization) and performance requirements (quantitative: reliability and execution deadlines). In this way, the merits of a powerful modeling technique for performability analysis (using SPNs) can be combined with a well-defined formal specification language. In doing so, we can come closer to providing a formal approach to designing a functionally correct system that meets reliability and performance goals.
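
    For readers unfamiliar with SPNs, here is a minimal sketch of simulating a two-place stochastic Petri net Gillespie-style to estimate availability. The places, transitions, and rates are invented and unrelated to the paper's integrated formalism.

```python
# A minimal Gillespie-style simulation of a two-place stochastic Petri
# net (invented example): a component that fails and is repaired.
import random

random.seed(0)

marking = {"up": 1, "down": 0}
# (name, firing rate, preconditions, postconditions)
transitions = [
    ("fail",   0.01, {"up": 1},   {"down": 1}),
    ("repair", 0.50, {"down": 1}, {"up": 1}),
]

t, horizon, up_time = 0.0, 10_000.0, 0.0
while t < horizon:
    enabled = [tr for tr in transitions
               if all(marking[p] >= n for p, n in tr[2].items())]
    total_rate = sum(tr[1] for tr in enabled)
    dt = random.expovariate(total_rate)        # time to the next firing
    if marking["up"]:
        up_time += min(dt, horizon - t)
    t += dt
    if t >= horizon:
        break
    r, acc = random.uniform(0, total_rate), 0.0
    for name, rate, pre, post in enabled:      # pick one, rate-weighted
        acc += rate
        if r <= acc:
            for p, n in pre.items():
                marking[p] -= n
            for p, n in post.items():
                marking[p] += n
            break

# Expected availability is mu / (lambda + mu) = 0.50 / 0.51, about 0.98.
print(f"simulated availability: {up_time / horizon:.4f}")
```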

  17. Model-based Software Engineering

    DEFF Research Database (Denmark)

    Kindler, Ekkart

    2010-01-01

    The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea already works today. But there are still difficulties when it comes to behaviour. Actually, there is no lack of models...

  18. Model-integrating software components engineering flexible software systems

    CERN Document Server

    Derakhshanmanesh, Mahdi

    2015-01-01

    In his study, Mahdi Derakhshanmanesh builds on the state of the art in modeling by proposing to integrate models into running software at the component level, without translating them to code. Such so-called model-integrating software exploits all the advantages of models: models implicitly support a good separation of concerns, they are self-documenting and thus improve understandability and maintainability, and, in contrast to model-driven approaches, there is no longer a synchronization problem between the models and the code generated from them. Using model-integrating components, software will be

  19. Digital System Reliability Test for the Evaluation of safety Critical Software of Digital Reactor Protection System

    Directory of Open Access Journals (Sweden)

    Hyun-Kook Shin

    2006-08-01

    A new Digital Reactor Protection System (DRPS) based on a VME-bus single-board computer has been developed by KOPEC to prevent software common-mode failure (CMF) inside the digital system. The new DRPS has been shown by a Defense-in-Depth and Diversity (DID&D) analysis to be an effective digital safety system for preventing CMF. However, for practical use in nuclear power plants, performance and reliability tests are essential for digital system qualification. In this study, a single channel of a DRPS prototype was manufactured for the evaluation of DRPS capabilities. Integrated functional tests were performed, and the system reliability was analyzed and tested. The results of the reliability test show that the application software of the DRPS has very high reliability compared with analog reactor protection systems.

  20. Estimation of Remaining Defects in Safety-Critical Software Using a Bayesian Belief Network of the Software Development Life Cycle

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Jung, Wondea Jung

    2015-01-01

    Some researchers have recognized the Bayesian belief network (BBN) method as a promising way of quantifying software reliability. Brookhaven National Laboratory (BNL) comprehensively reviewed various quantitative software reliability methods to identify those most promising for use in probabilistic safety assessments (PSAs) of digital systems of NPPs, against a set of the most desirable characteristics developed therein. BBNs are recognized as a promising way of quantifying software reliability and are useful for integrating many aspects of software engineering and quality assurance. The method explicitly incorporates important factors relevant to reliability, such as the quality of the developer, the development process, problem complexity, testing effort, and the operating environment. In this work, a BBN model was developed to estimate the number of remaining defects in safety-critical software based on a quality evaluation of the software development life cycle (SDLC). Even though a number of software reliability evaluation methods exist, none of them is applicable to the safety-critical software in an NPP, because software quality in terms of a PDF is required for the PSA.

  1. An artificial intelligence system for reliability studies

    International Nuclear Information System (INIS)

    Llory, M.; Ancelin, C.; Bannelier, M.; Bouhadana, H.; Bouissou, M.; Lucas, J.Y.; Magne, L.; Villate, N.

    1990-01-01

    The EDF (French Electricity Company) software developed for computer-aided reliability studies is considered. Such software tools were applied in the study of the safety requirements of the Paluel nuclear power plant. The reliability models, based on IF-THEN type rules, and the generation of models by the expert system are described. The models are then processed by applying algorithmic structures [fr

  2. Cost Estimation of Software Development and the Implications for the Program Manager

    Science.gov (United States)

    1992-06-01

    Software Lifecycle Model (SLIM), the Jensen System-4 model, the Software Productivity, Quality, and Reliability Estimator (SPQR/20), the Constructive... function models in current use are the Software Productivity, Quality, and Reliability Estimator (SPQR/20) and the Software Architecture Sizing and... Estimator (SPQR/20) was developed by T. Capers Jones of Software Productivity Research, Inc., in 1985. The model is intended to estimate the outcome

  3. In search of cost-effective, reliable software

    International Nuclear Information System (INIS)

    Naser, J.A.; Bhatt, S.C.

    1992-01-01

    Considerable effort is ongoing to utilize the strengths of digital technology to upgrade and add functionality to existing systems and to develop solutions to problems in the nuclear industry. Acceptance of digital solutions requires verification and validation activities to ensure their reliability. EPRI has an ongoing effort to develop a methodology for the verification and validation of digital control systems. Also, a joint project between the NRC and EPRI is developing a methodology for expert system verification and validation. To obtain wider acceptance of digital system solutions, and hence utilization of verification and validation techniques, cost-effective methods for design, development, and verification and validation are needed. EPRI is leading an effort to develop methods for cost-effective verification and validation for all types of software.

  4. Model-Based Software Testing for Object-Oriented Software

    Science.gov (United States)

    Biju, Soly Mathew

    2008-01-01

    Model-based testing is one of the best solutions for testing object-oriented software. It has better test coverage than other testing styles. Model-based testing takes into consideration behavioural aspects of a class, which usually go unchecked in other testing methods. An increase in the complexity of software has forced the software industry…

  5. Reliability Estimation for Digital Instrument/Control System

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yaguang; Sydnor, Russell [U.S. Nuclear Regulatory Commission, Washington, D.C. (United States)

    2011-08-15

    Digital instrumentation and controls (DI and C) systems are widely adopted in various industries because of their flexibility and ability to implement various functions that can be used to automatically monitor, analyze, and control complicated systems. It is anticipated that DI and C will replace the traditional analog instrumentation and controls (AI and C) systems in all future nuclear reactor designs. There is increasing interest in reliability and risk analyses for safety-critical DI and C systems in regulatory organizations, such as the United States Nuclear Regulatory Commission. Developing reliability models and reliability estimation methods for digital reactor control and protection systems will involve every part of the DI and C system, such as sensors, signal conditioning and processing components, transmission lines and digital communication systems, D/A and A/D converters, the computer system, signal processing software, control and protection software, the power supply system, and actuators. Some of these components are hardware, such as sensors and actuators; their failure mechanisms are well understood, and traditional reliability models and estimation methods can be directly applied. But many of these components are firmware, which has software embedded in the hardware, and software needs special consideration because its failure mechanism is unique and the reliability estimation method for a software system differs from the ones used for hardware systems. In this paper, we propose a reliability estimation method for the entire DI and C system using a recently developed software reliability estimation method and a traditional hardware reliability estimation method.

  6. Reliability Estimation for Digital Instrument/Control System

    International Nuclear Information System (INIS)

    Yang, Yaguang; Sydnor, Russell

    2011-01-01

    Digital instrumentation and controls (DI and C) systems are widely adopted in various industries because of their flexibility and ability to implement various functions that can be used to automatically monitor, analyze, and control complicated systems. It is anticipated that DI and C will replace the traditional analog instrumentation and controls (AI and C) systems in all future nuclear reactor designs. There is increasing interest in reliability and risk analyses for safety-critical DI and C systems in regulatory organizations, such as the United States Nuclear Regulatory Commission. Developing reliability models and reliability estimation methods for digital reactor control and protection systems will involve every part of the DI and C system, such as sensors, signal conditioning and processing components, transmission lines and digital communication systems, D/A and A/D converters, the computer system, signal processing software, control and protection software, the power supply system, and actuators. Some of these components are hardware, such as sensors and actuators; their failure mechanisms are well understood, and traditional reliability models and estimation methods can be directly applied. But many of these components are firmware, which has software embedded in the hardware, and software needs special consideration because its failure mechanism is unique and the reliability estimation method for a software system differs from the ones used for hardware systems. In this paper, we propose a reliability estimation method for the entire DI and C system using a recently developed software reliability estimation method and a traditional hardware reliability estimation method.

  7. An Embedded System for Safe, Secure and Reliable Execution of High Consequence Software

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, James A.

    2000-08-29

    As more complex and functionally diverse requirements are placed on high-consequence embedded applications, ensuring safe and secure operation requires an execution environment that is ultra-reliable from a system viewpoint. In many cases the safety and security of the system depend upon reliable cooperation between the hardware and the software to meet real-time system throughput requirements. The selection of a microprocessor and its associated development environment for an embedded application has more far-reaching effects on the development and production of the system than any other element in the design. The effects of this choice ripple through the remainder of the hardware design and profoundly affect the entire software development process. While state-of-the-art software engineering principles indicate that an object-oriented (OO) methodology provides a superior development environment, traditional programming languages available for microprocessors targeted at deeply embedded applications do not directly support OO techniques. Furthermore, the microprocessors themselves do not typically support, nor do they enforce, an OO environment. This paper describes a system-level approach for the design of a microprocessor intended for use in deeply embedded high-consequence applications that both supports and enforces an OO execution environment.

  8. Determining Reliability and Validity of the Persian Version of Software Usability Measurements Inventory (SUMI) Questionnaire

    OpenAIRE

    Seyed Abolfazl Zakerian; Roya Azizi; Mehdi Rahgozar

    2013-01-01

    The term usability refers to a special index of the success of an operating system. This study aimed to determine the reliability and validity of the Software Usability Measurements Inventory (SUMI) questionnaire, one of the valid and widely used questionnaires for usability evaluation. The back-translation method was used to translate the questionnaire from English to Persian and back to English. Moreover, repeatability, or test-retest reliability, was used to determine the reliability of ...

  9. Software Code Smell Prediction Model Using Shannon, Rényi and Tsallis Entropies

    Directory of Open Access Journals (Sweden)

    Aakanshi Gupta

    2018-05-01

    The current era demands high-quality software in a limited time period to achieve new goals and heights. To meet user requirements, source code undergoes frequent modifications, which can generate bad smells in software that deteriorate its quality and reliability. The source code of open-source software is easily accessible by any developer, and thus frequently modified. In this paper, we propose a mathematical model to predict bad smells using the concept of entropy as defined by information theory. The open-source software Apache Abdera is taken into consideration for calculating the bad smells. Bad smells are collected using a detection tool from subcomponents of the Apache Abdera project, along with different measures of entropy (Shannon, Rényi, and Tsallis). By applying non-linear regression techniques, the bad smells that can arise in future versions of the software are predicted based on the observed bad smells and entropy measures. The proposed model has been validated using goodness-of-fit parameters (prediction error, bias, variation, and root mean squared prediction error (RMSPE)). The values of the model performance statistics (R², adjusted R², mean square error (MSE), and standard error) also justify the proposed model. We have compared the results of the prediction model with the observed results on real data. The results of the model might be helpful for software development industries and future researchers.
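
    The three entropy measures named in this record are easy to state concretely. Below is a minimal sketch, computed over an invented distribution of code changes across files; the paper derives its distributions from Apache Abdera's actual change history.

```python
# A minimal sketch of the Shannon, Rényi, and Tsallis entropies over
# an invented distribution of code changes across source files.
import numpy as np

def shannon(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def renyi(p, alpha):          # alpha > 0, alpha != 1
    return np.log2((p ** alpha).sum()) / (1 - alpha)

def tsallis(p, q):            # q != 1
    return (1 - (p ** q).sum()) / (q - 1)

changes = np.array([12, 7, 3, 2, 1], dtype=float)  # edits per file (assumed)
p = changes / changes.sum()
print(f"Shannon: {shannon(p):.3f} bits")
print(f"Rényi (alpha=2): {renyi(p, 2.0):.3f}")
print(f"Tsallis (q=2): {tsallis(p, 2.0):.3f}")
```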

  10. The reliability of the software of the digital control system Nuclear Advantage

    International Nuclear Information System (INIS)

    Graae, T.; Engdahl, L.

    1996-01-01

    The ABB nuclear power control system Nuclear Advantage is a truly integrated control system. The integration of process control and safety control aims at achieving a common operator interface in order to simplify and thus improve control room ergonomics. The challenge is to design an integrated control system and at the same time ensure the functional separation between the independent safety subsystems as well as between the safety and the conventional sections. Software reliability is discussed and illustrated by statistical test results. It has proved to be a hundred times better than the reliability of the high-quality hardware. (orig.) [de

  11. Reliability model of SNS linac (spallation neutron source-ORNL)

    International Nuclear Information System (INIS)

    Pitigoi, A.; Fernandez, P.

    2015-01-01

    A reliability model of the SNS LINAC (Spallation Neutron Source at Oak Ridge National Laboratory) has been developed using the Risk Spectrum reliability analysis software, and an analysis of the accelerator system's reliability has been performed. The analysis results have been evaluated by comparing them with SNS operational data. This paper presents the main results and conclusions, focusing on the identification of design weaknesses, and provides recommendations to improve the reliability of the MYRRHA linear accelerator. The reliability results show that the most affected SNS LINAC parts/systems are: 1) the SCL (superconducting linac) and front-end systems: IS, LEBT (low-energy beam transport line), MEBT (medium-energy beam transport line), diagnostics, and controls; 2) RF systems (especially the SCL RF system); 3) power supplies and PS controllers. These results are in line with the records in the SNS logbook. The reliability issue that needs to be enforced in the linac design is redundancy of the systems, subsystems, and components most affected by failures. For compensation purposes, there is a need for intelligent fail-over redundancy implementation in controllers. Enough diagnostics has to be implemented to allow reliable functioning of the redundant solutions and to ensure the compensation function.

  12. Reliable and Fault-Tolerant Software-Defined Network Operations Scheme for Remote 3D Printing

    Science.gov (United States)

    Kim, Dongkyun; Gil, Joon-Min

    2015-03-01

    The recent wide expansion of applicable three-dimensional (3D) printing and software-defined networking (SDN) technologies has focused a great deal of attention on efficient remote control of manufacturing processes. SDN is a renowned paradigm for network softwarization that has helped facilitate remote manufacturing with high network performance: SDN is designed to control network paths and traffic flows, guaranteeing improved quality of service by obtaining network requests from end applications on demand through the separated SDN controller, or control plane. However, current SDN approaches generally focus on the control and automation of networks, which indicates a lack of management-plane development for a reliable and fault-tolerant SDN environment. Therefore, in addition to the inherent advantages of SDN, this paper proposes a new software-defined network operations center (SD-NOC) architecture to strengthen the reliability and fault tolerance of SDN, in terms of network operations and management in particular. The cooperation and orchestration between SDN and SD-NOC are also introduced for SDN failover processes, based on four principal SDN breakdown scenarios derived from failures of the controller, SDN nodes, and connected links. Such SDN failures significantly reduce network reachability to remote devices (e.g., 3D printers, super-high-definition cameras, etc.) and the reliability of the relevant control processes. Our performance consideration and analysis results show that the proposed scheme can shrink the operations and management overhead of SDN, which enhances the responsiveness and reliability of SDN for remote 3D printing and control processes.

  13. Model-driven software engineering

    NARCIS (Netherlands)

    Amstel, van M.F.; Brand, van den M.G.J.; Protic, Z.; Verhoeff, T.; Hamberg, R.; Verriet, J.

    2014-01-01

    Software plays an important role in designing and operating warehouses. However, traditional software engineering methods for designing warehouse software are not able to cope with the complexity, size, and increasing automation of modern warehouses. This chapter describes Model-Driven Software

  14. Predicting Software Suitability Using a Bayesian Belief Network

    Science.gov (United States)

    Beaver, Justin M.; Schiavone, Guy A.; Berrios, Joseph S.

    2005-01-01

    The ability to reliably predict the end quality of software under development presents a significant advantage for a development team. It provides an opportunity to address high risk components earlier in the development life cycle, when their impact is minimized. This research proposes a model that captures the evolution of the quality of a software product, and provides reliable forecasts of the end quality of the software being developed in terms of product suitability. Development team skill, software process maturity, and software problem complexity are hypothesized as driving factors of software product quality. The cause-effect relationships between these factors and the elements of software suitability are modeled using Bayesian Belief Networks, a machine learning method. This research presents a Bayesian Network for software quality, and the techniques used to quantify the factors that influence and represent software quality. The developed model is found to be effective in predicting the end product quality of small-scale software development efforts.

  15. BurnCase 3D software validation study: Burn size measurement accuracy and inter-rater reliability.

    Science.gov (United States)

    Parvizi, Daryousch; Giretzlehner, Michael; Wurzer, Paul; Klein, Limor Dinur; Shoham, Yaron; Bohanon, Fredrick J; Haller, Herbert L; Tuca, Alexandru; Branski, Ludwik K; Lumenta, David B; Herndon, David N; Kamolz, Lars-P

    2016-03-01

    The aim of this study was to compare the accuracy of burn size estimation using the computer-assisted software BurnCase 3D (RISC Software GmbH, Hagenberg, Austria) with that of a 2D scan, considered to be the actual burn size. Thirty artificial burn areas were preplanned and prepared on three mannequins (one child, one female, and one male). Five trained physicians (raters) were asked to assess the size of all wound areas using BurnCase 3D software. The results were then compared with the real wound areas, as determined by 2D planimetry imaging. To examine inter-rater reliability, we performed an intraclass correlation analysis with a 95% confidence interval. The mean wound area estimations of the five raters using BurnCase 3D were in total 20.7±0.9% for the child, 27.2±1.5% for the female, and 16.5±0.1% for the male mannequin. Our analysis showed relative overestimations of 0.4%, 2.8%, and 1.5% for the child, female, and male mannequins, respectively, compared to the 2D scan. The intraclass correlation between the single raters for the mean percentage of the artificial burn areas was 98.6%. A high intraclass correlation between the single raters and the 2D scan was also evident. BurnCase 3D is a valid and reliable tool for the determination of total body surface area burned in standard models. Further clinical studies including different pediatric and overweight adult mannequins are warranted. Copyright © 2016 Elsevier Ltd and ISBI. All rights reserved.

  16. Stochastic reliability and maintenance modeling essays in honor of Professor Shunji Osaki on his 70th birthday

    CERN Document Server

    Nakagawa, Toshio

    2013-01-01

    In honor of the work of Professor Shunji Osaki, Stochastic Reliability and Maintenance Modeling provides a comprehensive study of the legacy of, and ongoing research in, stochastic reliability and maintenance modeling. Covering associated application areas such as dependable computing, performance evaluation, software engineering, and communication engineering, distinguished researchers review and build on Professor Shunji Osaki's contributions over the last four decades. Fundamental yet significant research results are presented and discussed clearly, alongside new ideas and topics on stochastic reliability and maintenance modeling, to inspire future research. Across 15 chapters, readers gain the knowledge and understanding to apply reliability and maintenance theory to computer and communication systems. Stochastic Reliability and Maintenance Modeling is ideal for graduate students and researchers in reliability engineering, and workers, managers and engineers engaged in computer, maintenance and management wo...

  17. Software Cost-Estimation Model

    Science.gov (United States)

    Tausworthe, R. C.

    1985-01-01

    The Software Cost Estimation Model SOFTCOST provides an automated resource and schedule model for software development. It combines several cost models found in the open literature into one comprehensive set of algorithms, and compensates for nearly fifty implementation factors relative to the size of the task, the inherited baseline, the organizational and system environment, and the difficulty of the task.
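
    While SOFTCOST's own algorithms are not given in this record, cost models of this family share a power-law form. Below is a minimal sketch using the classic basic-COCOMO "organic mode" coefficients, which are well known but are not SOFTCOST's.

```python
# A minimal sketch of the power-law form shared by cost models of this
# family, using basic-COCOMO "organic mode" coefficients; SOFTCOST's
# own algorithms and ~50 adjustment factors are not shown.
kloc = 32.0                            # assumed size in KLOC
effort_pm = 2.4 * kloc ** 1.05         # effort in person-months
schedule_m = 2.5 * effort_pm ** 0.38   # schedule in months
print(f"{effort_pm:.0f} person-months over {schedule_m:.1f} months")
```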

  18. Reliability and Validity of the Footprint Assessment Method Using Photoshop CS5 Software in Young People with Down Syndrome.

    Science.gov (United States)

    Gutiérrez-Vilahú, Lourdes; Massó-Ortigosa, Núria; Rey-Abella, Ferran; Costa-Tutusaus, Lluís; Guerra-Balic, Myriam

    2016-05-01

    People with Down syndrome present skeletal abnormalities in their feet that can be analyzed by commonly used gold standard indices (the Hernández-Corvo index, the Chippaux-Smirak index, the Staheli arch index, and the Clarke angle) based on footprint measurements. The use of Photoshop CS5 software (Adobe Systems Software Ireland Ltd, Dublin, Ireland) to measure footprints has been validated in the general population. The present study aimed to assess the reliability and validity of this footprint assessment technique in the population with Down syndrome. Using optical podography and photography, 44 footprints from 22 patients with Down syndrome (11 men [mean ± SD age, 23.82 ± 3.12 years] and 11 women [mean ± SD age, 24.82 ± 6.81 years]) were recorded in a static bipedal standing position. A blinded observer performed the measurements using a validated manual method three times during the 4-month study, with 2 months between measurements. Test-retest was used to check the reliability of the Photoshop CS5 software measurements. Validity and reliability were obtained by intraclass correlation coefficient (ICC). The reliability test for all of the indices showed very good values for the Photoshop CS5 method (ICC, 0.982-0.995). Validity testing also found no differences between the techniques (ICC, 0.988-0.999). The Photoshop CS5 software method is reliable and valid for the study of footprints in young people with Down syndrome.

  19. Development of advanced methods and related software for human reliability evaluation within probabilistic safety analyses

    International Nuclear Information System (INIS)

    Kosmowski, K.T.; Mertens, J.; Degen, G.; Reer, B.

    1994-06-01

    Human Reliability Analysis (HRA) is an important part of Probabilistic Safety Analysis (PSA). The first part of this report consists of an overview of types of human behaviour and human error, including the effect of significant performance shaping factors on human reliability. Many HRA methods have been developed, particularly with regard to safety assessments for nuclear power plants. The most important of these methods are presented and discussed in the report, together with techniques for incorporating HRA into PSA and with models of operator cognitive behaviour. Based on existing HRA methods, the concept of a software system is described. For the development of this system, the utilization of modern programming tools is proposed; the essential goal is the effective application of HRA methods. A possible integration of computer-aided HRA within PSA is discussed. The features of expert system technology and examples of applications (PSA, HRA) are presented in four appendices. (orig.) [de

  20. Generic domain models in software engineering

    Science.gov (United States)

    Maiden, Neil

    1992-01-01

    This paper outlines three research directions related to domain-specific software development: (1) reuse of generic models for domain-specific software development; (2) empirical evidence to determine these generic models, namely elicitation of mental knowledge schema possessed by expert software developers; and (3) exploitation of generic domain models to assist modelling of specific applications. It focuses on knowledge acquisition for domain-specific software development, with emphasis on tool support for the most important phases of software development.

  1. Development of a Method for Quantifying the Reliability of Nuclear Safety-Related Software

    Energy Technology Data Exchange (ETDEWEB)

    Yi Zhang; Michael W. Golay

    2003-10-01

    The work of our project is intended to help introduce digital technologies into nuclear power plant safety-related software applications. In our project we utilize a combination of modern software engineering methods to improve software quality: design process discipline and feedback, formal methods, automated computer-aided software engineering tools, automatic code generation, and extensive feasible structure flow path testing. The tactics include ensuring that the software structure is kept simple, permitting routine testing during design development, permitting extensive finished-product testing in the input data space of most likely service, and using test-based Bayesian updating to estimate the probability that a random software input will encounter an error upon execution. From the results obtained, the software reliability can be both improved and its value estimated. Hopefully our success in the project's work can aid the transition of the nuclear enterprise into the modern information world. In our work, we have been using the proprietary sample software, the digital Signal Validation Algorithm (SVA), provided by Westinghouse, and our work is being done in collaboration with them. The SVA software is used for selecting the plant instrumentation signal set which is to be used as the input to the digital Plant Protection System (PPS). This is the system that automatically decides whether to trip the reactor. In our work, we are using the 001 computer-aided software engineering (CASE) tool of Hamilton Technologies Inc. This tool is capable of stating the syntactic structure of a program, reflecting its state requirements, logical functions, and data structure.
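
    The test-based Bayesian updating mentioned above can be illustrated with the standard conjugate calculation: a Beta prior on the per-execution failure probability updated by failure-free tests. The prior and test count below are assumptions for illustration, not the project's figures.

```python
# A minimal sketch of test-based Bayesian updating: a Beta(1, 1) prior
# on the per-execution failure probability updated by n failure-free
# tests gives a Beta(1, 1 + n) posterior. The test count is assumed.
from scipy import stats

n = 10_000                       # hypothetical failure-free test runs
posterior = stats.beta(1, 1 + n)
print(f"posterior mean failure probability: {posterior.mean():.2e}")
print(f"99% credible upper bound:           {posterior.ppf(0.99):.2e}")
```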

  2. Development of a Method for Quantifying the Reliability of Nuclear Safety-Related Software

    International Nuclear Information System (INIS)

    Yi Zhang; Golay, Michael W.

    2003-01-01

    The work of our project is intended to help introduce digital technologies into nuclear power plant safety-related software applications. In our project we utilize a combination of modern software engineering methods to improve software quality: design process discipline and feedback, formal methods, automated computer-aided software engineering tools, automatic code generation, and extensive feasible structure flow path testing. The tactics include ensuring that the software structure is kept simple, permitting routine testing during design development, permitting extensive finished-product testing in the input data space of most likely service, and using test-based Bayesian updating to estimate the probability that a random software input will encounter an error upon execution. From the results obtained, the software reliability can be both improved and its value estimated. Hopefully our success in the project's work can aid the transition of the nuclear enterprise into the modern information world. In our work, we have been using the proprietary sample software, the digital Signal Validation Algorithm (SVA), provided by Westinghouse, and our work is being done in collaboration with them. The SVA software is used for selecting the plant instrumentation signal set which is to be used as the input to the digital Plant Protection System (PPS). This is the system that automatically decides whether to trip the reactor. In our work, we are using the 001 computer-aided software engineering (CASE) tool of Hamilton Technologies Inc. This tool is capable of stating the syntactic structure of a program, reflecting its state requirements, logical functions, and data structure.

  3. Methods for qualification of highly reliable software - international procedure

    International Nuclear Information System (INIS)

    Kersken, M.

    1997-01-01

    Despite the advantages of computer-assisted safety technology, there is still some uneasiness to be observed with respect to the novel processes, resulting from the absence of a body of generally accepted and uncontentious qualification guides (regulatory provisions, standards) for the safety evaluation of the computer codes applied. Warranty of adequate protection of the population, operators, and plant components is an essential aspect in this context, too, as it is in general with reliability and risk assessment of novel technology; because appropriate legislation is still missing, there is currently a licensing risk involved in the introduction of digital safety systems. Nevertheless, there is some extent of agreement within the international community and among utility operators about what standards and measures should be applied for the qualification of software of relevance to plant safety. The standard IEC 880 /IEC 86/ in particular, in its original version, or national documents based on this standard, are applied in all countries using or planning to install such systems. A novel supplement to this standard, document /IEC 96/, is in the process of finalization and defines the requirements to be met by modern methods of software engineering. (orig./DG) [de

  4. Comparative test-retest reliability of metabolite values assessed with magnetic resonance spectroscopy of the brain. The LCModel versus the manufacturer software.

    Science.gov (United States)

    Fayed, Nicolas; Modrego, Pedro J; Medrano, Jaime

    2009-06-01

    Reproducibility is an essential strength of any diagnostic technique for cross-sectional and longitudinal studies. To comparatively determine the short-term in vivo test-retest reliability of magnetic resonance spectroscopy (MRS) of the brain, the manufacturer's software package was compared with the widely used linear combination of model spectra (LCModel) technique. Single-voxel H-MRS was performed in a series of patients with different pathologies on a 1.5 T clinical scanner. Four areas of the brain were explored with the point-resolved spectroscopy acquisition mode; the echo time was 35 milliseconds and the repetition time was 2000 milliseconds. We enrolled 15 patients for each area, and the intra-individual variations of metabolites were studied in two consecutive scans without removing the patient from the scanner. Curve fitting and analysis of metabolites were done with the GE software and the LCModel. Spectra not fulfilling the minimum quality criteria for linewidth and signal-to-noise ratio were rejected. The intraclass correlation coefficients for the N-acetylaspartate/creatine (NAA/Cr) ratios were 0.93, 0.89, 0.9, and 0.8 for the posterior cingulate gyrus, occipital, prefrontal, and temporal regions, respectively, with the GE software. For the LCModel, the coefficients were 0.9, 0.89, 0.87, and 0.84, respectively. For the absolute value of NAA, the GE software was also slightly more reproducible than the LCModel. However, for the choline/Cr and myo-inositol/Cr ratios, the LCModel was more reliable than the GE software. The variability we observed hovers around the percentages reported previously (around 10% for the NAA/Cr ratios). We did not find the LCModel software to be superior to the manufacturer's software. Reproducibility of metabolite values relies more on observance of the quality parameters than on the software used.

  5. Model-based software process improvement

    Science.gov (United States)

    Zettervall, Brenda T.

    1994-01-01

    The activities of a field test site for the Software Engineering Institute's software process definition project are discussed. Products tested included the improvement model itself, descriptive modeling techniques, the CMM level 2 framework document, and the use of process definition guidelines and templates. The software process improvement model represents a five-stage cyclic approach to organizational process improvement. The cycles consist of the initiating, diagnosing, establishing, acting, and leveraging phases.

  6. 3D modeling and visualization software for complex geometries

    International Nuclear Information System (INIS)

    Guse, Guenter; Klotzbuecher, Michael; Mohr, Friedrich

    2011-01-01

    Reactor safety depends on reliable nondestructive testing of reactor components. For a 100% detection probability of flaws and the determination of their size using ultrasonic methods, the ultrasonic waves have to hit the flaws within a specific incidence and squint angle. For complex test geometries, such as testing nozzle welds from the outside of the component, these angular ranges can only be determined using elaborate mathematical calculations. The authors developed a 3D modeling and visualization software tool that allows ultrasonic measurement data to be integrated into and presented within the 3D geometry. The software package was verified using 1:1 test samples (examples: testing of the nozzle edge of the feedwater nozzle of a steam generator from the outside; testing of the reactor pressure vessel nozzle edge from the inside).

  7. Incorporation of Markov reliability models for digital instrumentation and control systems into existing PRAs

    International Nuclear Information System (INIS)

    Bucci, P.; Mangan, L. A.; Kirschenbaum, J.; Mandelli, D.; Aldemir, T.; Arndt, S. A.

    2006-01-01

    Markov models have the ability to capture the statistical dependence between failure events that can arise in the presence of complex dynamic interactions between components of digital instrumentation and control systems. One obstacle to the use of such models in an existing probabilistic risk assessment (PRA) is that most of the currently available PRA software is based on the static event-tree/fault-tree methodology, which often cannot represent such interactions. We present an approach to the integration of Markov reliability models into existing PRAs by describing the Markov model of a digital steam generator feedwater level control system, how dynamic event trees (DETs) can be generated from the model, and how the DETs can be incorporated into an existing PRA with the SAPHIRE software. (authors)
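
    As a concrete illustration of the kind of continuous-time Markov reliability model the record describes, here is a minimal sketch solved with a matrix exponential. The three-state structure and all rates are invented for illustration, not those of the feedwater-control-system model.

```python
# A minimal sketch of a continuous-time Markov reliability model solved
# with a matrix exponential; the structure and rates are assumed.
import numpy as np
from scipy.linalg import expm

lam1, lam2, mu = 1e-4, 5e-4, 1e-2   # assumed failure/repair rates per hour
Q = np.array([[-lam1,         lam1,       0.0],   # state 0: OK
              [   mu, -(mu + lam2),      lam2],   # state 1: degraded
              [  0.0,          0.0,       0.0]])  # state 2: failed (absorbing)

p0 = np.array([1.0, 0.0, 0.0])      # start in the OK state
t = 10_000.0                        # mission time in hours
p_t = p0 @ expm(Q * t)              # state probabilities at time t
print(f"P(failed by t = {t:.0f} h) = {p_t[2]:.4e}")
```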

  8. Modeling Student Software Testing Processes: Attitudes, Behaviors, Interventions, and Their Effects

    Science.gov (United States)

    Buffardi, Kevin John

    2014-01-01

    Effective software testing identifies potential bugs and helps correct them, producing more reliable and maintainable software. As software development processes have evolved, incremental testing techniques have grown in popularity, particularly with introduction of test-driven development (TDD). However, many programmers struggle to adopt TDD's…

  9. Robust recurrent neural network modeling for software fault detection and correction prediction

    International Nuclear Information System (INIS)

    Hu, Q.P.; Xie, M.; Ng, S.H.; Levitin, G.

    2007-01-01

    Software fault detection and correction processes are related although different, and they should be studied together. A practical approach is to apply software reliability growth models to model fault detection, with the fault correction process assumed to be a delayed process. On the other hand, artificial neural network models, as a data-driven approach, try to model these two processes together with no assumptions. Specifically, feedforward backpropagation networks have shown their advantages over analytical models in fault number prediction. In this paper, the following approach is explored. First, recurrent neural networks are applied to model these two processes together. Within this framework, a systematic network configuration approach is developed with a genetic algorithm according to the prediction performance. In order to provide robust predictions, an extra factor characterizing the dispersion of prediction repetitions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are developed with respect to a real data set.
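
    For orientation, here is a minimal numpy sketch of the Elman-style recurrent computation such models use, applied to an invented cumulative fault-count series. It shows only the forward data flow: the weights are random and untrained, and the paper's genetic-algorithm configuration search and robustness factor are not reproduced.

```python
# A minimal forward-pass sketch of an Elman-style recurrent cell on an
# invented fault-count series; untrained, for illustrating the data flow.
import numpy as np

rng = np.random.default_rng(1)
faults = np.array([5, 9, 14, 17, 21, 22, 24], dtype=float)  # assumed data
x = faults / faults.max()            # scale inputs to [0, 1]

h_dim = 4
W_xh = rng.normal(0, 0.5, (h_dim, 1))
W_hh = rng.normal(0, 0.5, (h_dim, h_dim))
W_hy = rng.normal(0, 0.5, (1, h_dim))

h = np.zeros((h_dim, 1))
for t, xt in enumerate(x[:-1]):
    h = np.tanh(W_xh * xt + W_hh @ h)         # recurrent state update
    y = (W_hy @ h).item() * faults.max()      # next-step prediction
    print(f"t={t}: predicted {y:5.1f}, actual {faults[t + 1]:.0f}")
```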

  10. Considerations of the Software Metric-based Methodology for Software Reliability Assessment in Digital I and C Systems

    International Nuclear Information System (INIS)

    Ha, J. H.; Kim, M. K.; Chung, B. S.; Oh, H. C.; Seo, M. R.

    2007-01-01

    Analog I and C systems have been replaced by digital I and C systems because the digital systems have many potential benefits to nuclear power plants in terms of operational and safety performance. For example, digital systems are essentially free of drift, have higher data handling and storage capabilities, and provide improved performance in terms of accuracy and computational capability. In addition, analog replacement parts are becoming more difficult to obtain since they are obsolete and discontinued. There are, however, challenges to the introduction of digital technology into nuclear power plants, because digital systems are more complex than analog systems and their operation and failure modes are different. In particular, software, which can be the core of functionality in digital systems, does not wear out physically like hardware, and its failure modes are not yet clearly defined. Thus, research efforts to develop methodologies for software reliability assessment are still under way in safety-critical areas such as nuclear systems, aerospace and medical devices. Among them, a software metric-based methodology has been considered for the digital I and C systems of Korean nuclear power plants. Advantages and limitations of that methodology are identified, and requirements for its application to the digital I and C systems are considered in this study

  11. Reliability Quantification Method for Safety Critical Software Based on a Finite Test Set

    International Nuclear Information System (INIS)

    Shin, Sung Min; Kim, Hee Eun; Kang, Hyun Gook; Lee, Seung Jun

    2014-01-01

    Software inside a digitalized system plays a very important role because its failure may cause irreversible consequences and affect the whole system as a common cause failure. However, test-based reliability quantification methods for safety-critical software have limitations caused by the difficulty of developing input sets in the form of trajectories, i.e. series of successive values of variables. To address these limitations, this study proposes a method that conducts the test using combinations of single values of variables. To substitute combinations of variable values for the trajectory form of input, the possible range of each variable should be identified. For this purpose, the assigned range of each variable, the logical relations between variables, the plant dynamics under given situations, and the information-acquisition characteristics of the digital device are considered. The feasibility of the proposed method was confirmed through an application to Reactor Protection System (RPS) software trip logic
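
    Test-based quantification of this kind builds on the standard zero-failure test argument, sketched below; the confidence level and test counts are illustrative only.

```python
# Sketch of the standard zero-failure test argument used in test-based
# software reliability quantification: if N independent, operationally
# representative tests all pass, then with confidence 1 - alpha the
# per-demand failure probability p satisfies (1 - p)**N >= alpha,
# i.e. p <= 1 - alpha**(1/N).
def failure_prob_bound(n_tests: int, confidence: float = 0.95) -> float:
    alpha = 1.0 - confidence
    return 1.0 - alpha ** (1.0 / n_tests)

for n in (3_000, 30_000, 300_000):
    print(f"N={n:>7}  p <= {failure_prob_bound(n):.2e} at 95% confidence")
```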

  12. Application of Artificial Intelligence technology to the analysis and synthesis of reliable software systems

    Science.gov (United States)

    Wild, Christian; Eckhardt, Dave

    1987-01-01

    The development of a methodology for the production of highly reliable software is one of the greatest challenges facing the computer industry. Meeting this challenge will undoubtedly involve the integration of many technologies. This paper describes the use of Artificial Intelligence technologies in the automated analysis of the formal algebraic specifications of abstract data types. These technologies include symbolic execution of specifications using techniques of automated deduction, and machine learning through the use of examples. Ongoing research into the role of knowledge representation and problem solving in the process of developing software is also discussed.

  13. Software testability and its application to avionic software

    Science.gov (United States)

    Voas, Jeffrey M.; Miller, Keith W.; Payne, Jeffery E.

    1993-01-01

    Randomly generated black-box testing is an established yet controversial method of estimating software reliability. Unfortunately, as software applications have required higher reliabilities, practical difficulties with black-box testing have become increasingly problematic. These practical problems are particularly acute in life-critical avionics software, where requirements of 10^-7 failures per hour of system reliability can translate into a probability of failure (POF) of perhaps 10^-9 or less for each individual execution of the software. This paper describes the application of one type of testability analysis called 'sensitivity analysis' to B-737 avionics software; one application of sensitivity analysis is to quantify whether software testing is capable of detecting faults in a particular program and thus whether we can be confident that a tested program is not hiding faults. We do so by finding the testabilities of the individual statements of the program, and then use those statement testabilities to find the testabilities of the functions and modules. For the B-737 system we analyzed, we were able to isolate those functions that are more prone to hide errors during system/reliability testing.
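
    The quoted translation from 10^-7 failures per hour to roughly 10^-9 per execution implicitly assumes an execution rate; the sketch below makes that arithmetic explicit (the rate of 100 executions per hour is our assumption, not from the paper).

```python
# Back-of-envelope sketch of the reliability translation quoted above:
# a system requirement of 1e-7 failures per hour becomes a per-execution
# probability of failure (POF) once an execution rate is assumed.
# The rate below is our assumption for illustration, not from the paper.
failures_per_hour = 1e-7
executions_per_hour = 100.0        # assumed iteration rate of the software

pof_per_execution = failures_per_hour / executions_per_hour
print(f"POF per execution: {pof_per_execution:.1e}")   # 1.0e-09
```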

  14. Techniques for developing reliable and functional materials control and accounting software

    International Nuclear Information System (INIS)

    Barlich, G.

    1988-01-01

    The media has increasingly focused on failures of computer systems resulting in financial, material, and other losses and on systems failing to function as advertised. Unfortunately, such failures with equally disturbing losses are possible in computer systems providing materials control and accounting (MC&A) functions. Major improvements in the reliability and correctness of systems are possible with disciplined design and development techniques applied during software development. This paper describes some of the techniques used in the Safeguard Systems Group at Los Alamos National Laboratory for various MC&A systems

  15. Reliability and accuracy of three imaging software packages used for 3D analysis of the upper airway on cone beam computed tomography images.

    Science.gov (United States)

    Chen, Hui; van Eijnatten, Maureen; Wolff, Jan; de Lange, Jan; van der Stelt, Paul F; Lobbezoo, Frank; Aarab, Ghizlane

    2017-08-01

    The aim of this study was to assess the reliability and accuracy of three different imaging software packages for three-dimensional analysis of the upper airway using CBCT images. To assess the reliability of the software packages, 15 NewTom 5G® (QR Systems, Verona, Italy) CBCT data sets were randomly and retrospectively selected. Two observers measured the volume, minimum cross-sectional area and the length of the upper airway using the Amira® (Visage Imaging Inc., Carlsbad, CA), 3Diagnosys® (3diemme, Cantu, Italy) and OnDemand3D® (CyberMed, Seoul, Republic of Korea) software packages. The intra- and inter-observer reliability of the upper airway measurements were determined using intraclass correlation coefficients and Bland & Altman agreement tests. To assess the accuracy of the software packages, one NewTom 5G® CBCT data set was used to print a three-dimensional anthropomorphic phantom with known dimensions to be used as the "gold standard". This phantom was subsequently scanned using a NewTom 5G® scanner. Based on the CBCT data set of the phantom, one observer measured the volume, minimum cross-sectional area, and length of the upper airway using Amira®, 3Diagnosys®, and OnDemand3D®, and compared these measurements with the gold standard. The intra- and inter-observer reliability of the measurements of the upper airway using the different software packages were excellent (intraclass correlation coefficient ≥0.75). There was excellent agreement between all three software packages in volume, minimum cross-sectional area and length measurements. All software packages underestimated the upper airway volume by -8.8% to -12.3%, the minimum cross-sectional area by -6.2% to -14.6%, and the length by -1.6% to -2.9%. All three software packages offered reliable volume, minimum cross-sectional area and length measurements of the upper airway. The length measurements of the upper airway were the most accurate results in all software packages.
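
    For readers who want the agreement statistic behind such results, the sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single measurement) from first principles; the 6x2 data matrix is invented, not the study's data.

```python
import numpy as np

# Sketch of the Shrout-Fleiss ICC(2,1) agreement statistic via a
# two-way ANOVA decomposition. Rows are measurement targets, columns
# are raters (or sessions); the data below are made up.
def icc_2_1(x: np.ndarray) -> float:
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)   # targets
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)   # raters
    ss_err = np.sum((x - grand) ** 2) - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

measurements = np.array([[10.1, 10.4], [12.0, 11.8], [9.5, 9.9],
                         [14.2, 14.0], [11.1, 11.5], [13.3, 13.1]])
print(f"ICC(2,1) = {icc_2_1(measurements):.3f}")
```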

  16. Diversity requirements for safety critical software-based automation systems

    International Nuclear Information System (INIS)

    Korhonen, J.; Pulkkinen, U.; Haapanen, P.

    1998-03-01

    System vendors nowadays propose software-based systems even for the most critical safety functions in nuclear power plants. Due to the nature and mechanisms of influence of software faults, new methods are needed for the safety and reliability evaluation of these systems. In the research project 'Programmable automation systems in nuclear power plants (OHA)', various safety assessment methods and tools for software-based systems are developed and evaluated. This report first discusses the (common cause) failure mechanisms in software-based systems, then defines fault-tolerant system architectures to avoid common cause failures, and then studies the various alternatives for applying diversity and their influence on system reliability. Finally, a method for the assessment of diversity is described. Other recently published reports in the OHA report series handle the statistical reliability assessment of software-based systems (STUK-YTO-TR 119), usage models in reliability assessment of software-based systems (STUK-YTO-TR 128) and the handling of programmable automation in plant PSA studies (STUK-YTO-TR 129)

  17. Knowledge modelling and reliability processing: presentation of the Figaro language and associated tools

    International Nuclear Information System (INIS)

    Bouissou, M.; Villatte, N.; Bouhadana, H.; Bannelier, M.

    1991-12-01

    EDF has been developing for several years an integrated set of knowledge-based and algorithmic tools for the automation of reliability assessment of complex (especially sequential) systems. In this environment, the reliability expert has at his disposal all the powerful software tools for qualitative and quantitative processing, and in addition various means to generate the inputs for these tools automatically, through the acquisition of graphical data. The development of these tools has been based on FIGARO, a specific language built to provide homogeneous system modelling. Various compilers and interpreters translate a FIGARO model into conventional models such as fault trees, Markov chains and Petri nets. In this report, we introduce the basics of the FIGARO language, illustrating them with examples

  18. Impact of Internet of Things on Software Business Model and Software Industry

    OpenAIRE

    Murari, Bhanu Teja

    2016-01-01

    Context: Internet of Things (IoT) technology is growing rapidly and is changing the business environment for software organizations. There is a need to understand which business model factors a software company should focus on to obtain benefits from the potential that IoT offers. This thesis also focuses on the impact of IoT on the software business model and the software industry, especially on software development. Objectives: In this thesis, we do research on IoT software b...

  19. Proposed reliability cost model

    Science.gov (United States)

    Delionback, L. M.

    1973-01-01

    The research investigations which were involved in the study include: cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements include the need for understanding and communication between technical disciplines on one hand, and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reinforce, the reliability assurance for a system. Subcategories are further subdivided as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach is dependent upon the use of a series of subsystem-oriented CERs and, where possible, CTRs, in devising a suitable cost-effective policy.

  20. Beginning software engineering

    CERN Document Server

    Stephens, Rod

    2015-01-01

    Beginning Software Engineering demystifies the software engineering methodologies and techniques that professional developers use to design and build robust, efficient, and consistently reliable software. Free of jargon and assuming no previous programming, development, or management experience, this accessible guide explains important concepts and techniques that can be applied to any programming language. Each chapter ends with exercises that let you test your understanding and help you elaborate on the chapter's main concepts. Everything you need to understand waterfall, Sashimi, agile, RAD, Scrum, Kanban, Extreme Programming, and many other development models is inside!

  1. Software-Engineering Process Simulation (SEPS) model

    Science.gov (United States)

    Lin, C. Y.; Abdel-Hamid, T.; Sherif, J. S.

    1992-01-01

    The Software Engineering Process Simulation (SEPS) model, developed at JPL, is described. SEPS is a dynamic simulation model of the software project development process. It uses the feedback principles of system dynamics to simulate the dynamic interactions among various software life cycle development activities and management decision making processes. The model is designed to be a planning tool to examine tradeoffs of cost, schedule, and functionality, and to test the implications of different managerial policies on a project's outcome. Furthermore, SEPS will enable software managers to gain a better understanding of the dynamics of software project development and perform post-mortem assessments.
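
    A hedged, minimal sketch of the system-dynamics style of modeling this record describes is given below; it is not the actual SEPS model, and every stock, flow and parameter is invented: schedule pressure feeds back into error generation, and discovered errors re-enter the work queue as rework.

```python
# Hedged system-dynamics sketch in the spirit of SEPS (not the JPL model
# itself). Stocks: remaining tasks, completed tasks, latent errors.
remaining, done, latent_errors = 1000.0, 0.0, 0.0   # tasks, tasks, errors
staff, nominal_rate = 10.0, 1.0                     # people, tasks/person/week
error_fraction = 0.10                               # errors per task produced
rework_per_error = 0.5                              # rework tasks per found error
discovery_rate = 0.2                                # share of latent errors found/week
weeks = 0

while remaining > 1.0 and weeks < 200:
    pressure = min(2.0, remaining / max(done, 1.0)) # crude schedule pressure
    produced = min(remaining, staff * nominal_rate) # one-week time step
    new_errors = produced * error_fraction * pressure
    discovered = discovery_rate * latent_errors
    remaining += discovered * rework_per_error - produced
    done += produced
    latent_errors += new_errors - discovered
    weeks += 1

print(f"finished in ~{weeks} weeks with {latent_errors:.0f} latent errors")
```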

  2. Online Rule Generation Software Process Model

    OpenAIRE

    Sudeep Marwaha; Alka Aroa; Satma M C; Rajni Jain; R C Goyal

    2013-01-01

    For production systems like expert systems, rule generation software can facilitate faster deployment. The software process model for rule generation using a decision tree classifier refers to the various steps required to be executed for the development of a web based software model for decision rule generation. Royce's final waterfall model has been used in this paper to explain the software development process. The paper presents the specific output of various steps of modified wat...

  3. On Quality and Measures in Software Engineering

    Science.gov (United States)

    Bucur, Ion I.

    2006-01-01

    Complexity measures are mainly used to estimate vital information about reliability and maintainability of software systems from regular analysis of the source code. Such measures also provide constant feedback during a software project to assist the control of the development procedure. There exist several models to classify a software product's…

  4. An Assessment between Software Development Life Cycle Models of Software Engineering

    OpenAIRE

    Er. KESHAV VERMA; Er. PRAMOD KUMAR; Er. MOHIT KUMAR; Er.GYANESH TIWARI

    2013-01-01

    This research deals with an essential and important subject in the digital world. It is concerned with the software management processes that examine the part of software development carried out through development models, which are known as the software development life cycle. It presents five of the development models, namely: waterfall, iteration, V-shaped, spiral and extreme programming. These models have advantages and disadvantages as well. So, the main objective of this research is to represent dissimilar mod...

  5. Power electronics reliability analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Mark A.; Atcitty, Stanley

    2009-12-01

    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
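
    The fault-tree approach mentioned above can be illustrated with a short sketch: system unreliability is derived from component unreliabilities through AND/OR gates, assuming independent events. The converter topology and all probabilities below are fictitious, not from the report.

```python
# Sketch of deriving system reliability from component reliability with
# a fault tree, assuming independent failure events.
def or_gate(*p):
    """Probability that at least one input event occurs."""
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def and_gate(*p):
    """Probability that all input events occur (redundancy)."""
    q = 1.0
    for pi in p:
        q *= pi
    return q

# Hypothetical annual failure probabilities for a power converter.
igbt, gate_driver, controller, fan = 0.02, 0.01, 0.005, 0.10
cooling = and_gate(fan, fan)                 # two redundant fans
top = or_gate(igbt, gate_driver, controller, cooling)
print(f"system failure probability per year: {top:.4f}")
```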

  6. The art of software modeling

    CERN Document Server

    Lieberman, Benjamin A

    2007-01-01

    Modeling complex systems is a difficult challenge and all too often one in which modelers are left to their own devices. Using a multidisciplinary approach, The Art of Software Modeling covers theory, practice, and presentation in detail. It focuses on the importance of model creation and demonstrates how to create meaningful models. Presenting three self-contained sections, the text examines the background of modeling and frameworks for organizing information. It identifies techniques for researching and capturing client and system information and addresses the challenges of presenting models to specific audiences. Using concepts from art theory and aesthetics, this broad-based approach encompasses software practices, cognitive science, and information presentation. The book also looks at perception and cognition of diagrams, view composition, color theory, and presentation techniques. Providing practical methods for investigating and organizing complex information, The Art of Software Modeling demonstrate...

  7. Computing and software

    Directory of Open Access Journals (Sweden)

    White, G. C.

    2004-06-01

    Full Text Available The reality is that the statistical methods used for analysis of data depend upon the availability of software. Analysis of marked animal data is no different than the rest of the statistical field. The methods used for analysis are those that are available in reliable software packages. Thus, the critical importance of having reliable, up-to-date software available to biologists is obvious. Statisticians have continued to develop more robust models, ever expanding the suite of potential analysis methods available. But without software to implement these newer methods, they will languish in the abstract, and not be applied to the problems deserving them. In the Computers and Software Session, two new software packages are described, a comparison of implementation of methods for the estimation of nest survival is provided, and a more speculative paper about how the next generation of software might be structured is presented. Rotella et al. (2004) compare nest survival estimation with different software packages: SAS logistic regression, SAS non-linear mixed models, and Program MARK. Nests are assumed to be visited at various, possibly infrequent, intervals. All of the approaches described compute nest survival with the same likelihood, and require that the age of the nest is known to account for nests that eventually hatch. However, each approach offers advantages and disadvantages, explored by Rotella et al. (2004). Efford et al. (2004) present a new software package called DENSITY. The package computes population abundance and density from trapping arrays and other detection methods with a new and unique approach. DENSITY represents the first major addition to the analysis of trapping arrays in 20 years. Barker & White (2004) discuss how existing software such as Program MARK require that each new model’s likelihood must be programmed specifically for that model. They wishfully think that future software might allow the user to combine

  8. Reliability analysis of reactor inspection robot(RIROB)

    International Nuclear Information System (INIS)

    Eom, H. S.; Kim, J. H.; Lee, J. C.; Choi, Y. R.; Moon, S. S.

    2002-05-01

    This report describes the method and the results of the reliability analysis of RIROB, developed at the Korea Atomic Energy Research Institute. There are many classic techniques and models for reliability analysis. These techniques and models have been widely used and approved in other industries such as aviation and the nuclear industry. Although approved in real fields, they are still insufficient for complicated systems such as RIROB, which are composed of computers, networks, electronic parts, mechanical parts, and software. In particular, the application of these analysis techniques to the digital and software parts of complicated systems is immature at this time, so expert judgement currently plays an important role in evaluating the reliability of such systems. In this report we propose a method that combines the diverse evidence relevant to reliability in order to evaluate the reliability of complicated systems such as RIROB. The proposed method combines diverse evidence and performs inference in a formal and quantitative way by using the benefits of Bayesian Belief Nets (BBN)

  9. Software metrics: Software quality metrics for distributed systems. [reliability engineering

    Science.gov (United States)

    Post, J. V.

    1981-01-01

    Software quality metrics were extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.

  10. Fault-tolerant embedded system design and optimization considering reliability estimation uncertainty

    International Nuclear Information System (INIS)

    Wattanapongskorn, Naruemon; Coit, David W.

    2007-01-01

    In this paper, we model embedded system design and optimization, considering component redundancy and uncertainty in the component reliability estimates. The systems being studied consist of software embedded in associated hardware components. Very often, component reliability values are not known exactly. Therefore, for reliability analysis studies and system optimization, it is meaningful to consider component reliability estimates as random variables with associated estimation uncertainty. In this new research, the system design process is formulated as a multiple-objective optimization problem to maximize an estimate of system reliability, and also, to minimize the variance of the reliability estimate. The two objectives are combined by penalizing the variance for prospective solutions. The two most common fault-tolerant embedded system architectures, N-Version Programming and Recovery Block, are considered as strategies to improve system reliability by providing system redundancy. Four distinct models are presented to demonstrate the proposed optimization techniques with or without redundancy. For many design problems, multiple functionally equivalent software versions have failure correlation even if they have been independently developed. The failure correlation may result from faults in the software specification, faults from a voting algorithm, and/or related faults from any two software versions. Our approach considers this correlation in formulating practical optimization models. Genetic algorithms with a dynamic penalty function are applied to solve this optimization problem, and reasonable and interesting results are obtained and discussed.
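
    The penalized objective described above can be illustrated with a hedged sketch; for clarity it replaces the paper's genetic algorithm with exhaustive enumeration, assumes independent component reliability estimates in a series system, and uses invented numbers.

```python
from itertools import product

# Sketch of a penalized objective: choose one version per component to
# maximize E[R_sys] - w * Var[R_sys] under a cost budget. For a series
# system with independent estimates, E[R] = prod(mu_i) and
# E[R^2] = prod(mu_i**2 + var_i), so Var[R] = E[R^2] - E[R]**2.
versions = [  # per component: list of (mean_rel, var_rel, cost), invented
    [(0.95, 1e-4, 3.0), (0.99, 4e-4, 7.0)],
    [(0.90, 1e-4, 2.0), (0.97, 9e-4, 6.0)],
]
budget, w = 10.0, 50.0

best = None
for choice in product(*versions):
    cost = sum(c for _, _, c in choice)
    if cost > budget:
        continue
    mean, second = 1.0, 1.0
    for mu, var, _ in choice:
        mean *= mu
        second *= mu * mu + var
    score = mean - w * (second - mean * mean)
    if best is None or score > best[0]:
        best = (score, choice, cost)

score, choice, cost = best
print(f"best score={score:.4f}, cost={cost}, versions={choice}")
```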

  11. An overview of erosion corrosion models and reliability assessment for corrosion defects in piping system

    International Nuclear Information System (INIS)

    Srividya, A.; Suresh, H.N.; Verma, A.K.; Gopika, V.; Santosh

    2006-01-01

    Piping systems are part of the passive structural elements in power plants. The analysis of piping systems and their quantification in terms of failure probability is of utmost importance. Piping systems may fail due to various degradation mechanisms like thermal fatigue, erosion-corrosion, stress corrosion cracking and vibration fatigue. On examination of previous results, erosion-corrosion was found to be more prevalent, and the wall thinning it causes is a time-dependent phenomenon. The paper is intended to consolidate the work done by various investigators on erosion-corrosion, covering the estimation of erosion-corrosion rates and reliability predictions. A comparison of various erosion-corrosion models is made. Reliability prediction based on the remaining strength of pipelines corroded by wall thinning is also attempted. Variables in the limit state functions are modelled using normal distributions, and reliability assessment is carried out using some of the existing failure pressure models. A steady-state corrosion rate is assumed to estimate the corrosion defect, and the First Order Reliability Method (FORM) is used to find the probability of failure associated with corrosion defects over time, using the software for Component Reliability evaluation (COMREL). (author)
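
    As a hedged illustration of the FORM step mentioned above (not the authors' limit state), a linear limit state g = R - S with normal variables admits a closed-form reliability index; the means and standard deviations below are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

# Sketch of a FORM-style calculation for a linear limit state
# g = R - S (remaining strength minus load) with normal variables.
# All values are hypothetical.
mu_r, sigma_r = 25.0, 2.5      # remaining-strength pressure, MPa
mu_s, sigma_s = 15.0, 1.5      # operating pressure demand, MPa

beta = (mu_r - mu_s) / sqrt(sigma_r**2 + sigma_s**2)
pf = NormalDist().cdf(-beta)   # probability of failure
print(f"reliability index beta = {beta:.2f}, Pf = {pf:.2e}")
```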

  12. A reliability model of a warm standby configuration with two identical sets of units

    International Nuclear Information System (INIS)

    Huang, Wei; Loman, James; Song, Thomas

    2015-01-01

    This article presents a new reliability model, and the development of its analytical solution, for a warm standby redundant configuration whose units originally operate in active mode and, upon turn-on of the originally standby units, are put into warm standby mode. These units can be used later if a standby-turned-active unit fails. Numerical results of an example configuration are presented and discussed, with comparison to other warm standby configurations and to Monte Carlo simulation results obtained from the BlockSim software. Results show that the Monte Carlo simulation model gives virtually identical reliability values when the simulation uses a high number of replications, confirming the developed model. - Highlights: • A new reliability model is developed for warm standby redundancy with two sets of identical units. • The units are subject to state changes from active to standby and back to active mode. • A closed-form analytical solution is developed with the exponential distribution. • To validate the developed model, a Monte Carlo simulation for an exemplary configuration is performed
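
    For contrast with the article's analytical solution, the sketch below Monte Carlo simulates the classic textbook warm standby pair (exponential lifetimes, perfect switching, no repair), which is simpler than the configuration studied in the article; all rates and the mission time are invented.

```python
import random

# Monte Carlo sketch of a simple warm standby pair. The spare fails at
# a reduced "warm" rate until it is switched in; by the memorylessness
# of the exponential distribution, a spare that survives to the switch
# then lives a fresh active-rate lifetime.
lam_active, lam_warm, mission = 1e-3, 2e-4, 1000.0

def survives(rng: random.Random) -> bool:
    t_active = rng.expovariate(lam_active)     # primary unit lifetime
    if t_active >= mission:
        return True
    t_warm = rng.expovariate(lam_warm)         # failure while in standby
    if t_warm < t_active:                      # spare already dead at switch
        return False
    t_second = rng.expovariate(lam_active)     # spare now runs as active
    return t_active + t_second >= mission

rng = random.Random(42)
n = 200_000
hits = sum(survives(rng) for _ in range(n))
print(f"estimated mission reliability: {hits / n:.4f}")
```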

  13. Assessing and updating the reliability of concrete bridges subjected to spatial deterioration - principles and software implementation

    DEFF Research Database (Denmark)

    Schneider, Ronald; Fischer, Johannes; Bügler, Maximilian

    2015-01-01

    to implement the method presented here. The software prototype is applied to a typical highway bridge and the influence of inspection information on the system deterioration state and the structural reliability is quantified taking into account the spatial correlation of the corrosion process. This work...

  14. Model-driven dependability assessment of software systems

    CERN Document Server

    Bernardi, Simona; Petriu, Dorina C

    2013-01-01

    In this book, the authors present cutting-edge model-driven techniques for modeling and analysis of software dependability. Most of them are based on the use of UML as software specification language. From the software system specification point of view, such techniques exploit the standard extension mechanisms of UML (i.e., UML profiling). UML profiles enable software engineers to add non-functional properties to the software model, in addition to the functional ones. The authors detail the state of the art on UML profile proposals for dependability specification and rigorously describe the t

  15. Integration of Simulink Models with Component-based Software Models

    Directory of Open Access Journals (Sweden)

    MARIAN, N.

    2008-06-01

    Full Text Available Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical constructs and process flow, then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behavior as a means of computation, communication and constraints, using computational blocks and aggregates for both discrete and continuous behavior, different interconnection and execution disciplines for event-based and time-based controllers, and so on, to meet the demands for more functionality at even lower prices, and under opposing constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has to be analyzed. One way of doing that is to integrate the model back into Simulink S-functions via wrapper files, and use its extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behavior, and the transformation of the software system into the S

  16. Intra-observer reliability and agreement of manual and digital orthodontic model analysis.

    Science.gov (United States)

    Koretsi, Vasiliki; Tingelhoff, Linda; Proff, Peter; Kirschneck, Christian

    2018-01-23

    Digital orthodontic model analysis is gaining acceptance in orthodontics, but its reliability is dependent on the digitalisation hardware and software used. We thus investigated intra-observer reliability and agreement/conformity of a particular digital model analysis work-flow in relation to traditional manual plaster model analysis. Forty-eight plaster casts of the upper/lower dentition were collected. Virtual models were obtained with orthoX®scan (Dentaurum) and analysed with ivoris®analyze3D (Computer konkret). Manual model analyses were done with a dial caliper (0.1 mm). Common parameters were measured on each plaster cast and its virtual counterpart five times each by an experienced observer. We assessed intra-observer reliability within method (ICC), agreement/conformity between methods (Bland-Altman analyses and Lin's concordance correlation), and changing bias (regression analyses). Intra-observer reliability was substantial within each method (ICC ≥ 0.7), except for five manual outcomes (12.8 per cent). Bias between methods was statistically significant, but less than 0.5 mm for 87.2 per cent of the outcomes. In general, larger tooth sizes were measured digitally. Total difference maxilla and mandible had wide limits of agreement (-3.25/6.15 and -2.31/4.57 mm), but bias between methods was mostly smaller than intra-observer variation within each method with substantial conformity of manual and digital measurements in general. No changing bias was detected. Although both work-flows were reliable, the investigated digital work-flow proved to be more reliable and yielded on average larger tooth sizes. Averaged differences between methods were within 0.5 mm for directly measured outcomes but wide ranges are expected for some computed space parameters due to cumulative error.

  17. Testing Software Development Project Productivity Model

    Science.gov (United States)

    Lipkin, Ilya

    Software development is an increasingly influential factor in today's business environment, and a major issue affecting software development is how an organization estimates projects. If the organization underestimates cost, schedule, and quality requirements, the end results will not meet customer needs. On the other hand, if the organization overestimates these criteria, resources that could have been used more profitably will be wasted. There is no accurate model or measure available that can guide an organization in a quest for software development, with existing estimation models often underestimating software development efforts by as much as 500 to 600 percent. To address this issue, existing models usually are calibrated using local data with a small sample size, with resulting estimates not offering improved cost analysis. This study presents a conceptual model for accurately estimating software development, based on an extensive literature review and theoretical analysis based on Sociotechnical Systems (STS) theory. The conceptual model serves as a solution to bridge organizational and technological factors and is validated using an empirical dataset provided by the DoD. Practical implications of this study allow practitioners to concentrate on specific constructs of interest that provide the best value for the least amount of time. This study outlines key contributing constructs that are unique for Software Size E-SLOC, Man-hours Spent, and Quality of the Product, those constructs having the largest contribution to project productivity. This study discusses customer characteristics and provides a framework for a simplified project analysis for source selection evaluation and audit task reviews for the customers and suppliers. Theoretical contributions of this study provide an initial theory-based hypothesized project productivity model that can be used as a generic overall model across several application domains such as IT, Command and Control

  18. Reliability of new software in measuring cervical multifidus diameters and shoulder muscle strength in a synchronized way; an ultrasonographic study

    Directory of Open Access Journals (Sweden)

    Leila Rahnama

    2015-08-01

    Full Text Available OBJECTIVES: This study was conducted with the purpose of evaluating the inter-session reliability of new software to measure the diameters of the cervical multifidus muscle (CMM), both at rest and during isometric contractions of the shoulder abductors in subjects with neck pain and in healthy individuals. METHOD: In the present study, the reliability of measuring the diameters of the CMM with the Sonosynch software was evaluated using 24 participants, including 12 subjects with chronic neck pain and 12 healthy individuals. The anterior-posterior diameter (APD) and the lateral diameter (LD) of the CMM were measured in a resting state and then again during isometric contraction of the shoulder abductors. Measurements were taken on separate occasions 3 to 7 days apart in order to determine inter-session reliability. Intraclass correlation coefficient (ICC), standard error of measurement (SEM), and smallest detectable difference (SDD) were used to evaluate relative and absolute reliability. RESULTS: The Sonosynch software has been shown to be highly reliable in measuring the diameters of the CMM both in healthy subjects and in those with neck pain. The ICC 95% CIs for APD ranged from 0.84 to 0.94 in subjects with neck pain and from 0.86 to 0.94 in healthy subjects. For LD, the ICC 95% CIs ranged from 0.64 to 0.95 in subjects with neck pain and from 0.82 to 0.92 in healthy subjects. CONCLUSIONS: Ultrasonographic measurement of the diameters of the CMM using Sonosynch has proved to be reliable, especially for APD, in healthy subjects as well as subjects with neck pain.

  19. Study on Software Quality Improvement based on Rayleigh Model and PDCA Model

    OpenAIRE

    Ning Jingfeng; Hu Ming

    2013-01-01

    As the software industry gradually matures, software quality is regarded as the life of a software enterprise. This article discusses how to improve the quality of software: it applies the Rayleigh model and the PDCA model to software quality management, combines them with the defect removal effectiveness index, and uses the PDCA model to solve the problem of setting quality management objectives when using the Rayleigh model in bidirectional quality improvement strategies of software quality management, a...
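
    As a hedged sketch of the Rayleigh defect model the article builds on, one common parameterization of the cumulative defect-arrival curve is shown below; the total defect count K and scale c are invented.

```python
import math

# Sketch of a Rayleigh defect-arrival model: cumulative defects found
# by time t follow F(t) = K * (1 - exp(-(t / c)**2)), one common
# parameterization, where K is the total expected defect count and c
# sets where the discovery rate peaks. K and c are invented.
K, c = 400.0, 6.0          # total defects, scale in months (assumed)

def cumulative_defects(t: float) -> float:
    return K * (1.0 - math.exp(-((t / c) ** 2)))

for month in range(0, 19, 3):
    found = cumulative_defects(month)
    print(f"month {month:>2}: {found:6.1f} defects ({found / K:.1%} of total)")
```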

  20. Reliability assessment of embedded digital system using multi-state function

    International Nuclear Information System (INIS)

    Choi, Jong Gyun; Seong, Poong Hyun

    2006-01-01

    This work describes a combinatorial model for estimating the reliability of embedded digital systems by means of a multi-state function. The model includes a coverage model for fault-handling techniques implemented in digital systems. These fault-handling techniques make it difficult to treat many types of components in a digital system as binary-state, good or bad. The multi-state function provides a complete analysis of multi-state systems, as which digital systems can be regarded. By adapting the software operational profile to the multi-state function, the HW/SW interaction is also considered in estimating the reliability of the digital system. Using this model, we evaluate the reliability of one board controller in a digital system, the Interposing Logic System (ILS), which is installed in YGN nuclear power units 3 and 4. Since the proposed model is a generalized combinatorial model, its simplification reduces to the conventional model that treats the system as binary-state. This modeling method is particularly attractive for embedded systems in which small-sized application software is implemented, since applying the method to systems with large software would require very laborious work

  1. Industry Software Trustworthiness Criterion Research Based on Business Trustworthiness

    Science.gov (United States)

    Zhang, Jin; Liu, Jun-fei; Jiao, Hai-xing; Shen, Yi; Liu, Shu-yuan

    To address the trustworthiness problem of industry software, an approach is proposed that constructs an industry software trustworthiness criterion around the business. Based on the triangle model of "trustworthy grade definition-trustworthy evidence model-trustworthy evaluating", the idea of business trustworthiness is embodied in each aspect of the trustworthy triangle model for a specific piece of industry software, the power producing management system (PPMS). Business trustworthiness is the center of the constructed industry trustworthy software criterion. Fusing international standards and industry rules, the constructed trustworthy criterion strengthens operability and reliability. A quantitative evaluating method makes the evaluation results intuitive and comparable.

  2. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae

    2008-01-01

    Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics...... of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical...... constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has...

  3. An algebraic approach to modeling in software engineering

    International Nuclear Information System (INIS)

    Loegel, C.J.; Ravishankar, C.V.

    1993-09-01

    Our work couples the formalism of universal algebras with the engineering techniques of mathematical modeling to develop a new approach to the software engineering process. Our purpose in using this combination is twofold. First, abstract data types and their specification using universal algebras can be considered a common point between the practical requirements of software engineering and the formal specification of software systems. Second, mathematical modeling principles provide us with a means for effectively analyzing real-world systems. We first use modeling techniques to analyze a system and then represent the analysis using universal algebras. The rest of the software engineering process exploits properties of universal algebras that preserve the structure of our original model. This paper describes our software engineering process and our experience using it on both research and commercial systems. We need a new approach because current software engineering practices often deliver software that is difficult to develop and maintain. Formal software engineering approaches use universal algebras to describe "computer science" objects like abstract data types, but in practice software errors are often caused because "real-world" objects are improperly modeled. There is a large semantic gap between the customer's objects and abstract data types. In contrast, mathematical modeling uses engineering techniques to construct valid models for real-world systems, but these models are often implemented in an ad hoc manner. A combination of the best features of both approaches would enable software engineering to formally specify and develop software systems that better model real systems. Software engineering, like mathematical modeling, should concern itself first and foremost with understanding a real system and its behavior under given circumstances, and then with expressing this knowledge in an executable form

  4. A study on quantitative V and V of safety-critical software

    International Nuclear Information System (INIS)

    Eom, H. S.; Kang, H. G.; Chang, S. C.; Ha, J. J.; Son, H. S.

    2004-03-01

    Practical needs have recently required quantitative features for software reliability in Probabilistic Safety Assessment (PSA), which is one of the important methods used in assessing the overall safety of nuclear power plants. However, the conventional assessment methods of software reliability cannot provide enough information for the PSA of a nuclear power plant; therefore, current assessments of a digital system that includes safety-critical software usually exclude the software part or use arbitrary values. This paper describes a Bayesian Belief Network based method that models the rule-based qualitative software assessment method for practical use and can produce quantitative results for PSA. The framework was constructed by utilizing a BBN that can combine the qualitative and quantitative evidence relevant to the reliability of safety-critical software and can infer a conclusion in a formal and quantitative way. A case study was performed by applying the method to assess the quality of the software requirement specification of safety-critical software to be embedded in a reactor protection system
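
    A toy illustration of the BBN mechanics described above (far simpler than the study's network) is sketched below; the nodes and all probabilities are invented.

```python
# Toy Bayesian Belief Network sketch: two observable quality indicators
# for a software requirement specification bear on an unobserved
# "software is dependable" node. All probabilities are invented.
# Nodes (binary): D = dependable, R = SRS review passed, T = traceability ok.
p_d = 0.9                                   # prior P(D=1)
p_r_given_d = {1: 0.95, 0: 0.40}            # P(R=1 | D)
p_t_given_d = {1: 0.90, 0: 0.30}            # P(T=1 | D)

def posterior_d(r_obs: int, t_obs: int) -> float:
    """P(D=1 | R=r_obs, T=t_obs) by exact enumeration."""
    joint = {}
    for d in (0, 1):
        pd = p_d if d == 1 else 1.0 - p_d
        pr = p_r_given_d[d] if r_obs == 1 else 1.0 - p_r_given_d[d]
        pt = p_t_given_d[d] if t_obs == 1 else 1.0 - p_t_given_d[d]
        joint[d] = pd * pr * pt
    return joint[1] / (joint[0] + joint[1])

print(f"P(dependable | both checks pass) = {posterior_d(1, 1):.3f}")
print(f"P(dependable | both checks fail) = {posterior_d(0, 0):.3f}")
```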

  5. Reliability analysis and operator modelling

    International Nuclear Information System (INIS)

    Hollnagel, Erik

    1996-01-01

    The paper considers the state of operator modelling in reliability analysis. Operator models are needed in reliability analysis because operators are needed in process control systems. HRA methods must therefore be able to account both for human performance variability and for the dynamics of the interaction. A selected set of first generation HRA approaches is briefly described in terms of the operator model they use, their classification principle, and the actual method they propose. In addition, two examples of second generation methods are also considered. It is concluded that first generation HRA methods generally have very simplistic operator models, either referring to the time-reliability relationship or to elementary information processing concepts. It is argued that second generation HRA methods must recognise that cognition is embedded in a context, and be able to account for that in the way human reliability is analysed and assessed

  6. Object Oriented Modeling : A method for combining model and software development

    NARCIS (Netherlands)

    Van Lelyveld, W.

    2010-01-01

    When requirements for a new model cannot be met by available modeling software, new software can be developed for a specific model. Methods for the development of both model and software exist, but a method for combined development has not been found. A compatible way of thinking is required to

  7. Reliability and maintainability assessment factors for reliable fault-tolerant systems

    Science.gov (United States)

    Bavuso, S. J.

    1984-01-01

    A long term goal of the NASA Langley Research Center is the development of a reliability assessment methodology of sufficient power to enable the credible comparison of the stochastic attributes of one ultrareliable system design against others. This methodology, developed over a 10 year period, is a combined analytic and simulative technique. An analytic component is the Computer Aided Reliability Estimation capability, third generation, or simply CARE III. A simulative component is the Gate Logic Software Simulator capability, or GLOSS. Discussed are the numerous factors that potentially degrade system reliability and the ways in which these factors, peculiar to highly reliable fault-tolerant systems, are accounted for in credible reliability assessments. Also presented are the modeling difficulties that result from their inclusion and the ways in which CARE III and GLOSS mitigate the intractability of the heretofore unworkable mathematics.

  8. Microgrid Design Analysis Using Technology Management Optimization and the Performance Reliability Model

    Energy Technology Data Exchange (ETDEWEB)

    Stamp, Jason E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jensen, Richard P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Munoz-Ramos, Karina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-01-01

    Microgrids are a focus of localized energy production that support resiliency, security, local control, and increased access to renewable resources (among other potential benefits). The Smart Power Infrastructure Demonstration for Energy Reliability and Security (SPIDERS) Joint Capability Technology Demonstration (JCTD) program between the Department of Defense (DOD), Department of Energy (DOE), and Department of Homeland Security (DHS) resulted in the preliminary design and deployment of three microgrids at military installations. This paper is focused on the analysis process and supporting software used to determine optimal designs for energy surety microgrids (ESMs) in the SPIDERS project. There are two key pieces of software, an existing software application developed by Sandia National Laboratories (SNL) called Technology Management Optimization (TMO) and a new simulation developed for SPIDERS called the performance reliability model (PRM). TMO is a decision support tool that performs multi-objective optimization over a mixed discrete/continuous search space for which the performance measures are unrestricted in form. The PRM is able to statistically quantify the performance and reliability of a microgrid operating in islanded mode (disconnected from any utility power source). Together, these two software applications were used as part of the ESM process to generate the preliminary designs presented by the SNL-led DOE team to the DOD. Acknowledgements Sandia National Laboratories and the SPIDERS technical team would like to acknowledge the following for help in the project: * Mike Hightower, who has been the key driving force for Energy Surety Microgrids * Juan Torres and Abbas Akhil, who developed the concept of microgrids for military installations * Merrill Smith, U.S. Department of Energy SPIDERS Program Manager * Ross Roley and Rich Trundy from U.S. Pacific Command * Bill Waugaman and Bill Beary from U.S. Northern Command * Tarek Abdallah, Melanie

  9. Development of a test rig and its application for validation and reliability testing of safety-critical software

    Energy Technology Data Exchange (ETDEWEB)

    Thai, N D; McDonald, A M [Atomic Energy of Canada Ltd., Mississauga, ON (Canada)

    1996-12-31

    This paper describes a versatile test rig developed by AECL for functional testing of safety-critical software used in the process trip computers of the Wolsong CANDU stations. The description covers the hardware and software aspects of the test rig, the test language and its interpreter, and other major testing software utilities such as the test oracle, sampler and profiler. The paper also discusses the application of the rig in the final stages of testing of the process trip computer software, namely validation and reliability tests. It shows how random test cases are generated, test scripts prepared and automatically run on the test rig. The versatility of the rig is further demonstrated in other types of testing such as sub-system tests, verification of the test oracle, testing of newly-developed test script, self-test and calibration. (author). 5 tabs., 10 figs.

  10. Development of a test rig and its application for validation and reliability testing of safety-critical software

    International Nuclear Information System (INIS)

    Thai, N.D.; McDonald, A.M.

    1995-01-01

    This paper describes a versatile test rig developed by AECL for functional testing of safety-critical software used in the process trip computers of the Wolsong CANDU stations. The description covers the hardware and software aspects of the test rig, the test language and its interpreter, and other major testing software utilities such as the test oracle, sampler and profiler. The paper also discusses the application of the rig in the final stages of testing of the process trip computer software, namely validation and reliability tests. It shows how random test cases are generated, test scripts prepared and automatically run on the test rig. The versatility of the rig is further demonstrated in other types of testing such as sub-system tests, verification of the test oracle, testing of newly-developed test script, self-test and calibration. (author). 5 tabs., 10 figs

  11. Addressing Software Engineering Issues in Real-Time Software ...

    African Journals Online (AJOL)

    Addressing Software Engineering Issues in Real-Time Software ... systems, manufacturing process, process control, military, space exploration, and ... but also physical properties such as timeliness, Quality of Service and reliability.

  12. Reliability of a computer software angle tool for measuring spine and pelvic flexibility during the sit-and-reach test.

    Science.gov (United States)

    Mier, Constance M; Shapiro, Belinda S

    2013-02-01

    The purpose of this study was to determine the reliability of a computer software angle tool that measures thoracic (T), lumbar (L), and pelvic (P) angles as a means of evaluating spine and pelvic flexibility during the sit-and-reach (SR) test. Thirty adults performed the SR twice on separate days. The SR test was captured on video and later analyzed for T, L, and P angles using the computer software angle tool. During the test, 3 markers were placed over T1, T12, and L5 vertebrae to identify T, L, and P angles. Intraclass correlation coefficient (ICC) indicated a very high internal consistency (between trials) for T, L, and P angles (0.95-0.99); thus, the average of trials was used for test-retest (between days) reliability. Mean (±SD) values did not differ between days for T (51.0 ± 14.3 vs. 52.3 ± 16.2°), L (23.9 ± 7.1 vs. 23.0 ± 6.9°), or P (98.4 ± 15.6 vs. 98.3 ± 14.7°) angles. Test-retest reliability (ICC) was high for T (0.96) and P (0.97) angles and moderate for L angle (0.84). Both intrarater and interrater reliabilities were high for T (0.95, 0.94) and P (0.97, 0.97) angles and moderate for L angle (0.87, 0.82). Thus, the computer software angle tool is a highly objective method for assessing spine and pelvic flexibility during a video-captured SR test.

  13. Software as quality product

    International Nuclear Information System (INIS)

    Enders, A.

    1975-01-01

    In many discussions on the reliability of computer systems, software is presented as the weak link in the chain. The contribution attempts to identify the reasons for this situation as seen from software development. The concepts of correctness and reliability of programmes are explained as they are understood in today's specialist discussion. Measures and methods are discussed that are particularly relevant to obtaining fault-free and reliable programmes. Conclusions are drawn for the user of software, so that he is in a position to judge for himself what can justly be expected from the product software compared to other products. (orig./LH) [de]

  14. Reliability

    OpenAIRE

    Condon, David; Revelle, William

    2017-01-01

    Separating the signal in a test from the irrelevant noise is a challenge for all measurement. Low test reliability limits test validity, attenuates important relationships, and can lead to regression artifacts. Multiple approaches to the assessment and improvement of reliability are discussed. The advantages and disadvantages of several different approaches to reliability are considered. Practical advice on how to assess reliability using open source software is provided.

  15. Reliability databases: State-of-the-art and perspectives

    DEFF Research Database (Denmark)

    Akhmedjanov, Farit

    2001-01-01

    The report gives a history of development and an overview of the existing reliability databases. This overview also describes some other (than computer databases) sources of reliability and failure information, e.g. reliability handbooks, but the main attention is paid to standard models...... and software packages containing the data mentioned. The standards corresponding to collection and exchange of reliability data are observed too. Finally, perspective directions in the development of such data sources are shown....

  16. Reliability and safety engineering

    CERN Document Server

    Verma, Ajit Kumar; Karanki, Durga Rao

    2016-01-01

    Reliability and safety are core issues that must be addressed throughout the life cycle of engineering systems. Reliability and Safety Engineering presents an overview of the basic concepts, together with simple and practical illustrations. The authors present reliability terminology in various engineering fields, viz., electronics engineering, software engineering, mechanical engineering, structural engineering and power systems engineering. The book describes the latest applications in the area of probabilistic safety assessment, such as technical specification optimization, risk monitoring and risk informed in-service inspection. Reliability and safety studies must, inevitably, deal with uncertainty, so the book includes uncertainty propagation methods: Monte Carlo simulation, fuzzy arithmetic, Dempster-Shafer theory and probability bounds. Reliability and Safety Engineering also highlights advances in system reliability and safety assessment including dynamic system modeling and uncertainty management. Cas...

  17. Optimal structure of fault-tolerant software systems

    International Nuclear Information System (INIS)

    Levitin, Gregory

    2005-01-01

This paper considers software systems consisting of fault-tolerant components. These components are built from functionally equivalent but independently developed versions characterized by different reliability and execution time. Because of hardware resource constraints, the number of versions that can run simultaneously is limited. The expected system execution time and its reliability (defined as the probability of obtaining the correct output within a specified time) strictly depend on the parameters of the software versions and the sequence of their execution. The system structure optimization problem is formulated in which one has to choose software versions for each component and find the sequence of their execution in order to achieve the greatest system reliability subject to cost constraints. The versions are to be chosen from a list of available products. Each version is characterized by its reliability, execution time and cost. The suggested optimization procedure is based on an algorithm for determining the system execution time distribution that uses the moment generating function approach, and on a genetic algorithm. Both N-version programming and the recovery block scheme are considered within a universal model. An illustrative example is presented.
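
    To make the optimization problem concrete, here is a brute-force sketch of the version-selection core (series reliability under a cost budget), deliberately omitting the execution-time distributions and the genetic algorithm the paper actually uses; all version data are invented:

```python
# Not the paper's algorithm (which uses moment generating functions and a
# genetic algorithm); a brute-force sketch of the underlying selection
# problem: pick one version per component to maximise series reliability
# under a total cost budget. Versions are hypothetical (reliability, cost).
from itertools import product

components = [
    [(0.95, 10), (0.99, 25)],                 # versions for component 1
    [(0.90, 5), (0.97, 15), (0.999, 40)],     # versions for component 2
]
BUDGET = 40

best = None
for choice in product(*components):
    cost = sum(c for _, c in choice)
    if cost > BUDGET:
        continue                               # violates the cost constraint
    rel = 1.0
    for r, _ in choice:
        rel *= r                               # series system reliability
    if best is None or rel > best[0]:
        best = (rel, cost, choice)

rel, cost, choice = best
print(f"best reliability {rel:.4f} at cost {cost}: {choice}")
```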

  18. Software engineering and Ada (Trademark) training: An implementation model for NASA

    Science.gov (United States)

    Legrand, Sue; Freedman, Glenn

    1988-01-01

    The choice of Ada for software engineering for projects such as the Space Station has resulted in government and industrial groups considering training programs that help workers become familiar with both a software culture and the intricacies of a new computer language. The questions of how much time it takes to learn software engineering with Ada, how much an organization should invest in such training, and how the training should be structured are considered. Software engineering is an emerging, dynamic discipline. It is defined by the author as the establishment and application of sound engineering environments, tools, methods, models, principles, and concepts combined with appropriate standards, guidelines, and practices to support computing which is correct, modifiable, reliable and safe, efficient, and understandable throughout the life cycle of the application. Neither the training programs needed, nor the content of such programs, have been well established. This study addresses the requirements for training for NASA personnel and recommends an implementation plan. A curriculum and a means of delivery are recommended. It is further suggested that a knowledgeable programmer may be able to learn Ada in 5 days, but that it takes 6 to 9 months to evolve into a software engineer who uses the language correctly and effectively. The curriculum and implementation plan can be adapted for each NASA Center according to the needs dictated by each project.

  19. An Accurate FFPA-PSR Estimator Algorithm and Tool for Software Effort Estimation

    Directory of Open Access Journals (Sweden)

    Senthil Kumar Murugesan

    2015-01-01

Software companies are now keen to provide secure software with respect to the accuracy and reliability of their products, especially in relation to software effort estimation. Therefore, there is a need to develop a hybrid tool which provides all the necessary features. This paper proposes a hybrid estimator algorithm and model which incorporates quality metrics, a reliability factor, and a security factor with a fuzzy-based function point analysis. Initially, this method utilizes a fuzzy-based estimate to control the uncertainty in the software size with the help of a triangular fuzzy set at the early development stage. Secondly, the function point analysis is extended by the security and reliability factors in the calculation. Finally, the performance metrics are added to the effort estimation for accuracy. The experimentation is done with different project data sets on the hybrid tool, and the results are compared with the existing models. It shows that the proposed method not only improves the accuracy but also increases the reliability, as well as the security, of the product.
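
    As a toy illustration of the early-stage fuzzy size handling described (not the FFPA-PSR tool itself), here is a triangular fuzzy size estimate defuzzified by its centroid; all numbers, including the productivity rate, are invented:

```python
# Hedged sketch: triangular fuzzy software-size estimate, defuzzified by
# the centroid (a + b + c) / 3. The real tool additionally folds security,
# reliability and quality adjustment factors into the function points.
low, mode, high = 320.0, 400.0, 520.0    # pessimistic / likely / optimistic FP
size = (low + mode + high) / 3.0          # centroid of the triangular set
productivity = 12.0                       # hypothetical hours per function point
print(f"defuzzified size = {size:.0f} FP, effort ~ {size * productivity:.0f} h")
```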

  20. A flexible modelling software for data acquisition

    International Nuclear Information System (INIS)

    Shu Yantai; Chen Yanhui; Yang Songqi; Liu Genchen

    1992-03-01

This flexible modelling software for data acquisition is based on an event-driven simulator. It can be used to simulate a wide variety of systems which can be modelled as open queuing networks. The main feature of the software is its flexibility in evaluating the performance of various data acquisition systems, whether pulsed or not. The flexible features of this software are as follows: the user can choose the number of processors in the model and the route which every job takes through the model; the service rate of a processor is automatically adapted; the simulator has a pipeline mechanism; a job can be divided into several segments; and a processor may be used as a compression component, etc. Some modelling techniques and applications of this software in plasma physics laboratories are also presented.
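
    Here is a single-node sketch in the spirit of such an event-driven simulator, assuming exponential arrivals and service at one processor (the real software chains configurable processors into open queuing networks):

```python
# Minimal event-driven simulation of one queue node (M/M/1); the described
# tool generalises this to networks of processors with adaptable rates.
import heapq
import random

random.seed(1)
ARRIVAL_RATE, SERVICE_RATE, N_JOBS = 0.8, 1.0, 10_000

events = [(random.expovariate(ARRIVAL_RATE), "arrival")]
arrivals, queue, busy, done, waited = [], 0, False, 0, 0.0

while done < N_JOBS:
    t, kind = heapq.heappop(events)
    if kind == "arrival":
        arrivals.append(t)
        queue += 1
        heapq.heappush(events, (t + random.expovariate(ARRIVAL_RATE), "arrival"))
        if not busy:                       # idle server starts immediately
            busy = True
            heapq.heappush(events, (t + random.expovariate(SERVICE_RATE), "departure"))
    else:                                  # departure of the head-of-line job
        queue -= 1
        done += 1
        waited += t - arrivals[done - 1]   # FIFO: i-th departure = i-th arrival
        if queue:
            heapq.heappush(events, (t + random.expovariate(SERVICE_RATE), "departure"))
        else:
            busy = False

print(f"mean time in system = {waited / done:.2f} "
      f"(M/M/1 theory: {1 / (SERVICE_RATE - ARRIVAL_RATE):.2f})")
```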

  1. Development of a Standard Set of Software Indicators for Aeronautical Systems Center.

    Science.gov (United States)

    1992-09-01

The composite models listed include COCOMO and the Software Productivity, Quality, and Reliability Model (SPQR) (29:12). The SPQR model was ... determine the values of the 68 input parameters; the source provides no specifics. Indicator name: SPQR (SW Productivity, Qual, Reliability); indicator class: ...

  2. Guidelines for reliability analysis of digital systems in PSA context. Phase 1 status report

    International Nuclear Information System (INIS)

    Authen, S.; Larsson, J.; Bjoerkman, K.; Holmberg, J.-E.

    2010-12-01

Digital protection and control systems are appearing as upgrades in older nuclear power plants (NPPs) and are commonplace in new NPPs. To assess the risk of NPP operation and to determine the risk impact of digital system upgrades on NPPs, quantitative reliability models are needed for digital systems. Due to the many unique attributes of these systems, challenges exist in systems analysis, modeling and data collection. Currently there is no consensus on reliability analysis approaches. Traditional methods clearly have limitations, but more dynamic approaches are still at the trial stage and can be difficult to apply in full-scale probabilistic safety assessments (PSA). Few PSAs worldwide include reliability models of digital I and C systems. A comparison of Nordic experiences and a literature review of the main international references have been performed in this pre-study project. The study shows a wide range of approaches, and also indicates that no state of the art currently exists. The study shows areas where the different PSAs agree and gives the basis for development of a common taxonomy for reliability analysis of digital systems. It is still an open matter whether software reliability needs to be explicitly modelled in the PSA. The most important issue concerning software reliability is a proper description of the impact that software-based systems have on the dependence between the safety functions and the structure of accident sequences. In general the conventional fault tree approach seems to be sufficient for modelling functions of the reactor protection system kind. The following focus areas have been identified for further activities: 1. A common taxonomy of hardware and software failure modes of digital components for common use. 2. Guidelines regarding the level of detail in system analysis and the screening of components, failure modes and dependencies. 3. An approach for modelling CCF between components (including software). (Author)
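
    As one concrete flavour of focus area 3, here is a minimal beta-factor CCF sketch for a 2-out-of-4 voting function; the per-channel failure probability and the beta value are invented, not taken from the report:

```python
# Beta-factor common-cause sketch (illustrative numbers only). A 2-out-of-4
# protection function fails if 3 or more channels fail independently, or
# if a single common-cause event disables all channels at once.
from math import comb

p_total = 1e-3   # hypothetical per-channel failure probability per demand
beta = 0.05      # hypothetical fraction of failures that are common cause
p_ind = (1 - beta) * p_total
p_ccf = beta * p_total

# Failure of the 2oo4 function by independent causes: >= 3 channels down.
p_3oo4 = sum(comb(4, k) * p_ind**k * (1 - p_ind)**(4 - k) for k in (3, 4))
p_top = p_ccf + (1 - p_ccf) * p_3oo4
print(f"P(loss of 2-out-of-4 function) ~ {p_top:.2e}")
```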

  4. A measurement system for large, complex software programs

    Science.gov (United States)

    Rone, Kyle Y.; Olson, Kitty M.; Davis, Nathan E.

    1994-01-01

    This paper describes measurement systems required to forecast, measure, and control activities for large, complex software development and support programs. Initial software cost and quality analysis provides the foundation for meaningful management decisions as a project evolves. In modeling the cost and quality of software systems, the relationship between the functionality, quality, cost, and schedule of the product must be considered. This explicit relationship is dictated by the criticality of the software being developed. This balance between cost and quality is a viable software engineering trade-off throughout the life cycle. Therefore, the ability to accurately estimate the cost and quality of software systems is essential to providing reliable software on time and within budget. Software cost models relate the product error rate to the percent of the project labor that is required for independent verification and validation. The criticality of the software determines which cost model is used to estimate the labor required to develop the software. Software quality models yield an expected error discovery rate based on the software size, criticality, software development environment, and the level of competence of the project and developers with respect to the processes being employed.

  5. Evaluation of Software Quality to Improve Application Performance Using Mc Call Model

    Directory of Open Access Journals (Sweden)

    Inda D Lestantri

    2018-04-01

Software should do more than fulfil its primary function of automation: it should add value by improving the performance of the organization. Before being implemented in an operational environment, software must pass staged testing to ensure that it functions properly, meets user needs and is convenient to use. This test was performed on a web-based application, taking as a test case the e-SAP application. e-SAP is an application used to monitor teaching and learning activities at a university in Jakarta. To measure software quality, testing can be done on randomly selected users. The user sample selected in this test consists of users aged 18 to 25 years with an information technology background. The test was conducted on 30 respondents using the Mc Call model. The Mc Call testing model consists of 11 dimensions grouped into 3 categories. This paper describes testing with reference to the product operation category, which includes 5 dimensions: correctness, usability, efficiency, reliability, and integrity. The paper discusses testing on each dimension to measure software quality as an effort to improve performance. The result is that the e-SAP application has good quality, with a product operation value of 85.09%; the application therefore deserves to be examined in the next stage in the operational environment.
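
    The paper does not publish its weighting scheme, so purely as a hypothetical sketch, here is one way a product-operation percentage like the reported 85.09% can be aggregated from the five dimensions:

```python
# Hypothetical aggregation of McCall product-operation dimensions; the
# weights and per-dimension ratings are invented, not from the paper.
weights = {"correctness": 0.25, "usability": 0.25, "efficiency": 0.20,
           "reliability": 0.20, "integrity": 0.10}
ratings = {"correctness": 0.88, "usability": 0.86, "efficiency": 0.82,
           "reliability": 0.84, "integrity": 0.85}   # 0..1 questionnaire scores

score = sum(weights[d] * ratings[d] for d in weights)
print(f"product operation quality = {score:.2%}")
```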

  6. Reliability data collection and use in risk and availability assessment

    International Nuclear Information System (INIS)

    Colombari, V.

    1989-01-01

    For EuReDatA it is a prevailing objective to initiate and support contact between experts, companies and institutions active in reliability engineering and research. Main topics of this 6th EuReDatA Conference are: Reliability data banks; incidents data banks; common cause data; source and propagation of uncertainties; computer aided risk analysis; reliability and incidents data acquisition and processing; human reliability; probabilistic safety and availability assessment; feedback of reliability into system design; data fusion; reliability modeling and techniques; structural and mechanical reliability; consequence modeling; software and electronic reliability; reliability tests. Some conference papers are separately indexed in the database. (HP)

  7. COMPARATIVE ANALYSIS OF SOFTWARE DEVELOPMENT MODELS

    OpenAIRE

    Sandeep Kaur*

    2017-01-01

No geek is unfamiliar with the concept of the software development life cycle (SDLC). This research deals with the various SDLC models, covering the waterfall, spiral, iterative, agile, V-shaped and prototype models. In the modern era, all software systems are fallible, as none can stand with certainty. This work therefore compares all aspects of the various models, with their pros and cons, so that it is easier to choose a particular model at the time of need.

  8. Building and integrating reliability models in a Reliability-Centered-Maintenance approach

    International Nuclear Information System (INIS)

    Verite, B.; Villain, B.; Venturini, V.; Hugonnard, S.; Bryla, P.

    1998-03-01

Electricite de France (EDF) has recently developed its OMF-Structures method, designed to optimize risk-based preventive maintenance of passive structures such as pipes and supports. In particular, the reliability performance of components needs to be determined; this is a two-step process, consisting of a qualitative sort followed by a quantitative evaluation, involving two types of models. Initially, degradation models are widely used to exclude some components from the field of preventive maintenance. The reliability of the remaining components is then evaluated by means of quantitative reliability models. The results are then included in a risk indicator that is used to directly optimize preventive maintenance tasks. (author)

  9. Models for composing software : an analysis of software composition and objects

    NARCIS (Netherlands)

    Bergmans, Lodewijk

    1999-01-01

    In this report, we investigate component-based software construction with a focus on composition. In particular we try to analyze the requirements and issues for components and software composition. As a means to understand this research area, we introduce a canonical model for representing

  10. Design and reliability analysis of DP-3 dynamic positioning control architecture

    Science.gov (United States)

    Wang, Fang; Wan, Lei; Jiang, Da-Peng; Xu, Yu-Ru

    2011-12-01

As the exploration and exploitation of oil and gas proliferate in deepwater areas, the requirements on the reliability of dynamic positioning systems become increasingly stringent. The control objective of ensuring safe operation in deep water cannot be met by a single controller for dynamic positioning. In order to increase the availability and reliability of the dynamic positioning control system, triple-redundancy hardware and software control architectures were designed and developed according to the safety specifications of the DP-3 classification notation for dynamically positioned ships and rigs. The hardware redundant configuration takes the form of a triple-redundant hot-standby configuration including three identical operator stations and three real-time control computers which are connected through dual networks. The motion control and redundancy management functions of the control computers were implemented in software on the real-time operating system VxWorks. The software realization of loose task synchronization, majority voting and fault detection is presented in detail. A hierarchical software architecture was planned during the development of the software, consisting of an application layer, a real-time layer and a physical layer. The behavior of the DP-3 dynamic positioning control system was modeled by a Markov model to analyze its reliability. The effects of variation in parameters on the reliability measures were investigated. A time-domain dynamic simulation was carried out on a deepwater drilling rig to prove the feasibility of the proposed control architecture.
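
    Below is a toy 2-out-of-3 majority voter of the kind such redundancy management relies on (the real system votes over dual networks with fault detection and synchronization); the tolerance and the command values are invented:

```python
# Toy 2-out-of-3 voter; in the real architecture voting runs across three
# loosely synchronized computers with dedicated fault detection.
from statistics import median

def vote(outputs: list[float], tolerance: float = 0.05) -> float:
    """Return the agreed command if at least two channels agree within
    tolerance of the median, otherwise signal a degraded-mode switch."""
    m = median(outputs)
    agreeing = [o for o in outputs if abs(o - m) <= tolerance]
    if len(agreeing) < 2:
        raise RuntimeError("no 2-out-of-3 majority; switch to degraded mode")
    return sum(agreeing) / len(agreeing)

print(vote([0.52, 0.53, 0.91]))   # third channel is out-voted -> ~0.525
```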

  12. Improving the Reliability of Decision-Support Systems for Nuclear Emergency Management by Leveraging Software Design Diversity

    Directory of Open Access Journals (Sweden)

    Tudor B. Ionescu

    2016-03-01

This paper introduces a novel method of continuous verification of simulation software used in decision-support systems for nuclear emergency management (DSNE). The proposed approach builds on methods from the field of software reliability engineering, such as N-Version Programming, Recovery Blocks, and Consensus Recovery Blocks. We introduce a new acceptance test for dispersion simulation results and a new voting scheme based on taxonomies of simulation results rather than individual simulation results. The acceptance test and the voter are used in a new scheme, which extends the Consensus Recovery Block method by a database of result taxonomies to support machine-learning. This enables the system to learn how to distinguish correct from incorrect results, with respect to the implemented numerical schemes. Considering that decision-support systems for nuclear emergency management are used in a safety-critical application context, the methods introduced in this paper help improve the reliability of the system and the trustworthiness of the simulation results used by emergency managers in the decision making process. The effectiveness of the approach has been assessed using the atmospheric dispersion forecasts of two test versions of the widely used RODOS DSNE system.
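
    Here is a control-flow sketch of the extended Consensus Recovery Block idea; the version functions and acceptance test are placeholders, and this toy votes on exact equality whereas the paper votes on taxonomies of simulation results:

```python
# Consensus Recovery Block control flow (sketch): run N versions, try to
# vote, and fall back to an acceptance test when no consensus exists.
from collections import Counter

def consensus_recovery_block(versions, acceptance_test, inputs):
    results = [v(inputs) for v in versions]
    winner, votes = Counter(results).most_common(1)[0]
    if votes >= 2:                      # a consensus among versions exists
        return winner
    for r in results:                   # otherwise fall back to acceptance test
        if acceptance_test(r):
            return r
    raise RuntimeError("no consensus and no acceptable result")

# Hypothetical demo: two of three diverse versions agree.
versions = [lambda x: 2 * x, lambda x: 2 * x, lambda x: 2 * x + 1]
print(consensus_recovery_block(versions, lambda r: r >= 0, 3))  # -> 6
```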

  13. Study of the nonlinear imperfect software debugging model

    International Nuclear Information System (INIS)

    Wang, Jinyong; Wu, Zhibo

    2016-01-01

    In recent years there has been a dramatic proliferation of research on imperfect software debugging phenomena. Software debugging is a complex process and is affected by a variety of factors, including the environment, resources, personnel skills, and personnel psychologies. Therefore, the simple assumption that debugging is perfect is inconsistent with the actual software debugging process, wherein a new fault can be introduced when removing a fault. Furthermore, the fault introduction process is nonlinear, and the cumulative number of nonlinearly introduced faults increases over time. Thus, this paper proposes a nonlinear, NHPP imperfect software debugging model in consideration of the fact that fault introduction is a nonlinear process. The fitting and predictive power of the NHPP-based proposed model are validated through related experiments. Experimental results show that this model displays better fitting and predicting performance than the traditional NHPP-based perfect and imperfect software debugging models. S-confidence bounds are set to analyze the performance of the proposed model. This study also examines and discusses optimal software release-time policy comprehensively. In addition, this research on the nonlinear process of fault introduction is significant given the recent surge of studies on software-intensive products, such as cloud computing and big data. - Highlights: • Fault introduction is a nonlinear changing process during the debugging phase. • The assumption that the process of fault introduction is nonlinear is credible. • Our proposed model can better fit and accurately predict software failure behavior. • Research on fault introduction case is significant to software-intensive products.
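
    For context, here is a small fit of the classic Goel-Okumoto NHPP mean value function (not the paper's nonlinear-introduction extension, which additionally lets introduced faults grow nonlinearly over time) to invented cumulative fault counts:

```python
# Illustration only: fitting m(t) = a * (1 - exp(-b t)) to hypothetical
# cumulative fault data; a is the eventual fault content, b the detection rate.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)          # test weeks
faults = np.array([12, 21, 27, 32, 35, 37, 38, 39], float)   # invented counts

def m(t, a, b):
    return a * (1.0 - np.exp(-b * t))

(a, b), _ = curve_fit(m, t, faults, p0=(40.0, 0.5))
print(f"estimated total faults a = {a:.1f}, detection rate b = {b:.2f}")
```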

  14. Automating risk analysis of software design models.

    Science.gov (United States)

    Frydman, Maxime; Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P

    2014-01-01

    The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance.

  15. A SYSTEMATIC STUDY OF SOFTWARE QUALITY MODELS

    OpenAIRE

    Dr.Vilas. M. Thakare; Ashwin B. Tomar

    2011-01-01

This paper aims to provide a basis for software quality model research through a systematic study of papers. It identifies nearly seventy software quality research papers from journals and classifies each paper as per research topic, estimation approach, study context and data set. The paper's results, combined with other knowledge, provide support for recommendations in future software quality model research: to increase the area of search for relevant studies, carefully select the papers within a set ...

  16. Radiobiological modelling with MarCell software

    International Nuclear Information System (INIS)

    Hasan, J.S.; Jones, T.D.

    1996-01-01

    Jones introduced a bone marrow radiation cell kinetics model with great potential for application in the fields of health physics, radiation research, and medicine. However, until recently, only the model developers have been able to apply it because of the complex array of biological and physical assignments needed for evaluation of a particular radiation exposure protocol. The purpose of this article is to illustrate the use of MarCell (MARrow CELL Kinetics) software for MS-DOS, a user-friendly computer implementation of that mathematical model that allows almost anyone with an elementary knowledge of radiation physics and/or medical procedures to apply the model. A hands-on demonstration of the software will be given by guiding the user through evaluation of a medical total body irradiation protocol and a nuclear fallout scenario. A brief overview of the software is given in the Appendix

  17. Modelling software failures of digital I and C in probabilistic safety analyses based on the TELEPERM registered XS operating experience

    International Nuclear Information System (INIS)

    Jockenhoevel-Barttfeld, Mariana; Taurines Andre; Baeckstroem, Ola; Holmberg, Jan-Erik; Porthin, Markus; Tyrvaeinen, Tero

    2015-01-01

Digital instrumentation and control (I and C) systems appear as upgrades in existing nuclear power plants (NPPs) and in new plant designs. In order to assess the impact of digital system failures, quantifiable reliability models are needed, along with data for digital systems, that are compatible with existing probabilistic safety assessments (PSA). The paper focuses on the modelling of software failures of digital I and C systems in probabilistic assessments. An analysis of software faults, failures and effects is presented to derive relevant failure modes of system and application software for the PSA. The estimates of software failure probabilities are based on an analysis of the operating experience of TELEPERM registered XS (TXS). For the assessment of application software failures, the analysis combines the use of the TXS operating experience at the application function level with conservative engineering judgment. Probabilities of failure to actuate on demand and of spurious actuation are estimated for a typical reactor protection application. Moreover, the paper gives guidelines for the modelling of software failures in the PSA. The strategy presented in this paper is generic and can be applied to different software platforms and their applications.
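
    One simple, commonly used ingredient of such operating-experience arguments (a sketch, not the paper's actual estimation procedure) is the confidence bound obtainable from failure-free demands:

```python
# With zero failures observed in n demands, an upper bound p on the
# per-demand failure probability at confidence c satisfies (1 - p)^n = 1 - c.
# The demand count below is invented for illustration.
def p_fail_upper(n_demands: int, confidence: float = 0.95) -> float:
    """Upper bound on per-demand failure probability after n failure-free demands."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_demands)

print(f"p <= {p_fail_upper(10_000):.1e} per demand at 95% confidence")  # ~3/n
```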

  18. A comparison between Markovian models and Bayesian networks for treating some dependent events in reliability evaluations

    International Nuclear Information System (INIS)

    Duarte, Juliana P.; Leite, Victor C.; Melo, P.F. Frutuoso e

    2013-01-01

Bayesian networks have become a very handy tool for solving problems in various application areas. This paper discusses the use of Bayesian networks to treat dependent events in reliability engineering typically modeled by Markovian models. Dependent events play an important role, for example, when treating load-sharing systems, bridge systems, common-cause failures, and switching systems (those for which a standby component is activated after the main one fails by means of a switching mechanism). Repair plays an important role in all these cases (as does, for example, the number of repairmen). All Bayesian network calculations are performed by means of the Netica™ software, of Norsys Software Corporation, and Fortran 90 is used to evaluate them over time. The discussion considers the development of time-dependent reliability figures of merit, which are easily obtained through Markovian models but not through Bayesian networks, because the latter need probability figures as input rather than failure and repair rates. Bayesian networks produced results in very good agreement with those of Markov models and pivotal decomposition. Static and discrete-time Bayesian networks (DTBN) were used in order to check their capabilities for modeling specific situations, like switching failures in cold-standby systems. The DTBN was more flexible for modeling systems where the time of occurrence of an event is important, for example, standby failure and repair. However, the static network model showed results as good as the DTBN's with a much more simplified approach. (author)
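
    For illustration (rates invented, not from the paper), here is the kind of time-dependent availability figure that falls out of a two-state Markov generator via the matrix exponential:

```python
# Two-state repairable component: state 0 = up, state 1 = down.
# p(t) = p(0) @ expm(Q t) gives the time-dependent state probabilities.
import numpy as np
from scipy.linalg import expm

lam, mu = 1e-3, 1e-1                 # hypothetical failure/repair rates (1/h)
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])          # generator matrix, rows sum to zero

p0 = np.array([1.0, 0.0])             # start in the up state
for t in (10.0, 100.0, 1000.0):
    pt = p0 @ expm(Q * t)
    print(f"A({t:>6.0f} h) = {pt[0]:.4f}")
print(f"steady state  = {mu / (lam + mu):.4f}")
```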

  20. Evaluating predictive models of software quality

    International Nuclear Information System (INIS)

    Ciaschini, V; Canaparo, M; Ronchieri, E; Salomoni, D

    2014-01-01

Applications from the High Energy Physics scientific community are constantly growing and implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, so as to only deliver software with a risk lower than an agreed threshold. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent to discover the risk factor of each product and compare it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and finally we conclude by suggesting directions for further studies.

  2. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    Directory of Open Access Journals (Sweden)

    Hai An

    2016-08-01

Aiming to resolve the problems of the variety of uncertainty variables that coexist in engineering structure reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article, and a convergent solving method is presented. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new hybrid reliability index definition is presented based on the random–fuzzy–interval model. Furthermore, the calculation flowchart of the hybrid reliability index is presented and it is solved using the modified limit-step-length iterative algorithm, which ensures convergence. The validity of the convergent algorithm for the hybrid reliability model is verified through calculation examples from the literature. In the end, a numerical example demonstrates that the hybrid reliability index is applicable for the wear reliability assessment of mechanisms, where truncated random variables, fuzzy random variables, and interval variables coexist. The demonstration also shows the good convergence of the iterative algorithm proposed in this article.
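
    This is not the article's modified limit-step-length algorithm, but a compact sketch of the classic HL-RF iteration such methods build on, applied to a toy limit state in standard normal space:

```python
# HL-RF iteration for the reliability index beta = ||u*||, where u* is the
# most probable failure point of g(u) = 0; the limit state g is invented.
import numpy as np

def g(u):
    return 3.0 + u[0] + 2.0 * u[1] - 0.1 * u[0] * u[1]   # toy limit state

def grad(u, h=1e-6):
    """Central-difference gradient of g."""
    return np.array([(g(u + h * e) - g(u - h * e)) / (2 * h) for e in np.eye(2)])

u = np.zeros(2)
for _ in range(50):
    gv, gr = g(u), grad(u)
    u_new = (gr @ u - gv) * gr / (gr @ gr)   # standard HL-RF update
    if np.linalg.norm(u_new - u) < 1e-8:
        u = u_new
        break
    u = u_new

print(f"reliability index beta = {np.linalg.norm(u):.3f}")
```

    The modified step-length schemes the article refers to exist precisely because this plain iteration can oscillate for strongly nonlinear limit states.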

  3. Error-Free Software

    Science.gov (United States)

    1989-01-01

001 is an integrated tool suite for automatically developing ultra-reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production-quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer-aided software engineering product in the industry to concentrate on automatically supporting the development of an ultra-reliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  4. Integrating Behaviour in Software Models: An Event Coordination Notation

    DEFF Research Database (Denmark)

    Kindler, Ekkart

    2011-01-01

    One of the main problems in model-based software engineering is modelling behaviour in such a way that the behaviour models can be easily integrated with each other, with the structural software models and with pre-existing software. In this paper, we propose an event coordination notation (ECNO)...

  5. Software engineering

    CERN Document Server

    Sommerville, Ian

    2010-01-01

    The ninth edition of Software Engineering presents a broad perspective of software engineering, focusing on the processes and techniques fundamental to the creation of reliable, software systems. Increased coverage of agile methods and software reuse, along with coverage of 'traditional' plan-driven software engineering, gives readers the most up-to-date view of the field currently available. Practical case studies, a full set of easy-to-access supplements, and extensive web resources make teaching the course easier than ever.

  6. Evolving software products, the design of a water-related modeling software ecosystem

    DEFF Research Database (Denmark)

    Manikas, Konstantinos

    2017-01-01

more than 50 years ago. However, a radical change of software products, evolving the software engineering as much as the organizational and business aspects in a disruptive manner, is rather rare. In this paper, we report on the transformation of one of the market-leading product series in water-related calculation and modeling from a traditional business-as-usual series of products to an evolutionary software ecosystem. We do so by relying on existing concepts of software ecosystem analysis to analyze the future ecosystem. We report and elaborate on the main focus points necessary for this transition, and argue for the generalization of our focus points to the transition from traditional business-as-usual software products to software ecosystems.

  7. Assessing software quality at each step of its life-cycle to enhance reliability of control systems

    International Nuclear Information System (INIS)

    Hardion, V.; Buteau, A.; Leclercq, N.; Abeille, G.; Pierre-Joseph, Z.; Le, S.

    2012-01-01

A distributed software control system aims to enhance upgradability and reliability by sharing responsibility between several components. The disadvantage is that this makes it harder to detect problems across a significant number of modules. With Kaizen in mind, we have chosen to continuously invest in automation to obtain a complete overview of software quality despite the growth of legacy code. The development process has already been mastered by staging each life-cycle step thanks to a continuous integration server based on JENKINS and MAVEN. We enhanced this process, focusing on 3 objectives: automatic test, static code analysis and post-mortem supervision. Now, the build process automatically includes a test section to detect regressions, incorrect behaviour and integration incompatibility. The in-house TANGOUNIT project addresses the difficulties of testing distributed components such as Tango Devices. In the next step, the programming code has to pass a complete code quality check-up. The SONAR quality server has been integrated into the process to collect each static code analysis and display the hot topics on summary web pages. Finally, the integration of Google BREAKPAD into every TANGO Device gives us essential statistics from crash reports and enables us to replay crash scenarios at any time. We have already gained greater visibility on current developments. Some concrete results will be presented, including reliability enhancement, better management of subcontracted software development, quicker adoption of coding standards by new developers and understanding of impacts when moving to a new technology. (authors)

  8. Development of a New VLBI Data Analysis Software

    Science.gov (United States)

    Bolotin, Sergei; Gipson, John M.; MacMillan, Daniel S.

    2010-01-01

    We present an overview of a new VLBI analysis software under development at NASA GSFC. The new software will replace CALC/SOLVE and many related utility programs. It will have the capabilities of the current system as well as incorporate new models and data analysis techniques. In this paper we give a conceptual overview of the new software. We formulate the main goals of the software. The software should be flexible and modular to implement models and estimation techniques that currently exist or will appear in future. On the other hand it should be reliable and possess production quality for processing standard VLBI sessions. Also, it needs to be capable of processing observations from a fully deployed network of VLBI2010 stations in a reasonable time. We describe the software development process and outline the software architecture.

  9. Mathematical reliability an expository perspective

    CERN Document Server

    Mazzuchi, Thomas; Singpurwalla, Nozer

    2004-01-01

    In this volume consideration was given to more advanced theoretical approaches and novel applications of reliability to ensure that topics having a futuristic impact were specifically included. Topics like finance, forensics, information, and orthopedics, as well as the more traditional reliability topics were purposefully undertaken to make this collection different from the existing books in reliability. The entries have been categorized into seven parts, each emphasizing a theme that seems poised for the future development of reliability as an academic discipline with relevance. The seven parts are networks and systems; recurrent events; information and design; failure rate function and burn-in; software reliability and random environments; reliability in composites and orthopedics, and reliability in finance and forensics. Embedded within the above are some of the other currently active topics such as causality, cascading, exchangeability, expert testimony, hierarchical modeling, optimization and survival...

  10. Development of a new model to predict indoor daylighting: Integration in CODYRUN software and validation

    Energy Technology Data Exchange (ETDEWEB)

    Fakra, A.H., E-mail: fakra@univ-reunion.f [Physics and Mathematical Engineering Laboratory for Energy and Environment (PIMENT), University of La Reunion, 117 rue du General Ailleret, 97430 Le Tampon (French Overseas Dpt.), Reunion (France); Miranville, F.; Boyer, H.; Guichard, S. [Physics and Mathematical Engineering Laboratory for Energy and Environment (PIMENT), University of La Reunion, 117 rue du General Ailleret, 97430 Le Tampon (French Overseas Dpt.), Reunion (France)

    2011-07-15

Research highlights: This study presents a new model capable of simulating indoor daylighting; the model was introduced into research software called CODYRUN; the validation of the code was carried out on many test cases. -- Abstract: Many models exist in the scientific literature for determining indoor daylighting values. They are classified in three categories: numerical, simplified and empirical models. Nevertheless, each of these categories of models is not convenient for every application. Indeed, numerical models require long calculation times; the conditions of use of simplified models are limited; and experimental models need not only substantial financial resources but also perfect control of experimental devices (e.g. scale models), as well as the climatic characteristics of the location (e.g. in situ experiments). In this article, a new model based on a combination of multiple simplified models is established, with the objective of improving this category of model. The originality of our paper relies on the coupling of several simplified models of indoor daylighting calculation. The accuracy of the simulation code, introduced into the CODYRUN software to correctly simulate indoor illuminance, is then verified. The software is a numerical building simulation code developed in the Physics and Mathematical Engineering Laboratory for Energy and Environment (PIMENT) at the University of Reunion. Initially dedicated to thermal, airflow and hydrous phenomena in buildings, the software has been extended to the calculation of indoor daylighting. New models and algorithms, which rely on a semi-detailed approach, are presented in this paper. In order to validate the accuracy of the integrated models, many test cases have been considered: analytical tests, inter-software comparisons and experimental comparisons. In order to prove the accuracy of the new model, which can properly simulate the illuminance, a ...

  11. On Model Based Synthesis of Embedded Control Software

    OpenAIRE

    Alimguzhin, Vadim; Mari, Federico; Melatti, Igor; Salvo, Ivano; Tronci, Enrico

    2012-01-01

    Many Embedded Systems are indeed Software Based Control Systems (SBCSs), that is control systems whose controller consists of control software running on a microcontroller device. This motivates investigation on Formal Model Based Design approaches for control software. Given the formal model of a plant as a Discrete Time Linear Hybrid System and the implementation specifications (that is, number of bits in the Analog-to-Digital (AD) conversion) correct-by-construction control software can be...

  12. Models on reliability of non-destructive testing

    International Nuclear Information System (INIS)

    Simola, K.; Pulkkinen, U.

    1998-01-01

The reliability of ultrasonic inspections has been studied in, e.g., the international PISC (Programme for the Inspection of Steel Components) exercises. These exercises have produced a large amount of information on the effect of various factors on the reliability of inspections. The information obtained from reliability experiments is used to model the dependency of the flaw detection probability on various factors and to evaluate the performance of inspection equipment, including sizing accuracy. The information from experiments is utilised most effectively when mathematical models are applied. Here, some statistical models for the reliability of non-destructive tests are introduced. In order to demonstrate the use of inspection reliability models, they have been applied to the inspection results of intergranular stress corrosion cracking (IGSCC) type flaws in the PISC III exercise (PISC 1995). The models are applied both to the flaw detection frequency data of all inspection teams and to the flaw sizing data of one participating team. (author)
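
    As a sketch under assumed hit/miss data, here is the standard log-logistic probability-of-detection curve POD(a) = 1/(1 + exp(-(b0 + b1 ln a))) fitted by maximum likelihood, with the a90 flaw size derived from the fit; the flaw sizes and outcomes are invented:

```python
# Logit POD model on log flaw size, a common form for such detection models;
# data are invented hit/miss outcomes, not from the PISC exercises.
import numpy as np
from scipy.optimize import minimize

a = np.array([1, 2, 3, 4, 5, 6, 8, 10, 12, 15], dtype=float)   # flaw size, mm
hit = np.array([0, 0, 1, 0, 1, 1, 1, 1, 1, 1], dtype=float)    # detected?

def neg_log_lik(theta):
    b0, b1 = theta
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * np.log(a))))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(hit * np.log(p) + (1 - hit) * np.log(1 - p))

b0, b1 = minimize(neg_log_lik, x0=(0.0, 1.0)).x
a90 = np.exp((np.log(9.0) - b0) / b1)    # POD = 0.9 -> logit = ln(0.9/0.1)
print(f"a90 = {a90:.1f} mm")
```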

  13. Reliable Software Development for Machine Protection Systems

    CERN Document Server

    Anderson, D; Dragu, M; Fuchsberger, K; Garnier, JC; Gorzawski, AA; Koza, M; Krol, K; Misiowiec, K; Stamos, K; Zerlauth, M

    2014-01-01

The controls software for the Large Hadron Collider (LHC) at CERN, with more than 150 million lines of code, is among the largest known code bases in the world [1]. Industry has been applying Agile software engineering techniques for more than two decades now, and the advantages of these techniques can no longer be ignored when managing the code base for large projects within the accelerator community. Furthermore, CERN is a particular environment due to high personnel turnover and manpower limitations, where applying Agile processes can improve both codebase management and code quality. This paper presents the successful application of the Agile software development process Scrum for machine protection systems at CERN, the quality standards and infrastructure introduced together with the Agile process, as well as the challenges encountered in adapting it to the CERN environment.

  14. A Comparison Between Five Models Of Software Engineering

    OpenAIRE

    Nabil Mohammed Ali Munassar; A. Govardhan

    2010-01-01

This research deals with a vital and important issue in the computer world. It is concerned with the software management processes that examine the area of software development through development models, which are known as the software development life cycle. It presents five of the development models, namely waterfall, iteration, V-shaped, spiral and extreme programming. These models have advantages and disadvantages as well. Therefore, the main objective of this research is to represent diff...

  15. A Software Reuse Approach and Its Effect On Software Quality, An Empirical Study for The Software Industry

    OpenAIRE

    Mateen, Ahmed; Kausar, Samina; Sattar, Ahsan Raza

    2017-01-01

Software reusability has become a topic of much interest because of its potential for increased quality and reduced cost. A good software reuse process enhances reliability, productivity and quality and reduces time and cost. Current reuse techniques focus on the reuse of software artifacts grounded on anticipated functionality, whereas the non-functional (quality) aspects are also important. Software reusability is used here to expand the quality and productivity of software. It improves overal...

  16. Reliability model for offshore wind farms; Paalidelighedsmodel for havvindmoelleparker

    Energy Technology Data Exchange (ETDEWEB)

    Christensen, P.; Lundtang Paulsen, J.; Lybech Toegersen, M.; Krogh, T. [Risoe National Lab., Roskilde (Denmark); Raben, N.; Donovan, M.H.; Joergensen, L. [SEAS (Denmark); Winther-Jensen, M.

    2002-05-01

A method for predicting the mean availability of an offshore wind farm has been developed. The factors comprised are the reliability of the single turbine, the strategy for preventive maintenance, the climate, the number of repair teams, and the type of boats available for transport. The mean availability is defined as the sum of the fractions of time where each turbine is available for production. The project has been carried out together with SEAS Wind Technique, and their site Roedsand has been chosen as the example for the work. A climate model has been created based on actual site measurements. The prediction of the availability is done with a Monte Carlo simulation. Software was developed for the preparation of the climate model from weather measurements as well as for the Monte Carlo simulation. Three examples have been simulated, one with guessed parameters, and the other two with parameters closer to the Roedsand case. (au)
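
    Here is a stripped-down Monte Carlo sketch in the spirit of the model described; the failure rate, repair time and weather-window waits are invented, and the real model adds repair-team and boat constraints plus a climate series measured at the site:

```python
# Toy availability simulation: each turbine alternates between running
# (exponential time to failure) and outage (weather wait + repair).
import random

random.seed(42)
MTBF_H, REPAIR_H, HORIZON_H = 1500.0, 60.0, 20 * 8760   # invented parameters

def turbine_availability() -> float:
    t, down = 0.0, 0.0
    while t < HORIZON_H:
        t += random.expovariate(1.0 / MTBF_H)    # run until next failure
        wait = random.uniform(0.0, 72.0)          # wait for a weather window
        outage = wait + REPAIR_H
        down += outage
        t += outage
    return 1.0 - down / HORIZON_H

farm = [turbine_availability() for _ in range(72)]        # hypothetical farm
print(f"mean availability = {sum(farm) / len(farm):.3f}")
```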

  17. Analysis of linear measurements on 3D surface models using CBCT data segmentation obtained by automatic standard pre-set thresholds in two segmentation software programs: an in vitro study.

    Science.gov (United States)

    Poleti, Marcelo Lupion; Fernandes, Thais Maria Freire; Pagin, Otávio; Moretti, Marcela Rodrigues; Rubira-Bullen, Izabel Regina Fischer

    2016-01-01

The aim of this in vitro study was to evaluate the reliability and accuracy of linear measurements on three-dimensional (3D) surface models obtained by standard pre-set thresholds in two segmentation software programs. Ten mandibles with 17 silica markers were scanned at 0.3-mm voxel size in the i-CAT Classic (Imaging Sciences International, Hatfield, PA, USA). Twenty linear measurements were carried out twice by two observers on the 3D surface models: in Dolphin Imaging 11.5 (Dolphin Imaging & Management Solutions, Chatsworth, CA, USA), using two filters (Translucent and Solid-1), and in InVesalius 3.0.0 (Centre for Information Technology Renato Archer, Campinas, SP, Brazil). The physical measurements were made twice by another observer using a digital caliper on the dry mandibles. Excellent intra- and inter-observer reliability for the markers, physical measurements, and 3D surface models was found (intra-class correlation coefficient (ICC) and Pearson's r ≥ 0.91). The linear measurements on 3D surface models by the Dolphin and InVesalius software programs were accurate (Dolphin Solid-1 > InVesalius > Dolphin Translucent). The highest absolute and percentage errors were obtained for the variables R1-R1 (1.37 mm) and MF-AC (2.53%) in the Dolphin Translucent and InVesalius software, respectively. Linear measurements on 3D surface models obtained by standard pre-set thresholds in the Dolphin and InVesalius software programs are reliable and accurate compared with physical measurements. Studies that evaluate the reliability and accuracy of 3D models are necessary to ensure error predictability and to establish diagnosis, treatment plan, and prognosis in a more realistic way.

  18. A Study On Traditional And Evolutionary Software Development Models

    Directory of Open Access Journals (Sweden)

    Kamran Rasheed

    2017-07-01

Today, computing technologies are becoming the backbone of organizations and helpful for individual work; in addition to the computing device itself, we need software. A set of instructions, or a computer program, is known as software. Software is developed through traditional or newer, evolutionary models. Software development is becoming a key and successful business nowadays; without software, all hardware is useless. The collective steps performed in software development are known as the software development life cycle (SDLC). There are adaptive and predictive models for developing software. Predictive models are well established, like the waterfall, spiral, prototype and V-shaped models, while adaptive models include agile Scrum. All methodologies, both adaptive and predictive, have their own procedures and steps. Predictive models are static and adaptive models are dynamic, meaning that changes cannot be made to predictive models while adaptive models have the capability of changing. The purpose of this study is to get familiar with all of these models and to discuss their uses and development steps. This discussion will be helpful in deciding which model should be used in which circumstance and what development steps each model includes.

  19. Reliability Modeling of Wind Turbines

    DEFF Research Database (Denmark)

    Kostandyan, Erik

Cost reductions for offshore wind turbines are a substantial requirement in order to make offshore wind energy more competitive compared to other energy supply methods. During the 20-25 years of a wind turbine's useful life, Operation & Maintenance costs are typically estimated to be a quarter to one third of the total cost of energy. Reduction of Operation & Maintenance costs will result in significant cost savings and cheaper electricity production. Operation & Maintenance processes mainly involve actions related to replacements or repair. Identifying the right times when ... for Operation & Maintenance planning. Concentrating efforts on the development of such models, this research is focused on reliability modeling of wind turbine critical subsystems (especially the power converter system). For reliability assessment of these components, structural reliability methods are applied ...

  20. Reliability of Soft Tissue Model Based Implant Surgical Guides; A Methodological Mistake.

    Science.gov (United States)

    Sabour, Siamak; Dastjerdi, Elahe Vahid

    2012-08-20

We were interested to read the paper by Maney P and colleagues published in the July 2012 issue of J Oral Implantol. The authors, aiming to assess the reliability of soft tissue model based implant surgical guides, reported that the accuracy was evaluated using software [1]. I found the manuscript title of Maney P, et al. incorrect and misleading. Moreover, they reported that twenty-two sites (46.81%) were considered accurate (13 of 24 maxillary and 9 of 23 mandibular sites). As the authors point out in their conclusion, soft tissue models do not always provide sufficient accuracy for implant surgical guide fabrication. Reliability (precision) and validity (accuracy) are two different methodological issues in research. Sensitivity, specificity, PPV, NPV, the positive likelihood ratio (sensitivity/(1 - specificity)) and the negative likelihood ratio ((1 - sensitivity)/specificity), as well as the odds ratio (true results/false results, preferably more than 50), are among the estimates used to evaluate the validity (accuracy) of a single test compared to a gold standard [2-4]. It is not clear to which of the above-mentioned estimates the reported twenty-two accurate sites (46.81%) relate. Reliability (repeatability or reproducibility) is assessed with different statistical tests; Pearson's r, least squares and the paired t-test are all among the common mistakes in reliability analysis [5]. Briefly, for quantitative variables the intra-class correlation coefficient (ICC) should be used, and for qualitative variables weighted kappa, with caution, because kappa has its own limitations too. Regarding reliability or agreement, it is good to know that in computing the kappa value only concordant cells are considered, whereas discordant cells should also be taken into account in order to reach a correct estimation of agreement (weighted kappa) [2-4]. As a take-home message, for reliability and validity analysis, appropriate tests should be
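
    To make the concordant/discordant point concrete, here is a minimal unweighted Cohen's kappa from an invented 2x2 agreement table (for ordered categories the letter rightly prefers weighted kappa):

```python
# Cohen's kappa (unweighted) from a 2x2 rater agreement table; counts are
# hypothetical. Chance agreement comes from the row and column margins,
# so the discordant cells influence the result, unlike raw percent agreement.
import numpy as np

table = np.array([[40, 5],
                  [3, 52]], dtype=float)   # rater A rows, rater B columns
n = table.sum()
p_obs = np.trace(table) / n                               # observed agreement
p_exp = (table.sum(axis=1) @ table.sum(axis=0)) / n**2    # chance agreement
kappa = (p_obs - p_exp) / (1 - p_exp)
print(f"kappa = {kappa:.2f}")
```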

  1. The Ragnarok Architectural Software Configuration Management Model

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    1999-01-01

    The architecture is the fundamental framework for designing and implementing large scale software, and the ability to trace and control its evolution is essential. However, many traditional software configuration management tools view 'software' merely as a set of files, not as an architecture....... This introduces an unfortunate impedance mismatch between the design domain (architecture level) and configuration management domain (file level.) This paper presents a software configuration management model that allows tight version control and configuration management of the architecture of a software system...

  2. Architecture design in global and model-centric software development

    NARCIS (Netherlands)

    Heijstek, Werner

    2012-01-01

    This doctoral dissertation describes a series of empirical investigations into representation, dissemination and coordination of software architecture design in the context of global software development. A particular focus is placed on model-centric and model-driven software development.

  3. Software for modelling groundwater transport and contaminant migration

    International Nuclear Information System (INIS)

    Gishkelyuk, I.A.

    2008-01-01

    Facilities of modern software for modeling of groundwater transport and process of contaminant distribution are considered. Advantages of their application are discussed. The comparative analysis of mathematical modeling software of 'Groundwater modeling system' and 'Earth Science Module' from 'COMSOL Multiphysics' is carried out. (authors)

  4. Model-based engineering for medical-device software.

    Science.gov (United States)

    Ray, Arnab; Jetley, Raoul; Jones, Paul L; Zhang, Yi

    2010-01-01

    This paper demonstrates the benefits of adopting model-based design techniques for engineering medical device software. By using a patient-controlled analgesic (PCA) infusion pump as a candidate medical device, the authors show how using models to capture design information allows for i) fast and efficient construction of executable device prototypes ii) creation of a standard, reusable baseline software architecture for a particular device family, iii) formal verification of the design against safety requirements, and iv) creation of a safety framework that reduces verification costs for future versions of the device software. 1.

  5. inventory management, VMI, software agents, MDV model

    Directory of Open Access Journals (Sweden)

    Waldemar Wieczerzycki

    2012-03-01

    Full Text Available Background: As is well known, the implementation of instruments of logistics management is only possible with the use of the latest information technology. So-called agent technology is one of the most promising solutions in this area. Its essence consists in an entirely new way of software distribution on the computer network platform, in which computers exchange among themselves not only data but also software modules, called agents. The first aim is to propose an alternative method for implementing the concept of inventory management by the supplier with the use of intelligent software agents, which are able not only to transfer information but also to make autonomous decisions based on the privileges given to them. The second aim of this research was to propose a new model of a software agent that offers both high mobility and high intelligence. Methods: After a brief discussion of the nature of agent technology, the most important benefits of using it to build platforms to support business are given. Then the original model of a polymorphic software agent, called the Multi-Dimensionally Versioned Software Agent (MDV), is presented, which is oriented to the specificity of IT applications in business. The MDV agent is polymorphic, which allows transmitting through the network only the most relevant parts of its code, and only when necessary. Consequently, the network nodes exchange small amounts of software code, which ensures high mobility of software agents, and thus highly efficient operation of IT platforms built on the proposed model. Next, the adaptation of MDV software agents to the implementation of a well-known logistics management instrument - VMI (Vendor Managed Inventory) - is illustrated. Results: The key benefits of this approach are identified, among which one can distinguish: reduced costs, higher flexibility and efficiency, new functionality - especially addressed to business negotiation, full automation

  6. Software Engineering Laboratory (SEL) cleanroom process model

    Science.gov (United States)

    Green, Scott; Basili, Victor; Godfrey, Sally; Mcgarry, Frank; Pajerski, Rose; Waligora, Sharon

    1991-01-01

    The Software Engineering Laboratory (SEL) cleanroom process model is described. The term 'cleanroom' originates in the integrated circuit (IC) production process, where IC's are assembled in dust free 'clean rooms' to prevent the destructive effects of dust. When applying the clean room methodology to the development of software systems, the primary focus is on software defect prevention rather than defect removal. The model is based on data and analysis from previous cleanroom efforts within the SEL and is tailored to serve as a guideline in applying the methodology to future production software efforts. The phases that are part of the process model life cycle from the delivery of requirements to the start of acceptance testing are described. For each defined phase, a set of specific activities is discussed, and the appropriate data flow is described. Pertinent managerial issues, key similarities and differences between the SEL's cleanroom process model and the standard development approach used on SEL projects, and significant lessons learned from prior cleanroom projects are presented. It is intended that the process model described here will be further tailored as additional SEL cleanroom projects are analyzed.

  7. Presenting an Evaluation Model for the Cancer Registry Software.

    Science.gov (United States)

    Moghaddasi, Hamid; Asadi, Farkhondeh; Rabiei, Reza; Rahimi, Farough; Shahbodaghi, Reihaneh

    2017-12-01

    As cancer is increasingly growing, cancer registry is of great importance as the main core of cancer control programs, and many different software products have been designed for this purpose. Therefore, establishing a comprehensive evaluation model is essential to evaluate and compare a wide range of such software. In this study, the criteria for cancer registry software were determined by studying the related documents and two functional software products in this field. The evaluation tool was a checklist, and in order to validate the model, this checklist was presented to experts in the form of a questionnaire. To analyze the results of the validation, an agreement coefficient of 75% was set as the threshold for applying changes. Finally, when the model was approved, the final version of the evaluation model for cancer registry software was presented. The evaluation model of this study contains an evaluation tool and an evaluation method. The evaluation tool is a checklist including the general and specific criteria of cancer registry software along with their sub-criteria. The evaluation method of this study was chosen as a criteria-based evaluation method based on the findings. The model of this study encompasses various dimensions of cancer registry software and a proper method for evaluating it. The strong point of this evaluation model is the separation between general criteria and specific ones, while trying to fulfill the comprehensiveness of the criteria. Since this model has been validated, it can be used as a standard to evaluate cancer registry software.

  8. Functional Testing Protocols for Commercial Building Efficiency Baseline Modeling Software

    Energy Technology Data Exchange (ETDEWEB)

    Jump, David; Price, Phillip N.; Granderson, Jessica; Sohn, Michael

    2013-09-06

    This document describes procedures for testing and validating proprietary baseline energy modeling software accuracy in predicting energy use over the period of interest, such as a month or a year. The procedures are designed according to the methodology used for public domain baselining software in another LBNL report that was (like the present report) prepared for Pacific Gas and Electric Company: "Commercial Building Energy Baseline Modeling Software: Performance Metrics and Method Testing with Open Source Models and Implications for Proprietary Software Testing Protocols" (referred to here as the "Model Analysis Report"). The test procedure focuses on the quality of the software's predictions rather than on the specific algorithms used to predict energy use. In this way the software vendor is not required to divulge or share proprietary information about how their software works, while enabling stakeholders to assess its performance.
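
    As an aside, a minimal sketch of the kind of prediction-quality metrics such a black-box test procedure can report, here CV(RMSE) and NMBE over a measured/predicted energy series (the data and the choice of these two metrics are illustrative assumptions, not requirements quoted from the report):

        import numpy as np

        def cv_rmse(measured, predicted):
            """Coefficient of variation of the RMSE, relative to the measured mean."""
            measured, predicted = np.asarray(measured), np.asarray(predicted)
            rmse = np.sqrt(np.mean((measured - predicted) ** 2))
            return rmse / measured.mean()

        def nmbe(measured, predicted):
            """Normalized mean bias error (sign conventions vary across guidelines)."""
            measured, predicted = np.asarray(measured), np.asarray(predicted)
            return (predicted - measured).sum() / (len(measured) * measured.mean())

        kwh_measured  = [120, 135, 150, 160, 148, 130]   # hypothetical monthly kWh
        kwh_predicted = [118, 140, 147, 158, 150, 133]
        print(f"CV(RMSE) = {cv_rmse(kwh_measured, kwh_predicted):.3%}")
        print(f"NMBE     = {nmbe(kwh_measured, kwh_predicted):+.3%}")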

  9. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2006-01-01

    Simplifying the often confusing array of software programs for fitting linear mixed models (LMMs), Linear Mixed Models: A Practical Guide Using Statistical Software provides a basic introduction to primary concepts, notation, software implementation, model interpretation, and visualization of clustered and longitudinal data. This easy-to-navigate reference details the use of procedures for fitting LMMs in five popular statistical software packages: SAS, SPSS, Stata, R/S-plus, and HLM. The authors introduce basic theoretical concepts, present a heuristic approach to fitting LMMs based on bo
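
    As an aside, a minimal sketch of fitting a random-intercept LMM in Python's statsmodels (an additional package beyond the five the book covers), using simulated clustered data:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n_groups, n_per = 10, 20
        g = np.repeat(np.arange(n_groups), n_per)
        x = rng.normal(size=n_groups * n_per)
        u = rng.normal(scale=0.8, size=n_groups)              # random intercept per cluster
        y = 1.0 + 2.0 * x + u[g] + rng.normal(size=n_groups * n_per)
        df = pd.DataFrame({"y": y, "x": x, "group": g})

        model = smf.mixedlm("y ~ x", df, groups=df["group"])  # random intercept by group
        print(model.fit().summary())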

  10. Software safety analysis techniques for developing safety critical software in the digital protection system of the LMR

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jang Soo; Cheon, Se Woo; Kim, Chang Hoi; Sim, Yun Sub

    2001-02-01

    This report has described the software safety analysis techniques and the engineering guidelines for developing safety critical software, to identify the state of the art in this field and to give the software safety engineer a trail map between the code and standards layer and the design methodology and documents layer. We have surveyed the management aspects of software safety activities during the software lifecycle in order to improve safety. After identifying the conventional safety analysis techniques for systems, we have surveyed in detail the software safety analysis techniques: software FMEA (Failure Mode and Effects Analysis), software HAZOP (Hazard and Operability Analysis), and software FTA (Fault Tree Analysis). We have also surveyed the state of the art in software reliability assessment techniques. The most important results from the reliability techniques are not the specific probability numbers generated, but the insights into the risk importance of software features. To defend against potential common-mode failures, high quality, defense-in-depth, and diversity are considered to be key elements in digital I and C system design. To minimize the possibility of CMFs and thus increase plant reliability, we have provided defense-in-depth and diversity (D-in-D&D) analysis guidelines.

  11. Software safety analysis techniques for developing safety critical software in the digital protection system of the LMR

    International Nuclear Information System (INIS)

    Lee, Jang Soo; Cheon, Se Woo; Kim, Chang Hoi; Sim, Yun Sub

    2001-02-01

    This report has described the software safety analysis techniques and the engineering guidelines for developing safety critical software, to identify the state of the art in this field and to give the software safety engineer a trail map between the code and standards layer and the design methodology and documents layer. We have surveyed the management aspects of software safety activities during the software lifecycle in order to improve safety. After identifying the conventional safety analysis techniques for systems, we have surveyed in detail the software safety analysis techniques: software FMEA (Failure Mode and Effects Analysis), software HAZOP (Hazard and Operability Analysis), and software FTA (Fault Tree Analysis). We have also surveyed the state of the art in software reliability assessment techniques. The most important results from the reliability techniques are not the specific probability numbers generated, but the insights into the risk importance of software features. To defend against potential common-mode failures, high quality, defense-in-depth, and diversity are considered to be key elements in digital I and C system design. To minimize the possibility of CMFs and thus increase plant reliability, we have provided defense-in-depth and diversity (D-in-D&D) analysis guidelines.

  12. Safeprops: A Software for Fast and Reliable Estimation of Safety and Environmental Properties for Organic Compounds

    DEFF Research Database (Denmark)

    Jones, Mark Nicholas; Frutiger, Jerome; Abildskov, Jens

    We present a new software tool called SAFEPROPS which is able to estimate major safety-related and environmental properties for organic compounds. SAFEPROPS provides accurate, reliable and fast predictions using the Marrero-Gani group contribution (MG-GC) method. It is implemented using Python...... as the main programming language, while the necessary parameters together with their correlation matrix are obtained from a SQLite database which has been populated using off-line parameter and error estimation routines (Eq. 3-8)....

  13. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2014-01-01

    Highly recommended by JASA, Technometrics, and other journals, the first edition of this bestseller showed how to easily perform complex linear mixed model (LMM) analyses via a variety of software programs. Linear Mixed Models: A Practical Guide Using Statistical Software, Second Edition continues to lead readers step by step through the process of fitting LMMs. This second edition covers additional topics on the application of LMMs that are valuable for data analysts in all fields. It also updates the case studies using the latest versions of the software procedures and provides up-to-date information on the options and features of the software procedures available for fitting LMMs in SAS, SPSS, Stata, R/S-plus, and HLM.New to the Second Edition A new chapter on models with crossed random effects that uses a case study to illustrate software procedures capable of fitting these models Power analysis methods for longitudinal and clustered study designs, including software options for power analyses and suggest...

  14. SOFTWARE DESIGN MODELLING WITH FUNCTIONAL PETRI NETS

    African Journals Online (AJOL)

    Dr Obe

    the system, which can be described as a set of conditions. ... FPN Software prototype proposed for the conventional programming construct: if-then-else ... mathematical modeling tool allowing for ... methods and techniques of software design.

  15. Application of discrete function and software control flow to dependability assessment of embedded digital system

    International Nuclear Information System (INIS)

    Choi, Jong Gyun; Seong, Poong Hyun

    2001-01-01

    This article describes a combinatorial model for estimating the reliability of an embedded digital system by means of discrete function theory and software control flow. The model includes a coverage model for the fault processing mechanisms implemented in the digital system. Furthermore, the model considers the interaction between hardware and software. The fault processing mechanisms make it difficult for many types of components in a digital system to be treated as binary state, good or bad. Discrete function theory provides a complete analysis of the multi-state system as which the digital system can be regarded. Through the adaptation of software control flow to discrete function theory, the HW/SW interaction is considered in estimating the reliability of the digital system. Using this model, we predict the reliability of one board controller in a digital system, the Interposing Logic System (ILS), which is installed in YGN nuclear power units 3 and 4. Since the proposed model is a general combinatorial model, its simplification becomes a conservative model that treats the system as binary state. Moreover, if information on the coverage factor of the fault tolerance mechanisms implemented in the system is obtained through fault injection experiments, this model can consider the detailed interaction of system components
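
    As an illustrative aside (numbers not taken from the ILS study), a minimal sketch of the kind of coverage-model calculation described above: a duplex unit whose single failures are detected and handled with coverage probability c, while an uncovered failure brings the system down:

        import math

        def duplex_reliability(lmbda, t, c):
            r = math.exp(-lmbda * t)                  # unit reliability over mission time t
            both_up = r ** 2
            one_failed_covered = 2 * c * r * (1 - r)  # failure detected, backup takes over
            return both_up + one_failed_covered

        for c in (1.0, 0.99, 0.9):                    # illustrative coverage factors
            print(c, duplex_reliability(lmbda=1e-4, t=1000, c=c))

    Setting c = 1 recovers the classical parallel-redundancy formula; lower coverage rapidly erodes the benefit of the redundant unit, which is why coverage factors obtained from fault injection matter.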

  16. Software dependability in the Tandem GUARDIAN system

    Science.gov (United States)

    Lee, Inhwan; Iyer, Ravishankar K.

    1995-01-01

    Based on extensive field failure data for Tandem's GUARDIAN operating system, this paper discusses the evaluation of the dependability of operational software. Software faults considered are major defects that result in processor failures and invoke backup processes to take over. The paper categorizes the underlying causes of software failures and evaluates the effectiveness of the process pair technique in tolerating software faults. A model to describe the impact of software faults on the reliability of an overall system is proposed. The model is used to evaluate the significance of key factors that determine software dependability and to identify areas for improvement. An analysis of the data shows that about 77% of processor failures that are initially considered due to software are confirmed as software problems. The analysis shows that the use of process pairs to provide checkpointing and restart (originally intended for tolerating hardware faults) allows the system to tolerate about 75% of reported software faults that result in processor failures. The loose coupling between processors, which results in the backup execution (the processor state and the sequence of events) being different from the original execution, is a major reason for the measured software fault tolerance. Over two-thirds (72%) of measured software failures are recurrences of previously reported faults. Modeling, based on the data, shows that, in addition to reducing the number of software faults, software dependability can be enhanced by reducing the recurrence rate.

  17. An improved COCOMO software cost estimation model | Duke ...

    African Journals Online (AJOL)

    In this paper, we discuss the methodologies adopted previously in software cost estimation using the COnstructive COst MOdels (COCOMOs). From our analysis, COCOMOs produce very high software development efforts, which eventually produce high software development costs. Consequently, we propose its extension, ...

  18. Transformation of UML Behavioral Diagrams to Support Software Model Checking

    Directory of Open Access Journals (Sweden)

    Luciana Brasil Rebelo dos Santos

    2014-04-01

    Full Text Available Unified Modeling Language (UML) is currently accepted as the standard for modeling (object-oriented) software, and its use is increasing in the aerospace industry. Verification and Validation of complex software developed according to UML is not trivial due to the complexity of the software itself and the several different UML models/diagrams that can be used to model the behavior and structure of the software. This paper presents an approach to transform up to three different UML behavioral diagrams (sequence, behavioral state machines, and activity) into a single Transition System to support Model Checking of software developed in accordance with UML. In our approach, properties are formalized based on use case descriptions. The transformation is done for the NuSMV model checker, but we see the possibility of using other model checkers, such as SPIN. The main contribution of our work is the transformation of a non-formal language (UML) to a formal language (the language of the NuSMV model checker), towards a greater adoption in practice of formal methods in software development.
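
    The authors target NuSMV; purely as a language-neutral illustration of what checking a safety property over a transition system involves (the state names and the property below are hypothetical, not taken from the paper), consider this explicit-state sketch:

        from collections import deque

        transitions = {                    # hypothetical system derived from a state machine
            "idle":    ["arming"],
            "arming":  ["armed", "idle"],
            "armed":   ["firing", "idle"],
            "firing":  ["idle"],
        }
        bad_states = {"firing_unarmed"}    # states violating the formalized use-case property

        def violates_safety(initial):
            seen, frontier = {initial}, deque([initial])
            while frontier:                # BFS over the reachable state space
                s = frontier.popleft()
                if s in bad_states:
                    return True
                for nxt in transitions.get(s, []):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
            return False

        print("safety holds" if not violates_safety("idle") else "counterexample reachable")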

  19. Statistical approach to software reliability certification

    NARCIS (Netherlands)

    Corro Ramos, I.; Di Bucchianico, A.; Hee, van K.M.

    2009-01-01

    We present a sequential software release procedure that certifies with some confidence level that the next error is not occurring within a certain time interval. Our procedure is defined in such a way that the release time is optimal for single stages and the global risk can be controlled. We assume
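
    Not the authors' sequential procedure, but a single-stage sketch of the same certification idea under an assumed exponential failure model: from t failure-free test hours, bound the failure rate at confidence level 1 - alpha, then certify survival over the next interval tau:

        import math

        def certified_survival(t_failure_free, tau, alpha=0.05):
            lam_upper = -math.log(alpha) / t_failure_free   # one-sided upper bound on the rate
            return math.exp(-lam_upper * tau)               # certified P(no error within tau)

        print(certified_survival(t_failure_free=10_000, tau=500))  # about 0.86 at 95% confidence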

  20. Models of the atomic nucleus. With interactive software

    International Nuclear Information System (INIS)

    Cook, N.D.

    2006-01-01

    This book-and-CD-software package supplies users with an interactive experience for nuclear visualization via a computer-graphical interface, similar in principle to the molecular visualizations already available in chemistry. Models of the Atomic Nucleus, a largely non-technical introduction to nuclear theory, explains the nucleus in a way that makes nuclear physics as comprehensible as chemistry or cell biology. The book/software supplements virtually any of the current textbooks in nuclear physics by providing a means for 3D visual display of the diverse models of nuclear structure. For the first time, an easy-to-master software application for scientific visualization of the nucleus makes this notoriously 'non-visual' field immediately 'visible'. After a review of the basics, the book explores and compares the competing models, and addresses how the lattice model best resolves remaining controversies. The appendix explains how to obtain the most from the software provided on the accompanying CD. (orig.)

  1. Reliability models for Space Station power system

    Science.gov (United States)

    Singh, C.; Patton, A. D.; Kim, Y.; Wagner, H.

    1987-01-01

    This paper presents a methodology for the reliability evaluation of Space Station power system. The two options considered are the photovoltaic system and the solar dynamic system. Reliability models for both of these options are described along with the methodology for calculating the reliability indices.

  2. Aspect-Oriented Model-Driven Software Product Line Engineering

    Science.gov (United States)

    Groher, Iris; Voelter, Markus

    Software product line engineering aims to reduce development time, effort, cost, and complexity by taking advantage of the commonality within a portfolio of similar products. The effectiveness of a software product line approach directly depends on how well feature variability within the portfolio is implemented and managed throughout the development lifecycle, from early analysis through maintenance and evolution. This article presents an approach that facilitates variability implementation, management, and tracing by integrating model-driven and aspect-oriented software development. Features are separated in models and composed using aspect-oriented composition techniques at the model level. Model transformations support the transition from problem to solution space models. Aspect-oriented techniques enable the explicit expression and modularization of variability at the model, template, and code levels. The presented concepts are illustrated with a case study of a home automation system.

  3. Optimization of Reliability Centered Maintenance Based on Maintenance Costs and Reliability with Consideration of Location of Components

    Directory of Open Access Journals (Sweden)

    Mahdi Karbasian

    2011-03-01

    Full Text Available The reliability of systems such as electrical and electronic circuits, power generation/distribution networks and mechanical systems, in which the failure of a component may cause failure of the whole system, and even the reliability of cellular manufacturing systems whose machines are connected in series, is critically important. So far, approaches for improving the reliability of these systems have mainly been based on enhancing the inherent reliability of each system component or on increasing system reliability through maintenance strategies. Also, some of the literature studies only the influence of the location of a system's components on reliability. Other approaches, therefore, have rarely been applied. In this paper, a multi-criteria model is proposed to strike a balance among a system's reliability, location costs, and maintenance. Finally, a numerical example is presented and solved with the Lingo software.

  4. Reliability Model of Power Transformer with ONAN Cooling

    OpenAIRE

    M. Sefidgaran; M. Mirzaie; A. Ebrahimzadeh

    2010-01-01

    Reliability of a power system is considerably influenced by its equipment. Power transformers are among the most critical and expensive pieces of equipment of a power system, and their proper function is vital for the substations and utilities. Therefore, the reliability model of a power transformer is very important in the risk assessment of engineering systems. This model shows the characteristics and functions of a transformer in the power system. In this paper the reliability model...

  5. Use of the 3D surgical modelling technique with open-source software for mandibular fibula free flap reconstruction and its surgical guides.

    Science.gov (United States)

    Ganry, L; Hersant, B; Quilichini, J; Leyder, P; Meningaud, J P

    2017-06-01

    Tridimensional (3D) surgical modelling is a necessary step to create 3D-printed surgical tools, and expensive professional software is generally needed. Open-source software is functional, reliable, regularly updated, and may be downloaded for free and used to produce 3D models. Few surgical teams have used free solutions for mastering 3D surgical modelling for reconstructive surgery with osseous free flaps. We describe an Open-source software 3D surgical modelling protocol to perform a fast and nearly free mandibular reconstruction with a microvascular fibula free flap and its surgical guides, with no need for engineering support. Four successive specialised Open-source software packages were used to perform our 3D modelling: OsiriX®, Meshlab®, Netfabb® and Blender®. Digital Imaging and Communications in Medicine (DICOM) data on the patient's skull and fibula, obtained with a computerised tomography (CT) scan, were needed. The 3D models of the reconstructed mandible and its surgical guides were created. This new strategy may improve surgical management in Oral and Craniomaxillofacial surgery. Further clinical studies are needed to demonstrate the feasibility, reproducibility, transfer of know-how and benefits of this technique. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  6. Process model for building quality software on internet time ...

    African Journals Online (AJOL)

    The competitive nature of the software construction market and the inherently exhilarating nature of software itself have hinged the success of any software development project on four major pillars: time to market, product quality, innovation and documentation. Unfortunately, however, existing software development models ...

  7. Reliability of multi-model and structurally different single-model ensembles

    Energy Technology Data Exchange (ETDEWEB)

    Yokohata, Tokuta [National Institute for Environmental Studies, Center for Global Environmental Research, Tsukuba, Ibaraki (Japan); Annan, James D.; Hargreaves, Julia C. [Japan Agency for Marine-Earth Science and Technology, Research Institute for Global Change, Yokohama, Kanagawa (Japan); Collins, Matthew [University of Exeter, College of Engineering, Mathematics and Physical Sciences, Exeter (United Kingdom); Jackson, Charles S.; Tobis, Michael [The University of Texas at Austin, Institute of Geophysics, 10100 Burnet Rd., ROC-196, Mail Code R2200, Austin, TX (United States); Webb, Mark J. [Met Office Hadley Centre, Exeter (United Kingdom)

    2012-08-15

    The performance of several state-of-the-art climate model ensembles, including two multi-model ensembles (MMEs) and four structurally different (perturbed parameter) single model ensembles (SMEs), is investigated for the first time using the rank histogram approach. In this method, the reliability of a model ensemble is evaluated from the point of view of whether the observations can be regarded as being sampled from the ensemble. Our analysis reveals that, in the MMEs, the climate variables we investigated are broadly reliable on the global scale, with a tendency towards overdispersion. On the other hand, in the SMEs, the reliability differs depending on the ensemble and variable field considered. In general, the mean state and historical trend of surface air temperature, and the mean state of precipitation, are reliable in the SMEs. However, variables such as sea level pressure or top-of-atmosphere clear-sky shortwave radiation do not cover a sufficiently wide range in some ensembles. It is not possible to assess whether this is a fundamental feature of SMEs generated with a particular model, or a consequence of the algorithm used to select and perturb the values of the parameters. As under-dispersion is a potentially more serious issue when using ensembles to make projections, we recommend the application of rank histograms to assess reliability when designing and running perturbed physics SMEs. (orig.)
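
    A minimal sketch of the rank histogram computation described above (toy Gaussian data): for each observation, find its rank among the ensemble members; a flat histogram indicates the observations are statistically indistinguishable from draws from the ensemble, a U-shape signals under-dispersion, and a dome shape over-dispersion:

        import numpy as np

        def rank_histogram(ensemble, observations):
            """ensemble: (n_members, n_cases); observations: (n_cases,)"""
            ranks = (ensemble < observations).sum(axis=0)       # rank of obs among members
            n_members = ensemble.shape[0]
            return np.bincount(ranks, minlength=n_members + 1)  # n_members+1 possible ranks

        rng = np.random.default_rng(1)
        ens = rng.normal(size=(10, 500))     # a well-calibrated (reliable) toy ensemble
        obs = rng.normal(size=500)
        print(rank_histogram(ens, obs))      # approximately flat counts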

  8. Consistent Evolution of Software Artifacts and Non-Functional Models

    Science.gov (United States)

    2014-11-14

    This report investigates the consistent evolution of software artifacts and non-functional models (e.g., can source code changes induce bad software performance?). Subject terms: EOARD, Nano particles, Photo-Acoustic Sensors, Model-Driven Engineering (MDE), Software Performance Engineering (SPE), Change Propagation, Performance Antipatterns. Contact: Vittorio Cortellessa, Università degli Studi dell'Aquila, Via Vetoio, 67100 L'Aquila, Italy. Email: vittorio.cortellessa@univaq.it. Web: http://www.di.univaq.it/cortelle/.

  9. Method for critical software event execution reliability in high integrity software

    Energy Technology Data Exchange (ETDEWEB)

    Kidd, M.E. [Sandia National Labs., Albuquerque, NM (United States)

    1997-11-01

    This report contains viewgraphs on a method called SEER, which provides a high level of confidence that critical software-driven event execution sequences faithfully execute in the face of transient computer architecture failures, in both normal and abnormal operating environments.

  10. Bayesian Software Health Management for Aircraft Guidance, Navigation, and Control

    Science.gov (United States)

    Schumann, Johann; Mbaya, Timmy; Mengshoel, Ole

    2011-01-01

    Modern aircraft, both piloted fly-by-wire commercial aircraft as well as UAVs, more and more depend on highly complex safety critical software systems with many sensors and computer-controlled actuators. Despite careful design and V&V of the software, severe incidents have happened due to malfunctioning software. In this paper, we discuss the use of Bayesian networks (BNs) to monitor the health of the on-board software and sensor system, and to perform advanced on-board diagnostic reasoning. We will focus on the approach to develop reliable and robust health models for the combined software and sensor systems.

  11. Travel time reliability modeling.

    Science.gov (United States)

    2011-07-01

    This report includes three papers as follows: : 1. Guo F., Rakha H., and Park S. (2010), "A Multi-state Travel Time Reliability Model," : Transportation Research Record: Journal of the Transportation Research Board, n 2188, : pp. 46-54. : 2. Park S.,...

  12. Proposed Reliability/Cost Model

    Science.gov (United States)

    Delionback, L. M.

    1982-01-01

    New technique estimates cost of improvement in reliability for complex system. Model format/approach is dependent upon use of subsystem cost-estimating relationships (CER's) in devising cost-effective policy. Proposed methodology should have application in broad range of engineering management decisions.

  13. A software quality model and metrics for risk assessment

    Science.gov (United States)

    Hyatt, L.; Rosenberg, L.

    1996-01-01

    A software quality model and its associated attributes are defined and used as the basis for a discussion on risk. Specific quality goals and attributes are selected based on their importance to a software development project and their ability to be quantified. Risks that can be determined by the model's metrics are identified. A core set of metrics relating to the software development process and its products is defined. Measurements for each metric and their usability and applicability are discussed.

  14. Time domain series system definition and gear set reliability modeling

    International Nuclear Information System (INIS)

    Xie, Liyang; Wu, Ningxiang; Qian, Wenxue

    2016-01-01

    Time-dependent multi-configuration is a typical feature for mechanical systems such as gear trains and chain drives. As a series system, a gear train is distinct from a traditional series system, such as a chain, in load transmission path, system-component relationship, system functioning manner, as well as time-dependent system configuration. Firstly, the present paper defines time-domain series system to which the traditional series system reliability model is not adequate. Then, system specific reliability modeling technique is proposed for gear sets, including component (tooth) and subsystem (tooth-pair) load history description, material priori/posterior strength expression, time-dependent and system specific load-strength interference analysis, as well as statistically dependent failure events treatment. Consequently, several system reliability models are developed for gear sets with different tooth numbers in the scenario of tooth root material ultimate tensile strength failure. The application of the models is discussed in the last part, and the differences between the system specific reliability model and the traditional series system reliability model are illustrated by virtue of several numerical examples. - Highlights: • A new type of series system, i.e. time-domain multi-configuration series system is defined, that is of great significance to reliability modeling. • Multi-level statistical analysis based reliability modeling method is presented for gear transmission system. • Several system specific reliability models are established for gear set reliability estimation. • The differences between the traditional series system reliability model and the new model are illustrated.

  15. Evaluation of mobile ad hoc network reliability using propagation-based link reliability model

    International Nuclear Information System (INIS)

    Padmavathy, N.; Chaturvedi, Sanjay K.

    2013-01-01

    A wireless mobile ad hoc network (MANET) is a collection of solely independent nodes (that can move randomly around the area of deployment), making the topology highly dynamic; nodes communicate with each other by forming a single-hop/multi-hop network and maintain connectivity in a decentralized manner. A MANET is modelled using geometric random graphs rather than random graphs because link existence in a MANET is a function of the geometric distance between the nodes and the transmission range of the nodes. Among the many factors that contribute to MANET reliability, the reliability of these networks also depends on the robustness of the links between the mobile nodes of the network. Recently, the reliability of such networks has been evaluated for imperfect nodes (transceivers) with a binary model of communication links based on the transmission range of the mobile nodes and the distance between them. However, in reality, the probability of successful communication decreases as the signal strength deteriorates due to noise, fading or interference effects, even within the nodes' transmission range. Hence, this paper proposes a propagation-based link reliability model, rather than a binary model, with nodes following a known failure distribution, to evaluate the network reliability (2TRm, ATRm and AoTRm) of a MANET through Monte Carlo simulation. The method is illustrated with an application and some imperative results are also presented
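
    A minimal Monte Carlo sketch of the approach described above, with illustrative parameters and an assumed Gaussian-decay form for the propagation-based link probability (the paper's exact link model and the ATRm/AoTRm measures are not reproduced here):

        import math, random
        from collections import deque

        def link_prob(d, r):                 # propagation-based model (an assumed form)
            return math.exp(-(d / r) ** 2)   # decays with distance, no hard range cutoff

        def two_terminal_reliability(n_nodes=20, area=100.0, r=30.0, trials=2000):
            hits = 0
            for _ in range(trials):
                pts = [(random.uniform(0, area), random.uniform(0, area))
                       for _ in range(n_nodes)]
                adj = [[] for _ in range(n_nodes)]
                for i in range(n_nodes):
                    for j in range(i + 1, n_nodes):
                        if random.random() < link_prob(math.dist(pts[i], pts[j]), r):
                            adj[i].append(j); adj[j].append(i)
                seen, frontier = {0}, deque([0])   # BFS from source 0 to sink n-1
                while frontier:
                    u = frontier.popleft()
                    for v in adj[u]:
                        if v not in seen:
                            seen.add(v); frontier.append(v)
                hits += (n_nodes - 1) in seen
            return hits / trials

        print(two_terminal_reliability())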

  16. Research on the Reliability Analysis of the Integrated Modular Avionics System Based on the AADL Error Model

    Directory of Open Access Journals (Sweden)

    Peng Wang

    2018-01-01

    Full Text Available In recent years, the integrated modular avionics (IMA) concept has been introduced to replace traditional federated avionics. Different avionics functions are hosted on a shared IMA platform, and IMA adopts partition technologies to provide logical isolation among different functions. The IMA architecture can provide more sophisticated and powerful avionics functionality; meanwhile, the failure propagation patterns in IMA are more complex. The feature of resource sharing introduces unintended interconnections among different functions, which makes the failure propagation modes more complex. Therefore, this paper proposes an Architecture Analysis and Design Language (AADL) based method to establish the reliability model of an IMA platform. The error behavior of single software and hardware components in the IMA system is modeled. The corresponding AADL error model of failure propagation among components, and between software and hardware, is given. Finally, the display function of the IMA platform is taken as an example to illustrate the effectiveness of the proposed method.

  17. Resilience Engineering in Critical Long Term Aerospace Software Systems: A New Approach to Spacecraft Software Safety

    Science.gov (United States)

    Dulo, D. A.

    Safety critical software systems permeate spacecraft, and in a long-term venture like a starship they would be pervasive in every system of the spacecraft. Yet software failure today continues to plague both the systems and the organizations that develop them, resulting in the loss of life, time, money, and valuable system platforms. A starship cannot afford this type of software failure on long journeys away from home. A single software failure could have catastrophic results for the spaceship and the crew onboard. This paper will offer a new approach to developing safe, reliable software systems by focusing not on the traditional safety/reliability engineering paradigms but rather on a new paradigm: Resilience and Failure Obviation Engineering. The foremost objective of this approach is the obviation of failure, coupled with the ability of a software system to prevent or adapt to complex changing conditions in real time, acting as a safety valve should failure occur, to ensure safe system continuity. Through this approach, safety is ensured through foresight, anticipating failure and adapting to risk in real time before failure occurs. In a starship, this type of software engineering is vital. With software developed in a resilient manner, a starship would have reduced or eliminated software failure, and would have the ability to rapidly adapt should a software system become unstable or unsafe. As a result, long-term software safety, reliability, and resilience would be present for a successful long-term starship mission.

  18. Model for Simulating a Spiral Software-Development Process

    Science.gov (United States)

    Mizell, Carolyn; Curley, Charles; Nayak, Umanath

    2010-01-01

    A discrete-event simulation model, and a computer program that implements the model, have been developed as means of analyzing a spiral software-development process. This model can be tailored to specific development environments for use by software project managers in making quantitative cases for deciding among different software-development processes, courses of action, and cost estimates. A spiral process can be contrasted with a waterfall process, which is a traditional process that consists of a sequence of activities that include analysis of requirements, design, coding, testing, and support. A spiral process is an iterative process that can be regarded as a repeating modified waterfall process. Each iteration includes assessment of risk, analysis of requirements, design, coding, testing, delivery, and evaluation. A key difference between a spiral and a waterfall process is that a spiral process can accommodate changes in requirements at each iteration, whereas in a waterfall process, requirements are considered to be fixed from the beginning and, therefore, a waterfall process is not flexible enough for some projects, especially those in which requirements are not known at the beginning or may change during development. For a given project, a spiral process may cost more and take more time than does a waterfall process, but may better satisfy a customer's expectations and needs. Models for simulating various waterfall processes have been developed previously, but until now, there have been no models for simulating spiral processes. The present spiral-process-simulating model and the software that implements it were developed by extending a discrete-event simulation process model of the IEEE 12207 Software Development Process, which was built using commercially available software known as the Process Analysis Tradeoff Tool (PATT). Typical inputs to PATT models include industry-average values of product size (expressed as number of lines of code

  19. OpenFLUX: efficient modelling software for 13C-based metabolic flux analysis

    Directory of Open Access Journals (Sweden)

    Nielsen Lars K

    2009-05-01

    Full Text Available Abstract Background The quantitative analysis of metabolic fluxes, i.e., in vivo activities of intracellular enzymes and pathways, provides key information on biological systems in systems biology and metabolic engineering. It is based on a comprehensive approach combining (i) tracer cultivation on 13C substrates, (ii) 13C labelling analysis by mass spectrometry and (iii) mathematical modelling for experimental design, data processing, flux calculation and statistics. Whereas the cultivation and the analytical parts are fairly advanced, a lack of appropriate modelling software solutions for all modelling aspects in flux studies is limiting the application of metabolic flux analysis. Results We have developed OpenFLUX as a user-friendly, yet flexible software application for small and large scale 13C metabolic flux analysis. The application is based on the new Elementary Metabolite Unit (EMU) framework, significantly enhancing computation speed for flux calculation. From simple notation of metabolic reaction networks defined in a spreadsheet, the OpenFLUX parser automatically generates MATLAB-readable metabolite and isotopomer balances, thus strongly facilitating model creation. The model can be used to perform experimental design, parameter estimation and sensitivity analysis either using the built-in gradient-based search or Monte Carlo algorithms or in user-defined algorithms. Exemplified for a microbial flux study with 71 reactions, 8 free flux parameters and mass isotopomer distributions of 10 metabolites, OpenFLUX allowed the EMU-based model to be compiled automatically from an Excel file containing metabolic reactions and carbon transfer mechanisms, showing its user-friendliness. It reliably reproduced the published data, and optimum flux distributions for the network under study were found quickly (…). Conclusion We have developed a fast, accurate application to perform steady-state 13C metabolic flux analysis. OpenFLUX will strongly facilitate and

  20. Model Driven Software Development for Agricultural Robotics

    DEFF Research Database (Denmark)

    Larsen, Morten

    The design and development of agricultural robots consists of mechanical, electrical and software components. All these components must be designed and combined such that the overall goal of the robot is fulfilled. The design and development of these systems require collaboration between...... processing, control engineering, etc. This thesis proposes a Model-Driven Software Development based approach to model, analyse and partially generate the software implementation of an agricultural robot. Furthermore, guidelines for modelling the architecture of agricultural robots are provided......, assisting with bridging the different engineering disciplines. Timing plays an important role in agricultural robotic applications; synchronisation of robot movement and implement actions is important in order to achieve precision spraying, mechanical weeding, individual feeding, etc. Discovering

  1. Software to model AXAF-I image quality

    Science.gov (United States)

    Ahmad, Anees; Feng, Chen

    1995-01-01

    A modular user-friendly computer program for the modeling of grazing-incidence type x-ray optical systems has been developed. This comprehensive computer software GRAZTRACE covers the manipulation of input data, ray tracing with reflectivity and surface deformation effects, convolution with x-ray source shape, and x-ray scattering. The program also includes the capabilities for image analysis, detector scan modeling, and graphical presentation of the results. A number of utilities have been developed to interface the predicted Advanced X-ray Astrophysics Facility-Imaging (AXAF-I) mirror structural and thermal distortions with the ray-trace. This software is written in FORTRAN 77 and runs on a SUN/SPARC station. An interactive command mode version and a batch mode version of the software have been developed.

  2. Telecommunications system reliability engineering theory and practice

    CERN Document Server

    Ayers, Mark L

    2012-01-01

    "Increasing system complexity require new, more sophisticated tools for system modeling and metric calculation. Bringing the field up to date, this book provides telecommunications engineers with practical tools for analyzing, calculating, and reporting availability, reliability, and maintainability metrics. It gives the background in system reliability theory and covers in-depth applications in fiber optic networks, microwave networks, satellite networks, power systems, and facilities management. Computer programming tools for simulating the approaches presented, using the Matlab software suite, are also provided"

  3. Proceedings of the SRESA national conference on reliability and safety engineering

    International Nuclear Information System (INIS)

    Varde, P.V.; Vaishnavi, P.; Sujatha, S.; Valarmathi, A.

    2014-01-01

    The objective of this conference was to provide a forum for technical discussions on recent developments in the areas of risk-based approaches and Prognostic Health Management of critical systems in decision making. Reliability and safety engineering methods are concerned with the ways in which a product fails and the effects of failure, in order to understand how a product works and to assure acceptable levels of safety. Reliability engineering addresses all the anticipated, and possibly unanticipated, causes of failure to ensure that the occurrence of failure is prevented or minimized. The topics discussed in the conference were: Reliability in Engineering Design, Safety Assessment and Management, Reliability Analysis and Assessment, Stochastic Petri Nets for Reliability Modeling, Dynamic Reliability, Reliability Prediction, Hardware Reliability, Software Reliability in Safety Critical Issues, Probabilistic Safety Assessment, Risk Informed Approach, Dynamic Models for Reliability Analysis, Reliability Based Design and Analysis, Prognostics and Health Management, Remaining Useful Life (RUL), Human Reliability Modeling, Risk Based Applications, Hazard and Operability Study (HAZOP), Reliability in Network Security, and Quality Assurance and Management, etc. The papers relevant to INIS are indexed separately

  4. [The reliability of dento-maxillary models created by cone-beam CT and rapid prototyping:a comparative study].

    Science.gov (United States)

    Lv, Yan; Yan, Bin; Wang, Lin; Lou, Dong-hua

    2012-04-01

    To analyze the reliability of the dento-maxillary models created by cone-beam CT and rapid prototyping (RP). Plaster models were obtained from 20 orthodontic patients who had been scanned by cone-beam CT, and 3-D models were formed after calculation and reconstruction by software. Then, computerized composite models (RP models) were produced by the rapid prototyping technique. The crown widths, dental arch widths and dental arch lengths on each plaster model, 3-D model and RP model were measured, followed by statistical analysis with the SPSS 17.0 software package. For crown widths, dental arch lengths and crowding, there were significant differences (P<0.05) among the models, but the dental arch widths were on the contrary. Measurements on 3-D models were significantly smaller than those on the other two models (P<0.05). Compared with 3-D models, RP models had more measurements that were not significantly different from those on plaster models (P>0.05). The regression coefficients among the three models were significantly different (P<0.05); the coefficient between RP and plaster models was bigger than that between 3-D and plaster models. There is high consistency among the 3 models, and the differences are clinically acceptable. Therefore, it is possible to substitute 3-D and RP models for plaster models in order to save storage space and improve efficiency.

  5. Modeling and Forecasting (Un)Reliable Realized Covariances for More Reliable Financial Decisions

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Patton, Andrew J.; Quaedvlieg, Rogier

    We propose a new framework for modeling and forecasting common financial risks based on (un)reliable realized covariance measures constructed from high-frequency intraday data. Our new approach explicitly incorporates the effect of measurement errors and time-varying attenuation biases into the c......
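
    As an aside, a minimal sketch of the realized covariance input such models start from, summing outer products of high-frequency intraday return vectors over one day (simulated returns; the paper's measurement-error and attenuation-bias corrections are not shown):

        import numpy as np

        rng = np.random.default_rng(2)
        n_assets, n_intervals = 3, 78            # e.g. 5-minute returns in a 6.5h session
        true_cov = np.array([[1.0, 0.3, 0.2],
                             [0.3, 1.0, 0.4],
                             [0.2, 0.4, 1.0]]) * 1e-5
        returns = rng.multivariate_normal(np.zeros(n_assets), true_cov, size=n_intervals)

        realized_cov = returns.T @ returns       # sum of r_t r_t' over the day
        print(realized_cov)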

  6. Development and application of new quality model for software projects.

    Science.gov (United States)

    Karnavel, K; Dillibabu, R

    2014-01-01

    The IT industry tries to employ a number of models to identify the defects in the construction of software projects. In this paper, we present COQUALMO and its limitations and aim to increase the quality without increasing the cost and time. The computation time, cost, and effort to predict the residual defects are very high; this was overcome by developing an appropriate new quality model named the software testing defect corrective model (STDCM). The STDCM was used to estimate the number of remaining residual defects in the software product; a few assumptions and the detailed steps of the STDCM are highlighted. The application of the STDCM is explored in software projects. The implementation of the model is validated using statistical inference, which shows there is a significant improvement in the quality of the software projects.

  7. Software Platform Evaluation - Verifiable Fuel Cycle Simulation (VISION) Model

    International Nuclear Information System (INIS)

    J. J. Jacobson; D. E. Shropshire; W. B. West

    2005-01-01

    The purpose of this Software Platform Evaluation (SPE) is to document the top-level evaluation of potential software platforms on which to construct a simulation model that satisfies the requirements for a Verifiable Fuel Cycle Simulation Model (VISION) of the Advanced Fuel Cycle (AFC). See the Software Requirements Specification for Verifiable Fuel Cycle Simulation (VISION) Model (INEEL/EXT-05-02643, Rev. 0) for a discussion of the objective and scope of the VISION model. VISION is intended to serve as a broad systems analysis and study tool applicable to work conducted as part of the AFCI (including cost estimates) and Generation IV reactor development studies. This document will serve as a guide for selecting the most appropriate software platform for VISION. This is a 'living document' that will be modified over the course of the execution of this work

  8. Stochastic models in reliability and maintenance

    CERN Document Server

    2002-01-01

    Our daily lives can be maintained by high-technology systems. Computer systems are typical examples of such systems. We can enjoy our modern lives by using many computer systems. Much more importantly, we have to maintain such systems without failure, but we cannot predict when such systems will fail or how to fix them without delay. A stochastic process is a set of outcomes of a random experiment indexed by time, and is one of the key tools needed to analyze future behavior quantitatively. Reliability and maintainability technologies are of great interest and importance for the maintenance of such systems. Many mathematical models have been and will be proposed to describe reliability and maintainability systems using stochastic processes. The theme of this book is "Stochastic Models in Reliability and Maintenance." This book consists of 12 chapters on the theme above from the different viewpoints of stochastic modeling. Chapter 1 is devoted to "Renewal Processes," under which cla...

  9. Engineering bioinformatics: building reliability, performance and productivity into bioinformatics software.

    Science.gov (United States)

    Lawlor, Brendan; Walsh, Paul

    2015-01-01

    There is a lack of software engineering skills in bioinformatic contexts. We discuss the consequences of this lack, examine existing explanations and remedies to the problem, point out their shortcomings, and propose alternatives. Previous analyses of the problem have tended to treat the use of software in scientific contexts as categorically different from the general application of software engineering in commercial settings. In contrast, we describe bioinformatic software engineering as a specialization of general software engineering, and examine how it should be practiced. Specifically, we highlight the difference between programming and software engineering, list elements of the latter and present the results of a survey of bioinformatic practitioners which quantifies the extent to which those elements are employed in bioinformatics. We propose that the ideal way to bring engineering values into research projects is to bring engineers themselves. We identify the role of Bioinformatic Engineer and describe how such a role would work within bioinformatic research teams. We conclude by recommending an educational emphasis on cross-training software engineers into life sciences, and propose research on Domain Specific Languages to facilitate collaboration between engineers and bioinformaticians.

  10. Engineering bioinformatics: building reliability, performance and productivity into bioinformatics software

    Science.gov (United States)

    Lawlor, Brendan; Walsh, Paul

    2015-01-01

    There is a lack of software engineering skills in bioinformatic contexts. We discuss the consequences of this lack, examine existing explanations and remedies to the problem, point out their shortcomings, and propose alternatives. Previous analyses of the problem have tended to treat the use of software in scientific contexts as categorically different from the general application of software engineering in commercial settings. In contrast, we describe bioinformatic software engineering as a specialization of general software engineering, and examine how it should be practiced. Specifically, we highlight the difference between programming and software engineering, list elements of the latter and present the results of a survey of bioinformatic practitioners which quantifies the extent to which those elements are employed in bioinformatics. We propose that the ideal way to bring engineering values into research projects is to bring engineers themselves. We identify the role of Bioinformatic Engineer and describe how such a role would work within bioinformatic research teams. We conclude by recommending an educational emphasis on cross-training software engineers into life sciences, and propose research on Domain Specific Languages to facilitate collaboration between engineers and bioinformaticians. PMID:25996054

  11. A possibilistic uncertainty model in classical reliability theory

    International Nuclear Information System (INIS)

    De Cooman, G.; Capelle, B.

    1994-01-01

    The authors argue that a possibilistic uncertainty model can be used to represent linguistic uncertainty about the states of a system and of its components. Furthermore, the basic properties of the application of this model to classical reliability theory are studied. The notion of the possibilistic reliability of a system or a component is defined. Based on the concept of a binary structure function, the important notion of a possibilistic function is introduced. It allows the possibilistic reliability of a system to be calculated in terms of the possibilistic reliabilities of its components
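
    The abstract does not give formulas, but in the possibility-theory literature the possibilistic reliability of a series structure is commonly obtained as the minimum over the component possibilities, and that of a parallel structure as the maximum. A minimal Python sketch under that assumption, with invented component values:

        def series_possibility(components):
            # Series structure: the system works only if every component
            # works, so the possibility degree is the minimum.
            return min(components)

        def parallel_possibility(components):
            # Parallel (redundant) structure: the maximum over components.
            return max(components)

        # Linguistic assessments mapped to possibility degrees (assumed values):
        pumps = [0.9, 0.7]   # two redundant pumps
        valve = 0.8          # a valve in series with the pump pair
        system = series_possibility([parallel_possibility(pumps), valve])
        print(system)  # 0.8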

  12. Software Design Modelling with Functional Petri Nets | Bakpo ...

    African Journals Online (AJOL)

    Software Design Modelling with Functional Petri Nets. ... of structured programs and a FPN Software prototype proposed for the conventional programming construct: if-then-else statement. ...

  13. Software Innovation in a Mission Critical Environment

    Science.gov (United States)

    Fredrickson, Steven

    2015-01-01

    Operating in mission-critical environments requires trusted solutions, and the preference for "tried and true" approaches presents a potential barrier to infusing innovation into mission-critical systems. This presentation explores opportunities to overcome this barrier in the software domain. It outlines specific areas of innovation in software development achieved by the Johnson Space Center (JSC) Engineering Directorate in support of NASA's major human spaceflight programs, including International Space Station, Multi-Purpose Crew Vehicle (Orion), and Commercial Crew Programs. Software engineering teams at JSC work with hardware developers, mission planners, and system operators to integrate flight vehicles, habitats, robotics, and other spacecraft elements for genuinely mission critical applications. The innovations described, including the use of NASA Core Flight Software and its associated software tool chain, can lead to software that is more affordable, more reliable, better modelled, more flexible, more easily maintained, better tested, and enabling of automation.

  14. Developing Project Duration Models in Software Engineering

    Institute of Scientific and Technical Information of China (English)

    Pierre Bourque; Serge Oligny; Alain Abran; Bertrand Fournier

    2007-01-01

    Based on the empirical analysis of data contained in the International Software Benchmarking Standards Group (ISBSG) repository, this paper presents software engineering project duration models based on project effort. Duration models are built for the entire dataset and for subsets of projects developed for personal computer, mid-range and mainframe platforms. Duration models are also constructed for projects requiring fewer than 400 person-hours of effort and for projects requiring more than 400 person-hours of effort. The usefulness of adding the maximum number of assigned resources as a second independent variable to explain duration is also analyzed. The opportunity to build duration models directly from project functional size in function points is investigated as well.
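
    The abstract does not state the exact model form; a common choice for effort/duration models of this kind is a power law, duration = a * effort^b, fitted by least squares on log-transformed data. A self-contained sketch with invented project data:

        import math

        def fit_power_law(efforts, durations):
            """Ordinary least squares on logs, i.e. duration = a * effort**b."""
            xs = [math.log(e) for e in efforts]
            ys = [math.log(d) for d in durations]
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
            a = math.exp(my - b * mx)
            return a, b

        # Hypothetical projects: effort in person-hours, duration in months.
        efforts = [120, 400, 950, 2400, 5200]
        durations = [2.1, 3.5, 5.2, 8.0, 11.5]
        a, b = fit_power_law(efforts, durations)
        print(f"duration ~ {a:.2f} * effort^{b:.2f}")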

  15. Reliability of electronic systems

    International Nuclear Information System (INIS)

    Roca, Jose L.

    2001-01-01

    Reliability techniques have developed over time to meet the needs of the various engineering disciplines, though more than a few practitioners would argue that considerable work on reliability was done before the word was used in its current sense. The military, space and nuclear industries were the first to be involved in the topic, but this quiet revolution in improving the reliability figures of products has not remained confined to those environments; it has spread to industry as a whole. Mass production, characteristic of modern industry, led four decades ago to a fall in the reliability of its products, partly because of mass production itself and partly because of newly introduced and not yet stabilized industrial techniques. Industry had to change in response to these two new requirements, creating products of medium complexity while assuring a level of reliability appropriate to production costs and controls. Reliability became an integral part of the manufactured product. Within this philosophy, the book describes reliability techniques applied to electronic systems and provides a coherent and rigorous framework for these diverse activities, offering a unifying scientific basis for the entire subject. It consists of eight chapters plus numerous statistical tables and an extensive annotated bibliography. The chapters cover the following topics: 1- Introduction to Reliability; 2- Basic Mathematical Concepts; 3- Catastrophic Failure Models; 4- Parametric Failure Models; 5- Systems Reliability; 6- Reliability in Design and Project; 7- Reliability Tests; 8- Software Reliability. The book is in Spanish and has a potentially diverse audience, serving as a textbook for courses from academic to industrial settings. (author)

  16. An integrated approach to human reliability analysis -- decision analytic dynamic reliability model

    International Nuclear Information System (INIS)

    Holmberg, J.; Hukki, K.; Norros, L.; Pulkkinen, U.; Pyy, P.

    1999-01-01

    The reliability of human operators in process control is sensitive to the context, which many contemporary human reliability analysis (HRA) methods do not sufficiently take into account. The aim of this article is to attempt an integration of the probabilistic and psychological approaches to human reliability. This is achieved, first, by adopting methods that adequately reflect the essential features of the process control activity, and secondly, by carrying out an interactive HRA process. Description of the activity context, probabilistic modeling, and psychological analysis form an iterative interdisciplinary sequence of analysis in which the results of one sub-task may be input to another. The analysis of the context is carried out first with the help of a common set of conceptual tools. The resulting descriptions of the context support the probabilistic modeling, through which new results regarding the probabilistic dynamics can be achieved. These can be incorporated in the context descriptions used as reference in the psychological analysis of actual performance. The results also provide new knowledge of the constraints on the activity, by providing information on the premises of the operator's actions. Finally, the stochastic marked point process model gives a tool by which psychological methodology may be interpreted and utilized for reliability analysis

  17. Software error masking effect on hardware faults

    International Nuclear Information System (INIS)

    Choi, Jong Gyun; Seong, Poong Hyun

    1999-01-01

    Based on the Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL), a simulation model for fault injection is developed in this work to estimate the dependability of a digital system in the operational phase. We investigated the software masking effect on hardware faults through single bit-flip and stuck-at-x fault injection into the internal registers of the processor and into memory cells. The fault locations cover all registers and memory cells, with the fault distribution over locations chosen randomly from a uniform probability distribution. Using this model, we predicted the reliability and masking effect of application software in a digital system, the Interposing Logic System (ILS), in a nuclear power plant, considering four software operational profiles. The results show that the software masking effect on hardware faults should be properly considered to predict system dependability accurately in the operational phase, because the masking effect takes different values according to the operational profile
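
    A toy Python sketch of the single bit-flip part of such an experiment (the actual study injects faults into a VHDL processor model; the program and inputs here are invented stand-ins):

        import random

        def flip_random_bit(value, width=32):
            """Inject a single bit-flip fault into a register value."""
            return value ^ (1 << random.randrange(width))

        def masking_rate(program, inputs, trials=10000):
            """Fraction of injected faults whose effect is masked, i.e. the
            program output is unchanged despite the corrupted register."""
            masked = 0
            for _ in range(trials):
                x = random.choice(inputs)
                if program(flip_random_bit(x)) == program(x):
                    masked += 1
            return masked / trials

        def program(r):
            # Toy 'application software': threshold logic masks any fault
            # that does not move the value across the threshold.
            return 1 if r >= 2 ** 16 else 0

        print(masking_rate(program, inputs=[5, 123, 2 ** 20, 2 ** 31 - 1]))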

  18. Reliability Modeling of Electromechanical System with Meta-Action Chain Methodology

    Directory of Open Access Journals (Sweden)

    Genbao Zhang

    2018-01-01

    To establish a more flexible and accurate reliability model, reliability modeling and a solving algorithm based on the meta-action chain concept are used in this paper. Instead of estimating the reliability of the whole system only in the standard operating mode, the paper adopts the structure chain and the operating action chain for system reliability modeling. The failure information and structure information of each component are integrated into the model to overcome the limitations of the fixed factors used in traditional modeling. In industrial applications, a multicomponent system may have different operating modes. The meta-action chain methodology can estimate system reliability under different operating modes by modeling the components with a variety of failure sensitivities. The approach has been validated by computing several electromechanical system cases. The results indicate that the process can improve system reliability estimation and is an effective tool for solving the reliability estimation problem in systems under various operating modes.

  19. Software that meets its Intent

    NARCIS (Netherlands)

    Huisman, Marieke; Bos, Herbert; Brinkkemper, Sjaak; van Deursen, Arie; Groote, Jan Friso; Lago, Patricia; van de Pol, Jaco; Visser, Eelco; Margaria, Tiziana; Steffen, Bernhard

    2016-01-01

    Software is widely used, and society increasingly depends on its reliability. However, software has become so complex and evolves so quickly that we fail to keep it under control. Therefore, we propose intents: fundamental laws that capture a software system's intended behavior (resilient,

  20. Management models in the NZ software industry

    Directory of Open Access Journals (Sweden)

    Holger Spill

    This research interviewed eight innovative New Zealand software companies to find out how they manage new product development. It looked at how management used standard techniques of software development to manage product uncertainty, through the theoretical lens of the Cyclic Innovation Model. The study found that while there is considerable variation, the management of innovation was largely determined by the level of complexity. Organizations with complex innovative software products had a more iterative software development style, more flexible internal processes, and swifter decision-making. Organizations with less complexity in their products tended to use more formal, structured approaches. Overall complexity could be inferred with reference to four key factors within the development environment.

  1. Open Source Software Success Model for Iran: End-User Satisfaction Viewpoint

    Directory of Open Access Journals (Sweden)

    Ali Niknafs

    2012-03-01

    The development of open source software is a notable option for software companies. In recent years, the many advantages of this type of software have driven a move toward it in Iran. National security concerns, international restrictions, and software and service costs, among other problems, have intensified the importance of using such software. Users and their viewpoints are the critical success factor in software plans, but there is no appropriate model for the open source software case in Iran. This research therefore tried to develop a model for measuring open source software success in Iran. The model was tested using data gathered from open source users through an online survey. The results showed that the components with a positive effect on open source success were user satisfaction, open source community service quality, open source quality, copyright, and security.

  2. Envisioning the future of collaborative model-driven software engineering

    NARCIS (Netherlands)

    Di Ruscio, Davide; Franzago, Mirco; Malavolta, Ivano; Muccini, Henry

    2017-01-01

    The adoption of Model-driven Software Engineering (MDSE) to develop complex software systems in application domains like automotive and aerospace is being supported by the maturation of model-driven platforms and tools. However, empirical studies show that a wider adoption of MDSE technologies is

  3. Extracting software static defect models using data mining

    Directory of Open Access Journals (Sweden)

    Ahmed H. Yousef

    2015-03-01

    Large software projects are subject to quality risks of having defective modules that will cause failures during software execution. Several software repositories contain the source code of large projects that are composed of many modules, including data on the software metrics of these modules and the defective state of each module. In this paper, a data mining approach is used to show the attributes that predict the defective state of software modules. A software solution architecture is proposed to convert the extracted knowledge into data mining models that can be integrated with current software project metrics and bug data in order to enhance the prediction. The results show better prediction capabilities when all the algorithms are combined using weighted votes. When only one individual algorithm is used, the Naïve Bayes algorithm has the best results, followed by the Neural Network and the Decision Tree algorithms.
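
    A minimal scikit-learn sketch of the weighted-vote combination the abstract describes, over the three algorithm families it names; the module metrics, labels, and vote weights are invented for illustration:

        from sklearn.ensemble import VotingClassifier
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neural_network import MLPClassifier
        from sklearn.tree import DecisionTreeClassifier

        # Rows: [lines_of_code, cyclomatic_complexity, number_of_changes]
        X = [[120, 4, 2], [900, 25, 14], [300, 9, 3], [1500, 40, 22],
             [80, 2, 1], [650, 18, 9], [200, 6, 2], [1100, 33, 17]]
        y = [0, 1, 0, 1, 0, 1, 0, 1]  # 1 = module turned out defective

        clf = VotingClassifier(
            estimators=[('nb', GaussianNB()),
                        ('nn', MLPClassifier(max_iter=5000, random_state=0)),
                        ('dt', DecisionTreeClassifier(random_state=0))],
            voting='soft', weights=[3, 2, 1])  # Naive Bayes weighted highest
        clf.fit(X, y)
        print(clf.predict([[700, 20, 10]]))  # predicted defectiveness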

  4. Software cost/resource modeling: Software quality tradeoff measurement

    Science.gov (United States)

    Lawler, R. W.

    1980-01-01

    A conceptual framework for treating software quality from a total system perspective is developed. Examples are given to show how system quality objectives may be allocated to hardware and software; to illustrate trades among quality factors, both hardware and software, to achieve system performance objectives; and to illustrate the impact of certain design choices on software functionality.

  5. The cleanroom case study in the Software Engineering Laboratory: Project description and early analysis

    Science.gov (United States)

    Green, Scott; Kouchakdjian, Ara; Basili, Victor; Weidow, David

    1990-01-01

    This case study analyzes the application of the cleanroom software development methodology to the development of production software at the NASA/Goddard Space Flight Center. The cleanroom methodology emphasizes human discipline in program verification to produce reliable software products that are right the first time. Preliminary analysis of the cleanroom case study shows that the method can be applied successfully in the FDD environment and may increase staff productivity and product quality. Compared to typical Software Engineering Laboratory (SEL) activities, there is evidence of lower failure rates, a more complete and consistent set of inline code documentation, a different distribution of phase effort activity, and a different growth profile in terms of lines of code developed. The major goals of the study were to: (1) assess the process used in the SEL cleanroom model with respect to team structure, team activities, and effort distribution; (2) analyze the products of the SEL cleanroom model and determine the impact on measures of interest, including reliability, productivity, overall life-cycle cost, and software quality; and (3) analyze the residual products in the application of the SEL cleanroom model, such as fault distribution, error characteristics, system growth, and computer usage.

  6. Software Assurance Competency Model

    Science.gov (United States)

    2013-03-01

    COTS) software, and software as a service (SaaS). L2: Define and analyze risks in the acquisition of contracted software, COTS software, and SaaS. ... [2010a]: Application of technologies and processes to achieve a required level of confidence that software systems and services function in the...

  7. The software-cycle model for re-engineering and reuse

    Science.gov (United States)

    Bailey, John W.; Basili, Victor R.

    1992-01-01

    This paper reports on the progress of a study which will contribute to our ability to perform high-level, component-based programming by describing means to obtain useful components, methods for the configuration and integration of those components, and an underlying economic model of the costs and benefits associated with this approach to reuse. One goal of the study is to develop and demonstrate methods to recover reusable components from domain-specific software through a combination of tools, to perform the identification, extraction, and re-engineering of components, and domain experts, to direct the applications of those tools. A second goal of the study is to enable the reuse of those components by identifying techniques for configuring and recombining the re-engineered software. This component-recovery or software-cycle model addresses not only the selection and re-engineering of components, but also their recombination into new programs. Once a model of reuse activities has been developed, the quantification of the costs and benefits of various reuse options will enable the development of an adaptable economic model of reuse, which is the principal goal of the overall study. This paper reports on the conception of the software-cycle model and on several supporting techniques of software recovery, measurement, and reuse which will lead to the development of the desired economic model.

  8. Flexible software process lines in practice: A metamodel-based approach to effectively construct and manage families of software process models

    DEFF Research Database (Denmark)

    Kuhrmann, Marco; Ternité, Thomas; Friedrich, Jan

    2017-01-01

    Process flexibility and adaptability is frequently discussed, and several proposals aim to improve software processes for a given organization-/project context. A software process line (SPrL) is an instrument to systematically construct and manage variable software processes by combining pre-defined process assets. We show how to construct flexible SPrLs and demonstrate their practical application in the German V-Modell XT. We contribute a proven approach that is presented as a metamodel fragment for reuse and implementation in further process modeling approaches. This summary refers to the paper "Flexible software process lines in practice: A metamodel-based approach to effectively construct and manage families of software process models" [Ku16], published as an original research article in the Journal of Systems and Software.

  9. Application of Formal Methods in Software Engineering

    Directory of Open Access Journals (Sweden)

    Adriana Morales

    2011-12-01

    The purpose of this research work is to examine: (1) why formal methods are necessary for software systems today; (2) high-integrity systems built through the C-by-C (Correctness-by-Construction) methodology; and (3) an affordable methodology for applying formal methods in software engineering. The research process included reviews of the literature on the Internet and in publications and presentations at events. Among the results, the research found that: (1) nations, companies, and people depend increasingly on software systems; (2) there is growing demand on software engineering to increase social trust in software systems; (3) methodologies exist, such as C-by-C, that can provide that level of trust; (4) formal methods constitute a principle of computer science that can be applied in software engineering to achieve reliable processes in software development; (5) software users have the responsibility to demand reliable software products; and (6) software engineers have the responsibility to develop reliable software products. Furthermore, it is concluded that: (1) more research is needed to identify and analyze other methodologies and tools that provide processes for applying formal software engineering methods; (2) formal methods provide an unprecedented ability to increase trust in the correctness of software products; and (3) with the development of new methodologies and tools, costs are no longer a disadvantage for the application of formal methods.

  10. Assessment of structural reliability of precast concrete buildings

    Directory of Open Access Journals (Sweden)

    Koyankin Alexandr

    2018-01-01

    Precast housing construction is currently undergoing rapid development; however, the reliability of building structures made from precast reinforced concrete cannot be assessed rationally due to insufficient research data on the subject. In this regard, experimental and numerical studies were conducted to assess the structural reliability of precast buildings, as described in this paper. Experimental studies of full-scale and model samples were conducted; numerical studies were carried out on finite element models using the "Lira" software. The objects under study included a fragment of the flooring of a building under construction, a full-size fragment of flooring, and full-scale models of precast cross-beam-to-column joints and of joints between hollow-core floor slabs and precast and cast-in-place cross-beams. The conducted research enabled an objective assessment of the structural reliability of precast buildings.

  11. A bridge role metric model for nodes in software networks.

    Directory of Open Access Journals (Sweden)

    Bo Li

    Full Text Available A bridge role metric model is put forward in this paper. Compared with previous metric models, our solution of a large-scale object-oriented software system as a complex network is inherently more realistic. To acquire nodes and links in an undirected network, a new model that presents the crucial connectivity of a module or the hub instead of only centrality as in previous metric models is presented. Two previous metric models are described for comparison. In addition, it is obvious that the fitting curve between the Bre results and degrees can well be fitted by a power law. The model represents many realistic characteristics of actual software structures, and a hydropower simulation system is taken as an example. This paper makes additional contributions to an accurate understanding of module design of software systems and is expected to be beneficial to software engineering practices.

  12. SOFTCOST - DEEP SPACE NETWORK SOFTWARE COST MODEL

    Science.gov (United States)

    Tausworthe, R. C.

    1994-01-01

    The early-on estimation of required resources and a schedule for the development and maintenance of software is usually the least precise aspect of the software life cycle. However, it is desirable to make some sort of an orderly and rational attempt at estimation in order to plan and organize an implementation effort. The Software Cost Estimation Model program, SOFTCOST, was developed to provide a consistent automated resource and schedule model which is more formalized than the often used guesswork model based on experience, intuition, and luck. SOFTCOST was developed after the evaluation of a number of existing cost estimation programs indicated that there was a need for a cost estimation program with a wide range of application and adaptability to diverse kinds of software. SOFTCOST combines several software cost models found in the open literature into one comprehensive set of algorithms that compensate for nearly fifty implementation factors relative to size of the task, inherited baseline, organizational and system environment, and difficulty of the task. SOFTCOST produces mean and variance estimates of software size, implementation productivity, recommended staff level, probable duration, amount of computer resources required, and amount and cost of software documentation. Since the confidence level for a project using mean estimates is small, the user is given the opportunity to enter risk-biased values for effort, duration, and staffing, to achieve higher confidence levels. SOFTCOST then produces a PERT/CPM file with subtask efforts, durations, and precedences defined so as to produce the Work Breakdown Structure (WBS) and schedule having the asked-for overall effort and duration. The SOFTCOST program operates in an interactive environment prompting the user for all of the required input. The program builds the supporting PERT data base in a file for later report generation or revision. The PERT schedule and the WBS schedule may be printed and stored in a

  13. Integrating Design Decision Management with Model-based Software Development

    DEFF Research Database (Denmark)

    Könemann, Patrick

    Design decisions are continuously made during the development of software systems and are important artifacts for design documentation. Dedicated decision management systems are often used to capture such design knowledge. Most such systems are, however, separated from the design artifacts … of the system. In model-based software development, where design models are used to develop a software system, the outcomes of many design decisions have a big impact on design models. The realization of design decisions is often manual and tedious work on design models. Moreover, keeping design models consistent …, or by ignoring the causes. This substitutes manual reviews to some extent. The concepts, implemented in a tool, have been validated with design patterns, refactorings, and domain-level tests that comprise a replay of a real project. This proves the applicability of the solution to realistic examples …

  14. DAE Tools: equation-based object-oriented modelling, simulation and optimisation software

    Directory of Open Access Journals (Sweden)

    Dragan D. Nikolić

    2016-04-01

    In this work, the DAE Tools modelling, simulation and optimisation software, its programming paradigms and its main features are presented. The current approaches to mathematical modelling, such as the use of modelling languages and general-purpose programming languages, are analysed. The common set of capabilities required by typical simulation software is discussed, and the shortcomings of the current approaches are recognised. A new hybrid approach is introduced, and the modelling languages and the hybrid approach are compared in terms of the grammar, compiler, parser and interpreter requirements, maintainability and portability. The most important characteristics of the new approach are discussed, such as: (1) support for runtime model generation; (2) support for runtime simulation set-up; (3) support for complex runtime operating procedures; (4) interoperability with third-party software packages (i.e. NumPy/SciPy); (5) suitability for embedding and use as a web application or software as a service; and (6) code-generation, model-exchange and co-simulation capabilities. The benefits of an equation-based approach to modelling, implemented in a fourth-generation object-oriented general-purpose programming language such as Python, are discussed. The architecture and software implementation details, as well as the types of problems that can be solved using the DAE Tools software, are described. Finally, some applications of the software at different levels of abstraction are presented, and its embedding capabilities and suitability for use as software as a service are demonstrated.

  15. Finite test sets development method for test execution of safety critical software

    International Nuclear Information System (INIS)

    El-Bordany Ayman; Yun, Won Young

    2014-01-01

    It reads inputs, computes new states, and updates outputs for each scan cycle. The Korea Nuclear Instrumentation and Control System (KNICS) project has recently developed a fully digitalized Reactor Protection System (RPS) based on PLD. As a digital system, this RPS is equipped with dedicated software whose reliability is crucial to NPP safety, since a malfunction may cause irreversible consequences and affect the whole system as a Common Cause Failure (CCF). To guarantee the reliability of the whole system, the reliability of this software needs to be quantified. There are three representative methods for software reliability quantification: the Verification and Validation (V and V) quality-based method, the Software Reliability Growth Model (SRGM), and the test-based method. An important concept of the guidance is that the test sets represent 'trajectories' (a series of successive values for the input variables of a program that occur during the operation of the software over time) in the space of inputs to the software. In practice, the inputs to the software depend on the state of the plant at that time, and these inputs form a new internal state of the software by changing the values of some variables. In other words, the internal state of the software at a specific time depends on the history of past inputs. Here, the internal state of the software that can be changed by past inputs is named the Context of Software (CoS). In a certain CoS, a software failure occurs when a fault is triggered by some inputs. To cover the failure occurrence mechanism of software, preceding research insists that the inputs should take a trajectory form. However, this approach has two critical problems. One is the length of the trajectory input: it should be long enough to cover the failure mechanism, but how long is enough is not clear. What is worse, to cover some accident scenarios, one set of inputs should represent dozens of hours of successive values

  16. Reliable computer systems.

    Science.gov (United States)

    Wear, L L; Pinkert, J R

    1993-11-01

    In this article, we looked at some decisions that apply to the design of reliable computer systems. We began with a discussion of several terms such as testability, then described some systems that call for highly reliable hardware and software. The article concluded with a discussion of methods that can be used to achieve higher reliability in computer systems. Reliability and fault tolerance in computers probably will continue to grow in importance. As more and more systems are computerized, people will want assurances about the reliability of these systems, and their ability to work properly even when sub-systems fail.

  17. Reliability training

    Science.gov (United States)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.

    1992-01-01

    Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.

  18. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 4: HARP Output (HARPO) graphics display user's guide

    Science.gov (United States)

    Sproles, Darrell W.; Bavuso, Salvatore J.

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical postprocessor program HARPO (HARP Output). HARPO reads ASCII files generated by HARP. It provides an interactive plotting capability that can be used to display alternate model data for trade-off analyses. File data can also be imported to other commercial software programs.

  19. The Software Architecture of Global Climate Models

    Science.gov (United States)

    Alexander, K. A.; Easterbrook, S. M.

    2011-12-01

    It has become common to compare and contrast the output of multiple global climate models (GCMs), such as in the Climate Model Intercomparison Project Phase 5 (CMIP5). However, intercomparisons of the software architecture of GCMs are almost nonexistent. In this qualitative study of seven GCMs from Canada, the United States, and Europe, we attempt to fill this gap in research. We describe the various representations of the climate system as computer programs, and account for architectural differences between models. Most GCMs now practice component-based software engineering, where Earth system components (such as the atmosphere or land surface) are present as highly encapsulated sub-models. This architecture facilitates a mix-and-match approach to climate modelling that allows for convenient sharing of model components between institutions, but it also leads to difficulty when choosing where to draw the lines between systems that are not encapsulated in the real world, such as sea ice. We also examine different styles of couplers in GCMs, which manage interaction and data flow between components. Finally, we pay particular attention to the varying levels of complexity in GCMs, both between and within models. Many GCMs have some components that are significantly more complex than others, a phenomenon which can be explained by the respective institution's research goals as well as the origin of the model components. In conclusion, although some features of software architecture have been adopted by every GCM we examined, other features show a wide range of different design choices and strategies. These architectural differences may provide new insights into variability and spread between models.

  20. Analytical modeling of nuclear power station operator reliability

    International Nuclear Information System (INIS)

    Sabri, Z.A.; Husseiny, A.A.

    1979-01-01

    The operator-plant interface is a critical component of power stations which requires the formulation of mathematical models to be applied in plant reliability analysis. The human model introduced here is based on cybernetic interactions and allows for use of available data from psychological experiments, hot and cold training and normal operation. The operator model is identified and integrated in the control and protection systems. The availability and reliability are given for different segments of the operator task and for specific periods of the operator life: namely, training, operation and vigilance or near retirement periods. The results can be easily and directly incorporated in system reliability analysis. (author)

  1. Software Engineering Tools for Scientific Models

    Science.gov (United States)

    Abrams, Marc; Saboo, Pallabi; Sonsini, Mike

    2013-01-01

    Software tools were constructed to address issues the NASA Fortran development community faces, and they were tested on real models currently in use at NASA. These proof-of-concept tools address the High-End Computing Program and the Modeling, Analysis, and Prediction Program. Two examples are the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) atmospheric model in Cell Fortran on the Cell Broadband Engine, and the Goddard Institute for Space Studies (GISS) coupled atmosphere- ocean model called ModelE, written in fixed format Fortran.

  2. Reliability and Model Fit

    Science.gov (United States)

    Stanley, Leanne M.; Edwards, Michael C.

    2016-01-01

    The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…

  3. Use of operational data for the assessment of pre-existing software

    International Nuclear Information System (INIS)

    Helminen, Atte; Gran, Bjoern Axel; Kristiansen, Monica; Winther, Rune

    2004-01-01

    To build sufficient confidence in the reliability of the safety systems of nuclear power plants, all available sources of information should be used. One important data source is the operational experience collected for the system. Operational experience is particularly applicable for systems of pre-existing software. Even though systems and devices involving pre-existing software are not considered for the functions of the highest safety levels of nuclear power plants, they will most probably be introduced in functions of lower safety levels and in non-safety-related applications. In the paper we briefly discuss the use of operational experience data for the reliability assessment of pre-existing software in general, and the role of pre-existing software in relation to safety applications. We then discuss the modelling of operational profiles, the application of expert judgement to operational profiles, and the need for a realistic test case. Finally, we discuss the application of operational experience data in Bayesian statistics. (Author)

  4. Evaluating the Effect of Software Quality Characteristics on Health Care Quality Indicators

    Directory of Open Access Journals (Sweden)

    Sakineh Aghazadeh

    2015-07-01

    Introduction: Various types of software are used in health care organizations to manage information and care processes. The quality of software has been an important concern for both health authorities and designers of Health Information Technology. Thus, assessing the effect of software quality on the performance quality of healthcare institutions is essential. Method: The most important health care quality indicators in relation to software quality characteristics were identified via a previously performed literature review. The ISO 9126 standard model is used for the definition and integration of various characteristics of software quality. The effects of software quality characteristics and sub-characteristics on the healthcare indicators are evaluated through expert opinion analyses. A questionnaire comprising 126 questions on a 10-point Likert scale was used to gather the opinions of experts in the field of Medical/Health Informatics. The data was analyzed using Structural Equation Modeling. Results: Our findings showed that software Maintainability was rated as the most effective factor on user satisfaction (R² = 0.89) and Functionality as the most important and independent variable affecting patient care quality (R² = 0.98). Efficiency was considered the most effective factor on workflow (R² = 0.97), and Maintainability the most important factor affecting healthcare communication (R² = 0.95). Usability and Efficiency were rated as the most effectual factors affecting patient satisfaction (R² = 0.80, 0.81). Reliability, Maintainability, and Efficiency were considered the main factors affecting care costs (R² = 0.87, 0.74, 0.87). Conclusion: We presented a new model based on ISO standards. The model demonstrates and weighs the relations between software quality characteristics and healthcare quality indicators. The clear relationships between variables and the type of metrics and measurement methods used in the model make it a reliable method to assess

  5. Reliability modeling of an engineered barrier system

    International Nuclear Information System (INIS)

    Ananda, M.M.A.; Singh, A.K.; Flueck, J.A.

    1993-01-01

    The Weibull distribution is widely used in reliability literature as a distribution of time to failure, as it allows for both increasing failure rate (IFR) and decreasing failure rate (DFR) models. It has also been used to develop models for an engineered barrier system (EBS), which is known to be one of the key components in a deep geological repository for high level radioactive waste (HLW). The EBS failure time can more realistically be modelled by an IFR distribution, since the failure rate for the EBS is not expected to decrease with time. In this paper, we use an IFR distribution to develop a reliability model for the EBS
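
    For reference, the standard two-parameter Weibull reliability and hazard functions behind such a model, with scale \eta and shape \beta, are

        R(t) = \exp\!\left[-\left(\frac{t}{\eta}\right)^{\beta}\right],
        \qquad
        h(t) = \frac{\beta}{\eta}\left(\frac{t}{\eta}\right)^{\beta-1}

    The hazard rate h(t) increases with time (IFR) when \beta > 1 and decreases (DFR) when \beta < 1, so an IFR model for the EBS amounts to restricting \beta > 1.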

  7. Risky module prediction for nuclear I and C software

    International Nuclear Information System (INIS)

    Kim, Young Mi; Kim, Hyeon Soo

    2012-01-01

    As software-based digital I and C (Instrumentation and Control) systems are used more prevalently in nuclear plants, enhancing software dependability has become an important issue in the area of nuclear I and C systems. Critical attributes of software dependability are safety and reliability. These attributes are tightly related to software failures caused by faults. Software testing and V and V (Verification and Validation) activities are hence important for enhancing software dependability. If the risky modules of safety-critical software can be predicted, it will be possible to focus testing and V and V activities more efficiently and effectively. It should also make it possible to better allocate resources for regulation activities. We propose a prediction technique to estimate risky software modules by adopting machine learning models based on software complexity metrics. An empirical study with various machine learning algorithms was executed to compare the prediction performance. Experimental results show SVMs (Support Vector Machines) perform as well as or better than the other methods.
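
    A minimal scikit-learn sketch of such a complexity-metric classifier, here with an SVM as the abstract's best performer; the metrics and labels are invented for illustration:

        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        # Complexity metrics per module: [McCabe, Halstead volume, LOC]
        X = [[3, 120, 90], [21, 980, 700], [5, 210, 150], [34, 1500, 1100],
             [2, 80, 60], [17, 760, 520], [4, 150, 120], [28, 1300, 900]]
        y = [0, 1, 0, 1, 0, 1, 0, 1]  # 1 = risky module

        svm = SVC(kernel='rbf', gamma='scale')
        print(cross_val_score(svm, X, y, cv=4).mean())  # prediction accuracy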

  8. Modeling and Analysis of Component Faults and Reliability

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter

    2016-01-01

    This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault-affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.
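
    Once minimal cut sets and basic-event probabilities are available, the system failure probability can be computed by inclusion-exclusion, as in this sketch (cut sets and probabilities invented; independence of basic events assumed):

        from itertools import combinations

        def system_unreliability(cut_sets, p):
            """P(at least one minimal cut set fails completely), computed
            by inclusion-exclusion, assuming independent basic events."""
            total = 0.0
            for k in range(1, len(cut_sets) + 1):
                for combo in combinations(cut_sets, k):
                    events = set().union(*combo)
                    prob = 1.0
                    for e in events:
                        prob *= p[e]
                    total += (-1) ** (k + 1) * prob
            return total

        cut_sets = [{'A', 'B'}, {'C'}]            # hypothetical cut sets
        p = {'A': 0.01, 'B': 0.02, 'C': 0.001}    # basic-event probabilities
        print(system_unreliability(cut_sets, p))  # ~0.0012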

  9. Validation and reliability of the sex estimation of the human os coxae using freely available DSP2 software for bioarchaeology and forensic anthropology.

    Science.gov (United States)

    Brůžek, Jaroslav; Santos, Frédéric; Dutailly, Bruno; Murail, Pascal; Cunha, Eugenia

    2017-10-01

    A new tool for skeletal sex estimation based on measurements of the human os coxae is presented, using skeletons from a metapopulation of identified adult individuals from twelve independent population samples. For reliable sex estimation, a posterior probability greater than 0.95 was considered the classification threshold: below this value, estimates are considered indeterminate. By providing free software, we aim to disseminate the method for sex estimation even more widely. Ten metric variables collected from 2,040 ossa coxae of adult subjects of known sex were recorded between 1986 and 2002 (reference sample). To test both validity and reliability, a target sample consisting of two series of adult ossa coxae of known sex (n = 623) was used. The DSP2 software (Diagnose Sexuelle Probabiliste v2) is based on Linear Discriminant Analysis, and the posterior probabilities are calculated using an R script. For the reference sample, any combination of four dimensions provides a correct sex estimate in at least 99% of cases. The percentage of individuals for whom sex can be estimated depends on the number of dimensions; for all ten variables it is higher than 90%. These results are confirmed in the target sample. Our posterior probability threshold of 0.95 for sex estimation corresponds to the traditional sectioning point used in osteological studies. DSP2 replaces the former version, which should no longer be used. DSP2 is a robust, reliable, and user-friendly technique for sexing adult ossa coxae.
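
    The decision rule is easy to mimic: a linear discriminant model plus a 0.95 posterior threshold below which the estimate is reported as indeterminate. This scikit-learn sketch uses synthetic measurements (DSP2 itself uses ten os coxae dimensions and an R script):

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(0)   # synthetic reference sample
        females = rng.normal([155, 95, 60, 35], 3, size=(50, 4))
        males = rng.normal([165, 88, 55, 40], 3, size=(50, 4))
        X = np.vstack([females, males])
        y = np.array(['F'] * 50 + ['M'] * 50)
        lda = LinearDiscriminantAnalysis().fit(X, y)

        def estimate_sex(measurements, threshold=0.95):
            post = lda.predict_proba([measurements])[0]
            best = post.argmax()
            # Below the threshold the estimate is left indeterminate.
            return lda.classes_[best] if post[best] >= threshold else 'indeterminate'

        print(estimate_sex([156, 94, 59, 36]))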

  10. Development of a methodology for assessing the safety of embedded software systems

    Science.gov (United States)

    Garrett, C. J.; Guarro, S. B.; Apostolakis, G. E.

    1993-01-01

    A Dynamic Flowgraph Methodology (DFM) based on an integrated approach to modeling and analyzing the behavior of software-driven embedded systems for assessing and verifying reliability and safety is discussed. DFM is based on an extension of the Logic Flowgraph Methodology to incorporate state transition models. System models, which express the logic of the system in terms of causal relationships between physical variables and temporal characteristics of software modules, are analyzed to determine how a certain state can be reached. This is done by developing timed fault trees, which take the form of logical combinations of static trees relating the system parameters at different points in time. The resulting information concerning the hardware and software states can be used to eliminate unsafe execution paths and identify testing criteria for safety-critical software functions.

  11. Improved Efficiency and Reliability of NGS Amplicon Sequencing Data Analysis for Genetic Diagnostic Procedures Using AGSA Software

    Directory of Open Access Journals (Sweden)

    Axel Poulet

    2016-01-01

    Screening for BRCA mutations in women with a familial risk of breast or ovarian cancer is an ideal situation for high-throughput sequencing, providing large amounts of low-cost data. However, 454 (Roche) and Ion Torrent (Thermo Fisher) technologies produce homopolymer-associated indel errors, complicating their use in routine diagnostics. We developed software, named AGSA, which helps to detect false positive mutations in homopolymeric sequences. Seventy-two familial breast cancer cases were analysed in parallel by amplicon 454 pyrosequencing and Sanger dideoxy sequencing for genetic variations of the BRCA genes. All 565 variants detected by dideoxy sequencing were also detected by pyrosequencing. Furthermore, pyrosequencing detected 42 variants that were missed by the Sanger technique. Six amplicons contained homopolymer tracts in the coding sequence that were systematically misread by the software supplied by Roche. Read data plotted as histograms by the AGSA software aided the analysis considerably and allowed validation of the majority of homopolymers. As an optimisation, an additional 250 patients were analysed using microfluidic amplification of regions of interest (Access Array, Fluidigm) of the BRCA genes, followed by 454 sequencing and AGSA analysis. AGSA complements a complete line of high-throughput diagnostic sequence analysis, reducing time and costs while increasing reliability, notably for homopolymer tracts.

  12. Real-Time Reliability Verification for UAV Flight Control System Supporting Airworthiness Certification.

    Science.gov (United States)

    Xu, Haiyang; Wang, Ping

    2016-01-01

    In order to verify the real-time reliability of an unmanned aerial vehicle (UAV) flight control system and comply with the airworthiness certification standard, we propose a model-based integration framework for the modeling and verification of time properties. Combining the advantages of MARTE, this framework uses class diagrams to create the static model of the software system and state charts to create the dynamic model. Under the defined transformation rules, the MARTE model can be transformed into a formal integrated model, and the different parts of the model can also be verified using existing formal tools. For the real-time specifications of the software system, we also propose a generating algorithm for temporal logic formulas, which can automatically extract real-time properties from time-sensitive live sequence charts (TLSC). Finally, we modeled the simplified flight control system of a UAV to check its real-time properties. The results showed that the framework can be used to create the system model, as well as to precisely analyze and verify the real-time reliability of the UAV flight control system.

  13. Multi-objective optimization of generalized reliability design problems using feature models-A concept for early design stages

    International Nuclear Information System (INIS)

    Limbourg, Philipp; Kochs, Hans-Dieter

    2008-01-01

    Reliability optimization problems such as the redundancy allocation problem (RAP) have been of considerable interest in the past. However, due to the restrictions of the design space formulation, they may not be applicable to all practical design problems. A method with high modelling freedom for rapid design screening is desirable, especially in early design stages. This work presents a novel approach to reliability optimization. Feature modelling, a specification method originating from software engineering, is applied for the fast specification and enumeration of complex design spaces. It is shown how feature models can describe not only arbitrary RAPs but also much more complex design problems. The design screening is accomplished by a multi-objective evolutionary algorithm for probabilistic objectives. Comparing averages or medians may hide the true characteristics of these distributions. Therefore the algorithm uses solely the probability of one system dominating another to obtain the Pareto-optimal set. We illustrate the approach by specifying a RAP and a more complex design space and screening them with the evolutionary algorithm
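
    The dominance probability the algorithm relies on can be estimated by Monte Carlo sampling of the probabilistic objectives, as in this sketch (designs and distributions invented; both objectives are taken as to-be-maximised):

        import random

        def prob_dominates(sample_a, sample_b, n=10000):
            """Monte Carlo estimate of P(A dominates B): A is no worse in
            every objective and strictly better in at least one."""
            count = 0
            for _ in range(n):
                a, b = sample_a(), sample_b()
                if (all(x >= y for x, y in zip(a, b))
                        and any(x > y for x, y in zip(a, b))):
                    count += 1
            return count / n

        # Two hypothetical designs with objectives (reliability, -cost):
        design_a = lambda: (random.betavariate(90, 10), -random.gauss(100, 5))
        design_b = lambda: (random.betavariate(80, 20), -random.gauss(95, 5))
        print(prob_dominates(design_a, design_b))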

  14. Supply chain reliability modelling

    Directory of Open Access Journals (Sweden)

    Eugen Zaitsev

    2012-03-01

    Background: Today it is virtually impossible to operate alone at the international level in the logistics business. This promotes the establishment and development of new integrated business entities, logistics operators. However, such cooperation within a supply chain also creates many problems related to supply chain reliability and to the optimization of supply planning. The aim of this paper was to develop and formulate a mathematical model and algorithms to find the optimum supply plan, using an economic criterion and a model for evaluating the probability of non-failure operation of the supply chain. Methods: The mathematical model and algorithms for finding the optimum supply plan were developed and formulated using an economic criterion and a model for evaluating the probability of non-failure operation of the supply chain. Results and conclusions: The problem of ensuring failure-free performance of a goods supply channel analyzed in the paper is characteristic of distributed network systems that make active use of business process outsourcing technologies. The complex planning problem occurring in such systems, which requires taking into account the consumer's requirements for failure-free performance in terms of supply volumes and correctness, can be reduced to a relatively simple linear programming problem through logical analysis of the structures. The sequence of operations that should be taken into account during supply planning with the supplier's functional reliability is presented.
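
    A minimal sketch of the kind of linear program such planning reduces to, with channel reliabilities entering through the expected delivered volume; all costs, capacities, reliabilities, and the demand figure are invented:

        from scipy.optimize import linprog

        cost = [4.0, 5.5, 3.2]            # unit cost per supply channel
        reliability = [0.98, 0.95, 0.90]  # probability a unit arrives intact
        capacity = [400, 300, 500]        # channel capacity limits
        demand = 800                      # units required (in expectation)

        # Minimise cost subject to sum(reliability[i] * x[i]) >= demand.
        res = linprog(c=cost,
                      A_ub=[[-r for r in reliability]], b_ub=[-demand],
                      bounds=list(zip([0, 0, 0], capacity)))
        print(res.x, res.fun)  # optimal volumes per channel, total cost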

  15. Methodologic model to scheduling on service systems: a software engineering approach

    Directory of Open Access Journals (Sweden)

    Eduyn Ramiro Lopez-Santana

    2016-06-01

    This paper presents a software engineering approach to a research proposal for building an expert system for scheduling in service systems, using methodologies and processes of software development. We use adaptive software development as the methodology for the software architecture, based on its description as a software metaprocess that characterizes the research process. We draw UML (Unified Modeling Language) diagrams to provide a visual model that describes the research methodology, in order to identify the actors, elements, and interactions in the research process.

  16. Impact of Agile Software Development Model on Software Maintainability

    Science.gov (United States)

    Gawali, Ajay R.

    2012-01-01

    Software maintenance and support costs account for up to 60% of the overall software life cycle cost and often burden tightly budgeted information technology (IT) organizations. The agile software development approach delivers business value early, but its implications for software maintainability are still unknown. The purpose of this quantitative study…

  17. A software prototype for assessing the reliability of a concrete bridge superstructure subjected to chloride-induced reinforcement corrosion

    DEFF Research Database (Denmark)

    Schneider, Ronald; Thöns, Sebastian; Fischer, Johannes

    2014-01-01

    state of the box girder and a structural model for evaluating the overall system reliability. The condition model is based on a dynamic Bayesian network (DBN) model which considers the spatial variation of the corrosion process. Inspection data are included in the calculation of the system reliability...

  18. A Simulation Model for the Waterfall Software Development Life Cycle

    OpenAIRE

    Bassil, Youssef

    2012-01-01

    Software development life cycle or SDLC for short is a methodology for designing, building, and maintaining information and industrial systems. So far, there exist many SDLC models, one of which is the Waterfall model which comprises five phases to be completed sequentially in order to develop a software solution. However, SDLC of software systems has always encountered problems and limitations that resulted in significant budget overruns, late or suspended deliveries, and dissatisfied client...

  19. 76 FR 28819 - NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection...

    Science.gov (United States)

    2011-05-18

    ... NUCLEAR REGULATORY COMMISSION [NRC-2011-0109] NUREG/CR-XXXX, Development of Quantitative Software...: The Nuclear Regulatory Commission has issued for public comment a document entitled: NUREG/CR-XXXX...-XXXX is available electronically under ADAMS Accession Number ML111020087. Federal Rulemaking Web Site...

  20. STEM - software test and evaluation methods. A study of failure dependency in diverse software

    International Nuclear Information System (INIS)

    Bishop, P.G.; Pullen, F.D.

    1989-02-01

    STEM is a collaborative software reliability project undertaken in partnership with the Halden Reactor Project, UKAEA, and the Finnish Technical Research Centre. The objective of STEM is to evaluate a number of fault detection and fault estimation methods which can be applied to high-integrity software. This report presents a study of the observed failure dependencies between faults in diversely produced software. (author)

  1. Bayesian methodology for reliability model acceptance

    International Nuclear Information System (INIS)

    Zhang Ruoxue; Mahadevan, Sankaran

    2003-01-01

    This paper develops a methodology to assess the validity of reliability computation models using the concept of Bayesian hypothesis testing, by comparing the model prediction and the experimental observation, when there is only one computational model available to evaluate system behavior. Time-independent and time-dependent problems are investigated, with consideration of both cases: with and without statistical uncertainty in the model. The case of time-independent failure probability prediction with no statistical uncertainty is a straightforward application of Bayesian hypothesis testing. However, for the life prediction (time-dependent reliability) problem, a new methodology is developed in this paper to make the same Bayesian hypothesis testing concept applicable. When statistical uncertainty exists in the model, in addition to the application of a predictor estimator of the Bayes factor, the uncertainty in the Bayes factor is explicitly quantified by treating it as a random variable and calculating the probability that it exceeds a specified value. The developed method provides a rational criterion to decision-makers for the acceptance or rejection of the computational model
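    As a toy illustration of the underlying Bayes factor comparison, the sketch below weighs a single experimental observation against a model prediction; the noise level and the diffuse alternative prior are assumptions chosen for the example, not the paper's formulation.

    ```python
    # Minimal sketch of Bayes-factor-based model acceptance: compare the
    # likelihood of the observation under "model valid" vs. a diffuse alternative.
    from math import hypot
    from scipy import stats

    y_pred, y_obs, sigma = 12.0, 12.8, 1.0  # prediction, test result, noise (assumed)

    # H0: the observation scatters around the model prediction.
    like_h0 = stats.norm.pdf(y_obs, loc=y_pred, scale=sigma)

    # H1: the model is biased; marginalize the likelihood over a diffuse
    # Normal(y_pred, (5*sigma)^2) prior on the unknown true mean, which gives
    # a Normal with standard deviation sqrt(sigma^2 + (5*sigma)^2).
    like_h1 = stats.norm.pdf(y_obs, loc=y_pred, scale=hypot(sigma, 5 * sigma))

    bayes_factor = like_h0 / like_h1
    print(f"B = {bayes_factor:.2f}")  # B > 1 favors accepting the model
    ```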

  2. A Fast Optimization Method for Reliability and Performance of Cloud Services Composition Application

    Directory of Open Access Journals (Sweden)

    Zhao Wu

    2013-01-01

    Full Text Available At present, cloud computing is one of the newest trends in distributed computation, and it is propelling another important revolution in the software industry. Cloud services composition is one of the key techniques in software development. The optimization of the reliability and performance of a cloud services composition application, a typical stochastic optimization problem, is confronted with severe challenges due to its randomness and long transactions, as well as characteristics of cloud computing resources such as openness and dynamism. Traditional reliability and performance optimization techniques, for example, Markov models and state-space analysis, have defects: they are time consuming, prone to state-space explosion, and reliant on assumptions of component execution independence that are not satisfied in practice. To overcome these defects, we propose in this paper a fast optimization method for the reliability and performance of cloud services composition applications, based on the universal generating function and a genetic algorithm. First, a reliability and performance model for cloud service composition applications based on multi-state system theory is presented. Then a reliability and performance definition based on the universal generating function is proposed. Building on this, a fast reliability and performance optimization algorithm is presented. In the end, illustrative examples are given.
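    The universal generating function at the heart of such methods can be sketched compactly: each service is a map from a performance level to its probability, and composition multiplies probabilities while combining performances. The services, states, and demand below are illustrative.

    ```python
    # Minimal sketch of the universal generating function (UGF) idea for a
    # multi-state system: components are {performance: probability} maps.
    from itertools import product

    def compose(u, v, op):
        """UGF composition: combine performances with `op`, multiply probabilities."""
        out = {}
        for (g1, p1), (g2, p2) in product(u.items(), v.items()):
            g = op(g1, g2)
            out[g] = out.get(g, 0.0) + p1 * p2
        return out

    svc_a = {100: 0.9, 0: 0.1}             # throughput states of service A
    svc_b = {80: 0.95, 40: 0.04, 0: 0.01}  # throughput states of service B

    # Two services invoked in sequence: system throughput is the bottleneck (min).
    system = compose(svc_a, svc_b, min)
    demand = 50
    reliability = sum(p for g, p in system.items() if g >= demand)
    print(system, reliability)  # P(throughput >= 50) = 0.9 * 0.95 = 0.855
    ```

    A genetic algorithm would then search over alternative service selections, evaluating each candidate composition through exactly this kind of UGF reduction.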

  3. A Comparison and Evaluation of Real-Time Software Systems Modeling Languages

    Science.gov (United States)

    Evensen, Kenneth D.; Weiss, Kathryn Anne

    2010-01-01

    A model-driven approach to real-time software systems development enables the conceptualization of software, fostering a more thorough understanding of its often complex architecture and behavior while promoting the documentation and analysis of concerns common to real-time embedded systems such as scheduling, resource allocation, and performance. Several modeling languages have been developed to assist in the model-driven software engineering effort for real-time systems, and these languages are beginning to gain traction with practitioners throughout the aerospace industry. This paper presents a survey of several real-time software system modeling languages, namely the Architecture Analysis and Design Language (AADL), the Unified Modeling Language (UML), the Systems Modeling Language (SysML), the Modeling and Analysis of Real-Time Embedded Systems (MARTE) UML profile, and the AADL for UML profile. Each language has its advantages and disadvantages, and in order to adequately describe a real-time software system's architecture, a complementary use of multiple languages is almost certainly necessary. This paper aims to explore these languages in the context of understanding the value each brings to the model-driven software engineering effort and to determine if it is feasible and practical to combine aspects of the various modeling languages to achieve more complete coverage in architectural descriptions. To this end, each language is evaluated with respect to a set of criteria such as scope, formalisms, and architectural coverage. An example is used to help illustrate the capabilities of the various languages.

  4. Introduction to Lean Canvas Transformation Models and Metrics in Software Testing

    Directory of Open Access Journals (Sweden)

    Nidagundi Padmaraj

    2016-05-01

    Full Text Available Software plays a key role nowadays in all fields, from simple applications up to cutting-edge technologies, and most technology devices now run on software. Verification and validation in software development have become very important for producing high-quality software that meets business stakeholder requirements. Different software development methodologies have given a new dimension to software testing. In traditional waterfall development, software testing is approached at the end point: it begins with resource planning, a test plan is designed, and test criteria are defined for acceptance testing. In this process, most of the test plan is well documented, which leads to a time-consuming process. For modern software development methodologies such as agile, where long test processes and heavy documentation are not strictly followed because of the small iterations of development and testing, lean canvas transformation models can be a solution. This paper provides a new dimension for exploring the possibilities of adopting lean transformation models and metrics in the software test plan, to simplify the test process for further use of these test metrics on the canvas.

  5. Simulation Modeling of Software Development Processes

    Science.gov (United States)

    Calavaro, G. F.; Basili, V. R.; Iazeolla, G.

    1996-01-01

    A simulation modeling approach is proposed for the prediction of software process productivity indices, such as cost and time-to-market, and for the sensitivity analysis of such indices to changes in the organization parameters and user requirements. The approach uses a timed Petri net and an object-oriented, top-down model specification. Results demonstrate the model's representativeness and its usefulness in verifying process conformance to expectations and in performing continuous process improvement and optimization.

  6. Control of research oriented software development

    International Nuclear Information System (INIS)

    Lewis, L.C.; Dronkers, J.J.; Pitsker, B.

    1985-12-01

    The Nuclear Waste Policy Act of 1982 directs the Department of Energy (DOE) to permanently dispose of high-level radioactive waste and civilian spent nuclear fuel by January 31, 1998. DOE has responded by creating an organizational structure that directs all the activities necessary to carry out the legislative demands. LLNL is conducting research in the earth sciences and is developing some unique computer codes to help establish the feasibility of geologic repositories for nuclear waste. LLNL has several codes under development. This paper examines the administrative and organizational measures that were, and still are, being undertaken in order to control the development of the two major codes. In the case of one code, the software quality assurance requirements were imposed five years after the code began its development, requiring a retroactive application of requirements. The other code is still in the conceptual stages of development, where requirements can be applied as soon as the initial code design begins. Both codes are being developed by scientists, not computer programmers, and both are modeling codes, not data acquisition and reduction codes. Also, the projects for which these codes are being developed have slightly different software quality assurance requirements. All these factors contribute unique difficulties to attempts to assure not only that the development results in a reliable prediction, but that, whatever the reliability, it can be objectively shown to exist. The paper examines a software management model and discusses the reasons why this particular model is felt to stand a reasonable chance of success. It then describes the way in which the model should be integrated into the existing management configuration and tradition.

  7. Plant and control system reliability and risk model

    International Nuclear Information System (INIS)

    Niemelae, I.M.

    1986-01-01

    A new reliability modelling technique for control systems and plants is demonstrated. It is based on modified Boolean algebra and has been automated in an efficient computer code called RELVEC. The code is useful for getting an overall view of the reliability parameters or for an in-depth reliability analysis, which is essential in risk analysis, where the model must be capable of answering specific questions such as 'What is the probability that this temperature limiter produces a false alarm?' or 'What is the probability that the air pressure in this subsystem drops below its lower limit?'. (orig./DG)
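    In the spirit of a Boolean-algebra reliability model answering such pointed questions, the sketch below computes the probability of a spurious alarm by exact enumeration of component states. The system logic and failure probabilities are invented for illustration and have no connection to RELVEC itself.

    ```python
    # Minimal sketch: exact evaluation of a small Boolean structure function by
    # enumerating all component states and summing the probabilities of the
    # states in which the queried event occurs.
    from itertools import product

    p_fail = {"sensor": 0.01, "limiter": 0.005, "power": 0.002}  # illustrative

    def spurious_alarm(state):
        # Hypothetical logic: the alarm is spurious if the limiter fails, or
        # the sensor fails while power is healthy.
        return state["limiter"] or (state["sensor"] and not state["power"])

    names = list(p_fail)
    prob = 0.0
    for bits in product([False, True], repeat=len(names)):
        state = dict(zip(names, bits))
        weight = 1.0
        for n in names:
            weight *= p_fail[n] if state[n] else 1.0 - p_fail[n]
        if spurious_alarm(state):
            prob += weight
    print(f"P(false alarm) = {prob:.6f}")
    ```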

  8. Model of load balancing using reliable algorithm with multi-agent system

    Science.gov (United States)

    Afriansyah, M. F.; Somantri, M.; Riyadi, M. A.

    2017-04-01

    Massive technology development tracks the growth of internet users, which increases network traffic activity and, with it, the load on systems. Using a reliable algorithm and mobile agents for distributed load balancing is a viable solution to handle the load issue on a large-scale system. A mobile agent collects resource information and can migrate according to its given task. We propose a reliable load-balancing algorithm using least time first byte (LFB) combined with information from the mobile agent. In the system overview, the methodology consisted of defining the system identification, specification requirements, network topology, and system infrastructure design. The simulation subjected the servers to 1,800 user requests over 10 s and collected the data for analysis. The software simulation was based on Apache JMeter, observing the response time and reliability of each server and comparing them with the existing method. Results of the simulation show that the LFB method with a mobile agent can balance load efficiently across all backend servers without bottlenecks, with a low risk of server overload, and with high reliability.
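    A minimal sketch of the least-time-first-byte idea: each request goes to the backend with the lowest smoothed time-to-first-byte, and each observation (as a mobile agent might report it) updates the estimate. Server names, the smoothing factor, and the latency distribution are assumptions.

    ```python
    # Minimal sketch of LFB dispatching with exponentially smoothed TTFB estimates.
    import random

    ttfb = {"web1": 0.050, "web2": 0.050, "web3": 0.050}  # smoothed TTFB (s)
    ALPHA = 0.3  # smoothing factor (assumed)

    def pick_backend():
        """Route to the backend with the lowest smoothed time-to-first-byte."""
        return min(ttfb, key=ttfb.get)

    def report(backend, observed_ttfb):
        """Agent-style feedback: fold a new measurement into the moving average."""
        ttfb[backend] = (1 - ALPHA) * ttfb[backend] + ALPHA * observed_ttfb

    for _ in range(1800):                      # mirror the 1,800-request run
        b = pick_backend()
        observed = random.uniform(0.02, 0.12)  # stand-in for a real measurement
        report(b, observed)

    print(sorted(ttfb.items(), key=lambda kv: kv[1]))
    ```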

  9. Reliability Modeling of Double Beam Bridge Crane

    Science.gov (United States)

    Han, Zhu; Tong, Yifei; Luan, Jiahui; Xiangdong, Li

    2018-05-01

    This paper briefly describes the structure of the double beam bridge crane and defines its basic parameters. According to the structure and system division of the double beam bridge crane, a reliability architecture for the crane system is proposed, and a mathematical reliability model is constructed.

  10. High-Reliable PLC RTOS Development and RPS Structure Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sohn, H. S.; Song, D. Y.; Sohn, D. S.; Kim, J. H. [Enersys Co., Daejeon (Korea, Republic of)

    2008-04-15

    One of the KNICS objectives is to develop a platform for Nuclear Power Plant (NPP) I&C (instrumentation and control) systems, especially the plant protection system. The developed platform is POSAFE-Q, and this work supports it through the development of a highly reliable real-time operating system (RTOS) and programmable logic device (PLD) software. Another KNICS objective is to develop safety I&C systems, such as the Reactor Protection System (RPS) and the Engineered Safety Feature-Component Control System (ESF-CCS). This work plays an important role in the structure analysis for the RPS. Validation and verification (V&V) of safety-critical software is essential work to make a digital plant protection system highly reliable and safe. Generally, the reliability and safety of a software-based system can be improved by a strict quality assurance framework that includes the software development itself. In other words, through V&V the reliability and safety of a system can be improved, and development activities like the software requirement specification, software design specification, component tests, integration tests, and system tests shall be appropriately documented for V&V.

  11. High-Reliable PLC RTOS Development and RPS Structure Analysis

    International Nuclear Information System (INIS)

    Sohn, H. S.; Song, D. Y.; Sohn, D. S.; Kim, J. H.

    2008-04-01

    One of the KNICS objectives is to develop a platform for Nuclear Power Plant (NPP) I&C (instrumentation and control) systems, especially the plant protection system. The developed platform is POSAFE-Q, and this work supports it through the development of a highly reliable real-time operating system (RTOS) and programmable logic device (PLD) software. Another KNICS objective is to develop safety I&C systems, such as the Reactor Protection System (RPS) and the Engineered Safety Feature-Component Control System (ESF-CCS). This work plays an important role in the structure analysis for the RPS. Validation and verification (V&V) of safety-critical software is essential work to make a digital plant protection system highly reliable and safe. Generally, the reliability and safety of a software-based system can be improved by a strict quality assurance framework that includes the software development itself. In other words, through V&V the reliability and safety of a system can be improved, and development activities like the software requirement specification, software design specification, component tests, integration tests, and system tests shall be appropriately documented for V&V.

  12. Software Design for Smile Analysis

    Directory of Open Access Journals (Sweden)

    A. Sarkhosh

    2010-12-01

    Full Text Available Introduction: Esthetics and attractiveness of the smile is one of the major demands in contemporary orthodontic treatment. In order to improve a smile design, it is necessary to record the “posed smile” as an intentional, non-pressured, static, natural, and reproducible smile. The record should then be analyzed to determine its characteristics. In this study, we intended to design and introduce software to analyze the smile rapidly and precisely in order to produce an attractive smile for the patients. Materials and Methods: For this purpose, a practical study was performed to design the multimedia software “Smile Analysis”, which can receive patients’ photographs and videographs. After loading the records into the software, the operator marks the points and lines which are displayed in the system’s guide and also defines the correct scale for each image. Thirty-three variables are measured by the software and displayed on the report page. Reliability of measurements in both image and video was significantly high (0.7-1). Results: In order to evaluate intra-operator and inter-operator reliability, five cases were selected randomly. Statistical analysis showed that calculations performed in the Smile Analysis software were both valid and highly reliable (for both video and photo). Conclusion: The results obtained from smile analysis could be used in diagnosis, treatment planning and evaluation of the treatment progress.

  13. The Astringency of the GP Algorithm for Forecasting Software Failure Data Series

    Directory of Open Access Journals (Sweden)

    Yong-qiang Zhang

    2007-05-01

    Full Text Available The forecasting of software failure data series by Genetic Programming (GP) can be realized without any assumptions before modeling. This discovery has transformed traditional statistical modeling methods and improved the consistency of model applicability. The individuals' different characteristics during the evolution of generations, which are randomly changeable, are treated as Markov random processes. This paper also proposes that a GP algorithm with an "optimal individuals reserved" strategy is the best solution to this problem, so that the most adaptive individuals will finally be evolved. This allows practical applications in software reliability modeling, analysis, and forecasting of failure behaviors. Moreover, it verifies on a theoretical basis the feasibility and suitability of the GP algorithm as applied to forecasting software failure data series. The results show that the GP algorithm is the best solution for software failure behaviors in a variety of disciplines.
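    The elitist ("optimal individuals reserved") mechanism can be sketched with a simple evolutionary loop. The sketch below only tunes the parameters of a fixed failure-count curve rather than evolving program trees as GP does, and the failure data are invented for illustration.

    ```python
    # Minimal sketch of elitism: the best individuals survive every generation
    # unchanged, here while fitting m(t) = a * (1 - exp(-b*t)) to failure counts.
    import math
    import random

    random.seed(0)
    data = [(1, 12), (2, 21), (3, 27), (4, 32), (5, 35)]  # (week, cum. failures)

    def loss(ind):
        a, b = ind
        return sum((a * (1 - math.exp(-b * t)) - y) ** 2 for t, y in data)

    pop = [(random.uniform(10, 60), random.uniform(0.1, 1.0)) for _ in range(40)]
    for _ in range(200):
        pop.sort(key=loss)
        elite = pop[:4]                      # reserved optimal individuals
        children = [(max(1e-3, a + random.gauss(0, 1.0)),
                     max(1e-3, b + random.gauss(0, 0.05)))
                    for a, b in random.choices(elite, k=36)]
        pop = elite + children               # elitism keeps the best solutions

    best = min(pop, key=loss)
    print(best, loss(best))
    ```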

  14. Reliability and protection against failure in computer systems

    International Nuclear Information System (INIS)

    Daniels, B.K.

    1979-01-01

    Computers are being increasingly integrated into the control and safety systems of large and potentially hazardous industrial processes. This development introduces problems which are particular to computer systems and opens the way to new techniques of solving conventional reliability and availability problems. References to the developing fields of software reliability, human factors and software design are given, and these subjects are related, where possible, to the quantified assessment of reliability. Original material is presented in the areas of reliability growth and computer hardware failure data. The report draws on the experience of the National Centre of Systems Reliability in assessing the capability and reliability of computer systems both within the nuclear industry, and from the work carried out in other industries by the Systems Reliability Service. (author)

  15. Development and application of a complex numerical model and software for the computation of dose conversion factors for radon progenies.

    Science.gov (United States)

    Farkas, Árpád; Balásházy, Imre

    2015-04-01

    A more exact determination of dose conversion factors associated with radon progeny inhalation has become possible due to advancements in epidemiological health risk estimates in recent years. The enhancement of computational power and the development of numerical techniques allow computing dose conversion factors with increasing reliability. The objective of this study was to develop an integrated model and software based on a self-developed airway deposition code, our own bronchial dosimetry model, and the computational methods accepted by the International Commission on Radiological Protection (ICRP) to calculate dose conversion coefficients for different exposure conditions. The model was tested by applying it to exposure and breathing conditions characteristic of mines and homes. The dose conversion factors were 8 and 16 mSv WLM(-1) for homes and mines when applying a stochastic deposition model combined with the ICRP dosimetry model (named the PM-A model), and 9 and 17 mSv WLM(-1) when applying the same deposition model combined with the authors' bronchial dosimetry model and the ICRP bronchiolar and alveolar-interstitial dosimetry model (called the PM-B model). User-friendly software for the computation of dose conversion factors has also been developed. The software allows one to compute conversion factors for a large range of exposure and breathing parameters and to perform sensitivity analyses.

  16. Probabilistic risk assessment for a loss of coolant accident in McMaster Nuclear Reactor and application of reliability physics model for modeling human reliability

    Science.gov (United States)

    Ha, Taesung

    A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed, including event sequence modeling, system modeling, and quantification. To support the quantification of the identified accident sequences, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for the human error probability (HEP) estimation of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The HEP for the core relocation was estimated from these two competing quantities: phenomenological time and the operators' performance time. The sensitivity of each probability distribution in the human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and the parameters of that model. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability and its potential
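    The core of the reliability physics model is the comparison of two competing random times: an error occurs when the crew's performance time exceeds the phenomenological time available. A minimal Monte Carlo sketch, with assumed (not the study's fitted) distributions:

    ```python
    # Minimal sketch: HEP = P(performance time > phenomenological time),
    # estimated by sampling both competing random variables.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    t_phen = rng.normal(loc=45.0, scale=8.0, size=n)              # time available (min)
    t_perf = rng.lognormal(mean=np.log(25.0), sigma=0.5, size=n)  # crew response (min)

    hep = np.mean(t_perf > t_phen)
    print(f"HEP ~ {hep:.4f}")
    ```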

  17. Space Vehicle Reliability Modeling in DIORAMA

    Energy Technology Data Exchange (ETDEWEB)

    Tornga, Shawn Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-12

    When modeling the system performance of space-based detection systems it is important to consider spacecraft reliability. As space vehicles age, their components become prone to failure for a variety of reasons, such as radiation damage. Additionally, some vehicles may lose the ability to maneuver once they exhaust their fuel supplies. Typically, failure is divided into two categories: engineering mistakes and technology surprise. This document reports on a method of simulating space vehicle reliability in the DIORAMA framework.

  18. The Legacy of Space Shuttle Flight Software

    Science.gov (United States)

    Hickey, Christopher J.; Loveall, James B.; Orr, James K.; Klausman, Andrew L.

    2011-01-01

    The initial goals of the Space Shuttle Program required that the avionics and software systems blaze new trails in advancing avionics system technology. Many of the requirements placed on avionics and software were accomplished for the first time on this program. Examples include comprehensive digital fly-by-wire technology, use of a digital databus for flight critical functions, fail operational/fail safe requirements, complex automated redundancy management, and the use of a high-order software language for flight software development. In order to meet the operational and safety goals of the program, the Space Shuttle software had to be extremely high quality, reliable, robust, reconfigurable and maintainable. To achieve this, the software development team evolved a software process focused on continuous process improvement and defect elimination that consistently produced highly predictable and top quality results, providing software managers the confidence needed to sign each Certificate of Flight Readiness (COFR). This process, which has been appraised at Capability Maturity Model (CMM)/Capability Maturity Model Integration (CMMI) Level 5, has resulted in one of the lowest software defect rates in the industry. This paper will present an overview of the evolution of the Primary Avionics Software System (PASS) project and processes over thirty years, an argument for strong statistical control of software processes with examples, an overview of the success story for identifying and driving out errors before flight, a case study of the few significant software issues and how they were either identified before flight or slipped through the process onto a flight vehicle, and identification of the valuable lessons learned over the life of the project.

  19. STAMPS: development and verification of swallowing kinematic analysis software.

    Science.gov (United States)

    Lee, Woo Hyung; Chun, Changmook; Seo, Han Gil; Lee, Seung Hak; Oh, Byung-Mo

    2017-10-17

    Swallowing impairment is a common complication in various geriatric and neurodegenerative diseases. Swallowing kinematic analysis is essential to quantitatively evaluate the swallowing motion of the oropharyngeal structures. This study aims to develop a novel swallowing kinematic analysis software package, called the spatio-temporal analyzer for motion and physiologic study (STAMPS), and to verify its validity and reliability. STAMPS was developed in MATLAB, which is one of the most popular platforms for biomedical analysis. The software was constructed to acquire, process, and analyze the data of swallowing motion. The target swallowing structures include bony structures (hyoid bone, mandible, maxilla, and cervical vertebral bodies), cartilages (epiglottis and arytenoid), soft tissues (larynx and upper esophageal sphincter), and the food bolus. Numerous functions are available for the spatiotemporal parameters of the swallowing structures. Testing for validity and reliability was performed in 10 dysphagia patients with diverse etiologies and with an instrumental swallowing model designed to mimic the motion of the hyoid bone and the epiglottis. The intra- and inter-rater reliability tests showed excellent agreement for displacement and moderate to excellent agreement for velocity. The Pearson correlation coefficients between the measured and instrumental reference values were nearly 1.00. The software is expected to be useful for researchers who are interested in swallowing motion analysis.

  20. Stochastic modeling for reliability shocks, burn-in and heterogeneous populations

    CERN Document Server

    Finkelstein, Maxim

    2013-01-01

    Focusing on shock modeling, burn-in, and heterogeneous populations, Stochastic Modeling for Reliability naturally combines these three topics in a unified stochastic framework and presents numerous practical examples that illustrate the authors' recent theoretical findings. The populations of manufactured items in industry are usually heterogeneous. However, conventional reliability analysis is performed under the implicit assumption of homogeneity, which can result in distortion of the corresponding reliability indices and various misconceptions. Stochastic Modeling for Reliability fills this gap and presents the basics and further developments of reliability theory for heterogeneous populations. Specifically, the authors consider burn-in as a method of eliminating ‘weak’ items from heterogeneous populations. Real-life objects operate in a changing environment. One of the ways to model the impact of this environment is via external shocks occurring in accordance with some stocha...

  1. Reliability of lower limb alignment measures using an established landmark-based method with a customized computer software program

    Science.gov (United States)

    Sled, Elizabeth A.; Sheehy, Lisa M.; Felson, David T.; Costigan, Patrick A.; Lam, Miu; Cooke, T. Derek V.

    2010-01-01

    The objective of the study was to evaluate the reliability of frontal plane lower limb alignment measures using a landmark-based method by (1) comparing inter- and intra-reader reliability between measurements of alignment obtained manually with those using a computer program, and (2) determining inter- and intra-reader reliability of computer-assisted alignment measures from full-limb radiographs. An established method for measuring alignment was used, involving selection of 10 femoral and tibial bone landmarks. 1) To compare manual and computer methods, we used digital images and matching paper copies of five alignment patterns simulating healthy and malaligned limbs drawn using AutoCAD. Seven readers were trained in each system. Paper copies were measured manually and repeat measurements were performed daily for 3 days, followed by a similar routine with the digital images using the computer. 2) To examine the reliability of computer-assisted measures from full-limb radiographs, 100 images (200 limbs) were selected as a random sample from 1,500 full-limb digital radiographs which were part of the Multicenter Osteoarthritis (MOST) Study. Three trained readers used the software program to measure alignment twice from the batch of 100 images, with two or more weeks between batch handling. Manual and computer measures of alignment showed excellent agreement (intraclass correlations [ICCs] 0.977 – 0.999 for computer analysis; 0.820 – 0.995 for manual measures). The computer program applied to full-limb radiographs produced alignment measurements with high inter- and intra-reader reliability (ICCs 0.839 – 0.998). In conclusion, alignment measures using a bone landmark-based approach and a computer program were highly reliable between multiple readers. PMID:19882339

  2. Saphire models and software for ASP evaluations

    International Nuclear Information System (INIS)

    Sattison, M.B.

    1997-01-01

    The Idaho National Engineering Laboratory (INEL) over the past three years has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of ASP evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both the U.S. Nuclear Regulatory Commission's (NRC's) Office of Nuclear Reactor Regulation (NRR) and the Office for Analysis and Evaluation of Operational Data (AEOD). This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response, with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) a user interface for streamlined evaluation of ASP events. Future plans for the ASP models are also presented
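    Quantification of the kind these models support reduces, at its simplest, to combining minimal cutsets. A sketch with invented basic events, showing the rare-event approximation next to the min-cut upper bound:

    ```python
    # Minimal sketch of cutset quantification: sum cutset products (rare-event
    # approximation) and compare with the min-cut upper bound.
    basic = {"DG-A": 1e-2, "DG-B": 1e-2, "CCF-DG": 1e-4, "OP-REC": 5e-2}
    cutsets = [("CCF-DG",), ("DG-A", "DG-B"), ("DG-A", "OP-REC")]

    def cutset_prob(cs):
        p = 1.0
        for e in cs:
            p *= basic[e]
        return p

    rare_event = sum(cutset_prob(cs) for cs in cutsets)

    # Min-cut upper bound: 1 - prod(1 - P(cutset)).
    complement = 1.0
    for cs in cutsets:
        complement *= 1.0 - cutset_prob(cs)
    min_cut_upper = 1.0 - complement

    print(rare_event, min_cut_upper)  # both ~7e-4 for these small probabilities
    ```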

  3. Tuning COCOMO-II for Software Process Improvement: A Tool Based Approach

    Directory of Open Access Journals (Sweden)

    SYEDA UMEMA HANI

    2016-10-01

    Full Text Available In order to compete in the international software development market, software organizations have to adopt internationally accepted software practices, i.e., standards like ISO (International Organization for Standardization) or CMMI (Capability Maturity Model Integration), in spite of having scarce resources and tools. The aim of this study is to develop a tool which could be used to present an actual picture of the benefits of Software Process Improvement to software development companies. While a few tools are available to assist in making such predictions, they are too expensive and do not cover datasets that reflect the cultural behavior of software development organizations in developing countries. Extending our previously reported research, which quantified the benefits of SDPI (Software Development Process Improvement) for Pakistani software development organizations, this research used sixty-two datasets from three different software development organizations against the set of metrics used in COCOMO-II (Constructive Cost Model 2000). It derived a verifiable equation for calculating the ISF (Ideal Scale Factor) and tuned the COCOMO-II model to provide prediction capability for SDPI benefit-measurement classes such as ESCP (Effort, Schedule, Cost, and Productivity). This research contributes to the software industry by giving a reliable and low-cost mechanism for generating prediction models with high prediction accuracy. Hopefully, this study will help software organizations use this tool not only to predict ESCP but also to predict the exact impact of SDPI.
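    For context, the post-architecture effort equation that such tuning adjusts is PM = A * KSLOC^E * prod(EM), with E = B + 0.01 * sum(SF). The sketch below uses the published COCOMO II.2000 calibration constants A = 2.94 and B = 0.91; the scale factors, effort multipliers, and size are illustrative, not the paper's datasets.

    ```python
    # Minimal sketch of the COCOMO-II (2000) post-architecture effort equation.
    A, B = 2.94, 0.91  # published calibration constants

    scale_factors = [3.72, 3.04, 4.24, 3.29, 4.68]  # PREC, FLEX, RESL, TEAM, PMAT (assumed ratings)
    effort_multipliers = [1.10, 0.95, 1.00]         # e.g. RELY, PCAP, TOOL (assumed)

    E = B + 0.01 * sum(scale_factors)
    ksloc = 40.0
    pm = A * ksloc ** E
    for em in effort_multipliers:
        pm *= em

    print(f"E = {E:.3f}, effort = {pm:.1f} person-months")
    ```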

  4. A model-based software development methodology for high-end automotive components

    NARCIS (Netherlands)

    Ravanan, Mahmoud

    2014-01-01

    This report provides a model-based software development methodology for high-end automotive components. The V-model is used as a process model throughout the development of the software platform. It offers a framework that simplifies the relation between requirements, design, implementation,

  5. A process improvement model for software verification and validation

    Science.gov (United States)

    Callahan, John; Sabolish, George

    1994-01-01

    We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and space station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.

  6. Creating a simulation model of software testing using Simulink package

    Directory of Open Access Journals (Sweden)

    V. M. Dubovoi

    2016-12-01

    Full Text Available Determining a solution model of software testing that allows prediction of both the whole process and its specific stages is a pressing need in the IT industry. The article focuses on solving this problem. The aim of the article is to predict the time and improve the quality of software testing. Analysis of the software testing process shows that it can be classed among branched cyclic technological processes, because it is cyclical with decision-making at control operations. The investigation draws on the authors' previous works and a software testing process method based on a Markov model. The proposed method enables prediction for each software module, which leads to better decision-making for each controlled suboperation of all processes. A Simulink simulation model demonstrates the implementation and verification of the results of the proposed technique. The results of the research have been practically implemented in the IT industry.
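    One way to read such a branched cyclic process as a Markov model: treat "test" and "rework" as transient states, treat passing as absorption, and obtain expected visits from the fundamental matrix. The transition probability and durations below are assumptions for illustration, not the paper's calibrated values.

    ```python
    # Minimal sketch: expected duration of a cyclic test/rework process from the
    # fundamental matrix N = (I - Q)^-1 of an absorbing Markov chain.
    import numpy as np

    p_pass = 0.7
    # Transient states: 0 = test, 1 = rework; absorbing state: done.
    Q = np.array([[0.0, 1 - p_pass],    # test -> rework on failure
                  [1.0, 0.0]])          # rework always returns to test
    N = np.linalg.inv(np.eye(2) - Q)    # expected visits to each transient state

    hours = np.array([4.0, 6.0])        # mean duration of test / rework (assumed)
    expected_duration = N[0] @ hours    # starting from "test"
    print(N[0], expected_duration)      # visits ~ [1.43, 0.43], ~8.29 h
    ```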

  7. Modeling of system reliability Petri nets with aging tokens

    International Nuclear Information System (INIS)

    Volovoi, V.

    2004-01-01

    The paper addresses the dynamic modeling of degrading and repairable complex systems. Emphasis is placed on the convenience of modeling for the end user, with special attention being paid to the modeling part of a problem, which is considered to be decoupled from the choice of solution algorithms. Depending on the nature of the problem, these solution algorithms can include discrete event simulation or numerical solution of the differential equations that govern underlying stochastic processes. Such modularity allows a focus on the needs of system reliability modeling and tailoring of the modeling formalism accordingly. To this end, several salient features are chosen from the multitude of existing extensions of Petri nets, and a new concept of aging tokens (tokens with memory) is introduced. The resulting framework provides for flexible and transparent graphical modeling with excellent representational power that is particularly suited for system reliability modeling with non-exponentially distributed firing times. The new framework is compared with existing Petri-net approaches and other system reliability modeling techniques such as reliability block diagrams and fault trees. The relative differences are emphasized and illustrated with several examples, including modeling of load sharing, imperfect repair of pooled items, multiphase missions, and damage-tolerant maintenance. Finally, a simple implementation of the framework using discrete event simulation is described
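    The aging-token idea, that a token carries its accumulated operating time across repairs so that firing delays are drawn conditionally on that age, can be sketched outside any Petri net machinery. The Weibull parameters, the half-rejuvenation repair rule, and the repair time below are invented for illustration.

    ```python
    # Minimal sketch of a token with memory: residual life is sampled from a
    # Weibull distribution conditional on the age the token has accumulated.
    import math
    import random

    random.seed(2)
    BETA, ETA = 2.0, 1000.0  # Weibull shape/scale for time to failure (assumed)

    def residual_life(age):
        """Inverse-CDF sample of remaining life given survival to `age`."""
        u = random.random()
        return ETA * ((age / ETA) ** BETA - math.log(u)) ** (1 / BETA) - age

    age, clock = 0.0, 0.0
    for cycle in range(3):
        ttf = residual_life(age)   # firing delay conditioned on token age
        clock += ttf
        age += ttf                 # the token remembers accumulated age
        print(f"cycle {cycle}: failure at t = {clock:.0f} h, token age {age:.0f} h")
        age *= 0.5                 # imperfect repair restores only part of the life
        clock += 24.0              # fixed repair duration (assumed)
    ```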

  8. Sustainable, Reliable Mission-Systems Architecture

    Science.gov (United States)

    O'Neil, Graham; Orr, James K.; Watson, Steve

    2007-01-01

    A mission-systems architecture, based on a highly modular infrastructure utilizing open-standards hardware and software interfaces as the enabling technology, is essential for affordable and sustainable space exploration programs. This mission-systems architecture requires (a) robust communication between heterogeneous systems, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, and verification of systems, and (e) minimal sustaining engineering. This paper proposes such an architecture. Lessons learned from the Space Shuttle program and Earth-bound complex engineered systems are applied to define the model. Technology projections reaching out 5 years are made to refine model details.

  9. Modeling of Some Chaotic Systems with AnyLogic Software

    Directory of Open Access Journals (Sweden)

    Biljana Zlatanovska

    2018-05-01

    Full Text Available Chaotic systems are well known in the theory of chaos. In our paper, the following chaotic systems are analyzed: the Rossler, Chua, and Chen systems. All of them are systems of ordinary differential equations. Their graphical representation as continuous dynamical systems is already known from the mathematical software packages Mathematica and MATLAB. By computer simulations, via examples, the systems are analyzed using the AnyLogic software. We would like to present how ordinary differential equations are modeled with AnyLogic, one of the simplest software packages to use.
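    Outside AnyLogic, the same systems can be reproduced with any ODE integrator. A sketch of the Rossler system with the classic parameters a = b = 0.2, c = 5.7 and arbitrarily chosen initial conditions:

    ```python
    # Minimal sketch: integrate the Rossler equations with SciPy.
    from scipy.integrate import solve_ivp

    def rossler(t, state, a=0.2, b=0.2, c=5.7):
        x, y, z = state
        return [-y - z, x + a * y, b + z * (x - c)]

    sol = solve_ivp(rossler, (0.0, 100.0), [1.0, 1.0, 1.0], max_step=0.01)
    x, y, z = sol.y
    print(x.min(), x.max())  # the trajectory settles onto the Rossler attractor
    ```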

  10. Development of a software for the curimeter model cdn102

    International Nuclear Information System (INIS)

    Dotres Llera, Armando

    2001-01-01

    The characteristics of the software for the Curimeter Model CD-N102 developed at CEADEN are presented. The software consists of two main parts: basic software for the electrometer block and application software for a PC. The basic software is totally independent of the PC and performs all the basic functions of the measurement process. The application software is optional and offers a friendlier interface and additional options to the user. Among these are the possibility of keeping a statistical record of the measurements in a database, creating labels, and introducing new isotopes and calibrating them. A more detailed explanation of both parts of the software is given

  11. Modeling and managing risk early in software development

    Science.gov (United States)

    Briand, Lionel C.; Thomas, William M.; Hetmanski, Christopher J.

    1993-01-01

    In order to improve the quality of the software development process, we need to be able to build empirical multivariate models based on data collectable early in the software process. These models need to be both useful for prediction and easy to interpret, so that remedial actions may be taken in order to control and optimize the development process. We present an automated modeling technique which can be used as an alternative to regression techniques. We show how it can be used to facilitate the identification and aid the interpretation of the significant trends which characterize 'high risk' components in several Ada systems. Finally, we evaluate the effectiveness of our technique based on a comparison with logistic regression based models.

  12. Scientists' Needs in Software Ecosystem Modeling

    NARCIS (Netherlands)

    Jansen, Slinger; Handoyo, Eko; Alves, C.

    2015-01-01

    Currently the landscape of software ecosystem modeling methods and languages is like Babel after the fall of the tower: there are many methods and languages available, and interchanging data between researchers and organizations that actively govern their ecosystem is practically impossible. The

  13. Software Testing and Verification in Climate Model Development

    Science.gov (United States)

    Clune, Thomas L.; Rood, RIchard B.

    2011-01-01

    Over the past 30 years most climate models have grown from relatively simple representations of a few atmospheric processes to complex multi-disciplinary systems. Computer infrastructure over that period has gone from punch-card mainframes to modern parallel clusters. Model implementations have become complex, brittle, and increasingly difficult to extend and maintain. Existing verification processes for model implementations rely almost exclusively upon some combination of detailed analysis of output from full climate simulations and system-level regression tests. In addition to being quite costly in terms of developer time and computing resources, these testing methodologies are limited in the types of defects that can be detected, isolated, and diagnosed. Mitigating these weaknesses of coarse-grained testing with finer-grained "unit" tests has been perceived as cumbersome and counter-productive. In the commercial software sector, recent advances in tools and methodology have led to a renaissance for systematic fine-grained testing. We discuss the availability of analogous tools for scientific software and examine the benefits that similar testing methodologies could bring to climate modeling software. We describe the unique challenges faced when testing complex numerical algorithms and suggest techniques to minimize and/or eliminate the difficulties.

  14. ARC Software and Models

    Science.gov (United States)

    Most research conducted at the ARC produces software code and methodologies that are transferred to TARDEC and industry partners. These

  15. Software Engineering Support of the Third Round of Scientific Grand Challenge Investigations: Earth System Modeling Software Framework Survey

    Science.gov (United States)

    Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn; Zukor, Dorothy (Technical Monitor)

    2002-01-01

    One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document surveys numerous software frameworks for potential use in Earth science modeling. Several frameworks are evaluated in depth, including Parallel Object-Oriented Methods and Applications (POOMA), Cactus (from the relativistic physics community), Overture, the Goddard Earth Modeling System (GEMS), the National Center for Atmospheric Research Flux Coupler, and the UCLA/UCB Distributed Data Broker (DDB). Frameworks evaluated in less detail include ROOT, Parallel Application Workspace (PAWS), and the Advanced Large-Scale Integrated Computational Environment (ALICE). A host of other frameworks and related tools are referenced in this context. The frameworks are evaluated individually and also compared with each other.

  16. Composable Framework Support for Software-FMEA Through Model Execution

    Science.gov (United States)

    Kocsis, Imre; Patricia, Andras; Brancati, Francesco; Rossi, Francesco

    2016-08-01

    Performing Failure Modes and Effects Analysis (FMEA) during software architecture design is becoming a basic requirement in an increasing number of domains; however, due to the lack of standardized early-design-phase model execution, classic SW-FMEA approaches carry significant risks and are human-effort-intensive even in processes that use Model-Driven Engineering. Recently, modeling languages with standardized executable semantics have emerged. Building on earlier results, this paper describes framework support for generating executable error propagation models from such models during software architecture design. The approach carries the promise of increased precision, decreased risk, and more automated execution for SW-FMEA during dependability-critical system development.

  17. A software complex intended for constructing applied models and meta-models on the basis of mathematical programming principles

    Directory of Open Access Journals (Sweden)

    Михаил Юрьевич Чернышов

    2013-12-01

    Full Text Available A software complex (SC) elaborated by the authors on the basis of the language LMPL, representing a software tool intended for the synthesis of applied software models and meta-models constructed on the basis of mathematical programming (MP) principles, is described. LMPL provides for an explicit form of declarative representation of MP-models, presumes automatic construction and transformation of models, and offers the capability of adding external software packages. The following software versions of the SC have been implemented: (1) an SC intended for representing the process of choosing an optimal hydroelectric power plant model (on the principles of meta-modeling) and (2) an SC intended for representing the logic-sense relations between the models of a set of discourse formations in the discourse meta-model.

  18. A Study on Quantitative Assessment of Design Specification of Reactor Protection System Software Using Bayesian Belief Networks

    International Nuclear Information System (INIS)

    Eom, H. S.; Kang, H. G.; Chang, S. C.; Park, G. Y.; Kwon, K. C.

    2007-02-01

    This report proposes a method that can produce a quantitative reliability estimate of safety-critical software for PSA by making use of Bayesian Belief Networks (BBN). BBNs have generally been used to model uncertain systems in many research fields. The proposed method was constructed by utilizing a BBN that can combine the qualitative and quantitative evidence relevant to the reliability of safety-critical software and then infer a conclusion in a formal and quantitative way. A case study was also carried out with the proposed method to assess the quality of the software design specification of safety-critical software that will be embedded in a reactor protection system. The V&V results of the software were used as inputs for the BBN model. The calculation results of the BBN model showed that its conclusion is mostly equivalent to that of the V&V expert for a given input data set. The method and the results of the case study will be utilized in the PSA of NPPs. The method can also support the V&V expert's decision-making process in controlling further V&V activities

  19. Customizing Standard Software as a Business Model in the IT Industry

    DEFF Research Database (Denmark)

    Kautz, Karlheinz; Rab, Sameen M.; Sinnet, Michael

    2011-01-01

    This research studies a new business model in the IT industry, the customization of standard software as the sole foundation for a software company’s earnings. Based on a theoretical background which combines the concepts of inter-organizational networks and open innovation we provide an interpre......This research studies a new business model in the IT industry, the customization of standard software as the sole foundation for a software company’s earnings. Based on a theoretical background which combines the concepts of inter-organizational networks and open innovation we provide...... an interpretive case study of a small software company which customizes a standard product. We investigate the company’s interactions with a large global software company which is the producer of the original software product and with other companies which are involved in the software customization process. We...... primarily on complex, formal partnerships, in which also opportunistic behavior occurs and where informal relations are invaluable sources of knowledge. In addition, the original software producer’s view and treatment of these companies has a vital impact on the customizing company’s practice which...

  20. Quantification of Safety-Critical Software Test Uncertainty

    International Nuclear Information System (INIS)

    Khalaquzzaman, M.; Cho, Jaehyun; Lee, Seung Jun; Jung, Wondea

    2015-01-01

    The method conservatively assumes that the failure probability of a software product for the untested inputs is 1, and that the failure probability turns to 0 after successful testing of all test cases. However, in reality a chance of failure remains due to test uncertainty. Some studies have been carried out to identify the test attributes that affect test quality. Cao discussed the testing effort, testing coverage, and testing environment. Management of the test uncertainties has also been discussed in the literature. In this study, the test uncertainty has been considered in estimating the software failure probability, because the software testing process is inherently uncertain. A reliability estimate of software is very important for a probabilistic safety analysis of a digital safety-critical system of NPPs. This study focused on the estimation of the probability of a software failure that considers the uncertainty in software testing. In our study, a BBN has been employed as an example model for software test uncertainty quantification. Although it can be argued that direct expert elicitation of test uncertainty is much simpler than BBN estimation, the BBN approach provides more insight and a basis for uncertainty estimation
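    One simple alternative to the conservative 0-or-1 assumption is a Bayesian update that keeps the post-test failure probability strictly positive. The beta-binomial sketch below is only a stand-in for the paper's BBN model; the prior and test counts are assumptions.

    ```python
    # Minimal sketch: with a Beta(a, b) prior and n failure-free, independent
    # demands, the posterior mean failure probability stays above zero.
    def posterior_failure_prob(n_tests, n_failures=0, a=1.0, b=1.0):
        """Posterior mean of the demand failure probability."""
        return (a + n_failures) / (a + b + n_tests)

    for n in (100, 1_000, 10_000):
        print(n, posterior_failure_prob(n))  # equals 1/(n+2) for the uniform prior
    ```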

  1. Improving the reliability of nuclear reprocessing by application of computers and mathematical modelling

    International Nuclear Information System (INIS)

    Gabowitsch, E.; Trauboth, H.

    1982-01-01

    After a brief survey of the present and expected future state of nuclear energy utilization, which should demonstrate the significance of nuclear reprocessing, safety and reliability aspects of nuclear reprocessing plants (NRP) are considered. Then, the principal possibilities of modern computer technology, including computer systems architecture and application-oriented software, for improving reliability and availability are outlined. In this context, two information systems being developed at the Nuclear Research Center Karlsruhe (KfK) are briefly described. For the design evaluation of certain areas of a large NRP, mathematical methods and computer-aided tools developed, used, or being designed by KfK are discussed. In conclusion, future research to be pursued in information processing and applied mathematics in support of the reliable operation of NRPs is proposed. (Auth.)

  2. Generating Protocol Software from CPN Models Annotated with Pragmatics

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars M.; Kindler, Ekkart

    2013-01-01

    and verify protocol software, but limited work exists on using CPN models of protocols as a basis for automated code generation. The contribution of this paper is a method for generating protocol software from a class of CPN models annotated with code generation pragmatics. Our code generation method...... consists of three main steps: automatically adding so-called derived pragmatics to the CPN model, computing an abstract template tree, which associates pragmatics with code templates, and applying the templates to generate code which can then be compiled. We illustrate our method using a unidirectional...

  3. Software life cycle dynamic simulation model: The organizational performance submodel

    Science.gov (United States)

    Tausworthe, Robert C.

    1985-01-01

    The submodel structure of a software life cycle dynamic simulation model is described. The software process is divided into seven phases, each with product, staff, and funding flows. The model is subdivided into an organizational response submodel, a management submodel, a management influence interface, and a model analyst interface. The concentration here is on the organizational response model, which simulates the performance characteristics of a software development subject to external and internal influences. These influences emanate from two sources: the model analyst interface, which configures the model to simulate the response of an implementing organization subject to its own internal influences, and the management submodel that exerts external dynamic control over the production process. A complete characterization is given of the organizational response submodel in the form of parameterized differential equations governing product, staffing, and funding levels. The parameter values and functions are allocated to the two interfaces.
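    The governing-equation flavor of the organizational response submodel can be sketched as a small system of coupled rates for product, staff, and funds. The productivity, cost, and staffing-ramp numbers below are invented for illustration, not the submodel's calibrated parameters.

    ```python
    # Minimal sketch: product grows with applied staff productivity while funds
    # are drawn down by labor cost; staffing ramps up under management control.
    from scipy.integrate import solve_ivp

    PRODUCTIVITY = 2.0  # product units per staff-month (assumed)
    COST_RATE = 15.0    # funds consumed per staff-month (assumed)
    TARGET = 400.0      # product units to deliver (assumed)

    def rates(t, y):
        product, staff, funds = y
        working = staff if product < TARGET and funds > 0 else 0.0
        d_product = PRODUCTIVITY * working
        d_staff = 0.5 if t < 6 else 0.0   # ramp staffing up for six months
        d_funds = -COST_RATE * working
        return [d_product, d_staff, d_funds]

    sol = solve_ivp(rates, (0.0, 36.0), [0.0, 2.0, 3000.0], max_step=0.1)
    print(sol.y[:, -1])  # product, staff, funds at month 36
    ```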

  4. Software Piracy Detection Model Using Ant Colony Optimization Algorithm

    Science.gov (United States)

    Astiqah Omar, Nor; Zakuan, Zeti Zuryani Mohd; Saian, Rizauddin

    2017-06-01

    The Internet enables information to be accessible anytime and anywhere. This scenario creates an environment whereby information can easily be copied. Easy access to the internet is one of the factors which contribute towards piracy in Malaysia as well as the rest of the world. A survey on software piracy conducted in 2013 by the BSA Global Software Survey on the compliance gap found that 43 percent of the software installed on PCs around the world was not properly licensed, and the commercial value of the unlicensed installations worldwide was reported to be $62.7 billion. Piracy can happen anywhere, including universities. Malaysia, as well as other countries in the world, is faced with issues of piracy committed by students in universities. Piracy in universities concerns acts of stealing intellectual property. It can be in the form of software piracy, music piracy, movie piracy, and piracy of intellectual materials such as books, articles, and journals. This scenario affects the owners of intellectual property, as their property is in jeopardy. This study has developed a classification model for detecting software piracy. The model was developed using a swarm intelligence algorithm called the Ant Colony Optimization algorithm. The data for training were collected by a study conducted in Universiti Teknologi MARA (Perlis). Experimental results show that the model's detection accuracy is better than that of the J48 algorithm.

  5. Optimal Release Time and Sensitivity Analysis Using a New NHPP Software Reliability Model with Probability of Fault Removal Subject to Operating Environments

    OpenAIRE

    Kwang Yoon Song; In Hong Chang; Hoang Pham

    2018-01-01

    With the latest technological developments, the software industry is at the center of the fourth industrial revolution. In today’s complex and rapidly changing environment, where software applications must be developed quickly and easily, software must be focused on rapidly changing information technology. The basic goal of software engineering is to produce high-quality software at low cost. However, because of the complexity of software systems, software development can be time consum...

  6. Learning reliable manipulation strategies without initial physical models

    Science.gov (United States)

    Christiansen, Alan D.; Mason, Matthew T.; Mitchell, Tom M.

    1990-01-01

    A description is given of a robot, possessing limited sensory and effectory capabilities but no initial model of the effects of its actions on the world, that acquires such a model through exploration, practice, and observation. By acquiring an increasingly correct model of its actions, it generates increasingly successful plans to achieve its goals. In an apparently nondeterministic world, achieving reliability requires the identification of reliable actions and a preference for using such actions. Furthermore, by selecting its training actions carefully, the robot can significantly improve its learning rate.

  7. Developing a Collection of Composable Data Translation Software Units to Improve Efficiency and Reproducibility in Ecohydrologic Modeling Workflows

    Science.gov (United States)

    Olschanowsky, C.; Flores, A. N.; FitzGerald, K.; Masarik, M. T.; Rudisill, W. J.; Aguayo, M.

    2017-12-01

    Dynamic models of the spatiotemporal evolution of water, energy, and nutrient cycling are important tools to assess impacts of climate and other environmental changes on ecohydrologic systems. These models require spatiotemporally varying environmental forcings like precipitation, temperature, humidity, windspeed, and solar radiation. These input data originate from a variety of sources, including global and regional weather and climate models, global and regional reanalysis products, and geostatistically interpolated surface observations. Data translation measures, often subsetting in space and/or time and transforming and converting variable units, represent a seemingly mundane but critical step in the application workflows. Translation steps can introduce errors, misrepresent data, slow execution, and interrupt data provenance. We leverage a workflow that subsets a large regional dataset derived from the Weather Research and Forecasting (WRF) model and prepares inputs to the ParFlow integrated hydrologic model to demonstrate the impact of translation tool software quality on scientific workflow results and performance. We propose that such workflows will benefit from a community-approved collection of data transformation components. The components should be self-contained, composable units of code. This design pattern enables automated parallelization and software verification, improving performance and reliability. Ensuring that individual translation components are self-contained and target minute tasks increases reliability. The small code size of each component enables effective unit and regression testing. The components can be automatically composed for efficient execution. An efficient data translation framework should be written to minimize data movement. Composing components within a single streaming process reduces data movement. Each component will typically have a low arithmetic intensity, meaning that it requires about the same number of
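
    The design pattern advocated here — small, self-contained, composable translation units executed in a single streaming process — can be sketched directly. The component names and the unit conversion below are illustrative, not the actual WRF-to-ParFlow tooling:

    ```python
    # Minimal sketch of composable, self-contained translation units chained in a
    # single streaming process so intermediate data never touches disk. The
    # component names and the unit conversion are illustrative only.

    def subset_time(records, t0, t1):
        """Keep records whose timestamp falls in [t0, t1)."""
        return (r for r in records if t0 <= r["time"] < t1)

    def convert_temperature_k_to_c(records):
        """Convert temperature units as records stream through."""
        for r in records:
            yield dict(r, temperature=r["temperature"] - 273.15)

    def compose(source, *stages):
        """Automatically chain components; each stage is independently testable."""
        stream = source
        for stage in stages:
            stream = stage(stream)
        return stream

    forcings = [{"time": t, "temperature": 280.0 + t} for t in range(6)]
    pipeline = compose(iter(forcings),
                       lambda s: subset_time(s, 2, 5),
                       convert_temperature_k_to_c)
    for rec in pipeline:
        print(rec)
    ```

    Because each stage is a plain generator over records, every component can be unit-tested in isolation and the whole chain streams without intermediate files.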

  8. The Relationship of Personality Models and Development Tasks in Software Engineering

    OpenAIRE

    Wiesche, Manuel; Krcmar, Helmut

    2015-01-01

    Understanding the personality of software developers has been an ongoing topic in software engineering research. Software engineering researchers have applied different theoretical models to understand software developers' personalities in order to better predict software developers' performance, orchestrate more effective and motivated teams, and identify the person that fits a certain job best. However, empirical results were found to be contradictory, challenging validity, and missing guidance for IT perso...

  9. Experiences in Teaching a Graduate Course on Model-Driven Software Development

    Science.gov (United States)

    Tekinerdogan, Bedir

    2011-01-01

    Model-driven software development (MDSD) aims to support the development and evolution of software intensive systems using the basic concepts of model, metamodel, and model transformation. In parallel with the ongoing academic research, MDSD is more and more applied in industrial practices. After being accepted both by a broad community of…

  10. The impact of software quality characteristics on healthcare outcome: a literature review.

    Science.gov (United States)

    Aghazadeh, Sakineh; Pirnejad, Habibollah; Moradkhani, Alireza; Aliev, Alvosat

    2014-01-01

    The aim of this study was to discover the effect of software quality characteristics on healthcare quality and efficiency indicators. Through a systematic literature review, we selected and analyzed 37 original research papers to investigate the impact of software indicators (coming from the standard ISO 9126 quality characteristics and sub-characteristics) on some important healthcare outcome indicators, and finally ranked these software indicators. The results showed that the software characteristics usability, reliability and efficiency were mostly favored in the studies, indicating their importance. On the other hand, user satisfaction, quality of patient care, clinical workflow efficiency, providers' communication and information exchange, patient satisfaction and care costs were among the healthcare outcome indicators frequently evaluated in relation to the mentioned software characteristics. Logistic regression was the most common assessment methodology, and Confirmatory Factor Analysis and Structural Equation Modeling were performed to test the structural models' fit. The software characteristics were considered to impact the healthcare outcome indicators through other intermediate factors (variables).

  11. The Software Management Environment (SME)

    Science.gov (United States)

    Valett, Jon D.; Decker, William; Buell, John

    1988-01-01

    The Software Management Environment (SME) is a research effort designed to utilize the past experiences and results of the Software Engineering Laboratory (SEL) and to incorporate this knowledge into a tool for managing projects. SME provides the software development manager with the ability to observe, compare, predict, analyze, and control key software development parameters such as effort, reliability, and resource utilization. The major components of the SME, the architecture of the system, and examples of the functionality of the tool are discussed.

  12. Effective Software Engineering Leadership for Development Programs

    Science.gov (United States)

    Cagle West, Marsha

    2010-01-01

    Software is a critical component of systems ranging from simple consumer appliances to complex health, nuclear, and flight control systems. The development of quality, reliable, and effective software solutions requires the incorporation of effective software engineering processes and leadership. Processes, approaches, and methodologies for…

  13. Software Assurance: Five Essential Considerations for Acquisition Officials

    National Research Council Canada - National Science Library

    Polydys, Mary L; Wisseman, Stan

    2007-01-01

    .... A recent Chief Information Officer (CIO) Executive Council poll indicated that the top two most important attributes of software are reliable software that functions as promised and software free from security vulnerabilities and malicious code...

  14. RELIABILITY MODELING BASED ON INCOMPLETE DATA: OIL PUMP APPLICATION

    Directory of Open Access Journals (Sweden)

    Ahmed HAFAIFA

    2014-07-01

    Full Text Available The reliability analysis for industrial maintenance is now increasingly demanded by industrialists around the world. Indeed, modern manufacturing facilities are equipped with data acquisition and monitoring systems, and these systems generate a large volume of data. These data can be used to infer future decisions affecting the state of the exploited equipment. However, in most practical cases the data used in reliability modelling are incomplete or not reliable. In this context, to analyze the reliability of an oil pump, this work proposes to examine and treat the incomplete, incorrect or aberrant data in the reliability modeling. The objective of this paper is to propose a suitable methodology for replacing the incomplete data using a regression method.
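
    As a hedged illustration of the proposed idea — replacing incomplete data using a regression method — the sketch below regresses observed times-between-failures on failure order and substitutes predictions for the missing entries; the data and the linear-trend assumption are purely illustrative:

    ```python
    import numpy as np

    # Hedged sketch of regression-based replacement of incomplete reliability
    # data: fit a linear trend on the complete cases, then impute the gaps.
    # Times-between-failures (hours) with missing entries marked as NaN.
    tbf = np.array([310., 295., np.nan, 260., 255., np.nan, 230., 228.])
    idx = np.arange(len(tbf))

    mask = ~np.isnan(tbf)
    slope, intercept = np.polyfit(idx[mask], tbf[mask], deg=1)  # complete cases only
    tbf_filled = np.where(mask, tbf, slope * idx + intercept)   # impute the gaps

    print(np.round(tbf_filled, 1))
    ```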

  15. Multi-physics fluid-structure interaction modelling software

    CSIR Research Space (South Africa)

    Malan, AG

    2008-11-01

    Full Text Available -structure interaction modelling software AG MALAN AND O OXTOBY CSIR Defence, Peace, Safety and Security, PO Box 395, Pretoria, 0001 Email: amalan@csir.co.za – www.csir.co.za Internationally leading aerospace company Airbus sponsored key components... of the development of the CSIR fluid-structure interaction (FSI) software. Below are extracts from their evaluation of the developed technology: “The field of FSI covers a massive range of engineering problems, each with their own multi-parameter, individual...

  16. Estimation of some stochastic models used in reliability engineering

    International Nuclear Information System (INIS)

    Huovinen, T.

    1989-04-01

    The work aims to study the estimation of some stochastic models used in reliability engineering. In reliability engineering, continuous probability distributions have been used as models for the lifetimes of technical components. We consider here the following distributions: exponential, 2-mixture exponential, conditional exponential, Weibull, lognormal and gamma. The maximum likelihood method is used to estimate distributions from observed data, which may be either complete or censored. We consider models based on homogeneous Poisson processes, such as the gamma-Poisson and lognormal-Poisson models, for the analysis of failure intensity. We also study a beta-binomial model for the analysis of failure probability. The parameters of three models are estimated by the method of matching moments, and in the case of the gamma-Poisson and beta-binomial models also by the maximum likelihood method. A great deal of the mathematical and statistical problems that arise in reliability engineering can be solved by utilizing point processes. Here we consider the statistical analysis of non-homogeneous Poisson processes to describe the failure phenomena of a set of components with a Weibull intensity function. We use the method of maximum likelihood to estimate the parameters of the Weibull model. A common cause failure can seriously reduce the reliability of a system. We consider a binomial failure rate (BFR) model as an application of marked point processes for modelling common cause failures in a system. The parameters of the binomial failure rate model are estimated with the maximum likelihood method.
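
    As one concrete instance of the estimation problems surveyed here, the sketch below computes the maximum likelihood fit of a Weibull lifetime model to right-censored data; the lifetimes are synthetic, not the report's datasets:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Maximum likelihood fit of a Weibull lifetime model to possibly right-
    # censored data. Failures contribute the density f(t), survivors contribute
    # the survival function S(t) = exp(-(t/eta)^beta).

    t = np.array([105., 230., 320., 410., 480., 560., 700., 800.])
    censored = np.array([0, 0, 0, 1, 0, 0, 1, 1], dtype=bool)  # 1 = still running

    def neg_log_lik(params):
        beta, eta = params
        if beta <= 0 or eta <= 0:
            return np.inf
        z = (t / eta) ** beta
        log_f = np.log(beta / eta) + (beta - 1) * np.log(t / eta) - z  # failures
        log_s = -z                                                     # survivors
        return -(log_f[~censored].sum() + log_s[censored].sum())

    res = minimize(neg_log_lik, x0=(1.0, 500.0), method="Nelder-Mead")
    beta_hat, eta_hat = res.x
    print(f"shape beta={beta_hat:.2f}, scale eta={eta_hat:.0f} h")
    ```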

  17. Software Estimation: Developing an Accurate, Reliable Method

    Science.gov (United States)

    2011-08-01

    ...based and size-based estimates is able to accurately plan, launch, and execute on schedule. Authors: Bob Sinclair (NAWCWD), Chris Rickets (NAWCWD), Brad Hodgins... Office by Carnegie Mellon University. SMPSP and SMTSP are service marks of Carnegie Mellon University. References: 1. Rickets, Chris A., “A TSP Software Maintenance... Life Cycle”, CrossTalk, March 2005. 2. Koch, Alan S., “TSP Can Be the Building Blocks for CMMI”, CrossTalk, March 2005. 3. Hodgins, Brad; Rickets...

  18. Reliability and validity of a novel Kinect-based software program for measuring posture, balance and side-bending.

    Science.gov (United States)

    Grooten, Wilhelmus Johannes Andreas; Sandberg, Lisa; Ressman, John; Diamantoglou, Nicolas; Johansson, Elin; Rasmussen-Barr, Eva

    2018-01-08

    Clinical examinations are subjective and often show a low validity and reliability. Objective and highly reliable quantitative assessments are available in laboratory settings using 3D motion analysis, but these systems are too expensive to use for simple clinical examinations. Qinematic™ is an interactive movement analysis system based on the Kinect camera and is an easy-to-use clinical measurement system for assessing posture, balance and side-bending. The aim of the study was to test the test-retest reliability and construct validity of Qinematic™ in a healthy population, and to calculate the minimal clinical differences for the variables of interest. A further aim was to identify the discriminative validity of Qinematic™ in people with low-back pain (LBP). We performed a test-retest reliability study (n = 37) with around 1 week between the occasions, a construct validity study (n = 30) in which Qinematic™ was tested against a 3D motion capture system, and a discriminative validity study, in which a group of people with LBP (n = 20) was compared to healthy controls (n = 17). We tested a large range of psychometric properties of 18 variables in three sections: posture (head and pelvic position, weight distribution), balance (sway area and velocity in single- and double-leg stance), and side-bending. The majority of the variables in the posture and balance sections showed poor/fair reliability and validity, whereas side-bending showed reliability of ICC = 0.898 and excellent validity (r = 0.943), and Qinematic™ could differentiate between people with LBP and healthy individuals (p = 0.012). This paper shows that a novel software program (Qinematic™) based on the Kinect camera for measuring balance, posture and side-bending has poor psychometric properties, indicating that the variables on balance and posture should not be used for monitoring individual changes over time or in research. Future research on the dynamic tasks of Qinematic™ is warranted.

  19. Reliable RANSAC Using a Novel Preprocessing Model

    Directory of Open Access Journals (Sweden)

    Xiaoyan Wang

    2013-01-01

    Full Text Available Geometric assumption and verification with RANSAC has become a crucial step in establishing correspondences between local features, due to its wide applications in biomedical feature analysis and vision computing. However, conventional RANSAC is very time-consuming due to redundant sampling times, especially when dealing with numerous matching pairs. This paper presents a novel preprocessing model to explore a reduced set with reliable correspondences from the initial matching dataset. Both geometric model generation and verification are carried out on this reduced set, which leads to considerable speedups. Afterwards, this paper proposes a reliable RANSAC framework using the preprocessing model, which was implemented and verified using Harris and SIFT features, respectively. Compared with traditional RANSAC, experimental results show that our method is more efficient.

  20. Software test and validation of wireless sensor nodes used in nuclear power plant

    International Nuclear Information System (INIS)

    Deng Changjian; Chen Dongyi; Zhang Heng

    2015-01-01

    The software test and validation of wireless sensor nodes is one of the key approaches to improving or guaranteeing the reliability of wireless network applications in nuclear power plants (NPPs). First, to support software test validation, some concepts are defined quantitatively, for example the robustness, reliability, and security of software. Then the development tools and simulators for discrete-event-driven operating systems are compared, in order to present a robustness, reliability and security software test approach based on input-output functions. Some simple preliminary test results are given to show that different development software can obtain almost the same measurement and communication results, although special-purpose application software may differ from normal applications. (author)

  1. Finding upper bounds for software failure probabilities - experiments and results

    International Nuclear Information System (INIS)

    Kristiansen, Monica; Winther, Rune

    2005-09-01

    This report looks into some aspects of using Bayesian hypothesis testing to find upper bounds for software failure probabilities. In the first part, the report evaluates the Bayesian hypothesis testing approach for finding upper bounds for failure probabilities of single software components. The report shows how different choices of prior probability distributions for a software component's failure probability influence the number of tests required to obtain adequate confidence in a software component. In the evaluation, both the effect of the shape of the prior distribution and the effect of one's prior confidence in the software component were investigated. In addition, different choices of prior probability distributions are discussed based on their relevance in a software context. In the second part, ideas on how the Bayesian hypothesis testing approach can be extended to assess systems consisting of multiple software components are given. One of the main challenges when assessing systems consisting of multiple software components is to include dependency aspects in the software reliability models. However, different types of failure dependencies between software components must be modelled differently. Identifying different types of failure dependencies is therefore an important condition for choosing a prior probability distribution which correctly reflects one's prior belief in the probability of software components failing dependently. In this report, software components include both general in-house software components and pre-developed software components (e.g. COTS, SOUP, etc). (Author)
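
    The core calculation can be illustrated with a hedged sketch: with a Beta(a, b) prior on the failure probability p, n consecutive failure-free tests yield a Beta(a, b + n) posterior, and one can count the tests needed to reach a target confidence. The priors and targets below are illustrative, not the report's cases:

    ```python
    from scipy.stats import beta

    # With a Beta(a, b) prior on the failure probability p, n failure-free
    # tests give a Beta(a, b + n) posterior. Count how many tests are needed
    # before we are 99% confident that p < 1e-3, for two different priors.

    def tests_needed(a, b, p0=1e-3, confidence=0.99, n_max=100_000):
        for n in range(n_max):
            if beta.cdf(p0, a, b + n) >= confidence:
                return n
        return None

    for a, b in [(1.0, 1.0),      # uniform prior: no prior knowledge
                 (0.5, 50.0)]:    # optimistic prior: some prior confidence
        print(f"Beta({a}, {b}) prior -> {tests_needed(a, b)} failure-free tests")
    ```

    Running this shows how strongly the choice of prior influences the required amount of testing, which is exactly the sensitivity the report investigates.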

  2. The Five 'R's' for Developing Trusted Software Frameworks to increase confidence in, and maximise reuse of, Open Source Software.

    Science.gov (United States)

    Fraser, Ryan; Gross, Lutz; Wyborn, Lesley; Evans, Ben; Klump, Jens

    2015-04-01

    benchmark cases. This will be achieved by linking the code in the software framework to peer review forums such as Mozilla Science or appropriate journals (e.g. Geoscientific Model Development) to assist users in knowing which codes to trust. 3) Referencing will be accomplished by linking the Software Framework to groups such as Figshare or ImpactStory that help disseminate and measure the impact of scientific research, including program code. 4) The Run component will draw on information supplied in the registration process, benchmark cases described in the review, and relevant information to instantiate the scientific code on the selected environment. 5) The Repeat component will tap into existing provenance workflow engines that will automatically capture information relating to a particular run of that software, including identification of all input and output artefacts, and all elements and transactions within that workflow. The proposed trusted software framework will enable users to rapidly discover and access reliable code, reduce the time to deploy it and greatly facilitate sharing, reuse and reinstallation of code. Properly designed, it could enable scaling out to massively parallel systems and be accessed nationally/internationally for multiple use cases, including supercomputer centres, cloud facilities, and local computers.

  3. Beyond Reactive Planning: Self Adaptive Software and Self Modeling Software in Predictive Deliberation Management

    National Research Council Canada - National Science Library

    Lenahan, Jack; Nash, Michael P; Charles, Phil

    2008-01-01

    .... We present the following hypothesis: predictive deliberation management using self-adapting and self-modeling software will be required to provide mission planning adjustments after the start of a mission...

  4. Failure database and tools for wind turbine availability and reliability analyses. The application of reliability data for selected wind turbines

    DEFF Research Database (Denmark)

    Kozine, Igor; Christensen, P.; Winther-Jensen, M.

    2000-01-01

    The objective of this project was to develop and establish a database for collecting reliability and reliability-related data, for assessing the reliability of wind turbine components and subsystems and wind turbines as a whole, as well as for assessing wind turbine availability while ranking the contributions at both the component and system levels. The project resulted in a software package combining a failure database with programs for predicting WTB availability and the reliability of all the components and systems, especially the safety system. The report consists of a description of the theoretical... similar safety systems. The database was established with the Microsoft Access Database Management System; the software for reliability and availability assessments was created with Visual Basic.

  5. A software product certification model

    NARCIS (Netherlands)

    Heck, P.M.; Klabbers, M.D.; van Eekelen, Marko

    2010-01-01

    Certification of software artifacts offers organizations more certainty and confidence about software. Certification of software helps software sales, acquisition, and can be used to certify legislative compliance or to achieve acceptable deliverables in outsourcing. In this article, we present a

  6. Quantitative metal magnetic memory reliability modeling for welded joints

    Science.gov (United States)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to detect welded joints. However, load levels, environmental magnetic fields, and measurement noise make MMM data dispersive and bring difficulty to quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens are tested along the longitudinal and horizontal lines by a TSC-2M-8 instrument in tensile fatigue experiments. X-ray testing is carried out synchronously to verify the MMM results. It is found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum Kvs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of Kvs is investigated, which shows that Kvs obeys a Gaussian distribution. So Kvs is the suitable MMM parameter to establish a reliability model of welded joints. At last, an original quantitative MMM reliability model is first presented based on the improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases with decreasing residual life ratio T, and the maximal error between the prediction reliability degree R1 and the verification reliability degree R2 is 9.15%. This presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
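
    The classical stress-strength interference calculation underlying the model can be sketched as follows, assuming Gaussian stress and strength; the paper's improved SSI variant and its Kvs-based parameters are not reproduced, and all numbers are illustrative:

    ```python
    from math import sqrt
    from statistics import NormalDist

    # Classical stress-strength interference with Gaussian stress S and
    # strength C: the joint survives when C > S, so
    # R = Phi((mu_C - mu_S) / sqrt(sd_C^2 + sd_S^2)).

    def ssi_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
        margin = mu_strength - mu_stress
        sd = sqrt(sd_strength ** 2 + sd_stress ** 2)
        return NormalDist().cdf(margin / sd)

    print(f"R = {ssi_reliability(600.0, 40.0, 450.0, 60.0):.4f}")
    ```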

  7. Object-Oriented Technology-Based Software Library for Operations of Water Reclamation Centers

    Science.gov (United States)

    Otani, Tetsuo; Shimada, Takehiro; Yoshida, Norio; Abe, Wataru

    SCADA systems in water reclamation centers have been constructed based on hardware and software that each manufacturer produced according to their own designs. Even though this approach used to be effective in realizing real-time and reliable execution, it is an obstacle to reducing the cost of system construction and maintenance. A promising solution to this problem is to set specifications that can be used commonly. In terms of software, the information model approach has been adopted in SCADA systems in other fields, such as telecommunications and power systems. An information model is a piece of software specification that describes a physical or logical object to be monitored. In this paper, we propose information models for the operations of water reclamation centers, which have not existed before. In addition, we show the feasibility of the information models in terms of common use and processing performance.

  8. Methodologies of the hardware reliability prediction for PSA of digital I and C systems

    International Nuclear Information System (INIS)

    Jung, H. S.; Sung, T. Y.; Eom, H. S.; Park, J. K.; Kang, H. G.; Park, J.

    2000-09-01

    Digital I and C systems are being used widely in the non-safety systems of NPPs, and they are expanding their applications to safety-critical systems. The regulatory body is shifting its policy to risk-based regulation and may require Probabilistic Safety Assessment for digital I and C systems. But there is no established reliability prediction methodology for digital I and C systems, including both software and hardware, yet. This survey report covers many reliability prediction methods for electronic systems from the hardware point of view. Each method has both strong and weak points. This report provides the state of the art of prediction methods and focuses in depth on the Bellcore and MIL-HDBK-217F methods. The reliability analysis models are reviewed and discussed to help analysts. This report also includes the state of the art of software tools that support reliability prediction.

  9. Methodologies of the hardware reliability prediction for PSA of digital I and C systems

    Energy Technology Data Exchange (ETDEWEB)

    Jung, H. S.; Sung, T. Y.; Eom, H. S.; Park, J. K.; Kang, H. G.; Park, J

    2000-09-01

    Digital I and C systems are being used widely in the non-safety systems of NPPs, and they are expanding their applications to safety-critical systems. The regulatory body is shifting its policy to risk-based regulation and may require Probabilistic Safety Assessment for digital I and C systems. But there is no established reliability prediction methodology for digital I and C systems, including both software and hardware, yet. This survey report covers many reliability prediction methods for electronic systems from the hardware point of view. Each method has both strong and weak points. This report provides the state of the art of prediction methods and focuses in depth on the Bellcore and MIL-HDBK-217F methods. The reliability analysis models are reviewed and discussed to help analysts. This report also includes the state of the art of software tools that support reliability prediction.

  10. Computer Model to Estimate Reliability Engineering for Air Conditioning Systems

    International Nuclear Information System (INIS)

    Afrah Al-Bossly, A.; El-Berry, A.; El-Berry, A.

    2012-01-01

    Reliability engineering is used to predict the performance and optimize the design and maintenance of air conditioning systems. Air conditioning systems are exposed to a number of failures. Failures of an air conditioner such as failure to turn on, loss of cooling capacity, reduced output temperatures, loss of cool air supply and loss of air flow entirely can be due to a variety of problems with one or more components of an air conditioner or air conditioning system. Forecasting system failure rates is very important for maintenance. This paper focused on the reliability of air conditioning systems, using statistical distributions that are commonly applied in reliability settings: the standard (2-parameter) Weibull and Gamma distributions. After the distribution parameters had been estimated, reliability estimations and predictions were used for evaluations. To evaluate good operating condition in a building, the reliability of the air conditioning system that supplies conditioned air to the company's several departments was studied. This air conditioning system is divided into two parts, namely the main chilled water system and the ten air handling systems that serve the ten departments. In a chilled-water system the air conditioner cools water down to 40-45 °F (4-7 °C). The chilled water is distributed throughout the building in a piping system and connected to air conditioning cooling units wherever needed. Data analysis was done with the support of computer-aided reliability software; the Weibull and Gamma distributions indicated that the reliability of the systems equals 86.012% and 77.7%, respectively. A comparison between the two important families of distribution functions, namely the Weibull and Gamma families, was studied. It was found that the Weibull method performed better for decision making.

  11. Software: our quest for excellence. Honoring 50 years of software history, progress, and process

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-06-01

    The Software Quality Forum was established by the Software Quality Assurance (SQA) Subcommittee, which serves as a technical advisory group on software engineering and quality initiatives and issues for DOE's quality managers. The forum serves as an opportunity for all those involved in implementing SQA programs to meet and share ideas and concerns. Participation from managers, quality engineers, and software professionals provides an ideal environment for identifying and discussing issues and concerns. The interaction provided by the forum contributes to the realization of a shared goal--high quality software product. Topics include: testing, software measurement, software surety, software reliability, SQA practices, assessments, software process improvement, certification and licensing of software professionals, CASE tools, software project management, inspections, and management's role in ensuring SQA. The bulk of this document consists of vugraphs. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.

  12. MATHEMATICAL MODEL FOR SOFTWARE USABILITY AUTOMATED EVALUATION AND ASSURANCE

    Directory of Open Access Journals (Sweden)

    І. Гученко

    2011-04-01

    Full Text Available The subject of the research is software usability and the aim is the construction of a mathematical model for estimating and assuring a set level of usability. The methodology of structural analysis, methods of multicriterion optimization and decision-making theory, the method of convolution, and scientific methods of analysis and analogy are used in the research. The result of the work is a model for automated evaluation and assurance of software usability that allows not only estimating the current level of usability during every iteration of agile development but also managing the usability of the software products being created. The results can be used for the construction of automated support systems for managing software usability.

  13. Modern software approaches applied to a Hydrological model: the GEOtop Open-Source Software Project

    Science.gov (United States)

    Cozzini, Stefano; Endrizzi, Stefano; Cordano, Emanuele; Bertoldi, Giacomo; Dall'Amico, Matteo

    2017-04-01

    The GEOtop hydrological scientific package is an integrated hydrological model that simulates the heat and water budgets at and below the soil surface. It describes the three-dimensional water flow in the soil and the energy exchange with the atmosphere, considering the radiative and turbulent fluxes. Furthermore, it reproduces the highly non-linear interactions between the water and energy balance during soil freezing and thawing, and simulates the temporal evolution of snow cover, soil temperature and moisture. The core components of the package were presented in the 2.0 version (Endrizzi et al., 2014), which was released as a Free Software Open-source project. However, despite the high scientific quality of the project, a modern software engineering approach was still missing. Such weakness hindered its scientific potential and its use both as a standalone package and, more importantly, in an integrated way with other hydrological software tools. In this contribution we present our recent software re-engineering efforts to create a robust and stable scientific software package open to the hydrological community, easily usable by researchers and experts, and interoperable with other packages. The activity takes as a starting point the 2.0 version, scientifically tested and published. This version, together with several test cases based on recently published or available GEOtop applications (Cordano and Rigon, 2013, WRR; Kollet et al., 2016, WRR), provides the baseline code and a certain number of reference results as benchmarks. Comparison and scientific validation can then be performed for each software re-engineering activity performed on the package. To keep track of every single change, the package is published on its own GitHub repository geotopmodel.github.io/geotop/ under the GPL v3.0 license. A Continuous Integration mechanism by means of Travis-CI has been enabled on the GitHub repository on the master and main development branches. The usage of the CMake configuration tool

  14. 3D MODELLING BY LOW-COST RANGE CAMERA: SOFTWARE EVALUATION AND COMPARISON

    Directory of Open Access Journals (Sweden)

    R. Ravanelli

    2017-11-01

    Full Text Available The aim of this work is to present a comparison among three software applications currently available for the Occipital Structure Sensor™; all of these applications were developed for collecting 3D models of objects easily and in real time with this structured-light range camera. The SKANECT, itSeez3D and Scanner applications were thus tested: a DUPLO™ brick construction was scanned with the three applications and the obtained models were compared to the model virtually generated with a standard CAD software package, which served as reference. The results demonstrate that all the software applications are generally characterized by the same level of geometric accuracy, which amounts to very few millimetres. However, the itSeez3D software, which requires a payment of $7 to export each model, surely represents the best solution, both from the point of view of geometric accuracy and, mostly, at the level of color restitution. On the other hand, Scanner, which is free software, presents an accuracy comparable to that of itSeez3D. At the same time, though, the colors are often smoothed and not perfectly aligned with the corresponding parts of the model. Lastly, SKANECT is the software that generates the highest number of points, but it also has some issues with the rendering of the colors.

  15. Experiment research on cognition reliability model of nuclear power plant

    International Nuclear Information System (INIS)

    Zhao Bingquan; Fang Xiang

    1999-01-01

    The objective of the paper is to improve the operational reliability of operators in real nuclear power plants through simulation research on the cognition reliability of nuclear power plant operators. The research method is to use a nuclear power plant simulator as the research platform and, taking as reference the current international research model of human cognition reliability based on the three-parameter Weibull distribution, to develop a research model for Chinese nuclear power plant operators based on the two-parameter Weibull distribution. Using the two-parameter Weibull distribution research model of cognition reliability, experiments on the cognition reliability of nuclear power plant operators have been carried out. Compared with the results of other countries such as the USA and Hungary, the same results can be obtained, which benefits the safe operation of nuclear power plants.

  16. Reliability Engineering for Service Oriented Architectures

    Science.gov (United States)

    2013-02-01

    Glossary extracts: CORBA — Common Object Request Broker Architecture. Ecosystem — in software, an ecosystem is a set of applications and/or services that gradually build up over time... ESB — Enterprise Service Bus. Foreign — in an SOA context: any SOA, service or software which the owners of the calling software do not have control of, either... SOA — Service Oriented Architecture. SRE — Software Reliability Engineering. System Mode — many systems exhibit different modes of operation, e.g. the cockpit...

  17. Reliability in the Rasch Model

    Czech Academy of Sciences Publication Activity Database

    Martinková, Patrícia; Zvára, K.

    2007-01-01

    Roč. 43, č. 3 (2007), s. 315-326 ISSN 0023-5954 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : Cronbach's alpha * Rasch model * reliability Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.552, year: 2007 http://dml.cz/handle/10338.dmlcz/135776

  18. ERP Reliability Analysis (ERA) Toolbox: An open-source toolbox for analyzing the reliability of event-related brain potentials.

    Science.gov (United States)

    Clayson, Peter E; Miller, Gregory A

    2017-01-01

    Generalizability theory (G theory) provides a flexible, multifaceted approach to estimating score reliability. G theory's approach to estimating score reliability has important advantages over classical test theory that are relevant for research using event-related brain potentials (ERPs). For example, G theory does not require parallel forms (i.e., equal means, variances, and covariances), can handle unbalanced designs, and provides a single reliability estimate for designs with multiple sources of error. This monograph provides a detailed description of the conceptual framework of G theory using examples relevant to ERP researchers, presents the algorithms needed to estimate ERP score reliability, and provides a detailed walkthrough of newly-developed software, the ERP Reliability Analysis (ERA) Toolbox, that calculates score reliability using G theory. The ERA Toolbox is open-source, Matlab software that uses G theory to estimate the contribution of the number of trials retained for averaging, group, and/or event types on ERP score reliability. The toolbox facilitates the rigorous evaluation of psychometric properties of ERP scores recommended elsewhere in this special issue. Copyright © 2016 Elsevier B.V. All rights reserved.
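
    The ERA Toolbox itself is Matlab software; as a hedged illustration of the underlying G-theory computation (not the toolbox's code), the sketch below estimates variance components for a persons-by-trials design and shows how the G coefficient grows with the number of trials retained for averaging, using simulated scores:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Simulated ERP scores for a persons x trials design: true person effects
    # plus trial-level noise.
    n_p, n_t = 30, 40
    person = rng.normal(0, 2.0, (n_p, 1))
    scores = person + rng.normal(0, 4.0, (n_p, n_t))

    # Mean squares from a two-way ANOVA without replication.
    grand = scores.mean()
    ms_p = n_t * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
    resid = scores - scores.mean(1, keepdims=True) - scores.mean(0) + grand
    ms_pt = (resid ** 2).sum() / ((n_p - 1) * (n_t - 1))

    var_p = max((ms_p - ms_pt) / n_t, 0.0)   # person variance component
    var_pt = ms_pt                           # person x trial (+error) component

    for n in (5, 10, 20, 40):
        g = var_p / (var_p + var_pt / n)     # G coefficient for n-trial averages
        print(f"n_trials={n:2d}  G={g:.2f}")
    ```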

  19. A reliability-risk modelling of nuclear rad-waste facilities

    International Nuclear Information System (INIS)

    Lehmann, P.H.; El-Bassioni, A.A.

    1975-01-01

    Rad-waste disposal systems of nuclear power sites are designed and operated to collect, delay, contain, and concentrate radioactive wastes from reactor plant processes such that on-site and off-site exposures to radiation are well below permissible limits. To assist the designer in achieving minimum release/exposure goals, a computerized reliability-risk model has been developed to simulate the rad-waste system. The objectives of the model are to furnish a practical tool for quantifying the effects of changes in system configuration, operation, and equipment, and for the identification of weak segments in the system design. Primarily, the model comprises a marriage of system analysis, reliability analysis, and release-risk assessment. Provisions have been included in the model to permit the optimization of the system design subject to constraints on cost and rad-releases. The system analysis phase involves the preparation of a physical and functional description of the rad-waste facility accompanied by the formation of a system tree diagram. The reliability analysis phase embodies the formulation of appropriate reliability models and the collection of model parameters. Release-risk assessment constitutes the analytical basis whereupon further system and reliability analyses may be warranted. Release-risk represents the potential for release of radioactivity and is defined as the product of an element's unreliability at time, t, and the radioactivity available for release in time interval, Δt. A computer code (RARISK) has been written to simulate the tree diagram of the rad-waste system. Reliability and release-risk results have been generated for cases which examined the process flow paths of typical rad-waste systems, the effects of repair and standby, the variations of equipment failure and repair rates, and changes in system configurations. The essential feature of this model is that a complex system like the rad-waste facility can be easily decomposed into its
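
    The release-risk definition quoted above translates directly into code. A minimal sketch, assuming exponential component unreliability and hypothetical failure rates and radionuclide inventories:

    ```python
    import numpy as np

    # Release-risk as defined in the abstract: the product of an element's
    # unreliability at time t and the radioactivity available for release in
    # the interval delta-t. Exponential unreliability is assumed; element
    # names, failure rates and inventories are hypothetical.

    elements = {             # lambda (1/h), activity available in delta-t (Ci)
        "holdup_tank":   (2e-5, 120.0),
        "demineralizer": (5e-5,  40.0),
        "gas_decay_bed": (1e-5, 300.0),
    }

    t = 8760.0  # one year of operation, hours
    for name, (lam, activity) in elements.items():
        unreliability = 1.0 - np.exp(-lam * t)
        risk = unreliability * activity
        print(f"{name:14s} Q(t)={unreliability:.3f}  release-risk={risk:7.2f} Ci")
    ```

    Ranking the elements by this product is one simple way to identify the weak segments of the system design that the model is intended to expose.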

  20. Global Software Engineering: A Software Process Approach

    Science.gov (United States)

    Richardson, Ita; Casey, Valentine; Burton, John; McCaffery, Fergal

    Our research has shown that many companies are struggling with the successful implementation of global software engineering, due to temporal, cultural and geographical distance, which causes a range of factors to come into play. For example, cultural, project management and communication difficulties continually cause problems for software engineers and project managers. While the implementation of efficient software processes can be used to improve the quality of the software product, published software process models do not cater explicitly for the recent growth in global software engineering. Our thesis is that global software engineering factors should be included in software process models to ensure their continued usefulness in global organisations. Based on extensive global software engineering research, we have developed a software process, Global Teaming, which includes specific practices and sub-practices. The purpose is to ensure that requirements for successful global software engineering are stipulated so that organisations can ensure successful implementation of global software engineering.

  1. Evaluating Sustainability Models for Interoperability through Brokering Software

    Science.gov (United States)

    Pearlman, Jay; Benedict, Karl; Best, Mairi; Fyfe, Sue; Jacobs, Cliff; Michener, William; Nativi, Stefano; Powers, Lindsay; Turner, Andrew

    2016-04-01

    Sustainability of software and research support systems is an element of innovation that is not often discussed. Yet, sustainment is essential if we expect research communities to make the time investment to learn and adopt new technologies. As the Research Data Alliance (RDA) is developing new approaches to interoperability, the question of uptake and sustainability is important. Brokering software sustainability is one of the areas that is being addressed in RDA. The Business Models Team of the Research Data Alliance Brokering Governance Working Group examined several support models proposed to promote the long-term sustainability of brokering middleware. The business model analysis includes examination of funding source, implementation frameworks and challenges, and policy and legal considerations. Results of this comprehensive analysis highlight advantages and disadvantages of the various models with respect to the specific requirements for brokering services. We offer recommendations based on the outcomes of this analysis that suggest that hybrid funding models present the most likely avenue to long term sustainability.

  2. Mining software specifications methodologies and applications

    CERN Document Server

    Lo, David

    2011-01-01

    An emerging topic in software engineering and data mining, specification mining tackles software maintenance and reliability issues that cost economies billions of dollars each year. The first unified reference on the subject, Mining Software Specifications: Methodologies and Applications describes recent approaches for mining specifications of software systems. Experts in the field illustrate how to apply state-of-the-art data mining and machine learning techniques to address software engineering concerns. In the first set of chapters, the book introduces a number of studies on mining finite

  3. Multiobjective Reliable Cloud Storage with Its Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Xiyang Liu

    2016-01-01

    Full Text Available Information abounds in all fields of real life; it is often recorded as digital data in computer systems and treated as a kind of increasingly important resource. Its increasing volume causes great difficulties in both storage and analysis. Massive data storage in cloud environments has significant impacts on the quality of service (QoS) of the systems, which is becoming an increasingly challenging problem. In this paper, we propose a multiobjective optimization model for reliable data storage in clouds by considering both the cost and the reliability of the storage service simultaneously. In the proposed model, the total cost is analyzed to be composed of storage space occupation cost, data migration cost, and communication cost. According to the analysis of the storage process, transmission reliability, equipment stability, and software reliability are taken into account in the storage reliability evaluation. To solve the proposed multiobjective model, a Constrained Multiobjective Particle Swarm Optimization (CMPSO) algorithm is designed. Finally, experiments are designed to validate the proposed model and its PSO solution algorithm. In the experiments, the proposed model is tested in cooperation with 3 storage strategies. Experimental results show that the proposed model is effective. The experimental results also demonstrate that the proposed model can perform much better in combination with proper file splitting methods.
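
    The paper's constrained CMPSO is not reproduced here, but a plain weighted-sum PSO over a toy version of the cost/reliability trade-off conveys the idea; the per-replica survival probability, costs, and weights are all illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy trade-off: choose a replica count k per file to balance storage cost
    # against the probability that all replicas are lost simultaneously.
    r, unit_cost, w_cost, w_rel = 0.95, 1.0, 0.01, 1.0

    def objective(k):
        k = np.clip(k, 1.0, 10.0)
        cost = unit_cost * k                      # storage + migration proxy
        unreliability = (1.0 - r) ** np.round(k)  # all replicas fail together
        return w_cost * cost + w_rel * unreliability

    # Standard (unconstrained) particle swarm optimization.
    n, iters = 20, 100
    x = rng.uniform(1, 10, n)
    v = np.zeros(n)
    pbest, pbest_f = x.copy(), objective(x)
    gbest = pbest[np.argmin(pbest_f)]

    for _ in range(iters):
        v = 0.7 * v + 1.5 * rng.random(n) * (pbest - x) \
                    + 1.5 * rng.random(n) * (gbest - x)
        x = np.clip(x + v, 1.0, 10.0)
        f = objective(x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]

    print(f"best replica count ~= {round(float(gbest))}")
    ```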

  4. Assessment of physical server reliability in multi cloud computing system

    Science.gov (United States)

    Kalyani, B. J. D.; Rao, Kolasani Ramchand H.

    2018-04-01

    Business organizations nowadays function with more than one cloud provider. By spreading cloud deployment across multiple service providers, space is created for competitive prices that minimize the burden on an enterprise's spending budget. To assess the software reliability of a multi-cloud application, a layered software reliability assessment paradigm is considered with three levels of abstraction: the application layer, the virtualization layer, and the server layer. The reliability of each layer is assessed separately and combined to get the reliability of the multi-cloud computing application. In this paper, we focus on how to assess the reliability of the server layer with the required algorithms, and explore the steps in the assessment of server reliability.

  5. Software sensors based on the grey-box modelling approach

    DEFF Research Database (Denmark)

    Carstensen, J.; Harremoës, P.; Strube, Rune

    1996-01-01

    In recent years the grey-box modelling approach has been applied to wastewater transportation and treatment. Grey-box models are characterized by the combination of deterministic and stochastic terms to form a model where all the parameters are statistically identifiable from the on-line measurements. With respect to the development of software sensors, the grey-box models possess two important features. Firstly, the on-line measurements can be filtered according to the grey-box model in order to remove noise deriving from the measuring equipment and controlling devices. Secondly, the grey-box model for the specific dynamics is identified. Similarly, an on-line software sensor for detecting the occurrence of backwater phenomena can be developed by comparing the dynamics of a flow measurement with a nearby level measurement. For treatment plants it is found that grey-box models applied to on-line measurements...
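
    The filtering feature described above can be illustrated with a scalar Kalman filter: a deterministic first-order model plus stochastic terms, used as a software sensor to remove measurement noise from an on-line signal. The model and noise levels are illustrative, not the identified wastewater dynamics:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # A software sensor in the grey-box spirit: deterministic first-order
    # dynamics plus stochastic process and measurement noise, filtered with a
    # scalar Kalman filter.
    a, q, r = 0.98, 0.05, 2.0           # state transition, process var, meas. var
    true_x, x_hat, p = 10.0, 0.0, 10.0  # true state, estimate, estimate variance

    for k in range(50):
        true_x = a * true_x + rng.normal(0.0, np.sqrt(q))   # latent dynamics
        y = true_x + rng.normal(0.0, np.sqrt(r))            # noisy on-line sensor
        # predict
        x_hat, p = a * x_hat, a * a * p + q
        # update
        gain = p / (p + r)
        x_hat, p = x_hat + gain * (y - x_hat), (1.0 - gain) * p

    print(f"final estimate {x_hat:.2f} vs true {true_x:.2f}")
    ```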

  6. Development of a Conservative Model Validation Approach for Reliable Analysis

    Science.gov (United States)

    2015-01-01

    From a draft conference paper (DETC2015-46982, CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA), "Development of a Conservative Model Validation Approach for Reliable Analysis": ...obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the... In Section 3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account...

  7. On integrating modeling software for application to total-system performance assessment

    International Nuclear Information System (INIS)

    Lewis, L.C.; Wilson, M.L.

    1994-05-01

    We examine the processes and methods used to facilitate collaboration in software development between two organizations at separate locations -- Lawrence Livermore National Laboratory (LLNL) in California and Sandia National Laboratories (SNL) in New Mexico. Our software development process integrated the efforts of these two laboratories. Software developed at LLNL to model corrosion and failure of waste packages, and subsequent releases of radionuclides, was incorporated as a source term into SNL's computer models for fluid flow and radionuclide transport through the geosphere.

  8. Test process for the safety-critical embedded software

    International Nuclear Information System (INIS)

    Sung, Ahyoung; Choi, Byoungju; Lee, Jangsoo

    2004-01-01

    Digitalization of nuclear Instrumentation and Control (I and C) systems requires high reliability of not only hardware but also software. A Verification and Validation (V and V) process is recommended for software reliability, but a more quantitative method, such as software testing, is also necessary. Most of the software in nuclear I and C systems is safety-critical embedded software. Safety-critical embedded software is specified, verified and developed according to the V and V process. Hence, two types of software testing techniques are necessary for the developed code. First, code-based software testing is required to examine the developed code. Second, after code-based software testing, testing of the software as affected by hardware is required to reveal interaction faults that may cause unexpected results. We call this testing of hardware's influence on software interaction testing. In the case of safety-critical embedded software, it is also important to consider the interaction between hardware and software. Even if no faults are detected when testing either hardware or software alone, combining these components may lead to unexpected results due to the interaction. In this paper, we propose a software test process that embraces test levels, test techniques, and the required test tasks and documents for safety-critical embedded software. We apply the proposed test process to safety-critical embedded software as a case study, and show its effectiveness. (author)

  9. Data Used in Quantified Reliability Models

    Science.gov (United States)

    DeMott, Diana; Kleinhammer, Roger K.; Kahn, C. J.

    2014-01-01

    Data is the crux of developing quantitative risk and reliability models; without data there is no quantification. The means to find and identify reliability data or failure numbers to quantify fault tree models during conceptual and design phases is often the quagmire that precludes early decision makers' consideration of potential risk drivers that will influence design. The analyst tasked with addressing a system or product's reliability depends on the availability of data. But where does that data come from, and what does it really apply to? Commercial industries, government agencies, and other international sources might have available data similar to what you are looking for. In general, internal and external technical reports and data based on similar and dissimilar equipment are often the first and only places checked. A common philosophy is "I have a number - that is good enough". But is it? Have you ever considered the difference in reported data from various federal datasets and technical reports when compared to similar sources from national and/or international datasets? Just how well does your data compare? Understanding how the reported data was derived, and interpreting the information and details associated with the data, is as important as the data itself.

  10. Availability simulation software adaptation to the IFMIF accelerator facility RAMI analyses

    Energy Technology Data Exchange (ETDEWEB)

    Bargalló, Enric, E-mail: enric.bargallo-font@upc.edu [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Sureda, Pere Joan [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Arroyo, Jose Manuel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Abal, Javier; De Blas, Alfredo; Dies, Javier; Tapia, Carlos [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Mollá, Joaquín; Ibarra, Ángel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain)

    2014-10-15

    Highlights: • The reason why IFMIF RAMI analyses needs a simulation is explained. • Changes, modifications and software validations done to AvailSim are described. • First IFMIF RAMI results obtained with AvailSim 2.0 are shown. • Implications of AvailSim 2.0 in IFMIF RAMI analyses are evaluated. - Abstract: Several problems were found when using generic reliability tools to perform RAMI (Reliability Availability Maintainability Inspectability) studies for the IFMIF (International Fusion Materials Irradiation Facility) accelerator. A dedicated simulation tool was necessary to model properly the complexity of the accelerator facility. AvailSim, the availability simulation software used for the International Linear Collider (ILC) became an excellent option to fulfill RAMI analyses needs. Nevertheless, this software needed to be adapted and modified to simulate the IFMIF accelerator facility in a useful way for the RAMI analyses in the current design phase. Furthermore, some improvements and new features have been added to the software. This software has become a great tool to simulate the peculiarities of the IFMIF accelerator facility allowing obtaining a realistic availability simulation. Degraded operation simulation and maintenance strategies are the main relevant features. In this paper, the necessity of this software, main modifications to improve it and its adaptation to IFMIF RAMI analysis are described. Moreover, first results obtained with AvailSim 2.0 and a comparison with previous results is shown.

  11. Availability simulation software adaptation to the IFMIF accelerator facility RAMI analyses

    International Nuclear Information System (INIS)

    Bargalló, Enric; Sureda, Pere Joan; Arroyo, Jose Manuel; Abal, Javier; De Blas, Alfredo; Dies, Javier; Tapia, Carlos; Mollá, Joaquín; Ibarra, Ángel

    2014-01-01

    Highlights: • The reason why IFMIF RAMI analyses needs a simulation is explained. • Changes, modifications and software validations done to AvailSim are described. • First IFMIF RAMI results obtained with AvailSim 2.0 are shown. • Implications of AvailSim 2.0 in IFMIF RAMI analyses are evaluated. - Abstract: Several problems were found when using generic reliability tools to perform RAMI (Reliability Availability Maintainability Inspectability) studies for the IFMIF (International Fusion Materials Irradiation Facility) accelerator. A dedicated simulation tool was necessary to model properly the complexity of the accelerator facility. AvailSim, the availability simulation software used for the International Linear Collider (ILC) became an excellent option to fulfill RAMI analyses needs. Nevertheless, this software needed to be adapted and modified to simulate the IFMIF accelerator facility in a useful way for the RAMI analyses in the current design phase. Furthermore, some improvements and new features have been added to the software. This software has become a great tool to simulate the peculiarities of the IFMIF accelerator facility allowing obtaining a realistic availability simulation. Degraded operation simulation and maintenance strategies are the main relevant features. In this paper, the necessity of this software, main modifications to improve it and its adaptation to IFMIF RAMI analysis are described. Moreover, first results obtained with AvailSim 2.0 and a comparison with previous results is shown

  12. A Technology-Neutral Role-Based Collaboration Model for Software Ecosystems

    DEFF Research Database (Denmark)

    Stanciulescu, Stefan; Rabiser, Daniela; Seidl, Christoph

    2016-01-01

    In large-scale software ecosystems, many developers contribute extensions to a common software platform. Due to the independent development efforts and the lack of a central steering mechanism, similar functionality may be developed multiple times by different developers. We tackle this problem by contributing a role-based collaboration model for software ecosystems to make such implicit similarities explicit and to raise awareness among developers during their ongoing efforts. We extract this model based on realization artifacts in a specific programming language located in a particular source code... efforts and information of ongoing development efforts. Finally, using the collaborations defined in the formalism we model real artifacts from Marlin, a firmware for 3D printers, and we show that for the selected scenarios, the five collaborations were sufficient to raise awareness and make implicit...

  13. Computer-aided software development

    International Nuclear Information System (INIS)

    Teichroew, D.; Hershey, E.A. III; Yamamoto, Y.

    1978-01-01

    In recent years, as the hardware cost/capability ratio has continued to decrease and as much of the routine data processing has been computerized, the emphasis in software development has shifted from just getting systems operational to maintaining existing systems, reducing duplication through integration, selectively adding new applications, making systems more usable, maintainable, portable and reliable, and improving the productivity of software developers. This paper examines a number of trends that are changing the methods by which software is being produced and used. (Auth.)

  14. Software engineering the mixed model for genome-wide association studies on large samples.

    Science.gov (United States)

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and the number of markers used for GWAS are increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
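
    For known variance components, the mixed-model machinery the abstract reviews reduces to solving Henderson's mixed model equations. The numpy sketch below shows that step on toy data; the kinship matrix and variance components are invented for illustration (real software estimates the variances by REML first), and this is not the API of any package the paper evaluates.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy data: n individuals, one fixed intercept, n random polygenic effects.
        n = 50
        X = np.ones((n, 1))                          # fixed-effects design (intercept)
        Z = np.eye(n)                                # random-effects design
        K = np.corrcoef(rng.normal(size=(n, 200)))   # illustrative kinship matrix
        y = rng.normal(size=n)

        # Assumed (not estimated) variance components.
        sigma_u2, sigma_e2 = 0.5, 1.0
        lam = sigma_e2 / sigma_u2

        # Henderson's mixed model equations:
        # [X'X      X'Z          ] [b]   [X'y]
        # [Z'X  Z'Z + inv(K)*lam ] [u] = [Z'y]
        Kinv = np.linalg.inv(K + 1e-6 * np.eye(n))   # jitter for numerical stability
        lhs = np.vstack([np.hstack([X.T @ X, X.T @ Z]),
                         np.hstack([Z.T @ X, Z.T @ Z + Kinv * lam])])
        rhs = np.concatenate([X.T @ y, Z.T @ y])

        sol = np.linalg.solve(lhs, rhs)
        b_hat, u_hat = sol[:1], sol[1:]              # BLUE of b, BLUP of u
        print("fixed effect:", b_hat, "first BLUPs:", u_hat[:3])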

  15. Operator reliability assessment system (OPERAS)

    International Nuclear Information System (INIS)

    Singh, A.; Spurgin, A.J.; Martin, T.; Welsch, J.; Hallam, J.W.

    1991-01-01

    OPERAS is a personal-computer (PC) based software package for collecting and processing simulator data on control-room operators' responses during requalification training scenarios. The data collection scheme is based upon an approach developed earlier during the EPRI Operator Reliability Experiments project. The software allows automated data collection from the simulator, thus minimizing the simulator staff time and resources needed to collect, maintain and process data useful in monitoring, assessing and enhancing crew reliability and effectiveness. The system is designed to provide the data and output information in the form of user-friendly charts, tables and figures for use by plant staff. OPERAS prototype software has been implemented at the Diablo Canyon (PWR) and Millstone (BWR) plants and is currently being used to collect operator response data. Data collected from the simulator include plant-state variables such as reactor pressure and temperature, malfunctions, times at which annunciators are activated, operator actions, and observations of crew behavior by training staff. The data and systematic analytical results provided by the OPERAS system can help the utility's probabilistic risk analysis (PRA) and training staff monitor and assess the reliability of their crews more objectively

  16. Reliability modeling of Clinch River breeder reactor electrical shutdown systems

    International Nuclear Information System (INIS)

    Schatz, R.A.; Duetsch, K.L.

    1974-01-01

    The initial simulation of the probabilistic properties of the Clinch River Breeder Reactor Plant (CRBRP) electrical shutdown systems is described. A model of the reliability (and availability) of the systems is presented, utilizing Success State and continuous-time, discrete-state Markov modeling techniques as significant elements of an overall reliability assessment process capable of demonstrating the achievement of program goals. This model is examined for its sensitivity to safe/unsafe failure rates, subsystem redundant configurations, test and repair intervals, monitoring by reactor operators, and the control exercised over system reliability by design modifications and the selection of system operating characteristics. (U.S.)
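
    A continuous-time, discrete-state Markov availability model of this kind can be solved by finding the stationary distribution of its generator matrix. The sketch below does this for a hypothetical duplicated shutdown channel; the failure and repair rates are invented for illustration and are not CRBRP values.

        import numpy as np

        lam, mu = 1e-3, 0.1   # illustrative failure/repair rates per hour

        # States: 0, 1 or 2 failed channels of a duplicated shutdown channel;
        # the system is unavailable only in state 2.
        Q = np.array([
            [-2 * lam,      2 * lam,  0.0],
            [      mu, -(mu + lam),   lam],
            [     0.0,          mu,   -mu],
        ])

        # Solve pi @ Q = 0 with sum(pi) = 1 by replacing one balance
        # equation with the normalization constraint.
        A = Q.T.copy()
        A[-1, :] = 1.0
        b = np.zeros(3)
        b[-1] = 1.0
        pi = np.linalg.solve(A, b)

        print("steady-state unavailability:", pi[2])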

  17. The Robust Software Feedback Model: An Effective Waterfall Model Tailoring for Space SW

    Science.gov (United States)

    Tipaldi, Massimo; Gotz, Christoph; Ferraguto, Massimo; Troiano, Luigi; Bruenjes, Bernhard

    2013-08-01

    The selection of the most suitable software life cycle process is of paramount importance in any space SW project. Despite being the preferred choice, the waterfall model is often exposed to some criticism. As matter of fact, its main assumption of moving to a phase only when the preceding one is completed and perfected (and under the demanding SW schedule constraints) is not easily attainable. In this paper, a tailoring of the software waterfall model (named “Robust Software Feedback Model”) is presented. The proposed methodology sorts out these issues by combining a SW waterfall model with a SW prototyping approach. The former is aligned with the SW main production line and is based on the full ECSS-E-ST-40C life-cycle reviews, whereas the latter is carried out in advance versus the main SW streamline (so as to inject its lessons learnt into the main streamline) and is based on a lightweight approach.

  18. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    OpenAIRE

    Chassin, David P.; Posse, Christian

    2004-01-01

    The reliability of electric transmission systems is examined using a scale-free model of network structure and failure propagation. The topologies of the North American eastern and western electric networks are analyzed to estimate their reliability based on the Barabasi-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using s...
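
    As a rough illustration of the approach, the following sketch builds a Barabasi-Albert graph with networkx and averages the damage from a toy probabilistic failure-propagation rule. The propagation model and all parameters are assumptions for demonstration; this does not reproduce the authors' power system reliability index.

        import random
        import networkx as nx

        def cascade_fraction(n=300, m=2, p_spread=0.3, trials=200, seed=1):
            """Average fraction of nodes lost after a random single failure
            spreads to neighbors with probability p_spread (toy rule)."""
            rng = random.Random(seed)
            G = nx.barabasi_albert_graph(n, m, seed=seed)
            total = 0
            for _ in range(trials):
                failed = {rng.randrange(n)}
                frontier = set(failed)
                while frontier:
                    nxt = set()
                    for u in frontier:
                        for v in G[u]:
                            if v not in failed and rng.random() < p_spread:
                                nxt.add(v)
                    failed |= nxt
                    frontier = nxt
                total += len(failed)
            return total / (trials * n)

        print(f"mean failed fraction: {cascade_fraction():.3f}")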

  19. Multiphysics software and the challenge to validating physical models

    International Nuclear Information System (INIS)

    Luxat, J.C.

    2008-01-01

    This paper discusses multiphysics software and the validation of physical models in the nuclear industry. The major challenge is to convert a general-purpose software package into a robust application-specific solution. This requires greater knowledge of the underlying solution techniques and of the limitations of the packages. Good user interfaces and neat graphics do not compensate for any deficiencies

  20. Research of real-time communication software

    Science.gov (United States)

    Li, Maotang; Guo, Jingbo; Liu, Yuzhong; Li, Jiahong

    2003-11-01

    Real-time communication plays an increasingly important role in our work, our lives and ocean monitoring. With the rapid progress of computer and communication techniques, as well as the miniaturization of communication systems, adaptable and reliable real-time communication software is needed in ocean monitoring systems. This paper describes research on real-time communication software based on a point-to-point satellite intercommunication system. An object-oriented design method is adopted, allowing video, audio and engineering data to be transmitted and received over the satellite channel. Several software modules were developed to realize point-to-point satellite intercommunication in the ocean monitoring system. The real-time communication software has three advantages. First, it increases the reliability of the point-to-point satellite intercommunication system. Second, several optional parameters can be configured, which greatly increases the flexibility of the system. Third, some hardware is replaced by software, which not only decreases the cost of the system and promotes the miniaturization of the communication system, but also increases the agility of the system.

  1. Coevolution of variability models and related software artifacts

    DEFF Research Database (Denmark)

    Passos, Leonardo; Teixeira, Leopoldo; Dinztner, Nicolas

    2015-01-01

    To understand how variability models coevolve with other artifact types, we study a large and complex real-world variant-rich software system: the Linux kernel. Specifically, we extract variability-coevolution patterns capturing changes in the variability model of the Linux kernel with subsequent changes in Makefiles and C source…

  2. Reliability modelling and simulation of switched linear system ...

    African Journals Online (AJOL)

    Reliability modelling and simulation of switched linear system control using temporal databases. ... design of fault-tolerant real-time switching systems control and modelling embedded micro-schedulers for complex systems maintenance.

  3. Reliability and maintainability

    International Nuclear Information System (INIS)

    1994-01-01

    Several communications in this conference are concerned with nuclear plant reliability and maintainability; their titles are: maintenance optimization of stand-by Diesels of 900 MW nuclear power plants; CLAIRE: an event-based simulation tool for software testing; reliability as one important issue within the periodic safety review of nuclear power plants; design of nuclear building ventilation by the means of functional analysis; operation characteristic analysis for a power industry plant park, as a function of influence parameters

  4. Models for reliability and management of NDT data

    International Nuclear Information System (INIS)

    Simola, K.

    1997-01-01

    In this paper the reliability of NDT measurements is approached from three directions: we model the flaw sizing performance and the probability of flaw detection, and we develop models to update the knowledge of the true flaw size based on sequential measurement results and the flaw sizing reliability model. In the discussed models, the measured flaw characteristics (depth, length) are assumed to be simple functions of the true characteristics plus random noise corresponding to measurement errors, and the models are based on logarithmic transforms. Models for Bayesian updating of the flaw size distributions were developed. Using these models, it is possible to take the prior information on the flaw size into account and combine it with the measured results. A Bayesian approach could contribute, e.g., to the definition of an appropriate combination of practical assessments and technical justifications in NDT system qualifications, as expressed by the European regulatory bodies
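
    Because the models are based on logarithmic transforms, a Bayesian update of the flaw-size distribution reduces to a conjugate normal update in log space. A minimal sketch follows; the prior and the sizing-error variance are invented numbers, not values from the paper.

        import math

        def update_lognormal(prior_mu, prior_var, measurements, meas_var):
            """Bayesian update of ln(true depth) ~ N(mu, var), given measured
            depths with ln(measured) = ln(true) + N(0, meas_var)."""
            mu, var = prior_mu, prior_var
            for d in measurements:
                y = math.log(d)
                k = var / (var + meas_var)      # Kalman-style gain
                mu = mu + k * (y - mu)
                var = (1.0 - k) * var
            return mu, var

        # Illustrative numbers: prior median depth 2 mm, sizing error sd 0.2
        # in log units (roughly +/-20%), two sequential measurements.
        mu, var = update_lognormal(math.log(2.0), 0.5**2, [2.6, 2.4], 0.2**2)
        print(f"posterior median depth: {math.exp(mu):.2f} mm, "
              f"log-sd {math.sqrt(var):.2f}")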

  5. Strategic bidding of generating units in competitive electricity market with considering their reliability

    International Nuclear Information System (INIS)

    Soleymani, S.; Ranjbar, A.M.; Shirani, A.R.

    2008-01-01

    In restructured power systems, generating units are typically scheduled based on the offers and bids to buy and sell energy and ancillary services (AS), subject to operational and security constraints. Generally, no account is taken of unit reliability in scheduling, so generating units have no incentive to improve their reliability. This paper proposes a new method to obtain the equilibrium points for the reliability and price bidding strategies of units when unit reliability is considered in the scheduling problem. The proposed methodology employs the supply function equilibrium (SFE) for modeling a unit's bidding strategy. Units change their bidding strategies and improve their reliability until Nash equilibrium points are obtained. The GAMS (general algebraic modeling system) language has been used to solve the market scheduling problem with the DICOPT optimization software and mixed-integer non-linear programming. (author)
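
    The equilibrium search can be illustrated with a much simpler stand-in: iterated best response in a Cournot duopoly, which converges to a Nash equilibrium in the same spirit as the paper's SFE iterations. All numbers are invented, the reliability term of the actual model is omitted, and this is not the paper's GAMS/DICOPT formulation.

        # Toy Nash-equilibrium search by iterated best response in a Cournot
        # duopoly (a stand-in for the paper's supply-function model).
        a, b = 100.0, 0.5        # inverse demand p = a - b*(q1 + q2)
        c = [20.0, 25.0]         # marginal costs of the two units

        q = [0.0, 0.0]
        for _ in range(100):
            q_new = [
                max(0.0, (a - c[0] - b * q[1]) / (2 * b)),  # best response of unit 1
                max(0.0, (a - c[1] - b * q[0]) / (2 * b)),  # best response of unit 2
            ]
            if max(abs(q_new[i] - q[i]) for i in range(2)) < 1e-9:
                q = q_new
                break
            q = q_new

        price = a - b * sum(q)
        print(f"equilibrium quantities {q[0]:.1f}, {q[1]:.1f}; price {price:.1f}")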

  6. Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?

    Science.gov (United States)

    Lum, Karen; Hihn, Jairus; Menzies, Tim

    2006-01-01

    While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models, both because of the large variance problem inherent in cost data and because they include far more effort multipliers than the data support. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem, a leading cause of cost model brittleness and instability.
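
    The rejection rules the paper calls for, pruning variables using out-of-sample error, can be sketched as backward elimination with leave-one-out validation. The example below uses synthetic data and plain least squares; it illustrates the idea only, not the authors' exact procedure.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic project data: log-effort depends on two of five candidate
        # predictors; the rest are noise.
        n, p = 30, 5
        X = rng.normal(size=(n, p))
        beta_true = np.array([0.9, 0.4, 0.0, 0.0, 0.0])
        y = X @ beta_true + rng.normal(scale=0.5, size=n)

        def loo_error(X, y):
            """Leave-one-out RMS error of ordinary least squares."""
            errs = []
            for i in range(len(y)):
                mask = np.arange(len(y)) != i
                coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
                errs.append(y[i] - X[i] @ coef)
            return float(np.sqrt(np.mean(np.square(errs))))

        cols = list(range(p))
        while len(cols) > 1:
            base = loo_error(X[:, cols], y)
            # Try dropping each remaining variable; keep the most helpful drop.
            trials = [(loo_error(X[:, [c for c in cols if c != d]], y), d)
                      for d in cols]
            best, drop = min(trials)
            if best >= base:
                break
            cols.remove(drop)

        print("retained variables:", cols)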

  7. Appliance of software engineering in development of nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Baek, Y. W.; Kim, H. C.; Yun, C. [Chungnam National Univ., Taejon (Korea, Republic of); Kim, B. R. [KINS, Taejon (Korea, Republic of)

    1999-10-01

    The application of computer technology in nuclear power plants is a necessary transformation, as in other industrial fields. Until now, however, the application of software technology has not been widespread because of its potential effect on safety in the nuclear field. Developing evaluation guides and regulation techniques that guarantee safety, reliability and quality assurance is an urgent theme. To meet these changes, techniques for development and operation should be enhanced to ensure the quality of software systems. In this study, we show the difference between the waterfall model and the software life cycle needed in the development of a nuclear power plant, and we propose the consistent framework needed in the development of the instrumentation and control system of a nuclear power plant.

  8. Appliance of software engineering in development of nuclear power plant

    International Nuclear Information System (INIS)

    Baek, Y. W.; Kim, H. C.; Yun, C.; Kim, B. R.

    1999-01-01

    The application of computer technology in nuclear power plants is a necessary transformation, as in other industrial fields. Until now, however, the application of software technology has not been widespread because of its potential effect on safety in the nuclear field. Developing evaluation guides and regulation techniques that guarantee safety, reliability and quality assurance is an urgent theme. To meet these changes, techniques for development and operation should be enhanced to ensure the quality of software systems. In this study, we show the difference between the waterfall model and the software life cycle needed in the development of a nuclear power plant, and we propose the consistent framework needed in the development of the instrumentation and control system of a nuclear power plant

  9. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    DEFF Research Database (Denmark)

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin

    2013-01-01

    Model-based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account for animal models. The study data is extracted from… be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by the animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally, on the other hand, the bivariate blending method was lighter than the single-step method.

  10. Staying in the Light: Evaluating Sustainability Models for Brokering Software

    Science.gov (United States)

    Powers, L. A.; Benedict, K. K.; Best, M.; Fyfe, S.; Jacobs, C. A.; Michener, W. K.; Pearlman, J.; Turner, A.; Nativi, S.

    2015-12-01

    The Business Models Team of the Research Data Alliance Brokering Governance Working Group examined several support models proposed to promote the long-term sustainability of brokering middleware. The business model analysis includes examination of funding sources, implementation frameworks and obstacles, and policy and legal considerations. The issue of sustainability is not unique to brokering software, and these models may be relevant to many applications. Results of this comprehensive analysis highlight advantages and disadvantages of the various models with respect to the specific requirements for brokering services. We offer recommendations based on the outcomes of this analysis, while recognizing that all software is part of an evolutionary process and has a lifespan.

  11. Radiation transport analyses for IFMIF design by the Attila software using a Monte-Carlo source model

    International Nuclear Information System (INIS)

    Arter, W.; Loughlin, M.J.

    2009-01-01

    Accurate calculation of the neutron transport through the shielding of the IFMIF test cell, defined by CAD, is a difficult task for several reasons. The ability of the powerful deterministic radiation transport code Attila to do this rapidly and reliably has been studied. Three models of increasing geometrical complexity were produced from the CAD using the CADfix software. A fourth model was produced to represent transport within the cell. The work also involved the conversion of the Vitenea-IEF database for high-energy neutrons into a format usable by Attila, and the conversion of a particle source specified in MCNP WSSA format into a form usable by Attila. The final model encompassed the entire test cell environment, with only minor modifications. On a state-of-the-art PC, Attila took approximately 3 h to perform the calculations, as a consequence of a careful mesh 'layering'. The results strongly suggest that Attila will be a valuable tool for modelling radiation transport in IFMIF and for similar problems

  12. Software hazard analysis for nuclear digital protection system by Colored Petri Net

    International Nuclear Information System (INIS)

    Bai, Tao; Chen, Wei-Hua; Liu, Zhen; Gao, Feng

    2017-01-01

    Highlights: •A dynamic hazard analysis method is proposed for safety-critical software. •The mechanism relies on Colored Petri Net. •Complex interactions between software and hardware are captured properly. •Common failure modes in software are identified effectively. -- Abstract: The software safety of a nuclear digital protection system is critical for the safety of nuclear power plants, as any software defect may result in severe damage. In order to ensure the safety and reliability of safety-critical digital system products and their applications, software hazard analysis is required during the software development lifecycle. A dynamic software hazard modeling and analysis method based on Colored Petri Net is proposed and applied to the safety-critical control software of a nuclear digital protection system in this paper. The analysis results show that the proposed method can explain the complex interactions between software and hardware and can identify the potential common cause failures in software properly and effectively. Moreover, the method can find the dominant software-induced hazards to safety control actions, which aids in increasing software quality.
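
    The firing semantics underlying such an analysis can be illustrated with an ordinary (uncolored) place/transition net, a drastic simplification of the Colored Petri Nets actually used in the paper. The places and transitions below are hypothetical.

        # Minimal place/transition net (ordinary, uncolored -- far simpler than
        # a Colored Petri Net) showing how a firing rule steps through
        # software/hardware interaction states.
        marking = {"sensor_ok": 1, "signal_sent": 0, "trip_cmd": 0}

        # transition name -> (tokens consumed, tokens produced)
        transitions = {
            "sample": ({"sensor_ok": 1}, {"signal_sent": 1}),
            "decide": ({"signal_sent": 1}, {"trip_cmd": 1}),
        }

        def enabled(name):
            pre, _ = transitions[name]
            return all(marking[p] >= k for p, k in pre.items())

        def fire(name):
            pre, post = transitions[name]
            for p, k in pre.items():
                marking[p] -= k
            for p, k in post.items():
                marking[p] += k

        for t in ["sample", "decide"]:
            if enabled(t):
                fire(t)
        print(marking)   # {'sensor_ok': 0, 'signal_sent': 0, 'trip_cmd': 1}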

  13. [Reliability of three dimensional resin model by rapid prototyping manufacturing and digital modeling].

    Science.gov (United States)

    Zeng, Fei-huang; Xu, Yuan-zhi; Fang, Li; Tang, Xiao-shan

    2012-02-01

    To describe a new technique for fabricating a 3D resin model by 3D reconstruction and rapid prototyping, and to analyze the precision of this method. An optical grating scanner was used to acquire the data of a silastic cavity block, and a digital dental cast was reconstructed from the data with the Geomagic Studio image-processing software. The final 3D reconstruction was saved in STL format. The 3D resin model was fabricated by fused deposition modeling and compared with the digital model and a gypsum model. The data of the three groups were statistically analyzed using the SPSS 16.0 software package. No significant difference was found among the gypsum model, digital dental cast and 3D resin model (P>0.05). Rapid prototyping manufacturing and digital modeling are helpful for dental information acquisition, treatment design and appliance manufacturing, and can improve communication between patients and doctors.

  14. Orthographic Software Modelling: A Novel Approach to View-Based Software Engineering

    Science.gov (United States)

    Atkinson, Colin

    The need to support multiple views of complex software architectures, each capturing a different aspect of the system under development, has been recognized for a long time. Even the very first object-oriented analysis/design methods, such as the Booch method and OMT, supported a number of different diagram types (e.g. structural, behavioral, operational), and subsequent methods such as Fusion, Kruchten's 4+1 views and the Rational Unified Process (RUP) have added many more views over time. Today's leading modeling languages, such as UML and SysML, are also oriented towards supporting different views (i.e. diagram types), each able to portray a different facet of a system's architecture. More recently, so-called enterprise architecture frameworks such as the Zachman Framework, TOGAF and RM-ODP have become popular. These add a whole set of new non-functional views to the views typically emphasized in traditional software engineering environments.

  15. A Reliability Based Model for Wind Turbine Selection

    Directory of Open Access Journals (Sweden)

    A.K. Rajeevan

    2013-06-01

    Full Text Available A wind turbine generator's output at a specific site depends on many factors, particularly the cut-in, rated and cut-out wind speed parameters. Hence power output varies from turbine to turbine. The objective of this paper is to develop a mathematical relationship between reliability and wind power generation. The analytical computation of monthly wind power is obtained from a Weibull statistical model using the cubic mean cube root of wind speeds. The reliability calculation is based on failure probability analysis. There are many different types of wind turbines commercially available in the market. From a reliability point of view, to get optimum reliability in power generation, it is desirable to select a wind turbine generator which is best suited for a site. The mathematical relationship developed in this paper can be used for site-matching turbine selection from a reliability point of view.
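
    The expected power output under a Weibull wind-speed model can be sketched by integrating a piecewise power curve against the Weibull density. This is a simplification of the paper's cubic mean cube root formulation, and all turbine parameters below are invented.

        import math

        def weibull_pdf(v, k, c):
            """Weibull wind-speed density with shape k and scale c."""
            return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

        def power_curve(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2.0):
            """Piecewise turbine power curve (MW); cubic rise below rated."""
            if v < v_in or v >= v_out:
                return 0.0
            if v >= v_rated:
                return p_rated
            return p_rated * (v**3 - v_in**3) / (v_rated**3 - v_in**3)

        def expected_power(k=2.0, c=7.5, dv=0.01, v_max=40.0):
            """Numerically integrate the power curve against the Weibull density."""
            return sum(power_curve(i * dv) * weibull_pdf(i * dv, k, c) * dv
                       for i in range(1, int(v_max / dv)))

        print(f"expected output: {expected_power():.3f} MW")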

  16. Free Open Source Software: Social Phenomenon, New Management, New Business Models

    Directory of Open Access Journals (Sweden)

    Žilvinas Jančoras

    2011-08-01

    Full Text Available In the paper, assumptions about the existence, development, financing and competition models of free open source software are presented. Free software is revealed as a social phenomenon and open source software as a technological and managerial innovation environment. The social and business interaction processes are analyzed. Article in Lithuanian

  17. Development of RBDGG Solver and Its Application to System Reliability Analysis

    International Nuclear Information System (INIS)

    Kim, Man Cheol

    2010-01-01

    For the purpose of making system reliability analysis easier and more intuitive, the RBDGG (Reliability Block Diagram with General Gates) methodology was introduced as an extension of the conventional reliability block diagram. The advantage of the RBDGG methodology is that the structure of an RBDGG model is very similar to the actual structure of the analyzed system, and therefore the modeling of a system for reliability and unavailability analysis becomes very intuitive and easy. The main idea behind the development of the RBDGG methodology is similar to that behind the development of the RGGG (Reliability Graph with General Gates) methodology, which is an extension of the conventional reliability graph. The newly proposed methodology is now implemented in a software tool, RBDGG Solver, developed as a Win32 console application. RBDGG Solver receives information on the failure modes and failure probabilities of each component in the system, along with the connection structure and connection logic among the components. Based on the received information, RBDGG Solver automatically generates a system reliability analysis model for the system and then provides the analysis results. In this paper, the application of RBDGG Solver to the reliability analysis of an example system and verification of the calculation results are provided in order to demonstrate how RBDGG Solver is used for system reliability analysis
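
    In spirit, such a solver evaluates block structures like series, parallel and k-out-of-n under an independence assumption. The sketch below shows only that baseline computation; RBDGG's general gates and its actual algorithm are more expressive, and the example diagram and probabilities are invented.

        from itertools import combinations

        def series(ps):
            """All independent blocks must work."""
            r = 1.0
            for p in ps:
                r *= p
            return r

        def parallel(ps):
            """At least one independent block must work."""
            q = 1.0
            for p in ps:
                q *= (1.0 - p)
            return 1.0 - q

        def k_of_n(k, ps):
            """Probability that at least k of the independent blocks work."""
            n = len(ps)
            total = 0.0
            for m in range(k, n + 1):
                for idx in combinations(range(n), m):
                    term = 1.0
                    for i in range(n):
                        term *= ps[i] if i in idx else (1.0 - ps[i])
                    total += term
            return total

        # Illustrative diagram: two redundant pumps in series with a
        # 2-out-of-3 sensor group and a single controller.
        r_system = series([parallel([0.95, 0.95]),
                           k_of_n(2, [0.9, 0.9, 0.9]),
                           0.99])
        print(f"system reliability: {r_system:.4f}")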

  18. Development of core technology for KNGR system design; development of quantitative reliability evaluation methodologies of KNGR digital I and C components

    Energy Technology Data Exchange (ETDEWEB)

    Seong, Poong Hyun; Choi, Jong Gyun; Kim, Ung Soo; Kim, Jong Hyun; Kim, Man Cheol; Lee, Seung Jun; Lee, Young Je; Ha, Jun Soo [Korea Advanced Institute of Science and Technology, Taejeon (Korea)

    2002-03-01

    For digital systems to be applied in the nuclear industry, with its unique conservatism toward safety, reliability assessment of digital systems is a prerequisite. But because digital systems show failure modes different from those of existing analog systems, the existing reliability assessment methods cannot be applied to them, and a new reliability assessment method for digital systems should be developed. The goal of this study is the development of a board-level reliability assessment method for digital systems and a related software tool. To achieve this goal, we have conducted research on the development of a database of hardware components for digital I and C systems, the development of a reliability assessment model for the board-level reliability prediction of digital systems, and the applicability of the method to KNGR digital I and C systems. We developed a database for the reliability assessment of digital hardware components, a reliability assessment method for digital systems that considers software and hardware together, and a software tool for the reliability assessment of digital systems, named RelPredic. We plan to apply the results of this study to the reliability assessment of digital systems in the KNGR digital I and C systems. 13 refs., 71 figs., 31 tabs. (Author)

  19. Framework for Small-Scale Experiments in Software Engineering: Guidance and Control Software Project: Software Engineering Case Study

    Science.gov (United States)

    Hayhurst, Kelly J.

    1998-01-01

    Software is becoming increasingly significant in today's critical avionics systems. To achieve safe, reliable software, government regulatory agencies such as the Federal Aviation Administration (FAA) and the Department of Defense mandate the use of certain software development methods. However, little scientific evidence exists to show a correlation between software development methods and product quality. Given this lack of evidence, a series of experiments has been conducted to understand why and how software fails. The Guidance and Control Software (GCS) project is the latest in this series. The GCS project is a case study of the Requirements and Technical Concepts for Aviation RTCA/DO-178B guidelines, Software Considerations in Airborne Systems and Equipment Certification. All civil transport airframe and equipment vendors are expected to comply with these guidelines in building systems to be certified by the FAA for use in commercial aircraft. For the case study, two implementations of a guidance and control application were developed to comply with the DO-178B guidelines for Level A (critical) software. The development included the requirements, design, coding, verification, configuration management, and quality assurance processes. This paper discusses the details of the GCS project and presents the results of the case study.

  20. Software design for resilient computer systems

    CERN Document Server

    Schagaev, Igor

    2016-01-01

    This book addresses the question of how system software should be designed to account for faults, and which fault tolerance features it should provide for highest reliability. The authors first show how the system software interacts with the hardware to tolerate faults. They analyze and further develop the theory of fault tolerance to understand the different ways to increase the reliability of a system, with special attention on the role of system software in this process. They further develop the general algorithm of fault tolerance (GAFT) with its three main processes: hardware checking, preparation for recovery, and the recovery procedure. For each of the three processes, they analyze the requirements and properties theoretically and give possible implementation scenarios and system software support required. Based on the theoretical results, the authors derive an Oberon-based programming language with direct support of the three processes of GAFT. In the last part of this book, they introduce a simulator...