WorldWideScience

Sample records for testing redundant software

  1. Software engineering: redundancy is key

    NARCIS (Netherlands)

    Brand, van den M.G.J.; Groote, J.F.

    2015-01-01

    Software engineers are humans and so they make lots of mistakes. Typically 1 out of 10 to 100 tasks goes wrong. The only way to avoid these mistakes is to introduce redundancy in the software engineering process. This article is a plea to consciously introduce several levels of redundancy for each

  2. Evaluation of software based redundancy algorithms for the EOS storage system at CERN

    International Nuclear Information System (INIS)

    Peters, Andreas-Joachim; Sindrilaru, Elvin Alin; Zigann, Philipp

    2012-01-01

    EOS is a new disk-based storage system used in production at CERN since autumn 2011. It is implemented using the plug-in architecture of the XRootD software framework and allows remote file access via the XRootD protocol or POSIX-like file access via FUSE mounting. EOS was designed to fulfill specific requirements of disk storage scalability and IO scheduling performance for LHC analysis use cases. This is achieved by following a strategy of decoupling disk and tape storage as individual storage systems. A key point of the EOS design is to provide high availability and redundancy of files via a software implementation which uses disk-only storage systems without hardware RAID arrays. All this is aimed at reducing the overall cost of the system and also simplifying the operational procedures. This paper presents the advantages and disadvantages of redundancy by hardware (most classical storage installations) in comparison to redundancy by software. The latter is implemented in the EOS system and achieves its goal by distributing data and parity stripes across nodes via remote file access. The gain in redundancy and reliability comes with a trade-off in the following areas:
    • Increased complexity of the network connectivity
    • CPU intensive parity computations during file creation and recovery
    • Performance loss through remote disk coupling
    An evaluation and performance figures of several redundancy algorithms are presented for dual parity RAID and Reed-Solomon codecs. Moreover, the characteristics and applicability of these algorithms are discussed in the context of reliable data storage systems.
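
    To make the "redundancy by software" idea concrete, here is a minimal sketch of the simplest such scheme: a single XOR parity stripe that lets one lost data stripe be rebuilt. This is only an illustration of the principle; none of the names below come from the EOS code.

```python
# Minimal sketch of parity-based software redundancy (illustration only,
# not EOS code): one XOR parity stripe protects against a single lost stripe.

def xor_parity(stripes: list[bytes]) -> bytes:
    """Compute a parity stripe as the bytewise XOR of all data stripes."""
    parity = bytearray(len(stripes[0]))
    for stripe in stripes:
        for i, b in enumerate(stripe):
            parity[i] ^= b
    return bytes(parity)

def rebuild(surviving: list[bytes], parity: bytes) -> bytes:
    """Reconstruct a single lost stripe: XOR the parity with the survivors."""
    return xor_parity(surviving + [parity])

stripes = [b"AAAA", b"BBBB", b"CCCC"]     # data spread over three nodes
parity = xor_parity(stripes)              # parity stored on a fourth node
lost = stripes.pop(1)                     # one node fails
assert rebuild(stripes, parity) == lost   # the lost stripe is recovered
```

    Dual parity and Reed-Solomon codes, the schemes evaluated in the paper, replace the single XOR with arithmetic over a finite field so that two or more simultaneous losses can be repaired.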

  3. Markov Chains For Testing Redundant Software

    Science.gov (United States)

    White, Allan L.; Sjogren, Jon A.

    1990-01-01

    Preliminary design developed for validation experiment that addresses problems unique to assuring extremely high quality of multiple-version programs in process-control software. Approach takes into account inertia of controlled system in sense it takes more than one failure of control program to cause controlled system to fail. Verification procedure consists of two steps: experimentation (numerical simulation) and computation, with Markov model for each step.
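
    A toy model may help make the abstract's point concrete: if the controlled plant tolerates up to k consecutive control failures, system failure is an absorbing state of a small Markov chain over the count of consecutive failures. The sketch below uses hypothetical numbers; it is not the experiment's actual model.

```python
import numpy as np

# Toy Markov chain for a control loop whose plant has inertia: the system
# fails only after k consecutive control-program failures. States 0..k-1
# count consecutive failures; state k (system failure) is absorbing.
# All numbers are hypothetical.
p_fail = 1e-4   # per-cycle failure probability of the control program
k = 3           # consecutive failures the plant can tolerate

P = np.zeros((k + 1, k + 1))
for s in range(k):
    P[s, s + 1] = p_fail      # one more consecutive failure
    P[s, 0] = 1.0 - p_fail    # a good cycle resets the count
P[k, k] = 1.0                 # system failure is absorbing

n = 10_000_000                # number of control cycles
state = np.zeros(k + 1)
state[0] = 1.0
state = state @ np.linalg.matrix_power(P, n)
print(f"P(system failure within {n} cycles) ~ {state[k]:.3e}")
```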

  4. Application study of EPICS-based redundant method for reactor control system

    International Nuclear Information System (INIS)

    Zhang Ning; Han Lifeng; Chen Yongzhong; Guo Bing; Yin Congcong

    2013-01-01

    In the reactor control system prototype development of the TMSR (Thorium Molten Salt Reactor) project of CAS, EPICS (Experimental Physics and Industrial Control System) is adopted as the instrumentation and control software platform. To achieve IOC (Input/Output Controller) redundancy and data synchronization in the system, the EPICS-based RMT (Redundancy Monitor Task) software package and its data-synchronization component CCE (Continuous Control Executive) were introduced. By developing the related IOC driver, redundant switch-over control of the server IOC was implemented. The method of redundancy implementation using RMT in the server and a redundancy performance test for the power control system are discussed in this paper. (authors)

  5. Fault detection in multiply-redundant measurement systems via sequential testing

    International Nuclear Information System (INIS)

    Ray, A.

    1988-01-01

    The theory and application of a sequential test procedure for fault detection and isolation are presented. The test procedure is suited for the development of intelligent instrumentation in strategic processes like aircraft and nuclear plants, where redundant measurements are usually available for individual critical variables. The test procedure consists of: (1) a generic redundancy management procedure which is essentially independent of the fault detection strategy and measurement noise statistics, and (2) a modified version of the sequential probability ratio test algorithm for fault detection and isolation, which functions within the framework of this redundancy management procedure. The sequential test procedure is suitable for real-time applications using commercially available microcomputers, and its efficacy has been verified by online fault detection in an operating nuclear reactor. 15 references
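
    The core of step (2) is Wald's sequential probability ratio test. A minimal sketch for Gaussian residuals is shown below; the thresholds follow Wald's classical approximations, and all parameter values are hypothetical rather than taken from the paper.

```python
import math
import random

# Minimal Wald SPRT sketch: decide, sample by sample, whether a measurement
# residual is unbiased (H0: mean 0) or biased (H1: mean mu1), assuming
# Gaussian noise. Parameter values are hypothetical.
def sprt(residuals, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    upper = math.log((1 - beta) / alpha)   # cross above: declare fault
    lower = math.log(beta / (1 - alpha))   # cross below: declare healthy
    llr = 0.0
    for n, r in enumerate(residuals, start=1):
        llr += (mu1 / sigma**2) * (r - mu1 / 2.0)  # Gaussian log-likelihood ratio
        if llr >= upper:
            return "fault", n
        if llr <= lower:
            return "no fault", n
    return "undecided", len(residuals)

random.seed(1)
biased = [random.gauss(1.0, 1.0) for _ in range(100)]  # residuals of a biased sensor
print(sprt(biased))   # typically declares a fault after a handful of samples
```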

  6. Signal validation in nuclear power plants using redundant measurements

    International Nuclear Information System (INIS)

    Glockler, O.; Upadhyaya, B.R.; Morgenstern, V.M.

    1989-01-01

    This paper discusses the basic principles of a multivariable signal validation software system utilizing redundant sensor readings of process variables in nuclear power plants (NPPs). The technique has been tested in numerical experiments and was applied to actual data from a pressurized water reactor (PWR). The simultaneous checking within one redundant measurement set, and the cross-checking among redundant measurement sets of dissimilar process variables, results in an algorithm capable of detecting and isolating bias-type errors. Even a case in which a majority of the direct redundant measurements of more than one process variable fail simultaneously through common-mode or correlated failures can be detected by the developed approach. 5 refs
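
    As a hedged illustration of the first step, checking consistency within one redundant measurement set, the sketch below flags a sensor that disagrees with the majority of its peers by more than a threshold. The paper's method goes further by cross-checking dissimilar process variables; the names and values here are hypothetical.

```python
# Illustrative consistency check within one redundant sensor set: a channel
# is suspected when it disagrees with the majority of its peers by more than
# a threshold. (The paper's full method also cross-checks dissimilar process
# variables; values below are hypothetical.)
def isolate_bias(readings: dict[str, float], threshold: float) -> set[str]:
    suspects = set()
    for name, value in readings.items():
        peers = [v for n, v in readings.items() if n != name]
        disagreements = sum(1 for p in peers if abs(value - p) > threshold)
        if disagreements > len(peers) / 2:   # inconsistent with the majority
            suspects.add(name)
    return suspects

# Three redundant channels; channel C carries a bias-type error.
print(isolate_bias({"A": 15.51, "B": 15.49, "C": 17.20}, threshold=0.5))  # {'C'}
```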

  7. Selective Redundancy Removal: A Framework for Data Hiding

    Directory of Open Access Journals (Sweden)

    Ugo Fiore

    2010-02-01

    Data hiding techniques have so far concentrated on adding or modifying irrelevant information in order to hide a message. However, files in widespread use, such as HTML documents, usually exhibit high redundancy levels, caused by code-generation programs. Such redundancy may be removed by means of optimization software. Redundancy removal, if applied selectively, enables information hiding. This work introduces Selective Redundancy Removal (SRR) as a framework for hiding data. An example application of the framework is given in terms of hiding information in HTML documents. Non-uniformity across documents may raise alarms. Nevertheless, selective application of optimization techniques might be due to the legitimate use of optimization software not supporting all the optimization methods, or configured to not use all of them.

  8. Practical, redundant, failure-tolerant, self-reconfiguring embedded system architecture

    Science.gov (United States)

    Klarer, Paul R.; Hayward, David R.; Amai, Wendy A.

    2006-10-03

    This invention relates to system architectures, specifically failure-tolerant and self-reconfiguring embedded system architectures. The invention provides both a method and architecture for redundancy. There can be redundancy in both software and hardware for multiple levels of redundancy. The invention provides a self-reconfiguring architecture for activating redundant modules whenever other modules fail. The architecture comprises: a communication backbone connected to two or more processors and software modules running on each of the processors. Each software module runs on one processor and resides on one or more of the other processors to be available as a backup module in the event of failure. Each module and backup module reports its status over the communication backbone. If a primary module does not report, its backup module takes over its function. If the primary module becomes available again, the backup module returns to its backup status.
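
    A minimal sketch of the reporting-and-takeover logic described above follows. It is a hypothetical illustration, not the patented implementation: the backup watches for status reports over the backbone, activates when the primary's report goes stale, and stands down when the primary reappears.

```python
import time

# Hypothetical sketch of heartbeat-driven failover (not the patented code):
# a backup module takes over when the primary misses its status report and
# returns to backup status when the primary reports again.
class BackupModule:
    def __init__(self, timeout: float):
        self.timeout = timeout
        self.last_report = time.monotonic()
        self.active = False

    def on_primary_report(self):
        self.last_report = time.monotonic()
        self.active = False          # primary is back: return to backup status

    def poll(self):
        if time.monotonic() - self.last_report > self.timeout:
            self.active = True       # primary missed its report: take over

backup = BackupModule(timeout=0.1)
backup.poll(); print(backup.active)                    # False: primary alive
time.sleep(0.2); backup.poll(); print(backup.active)   # True: takeover
backup.on_primary_report(); print(backup.active)       # False: primary back
```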

  9. REDUNDANT ELECTRIC MOTOR DRIVE CONTROL UNIT DESIGN USING AUTOMATA-BASED APPROACH

    Directory of Open Access Journals (Sweden)

    Yuri Yu. Yankin

    2014-11-01

    Implementation of a redundant unit for motor drive control based on programmable logic devices is discussed. A continuous redundancy method is used. Compared to segregated standby redundancy and whole-system standby redundancy, this method preserves all unit functions in the event of a failure and allows continuous monitoring of the major and redundant elements. An example of such a unit is given. The electric motor drive control channel block diagram contains two control units, the major and the redundant, as well as four power supply units. Control unit programming was carried out using the automata-based approach. An electric motor drive control channel model was developed; it provides complex simulation of the control state-machine and the power converter. Through the visibility and hierarchy of finite state machines, debug time was shortened as compared to traditional programming. A control state-machine description in a hardware description language is required for its synthesis with the FPGA vendor's design software. This description was generated automatically by the MATLAB software package. To verify the results, two prototype control units, two prototype power supply units, and a device mock-up were developed and manufactured. The units were installed in the device mock-up. The prototype units were created in accordance with the requirements for deliverable hardware. Control channel simulation and test results in the fault-free state and during imitation of a major-element fault are presented. The automata-based approach made it possible to observe and debug control state-machine transitions during simulation of the transient processes occurring at imitation of faults. The results of this work can be used in the development of fault-tolerant electric motor drive control channels.

  10. Beyond redundancy how geographic redundancy can improve service availability and reliability of computer-based systems

    CERN Document Server

    Bauer, Eric; Eustace, Dan

    2012-01-01

    "While geographic redundancy can obviously be a huge benefit for disaster recovery, it is far less obvious what benefit is feasible and likely for more typical non-catastrophic hardware, software, and human failures. Georedundancy and Service Availability provides both a theoretical and practical treatment of the feasible and likely benefits of geographic redundancy for both service availability and service reliability. The text provides network/system planners, IS/IT operations folks, system architects, system engineers, developers, testers, and other industry practitioners with a general discussion about the capital expense/operating expense tradeoff that frames system redundancy and georedundancy"--

  11. Application of Software Safety Analysis Methods

    International Nuclear Information System (INIS)

    Park, G. Y.; Hur, S.; Cheon, S. W.; Kim, D. H.; Lee, D. Y.; Kwon, K. C.; Lee, S. J.; Koo, Y. H.

    2009-01-01

    A fully digitalized reactor protection system, which is called the IDiPS-RPS, was developed through the KNICS project. The IDiPS-RPS has four redundant and separated channels. Each channel is mainly composed of a group of bistable processors which redundantly compare process variables with their corresponding setpoints and a group of coincidence processors that generate a final trip signal when a trip condition is satisfied. Each channel also contains a test processor called the ATIP and a display and command processor called the COM. All the functions were implemented in software. During the development of the safety software, various software safety analysis methods were applied, in parallel with the verification and validation (V and V) activities, throughout the software development life cycle. The software safety analysis methods employed were the software hazard and operability (Software HAZOP) study, the software fault tree analysis (Software FTA), and the software failure modes and effects analysis (Software FMEA).
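
    The bistable/coincidence structure lends itself to a very small sketch. The one below uses a 2-out-of-4 coincidence rule and hypothetical setpoint values purely for illustration; it is not IDiPS-RPS code.

```python
# Illustrative sketch of the channel logic (not IDiPS-RPS code): bistable
# processors compare a process variable with its setpoint, and a coincidence
# processor generates the trip when enough redundant channels agree.
# The 2-out-of-4 rule and all values are hypothetical.
def bistable(value: float, setpoint: float) -> bool:
    """One bistable comparison: True means this channel demands a trip."""
    return value > setpoint

def coincidence(channel_trips: list[bool], needed: int = 2) -> bool:
    """Final trip signal when at least `needed` redundant channels agree."""
    return sum(channel_trips) >= needed

pressure = [152.0, 151.7, 148.9, 152.3]   # four redundant measurements
trips = [bistable(p, setpoint=150.0) for p in pressure]
print(coincidence(trips))                  # True: 3 of 4 channels trip
```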

  12. Testing the significance of canonical axes in redundancy analysis

    NARCIS (Netherlands)

    Legendre, P.; Oksanen, J.; Braak, ter C.J.F.

    2011-01-01

    1. Tests of significance of the individual canonical axes in redundancy analysis allow researchers to determine which of the axes represent variation that can be distinguished from random. Variation along the significant axes can be mapped, used to draw biplots or interpreted through subsequent

  13. Dtest Testing Software

    Science.gov (United States)

    Jain, Abhinandan; Cameron, Jonathan M.; Myint, Steven

    2013-01-01

    This software runs a suite of arbitrary software tests spanning various software languages and types of tests (unit level, system level, or file comparison tests). The dtest utility can be set to automate periodic testing of large suites of software, as well as running individual tests. It supports distributing multiple tests over multiple CPU cores, if available. The dtest tool is a utility program (written in Python) that scans through a directory (and its subdirectories) and finds all directories that match a certain pattern and then executes any tests in that directory as described in simple configuration files.
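
    The behavior described, scanning a tree for test configuration files and running the discovered tests over a pool of cores, can be sketched in a few lines. File and key names below are hypothetical; this is not the actual dtest source.

```python
import subprocess
from multiprocessing import Pool
from pathlib import Path

# Sketch of a dtest-like runner (hypothetical file format, not the real
# dtest code): scan a directory tree for simple test configuration files,
# collect the commands they list, and run them across CPU cores.
def discover(root: str, pattern: str = "DTESTDEFS") -> list[str]:
    commands = []
    for cfg in Path(root).rglob(pattern):
        commands += [ln.strip() for ln in cfg.read_text().splitlines()
                     if ln.strip() and not ln.startswith("#")]
    return commands

def run(cmd: str) -> tuple[str, int]:
    return cmd, subprocess.call(cmd, shell=True)

if __name__ == "__main__":
    with Pool() as pool:               # distribute tests over available cores
        for cmd, rc in pool.map(run, discover(".")):
            print("PASS" if rc == 0 else "FAIL", cmd)
```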

  14. Self-checking in a logic protection system using a redundant computer structure

    International Nuclear Information System (INIS)

    Darier, P.; Lallement, D.

    1978-01-01

    The logic protection system described has a parallel and redundant computer structure. Each assembly consists of an analog and digital acquisition module, a processing unit and a simple and reliable logical decision device adapted to the redundancy of the system (two-out-of-three or two-out-of-four). Each processing unit monitors all the detection units, processes the values received and works out the corresponding decisions. The decisions are collated in the final stage in a majority decision element which sets the protective action in motion. At all levels the quality of data transmission and validity of processing are tested in order that the best strategy may be applied in the event of damage to the system. Several self-checking techniques were used: they include a set of electronic test modules connected to the processing unit, on-line system testing and control software and time control devices which provide a final verification of the overall operation of the system. Self-checking and test operations are performed regularly throughout surveillance cycles. Problems of fault detection and of protection against the effects of faults are examined from the aspect of hardware and software. (author)

  15. A concept of software testing for SMART MMIS software

    International Nuclear Information System (INIS)

    Seo, Yong Seok; Seong, Seung Hwan; Park, Keun Ok; Hur, Sub; Kim, Dong Hoon

    2001-01-01

    In order to achieve high quality of the SMART MMIS software, a well-constructed software testing concept is required. This paper establishes a software testing concept to be applied to the SMART MMIS software, in terms of software testing organization, documentation, procedure, and methods. The software testing methods are classified into source code static analysis and dynamic testing. The software dynamic testing methods are discussed with two aspects: white-box and black-box testing. As the software testing concept introduced in this paper is applied to the SMART MMIS software, high quality software will be produced. In the future, software failure data will be collected through the construction of the SMART MMIS prototyping facility to which the software testing concept of this paper is applied.

  16. Interface-based software testing

    Directory of Open Access Journals (Sweden)

    Aziz Ahmad Rais

    2016-10-01

    Software quality is determined by assessing the characteristics that specify how it should work, which are verified through testing. If it were possible to touch, see, or measure software, it would be easier to analyze and prove its quality. Unfortunately, software is an intangible asset, which makes testing complex. This is especially true when software quality is not a question of particular functions that can be tested through a graphical user interface. The primary objective of software architecture is to design quality of software through modeling and visualization. There are many methods and standards that define how to control and manage quality. However, many IT software development projects still fail due to the difficulties involved in measuring, controlling, and managing software quality. Software quality failure factors are numerous. Examples include beginning to test software too late in the development process, or failing properly to understand, or design, the software architecture and the software component structure. The goal of this article is to provide an interface-based software testing technique that better measures software quality, automates software quality testing, encourages early testing, and increases the software’s overall testability.

  17. Model-Based Software Testing for Object-Oriented Software

    Science.gov (United States)

    Biju, Soly Mathew

    2008-01-01

    Model-based testing is one of the best solutions for testing object-oriented software. It has a better test coverage than other testing styles. Model-based testing takes into consideration behavioural aspects of a class, which are usually unchecked in other testing methods. An increase in the complexity of software has forced the software industry…

  18. Statistical Redundancy Testing for Improved Gene Selection in Cancer Classification Using Microarray Data

    Directory of Open Access Journals (Sweden)

    J. Sunil Rao

    2007-01-01

    In gene selection for cancer classification using microarray data, we define an eigenvalue-ratio statistic to measure a gene’s contribution to the joint discriminability when this gene is included into a set of genes. Based on this eigenvalue-ratio statistic, we define a novel hypothesis test for gene statistical redundancy and propose two gene selection methods. Simulation studies illustrate the agreement between statistical redundancy testing and gene selection methods. Real data examples show the proposed gene selection methods can select a compact gene subset which can not only be used to build high-quality cancer classifiers but also show biological relevance.
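
    The abstract does not spell out the statistic's construction, so the sketch below shows one plausible reading on synthetic data: compare the leading eigenvalue of a Fisher-style discriminability matrix with and without a candidate gene; a ratio near 1 marks the gene as adding little joint discriminability. This is an assumption-laden illustration, not necessarily the authors' exact definition.

```python
import numpy as np

# Illustration (one plausible reading of an eigenvalue-ratio statistic, not
# necessarily the paper's definition): leading eigenvalue of pinv(S_W) @ S_B
# with and without a candidate gene, on synthetic two-class data.
rng = np.random.default_rng(0)

def leading_disc_eigenvalue(X, y):
    Sw = sum(np.cov(X[y == c].T) for c in np.unique(y))        # within-class scatter
    mu = X.mean(axis=0)
    Sb = sum((X[y == c].mean(axis=0) - mu)[:, None]
             @ (X[y == c].mean(axis=0) - mu)[None, :] for c in np.unique(y))
    return np.linalg.eigvals(np.linalg.pinv(Sw) @ Sb).real.max()

X = rng.normal(size=(60, 5))
y = np.repeat([0, 1], 30)
X[y == 1, 0] += 2.0                            # gene 0 carries the class signal
X[:, 1] = X[:, 0] + rng.normal(0, 0.1, 60)     # gene 1 is nearly redundant with gene 0

with_gene = leading_disc_eigenvalue(X, y)
without_gene = leading_disc_eigenvalue(np.delete(X, 1, axis=1), y)
print(f"eigenvalue ratio for gene 1: {with_gene / without_gene:.2f}")  # close to 1
```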

  19. Software testing concepts and operations

    CERN Document Server

    Mili, Ali

    2015-01-01

    Explores and identifies the main issues, concepts, principles and evolution of software testing, including software quality engineering and testing concepts, test data generation, test deployment analysis, and software test management. This book examines the principles, concepts, and processes that are fundamental to the software testing function. This book is divided into five broad parts. Part I introduces software testing in the broader context of software engineering and explores the qualities that testing aims to achieve or ascertain, as well as the lifecycle of software testing. Part II c

  20. The Art of Software Testing

    CERN Document Server

    Myers, Glenford J; Badgett, Tom

    2011-01-01

    The classic, landmark work on software testing. The hardware and software of computing have changed markedly in the three decades since the first edition of The Art of Software Testing, but this book's powerful underlying analysis has stood the test of time. Whereas most books on software testing target particular development techniques, languages, or testing methods, The Art of Software Testing, Third Edition provides a brief but powerful and comprehensive presentation of time-proven software testing approaches. If your software development project is mission critical, this book is an investme

  1. Testing the race model inequality in redundant stimuli with variable onset asynchrony

    DEFF Research Database (Denmark)

    Gondan, Matthias

    2009-01-01

    In speeded response tasks with redundant signals, parallel processing of the signals is tested by the race model inequality. This inequality states that, given a race of two signals, the cumulative distribution of response times for redundant stimuli never exceeds the sum of the cumulative distributions of response times for the single-modality stimuli. It has been derived for synchronous stimuli and for stimuli with stimulus onset asynchrony (SOA). In most experiments with asynchronous stimuli, discrete SOA values are chosen and the race model inequality is separately tested for each SOA. Due ... to SOAs at which the violation of the race model prediction is expected to be large. In addition, the method enables data analysis for experiments in which stimuli are presented with SOA from a continuous distribution rather than in discrete steps.
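
    The inequality itself is easy to check numerically at a single SOA. In the sketch below (synthetic data, not the paper's statistical procedure), the empirical CDF of redundant-condition response times is compared with the bound formed by the two single-modality CDFs, the second shifted by the SOA.

```python
import numpy as np

# Race-model check at one SOA (synthetic data; the paper's actual test adds
# a proper statistical procedure on top of this comparison): the redundant
# CDF must not exceed min(1, F_A(t) + F_B(t - SOA)).
def ecdf(sample, t):
    sample = np.asarray(sample)
    return (sample[:, None] <= t).mean(axis=0)

def race_model_violated(rt_red, rt_a, rt_b, soa):
    t = np.linspace(min(rt_red), max(rt_red), 200)
    bound = np.minimum(1.0, ecdf(rt_a, t) + ecdf(rt_b, t - soa))
    return bool(np.any(ecdf(rt_red, t) > bound))

rng = np.random.default_rng(2)
rt_a = rng.normal(350, 40, 500)     # modality A alone (ms), synthetic
rt_b = rng.normal(380, 40, 500)     # modality B alone (ms), synthetic
rt_red = rng.normal(280, 30, 500)   # redundant condition, strongly faster
print(race_model_violated(rt_red, rt_a, rt_b, soa=50))   # True for these data
```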

  2. Interface-based software testing

    OpenAIRE

    Aziz Ahmad Rais

    2016-01-01

    Software quality is determined by assessing the characteristics that specify how it should work, which are verified through testing. If it were possible to touch, see, or measure software, it would be easier to analyze and prove its quality. Unfortunately, software is an intangible asset, which makes testing complex. This is especially true when software quality is not a question of particular functions that can be tested through a graphical user interface. The primary objective of softwar...

  3. Reliability of software

    International Nuclear Information System (INIS)

    Kopetz, H.

    1980-01-01

    Common factors and differences in the reliability of hardware and software; reliability increase by means of methods of software redundancy. Maintenance of software for long term operating behavior. (HP) [de]

  4. Trends in software testing

    CERN Document Server

    Mohanty, J; Balakrishnan, Arunkumar

    2017-01-01

    This book is focused on the advancements in the field of software testing and the innovative practices that the industry is adopting. Considering the widely varied nature of software testing, the book addresses contemporary aspects that are important for both academia and industry. There are dedicated chapters on seamless high-efficiency frameworks, automation on regression testing, software by search, and system evolution management. There are a host of mathematical models that are promising for software quality improvement by model-based testing. There are three chapters addressing this concern. Students and researchers in particular will find these chapters useful for their mathematical strength and rigor. Other topics covered include uncertainty in testing, software security testing, testing as a service, test technical debt (or test debt), disruption caused by digital advancement (social media, cloud computing, mobile application and data analytics), and challenges and benefits of outsourcing. The book w...

  5. On the optimal scheduling of periodic tests and maintenance for reliable redundant components

    International Nuclear Information System (INIS)

    Courtois, Pierre-Jacques; Delsarte, Philippe

    2006-01-01

    Periodically, some m of the n redundant components of a dependable system may have to be taken out of service for inspection, testing or preventive maintenance. The system is then constrained to operate with lower (n-m) redundancy and thus with less reliability during these periods. However, more frequent periodic inspections decrease the probability that a component fails undetected in the time interval between successive inspections. An optimal time schedule of periodic preventive operations arises from these two conflicting factors, balancing the loss of redundancy during inspections against the reliability benefits of more frequent inspections. Considering no other factor than this decreased redundancy at inspection time, this paper demonstrates the existence of an optimal interval between inspections, which maximizes the mean time between system failures. By suitable transformations and variable identifications, an analytic closed form expression of the optimum is obtained for the general (m, n) case. The optimum is shown to be unique within the ranges of parameter values valid in practice; its expression is easy to evaluate and shown to be useful to analyze and understand the influence of these parameters. Inspections are assumed to be perfect, i.e. they cause no component failure by themselves and leave no failure undetected. In this sense, the optimum determines a lowest bound for the system failure rate that can be achieved by a system of n-redundant components, m of which require for inspection or maintenance recurrent periods of unavailability of length t. The model and its general closed form solution are believed to be new. Previous work had computed optimal values for an estimation of a time average of system unavailability, but by numerical procedures only and with different numerical approximations, other objectives and model assumptions (one component only inspected at a time), and taking into account failures caused by testing itself, repair and
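
    The paper's closed-form optimum is not reproduced in the abstract, so the sketch below only illustrates the trade-off numerically with a deliberately crude toy model for a duplicated component (n = 2, m = 1): longer intervals raise the chance that both components fail undetected within one interval, while every inspection exposes the system to single-component operation. All rates are hypothetical.

```python
import numpy as np

# Toy numeric illustration of the inspection-interval trade-off (not the
# paper's model or closed form). A duplicated component is inspected every
# T hours; each inspection takes one component out of service for t_insp
# hours. All rates are hypothetical.
lam, t_insp = 1e-4, 2.0      # component failure rate [1/h], inspection outage [h]

def system_failure_rate(T):
    p_both = (1.0 - np.exp(-lam * T)) ** 2   # both fail undetected between inspections
    p_insp = 1.0 - np.exp(-lam * t_insp)     # lone component fails during the outage
    return (p_both + p_insp) / (T + t_insp)  # system failures per hour

T = np.linspace(10, 5000, 2000)
best = T[np.argmin(system_failure_rate(T))]
print(f"optimal inspection interval ~ {best:.0f} h under this toy model")
```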

  6. Testing Object-Oriented Software

    DEFF Research Database (Denmark)

    Caspersen, Michael Edelgaard; Madsen, Ole Lehrmann; Skov, Stefan H.

    The report is a result of an activity within the project Centre for Object Technology (COT), case 2. In case 2 a number of pilot projects have been carried out to test the feasibility of using object technology within embedded software. Some of the pilot projects have resulted in prototypes that are currently being developed into production versions. To assure a high quality in the product it was decided to carry out an activity regarding issues in testing OO software. The purpose of this report is to discuss the issues of testing object-oriented software. It is often claimed that testing of OO software is radically different from testing traditional software developed using imperative/procedural programming. Other authors claim that there is no difference. In this report we will attempt to give an answer to these questions (or at least initiate a discussion).

  7. Design, Development and Pre-Flight Testing of the Communications, Navigation, and Networking Reconfigurable Testbed (Connect) to Investigate Software Defined Radio Architecture on the International Space Station

    Science.gov (United States)

    Over, Ann P.; Barrett, Michael J.; Reinhart, Richard C.; Free, James M.; Cikanek, Harry A., III

    2011-01-01

    The Communication Navigation and Networking Reconfigurable Testbed (CoNNeCT) is a NASA-sponsored mission, which will investigate the usage of Software Defined Radios (SDRs) as a multi-function communication system for space missions. A software-defined radio system is a communication system in which typical components of the system (e.g., modulators) are incorporated into software. The software-defined capability allows flexibility and experimentation with different modulation, coding, and other parameters to understand their effects on performance. This builds inherent redundancy and flexibility into the system for improved operational efficiency, real-time changes to space missions, and enhanced reliability. The CoNNeCT Project is a collaboration between industrial radio providers and NASA. The industrial radio providers are providing the SDRs and NASA is designing, building and testing the entire flight system. The flight system will be integrated on the Express Logistics Carrier (ELC) on the International Space Station (ISS) after launch on the H-IIB Transfer Vehicle in 2012. This paper provides an overview of the technology research objectives, payload description, design challenges and pre-flight testing results.

  8. Test af Software

    DEFF Research Database (Denmark)

    This document constitutes the final report of the network collaboration "Testnet", carried out in the period 1 April 2006 to 31 December 2008. The network deals primarily with topics in the testing of embedded and technical software, but a number of examples of problems and solutions related to the testing of administrative software are also included. The report is divided into the following 3 parts: Overview. Here we summarize the network's purpose, activities and results. The state of the art of software testing is outlined. We note that CISS and the network are taking new initiatives. The network. Purpose, participants and topics treated at ten

  9. An efficient simulated annealing algorithm for the redundancy allocation problem with a choice of redundancy strategies

    International Nuclear Information System (INIS)

    Chambari, Amirhossain; Najafi, Amir Abbas; Rahmati, Seyed Habib A.; Karimi, Aida

    2013-01-01

    The redundancy allocation problem (RAP) is an important reliability optimization problem. This paper studies a specific RAP in which redundancy strategies are chosen. To do so, the choice of redundancy strategy, between active and cold standby, is treated as a decision variable. The goal is to select the redundancy strategy, component, and redundancy level for each subsystem such that system reliability is maximized. Since the RAP is an NP-hard problem, we propose an efficient simulated annealing algorithm (SA) to solve it. In addition, to evaluate the performance of the proposed algorithm, it is compared with well-known algorithms from the literature on different test problems. The results of the performance analysis show a relatively satisfactory efficiency of the proposed SA algorithm
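
    A stripped-down version of such an SA algorithm is sketched below for a toy series-parallel RAP: it only chooses redundancy levels under a cost budget, omitting the paper's component choice and the active/cold-standby decision. All numbers are hypothetical.

```python
import math
import random

# Minimal simulated-annealing sketch for a toy RAP: pick a redundancy level
# per series subsystem to maximize reliability under a cost budget. The
# paper's richer problem (component choice, active vs. cold standby) is
# omitted; all numbers are hypothetical.
random.seed(0)
r = [0.90, 0.85, 0.95]    # component reliabilities per subsystem
c = [3.0, 2.0, 4.0]       # component costs per subsystem
budget = 30.0

def reliability(levels):  # series system of active-parallel subsystems
    return math.prod(1 - (1 - ri) ** n for ri, n in zip(r, levels))

def cost(levels):
    return sum(ci * n for ci, n in zip(c, levels))

x = [1, 1, 1]
best = x[:]
temp = 1.0
for _ in range(5000):
    y = x[:]
    i = random.randrange(len(y))
    y[i] = max(1, y[i] + random.choice([-1, 1]))   # neighbor: +/-1 in one subsystem
    if cost(y) <= budget:
        delta = reliability(y) - reliability(x)
        if delta >= 0 or random.random() < math.exp(delta / temp):
            x = y
            if reliability(x) > reliability(best):
                best = x[:]
    temp *= 0.999                                   # geometric cooling

print(best, round(reliability(best), 6), cost(best))
```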

  10. Decision Support for Software Process Management Teams: An Intelligent Software Agent Approach

    National Research Council Canada - National Science Library

    Church, Lori

    2000-01-01

    ... to market, eliminate redundancy, and ease job stress. This thesis proposes a conceptual model for software process management decision support in the form of an intelligent software agent network...

  11. Learning software testing with Test Studio

    CERN Document Server

    Madi, Rawane

    2013-01-01

    Learning Software Testing with Test Studio is a practical, hands-on guide that will help you get started with Test Studio to design your automated solution and tests. All through the book, there are best practices and tips and tricks inside Test Studio which can be employed to improve your solution just like an experienced QA.If you are a beginner or a professional QA who is seeking a fast, clear, and direct to the point start in automated software testing inside Test Studio, this book is for you. You should be familiar with the .NET framework, mainly Visual Studio, C#, and SQL, as the book's

  12. Synchronization and fault-masking in redundant real-time systems

    Science.gov (United States)

    Krishna, C. M.; Shin, K. G.; Butler, R. W.

    1983-01-01

    A real time computer may fail because of massive component failures or not responding quickly enough to satisfy real time requirements. An increase in redundancy - a conventional means of improving reliability - can improve the former but can - in some cases - degrade the latter considerably due to the overhead associated with redundancy management, namely the time delay resulting from synchronization and voting/interactive consistency techniques. The implications of synchronization and voting/interactive consistency algorithms in N-modular clusters on reliability are considered. All these studies were carried out in the context of real time applications. As a demonstrative example, we have analyzed results from experiments conducted at the NASA Airlab on the Software Implemented Fault Tolerance (SIFT) computer. This analysis has indeed indicated that in most real time applications, it is better to employ hardware synchronization instead of software synchronization and not allow reconfiguration.

  13. Validation testing of safety-critical software

    International Nuclear Information System (INIS)

    Kim, Hang Bae; Han, Jae Bok

    1995-01-01

    A software engineering process has been developed for the design of safety critical software for the Wolsung 2/3/4 project to satisfy the requirements of the regulatory body. Within this process, this paper describes the detailed validation testing performed to ensure that the software with its hardware, developed by the design group, satisfies the requirements of the functional specification prepared by the independent functional group. To perform the tests, a test facility and test software were developed and the actual safety system computer was connected. Three kinds of test cases, i.e., functional tests, performance tests and self-check tests, were programmed and run to verify each functional specification. Test failures were fed back to the design group to revise the software, and test results were analyzed and documented in the report submitted to the regulatory body. The test methodology and procedure were very efficient and satisfactory for performing systematic and automatic testing. The test results were also acceptable and successful in verifying that the software acts as specified in the program functional specification. This methodology can be applied to the validation of other safety-critical software. 2 figs., 2 tabs., 14 refs. (Author)

  14. Safety management of software-based equipment

    CERN Document Server

    Boulanger, Jean-Louis

    2013-01-01

    A review of the principles of the safety of software-based equipment, this book begins by presenting the definition principles of safety objectives. It then moves on to show how it is possible to define a safety architecture (including redundancy, diversification, error-detection techniques) on the basis of safety objectives and how to identify objectives related to software programs. From software objectives, the authors present the different safety techniques (fault detection, redundancy and quality control). "Certifiable system" aspects are taken into account throughout the book. C

  15. Software testing in roughness calculation

    International Nuclear Information System (INIS)

    Chen, Y L; Hsieh, P F; Fu, W E

    2005-01-01

    A test method to determine the quality of the functions provided by software for roughness measurement is presented in this study. The functional quality of the software should be assessed through the entire life cycle of the software package. The specific function, or output accuracy, is crucial for the analysis of experimental data. For scientific applications, however, commercial software is usually embedded within a specific instrument used for measurement or analysis during the manufacturing process. In general, the error ratio caused by the software becomes more apparent when dealing with relatively small quantities, like measurements in the nanometer-scale range. The model of 'using a data generator' proposed by NPL of the UK was applied in this study. An example of roughness software is tested and analyzed by the above-mentioned process. After selecting the 'reference results', the 'reference data' were generated by a programmable 'data generator'. The filter function with a 0.8 mm cutoff value, defined in ISO 11562, was tested with 66 sinusoid datasets at different wavelengths. Test results from commercial software and a CMS-written program were compared to the theoretical data calculated from the ISO standards. As for the filter function in this software, the results showed a significant disagreement between the reference and test results. The short-cutoff feature for filtering at high frequencies does not function properly, while the long-cutoff feature has a maximum difference in the filtering ratio of more than 70% between the wavelengths of 300 μm and 500 μm. Conclusively, the commercial software needs to be tested more extensively for specific applications by appropriate design of reference datasets to ensure its functional quality
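
    The 'data generator' idea can be reproduced in miniature: synthesize a sinusoid of known wavelength, apply a Gaussian profile filter of the ISO 11562 type, and compare the measured amplitude transmission with the analytic value, which is 50% at the cutoff. The sketch below is a simplified illustration, not the CMS test program.

```python
import numpy as np

# Simplified reference-data test of an ISO 11562-type Gaussian profile
# filter (illustration, not the CMS program): the mean-line amplitude
# transmission of a sinusoid should follow exp(-pi*(alpha*lc/lam)**2).
alpha = np.sqrt(np.log(2) / np.pi)

def gaussian_mean_line(z, dx, lc):
    x = np.arange(-lc, lc + dx, dx)
    s = np.exp(-np.pi * (x / (alpha * lc)) ** 2) / (alpha * lc)  # ISO weighting
    s /= s.sum()                                 # normalize the discrete weights
    return np.convolve(z, s, mode="same")

dx, lc = 0.0005, 0.8                             # sampling step and cutoff [mm]
for lam in (0.3, 0.5, 0.8, 2.5):                 # test wavelengths [mm]
    x = np.arange(0, 40 * lam, dx)
    z = np.sin(2 * np.pi * x / lam)
    mean_line = gaussian_mean_line(z, dx, lc)
    core = mean_line[len(z) // 4 : -len(z) // 4] # avoid edge effects
    measured = np.ptp(core) / 2
    analytic = np.exp(-np.pi * (alpha * lc / lam) ** 2)
    print(f"lambda={lam} mm: measured {measured:.3f}, analytic {analytic:.3f}")
```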

  16. Software quality testing process analysis

    OpenAIRE

    Mera Paz, Julián

    2016-01-01

    Introduction: This article is the result of reading, review, analysis of books, magazines and articles well known for their scientific and research quality, which have addressed the software quality testing process. The author, based on his work experience in software development companies, teaching and other areas, has compiled and selected information to argue and substantiate the importance of the software quality testing process. Methodology: the existing literature on the software qualit...

  17. Testing Scientific Software: A Systematic Literature Review

    Science.gov (United States)

    Kanewala, Upulee; Bieman, James M.

    2014-01-01

    Context: Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. Objective: This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. Method: We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. Results: We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Conclusions: Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques. PMID:25125798
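
    One widely used response to the oracle problem mentioned above, drawn from the broader testing literature rather than from this survey specifically, is metamorphic testing: instead of asserting exact outputs, assert relations that must hold between outputs for related inputs. A minimal sketch:

```python
import math

# Metamorphic-testing sketch for a numerical routine with no exact oracle:
# we cannot easily assert the integral's true value, but we can assert
# relations that must hold between related runs.
def numeric_integral(f, a, b, n=100_000):
    """Composite trapezoid rule."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

f = lambda x: math.exp(-x * x)

# Relation 1: splitting the interval must not change the result.
whole = numeric_integral(f, 0, 2)
split = numeric_integral(f, 0, 1) + numeric_integral(f, 1, 2)
assert abs(whole - split) < 1e-6

# Relation 2: reversing the limits must flip the sign.
assert abs(numeric_integral(f, 2, 0) + whole) < 1e-6
print("metamorphic relations hold")
```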

  18. Testing Scientific Software: A Systematic Literature Review.

    Science.gov (United States)

    Kanewala, Upulee; Bieman, James M

    2014-10-01

    Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques.

  19. Testing of real-time-software

    International Nuclear Information System (INIS)

    Friesland, G.; Ovenhausen, H.

    1975-05-01

    The situation in the area of testing real-time software is unsatisfactory. During the first phase of the project PROMOTE (prozessorientiertes Modul- und Gesamttestsystem), an analysis of the current situation took place, the results of which are summarized in the following study of user interviews and an analysis of the relevant literature. 22 users (industry, software houses, hardware manufacturers, and institutes) were interviewed. Discussions were held about the reliability of real-time software, with special interest in error avoidance, testing, and debugging. The main aims of the literature analysis were the elaboration of standard terms, a comparison of existing test methods and systems, and the definition of boundaries to related areas. During the further steps of this project, some means and techniques will be worked out to systematically test real-time software. (orig.) [de]

  20. Test process for the safety-critical embedded software

    International Nuclear Information System (INIS)

    Sung, Ahyoung; Choi, Byoungju; Lee, Jangsoo

    2004-01-01

    Digitalization of nuclear Instrumentation and Control (I and C) systems requires high reliability of not only hardware but also software. A Verification and Validation (V and V) process is recommended for software reliability, but a more quantitative method such as software testing is also necessary. Most of the software in nuclear I and C systems is safety-critical embedded software. Safety-critical embedded software is specified, verified and developed according to the V and V process. Hence two types of software testing techniques are necessary for the developed code. First, code-based software testing is required to examine the developed code. Second, after code-based software testing, testing affected by hardware is required to reveal interaction faults that may cause unexpected results. We call the testing of hardware's influence on software interaction testing. In the case of safety-critical embedded software, it is also important to consider the interaction between hardware and software. Even if no faults are detected when testing either hardware or software alone, combining these components may lead to unexpected results due to the interaction. In this paper, we propose a software test process that embraces test levels, test techniques, and the required test tasks and documents for safety-critical embedded software. We apply the proposed test process to safety-critical embedded software as a case study, and show its effectiveness. (author)

  1. Test software for BESIII MDC electronics system

    International Nuclear Information System (INIS)

    Zhang Hongyu; Sheng Huayi; Zhu Haitao; Ji Xiaolu; Zhao Dongxu

    2006-01-01

    This paper presents the design of Test System Software for BESIII MDC Electronics. Two kinds of test systems, SBS VP7 based and PowerPC based systems, and their corresponding test software are introduced. The software is developed in LabVIEW 7.1 and Microsoft Visual C++ 6.0, some test functions of the software, as well as their user interfaces, are described in detail. The software has been applied in hardware debugging, performance test and long term stability test. (authors)

  2. Predicting genome-wide redundancy using machine learning

    Directory of Open Access Journals (Sweden)

    Shasha Dennis E

    2010-11-01

    Background: Gene duplication can lead to genetic redundancy, which masks the function of mutated genes in genetic analyses. Methods to increase sensitivity in identifying genetic redundancy can improve the efficiency of reverse genetics and lend insights into the evolutionary outcomes of gene duplication. Machine learning techniques are well suited to classifying gene family members into redundant and non-redundant gene pairs in model species where sufficient genetic and genomic data are available, such as Arabidopsis thaliana, the test case used here. Results: Machine learning techniques that combine multiple attributes led to a dramatic improvement in predicting genetic redundancy over single-trait classifiers alone, such as BLAST E-values or expression correlation. In withholding analysis, one of the methods used here, Support Vector Machines, was two-fold more precise than single-attribute classifiers, reaching a level where the majority of redundant calls were correctly labeled. Using this higher confidence in identifying redundancy, machine learning predicts that about half of all genes in Arabidopsis show the signature of predicted redundancy with at least one but typically fewer than three other family members. Interestingly, a large proportion of predicted redundant gene pairs were relatively old duplications (e.g., Ks > 1), suggesting that redundancy is stable over long evolutionary periods. Conclusions: Machine learning predicts that most genes will have a functionally redundant paralog but will exhibit redundancy with relatively few genes within a family. The predictions and gene pair attributes for Arabidopsis provide a new resource for research in genetics and genome evolution. These techniques can now be applied to other organisms.
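
    The multiple-attribute point can be illustrated on synthetic data: an SVM given two weakly informative gene-pair attributes outperforms the same classifier given either attribute alone. The features below are fabricated stand-ins for attributes like BLAST E-values and expression correlation; the real study used many more.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic illustration of combining attributes (fabricated data, not the
# Arabidopsis features): an SVM with two weak attributes beats one alone.
rng = np.random.default_rng(0)
n = 400
labels = rng.integers(0, 2, n)                   # 1 = redundant gene pair
similarity = labels + rng.normal(0, 0.8, n)      # weakly informative attribute
expr_corr = labels + rng.normal(0, 0.8, n)       # weakly informative attribute
X_multi = np.column_stack([similarity, expr_corr])
X_single = similarity.reshape(-1, 1)

for name, X in [("single attribute", X_single), ("combined attributes", X_multi)]:
    acc = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean()
    print(f"{name}: {acc:.2f} cross-validated accuracy")
```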

  3. Analysis of failure dependent test, repair and shutdown strategies for redundant trains

    International Nuclear Information System (INIS)

    Uryasev, S.; Samanta, P.

    1994-09-01

    Failure-dependent testing implies a test of redundant components (or trains) when the failure of one component has been detected. The purpose of such testing is to detect any common cause failures (CCFs) of multiple components so that a corrective action such as repair or plant shutdown can be taken to reduce the residence time of multiple failures, given that a failure has been detected. This type of testing focuses on reducing the conditional risk of CCFs. Formulas for calculating the conditional failure probability of a two-train system with different test, repair and shutdown strategies are developed. A methodology is presented with an example calculation showing the risk-effectiveness of failure-dependent strategies for emergency diesel generators (EDGs) in nuclear power plants (NPPs)

  4. Quantification of Safety-Critical Software Test Uncertainty

    International Nuclear Information System (INIS)

    Khalaquzzaman, M.; Cho, Jaehyun; Lee, Seung Jun; Jung, Wondea

    2015-01-01

    The method conservatively assumes that the failure probability of a software for the untested inputs is 1, and that the failure probability becomes 0 after successful testing of all test cases. However, in reality the chance of failure remains because of test uncertainty. Some studies have been carried out to identify the test attributes that affect test quality: Cao discussed the testing effort, testing coverage, and testing environment, and the management of test uncertainties has also been discussed in the literature. In this study, the test uncertainty has been considered in estimating the software failure probability, because the software testing process is inherently uncertain. A reliability estimation of software is very important for the probabilistic safety analysis of a digital safety-critical system of NPPs. This study focused on the estimation of the probability of a software failure that considers the uncertainty in software testing. In our study, a BBN has been employed as an example model for software test uncertainty quantification. Although it can be argued that direct expert elicitation of test uncertainty is much simpler than BBN estimation, the BBN approach provides more insights and a basis for uncertainty estimation
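
    How a small BBN turns beliefs about test quality into a failure-probability estimate can be shown by enumeration. The structure and numbers below are entirely hypothetical, not the paper's network.

```python
# Toy two-layer BBN evaluated by enumeration (hypothetical structure and
# numbers, not the paper's model): test quality depends on coverage, and the
# residual failure probability depends on test quality.
p_coverage = {"high": 0.7, "low": 0.3}                 # prior belief in coverage
p_quality_given_cov = {"high": {"good": 0.9, "poor": 0.1},
                       "low":  {"good": 0.4, "poor": 0.6}}
p_fail_given_quality = {"good": 1e-4, "poor": 1e-2}

p_fail = sum(p_coverage[c] * p_quality_given_cov[c][q] * p_fail_given_quality[q]
             for c in p_coverage for q in ("good", "poor"))
print(f"estimated software failure probability: {p_fail:.2e}")
```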

  5. Software safety analysis practice in installation phase

    Energy Technology Data Exchange (ETDEWEB)

    Huang, H. W.; Chen, M. H.; Shyu, S. S., E-mail: hwhwang@iner.gov.t [Institute of Nuclear Energy Research, No. 1000 Wenhua Road, Chiaan Village, Longtan Township, 32546 Taoyuan County, Taiwan (China)

    2010-10-15

    This work performed a software safety analysis in the installation phase of the Lungmen nuclear power plant in Taiwan, under the cooperation of the Institute of Nuclear Energy Research and TPC. The US Nuclear Regulatory Commission requests licensees to perform software safety analysis and software verification and validation in each phase of the software development life cycle with Branch Technical Position 7-14. In this work, 37 safety grade digital instrumentation and control systems were analyzed by failure mode and effects analysis, which is suggested by IEEE Standard 7-4.3.2-2003. During the installation phase, skew tests for the safety grade network and point to point tests were performed. The failure mode and effects analysis showed that all the single failure modes can be resolved by the redundant means. Most of the common mode failures can be resolved by operator manual actions. (Author)

  6. Software safety analysis practice in installation phase

    International Nuclear Information System (INIS)

    Huang, H. W.; Chen, M. H.; Shyu, S. S.

    2010-10-01

    This work performed a software safety analysis in the installation phase of the Lungmen nuclear power plant in Taiwan, under the cooperation of the Institute of Nuclear Energy Research and TPC. The US Nuclear Regulatory Commission requests licensees to perform software safety analysis and software verification and validation in each phase of the software development life cycle with Branch Technical Position 7-14. In this work, 37 safety grade digital instrumentation and control systems were analyzed by failure mode and effects analysis, which is suggested by IEEE Standard 7-4.3.2-2003. During the installation phase, skew tests for the safety grade network and point to point tests were performed. The failure mode and effects analysis showed that all the single failure modes can be resolved by the redundant means. Most of the common mode failures can be resolved by operator manual actions. (Author)

  7. Software Testing as Science

    Directory of Open Access Journals (Sweden)

    Ingrid Gallesdic

    2013-06-01

    The most widespread opinion among people who have some connection with software testing is that this activity is an art. In fact, books have been published whose titles refer to it as an art, role, or process. But because software complexity is increasing every year, this paper proposes a new approach, conceiving the test as a science. This is because the processes by which tests are applied follow the steps of the scientific method: inputs, processes, outputs. This paper examines the similarities between testing and science and the characteristics of testing as a science.

  8. Redundant arrays of IDE drives

    Energy Technology Data Exchange (ETDEWEB)

    D.A. Sanders et al.

    2002-01-02

    The authors report tests of redundant arrays of IDE disk drives for use in offline high energy physics data analysis. Parts costs of total systems using commodity EIDE disks are now at the $4000 per Terabyte level. Disk storage prices have now decreased to the point where they equal the cost per Terabyte of Storage Technology tape silos. The disks, however, offer far better granularity; even small institutions can afford to deploy systems. The tests include reports on software RAID-5 systems running under Linux 2.4 using Promise Ultra 100™ disk controllers. RAID-5 protects data in case of a single disk failure by providing parity bits. Tape backup is not required. Journaling file systems are used to allow rapid recovery from crashes. The data analysis strategy is to encapsulate data and CPU processing power. Analysis for a particular part of a data set takes place on the PC where the data resides. The network is only used to put results together. They explore three methods of moving data between sites: internet transfers, hot-pluggable IDE disks in FireWire cases, and DVD-R disks.

  9. Redundant arrays of IDE drives

    International Nuclear Information System (INIS)

    Sanders, D.A.

    2002-01-01

    The authors report tests of redundant arrays of IDE disk drives for use in offline high energy physics data analysis. Parts costs of total systems using commodity EIDE disks are now at the $4000 per Terabyte level. Disk storage prices have now decreased to the point where they equal the cost per Terabyte of Storage Technology tape silos. The disks, however, offer far better granularity; even small institutions can afford to deploy systems. The tests include reports on software RAID-5 systems running under Linux 2.4 using Promise Ultra 100™ disk controllers. RAID-5 protects data in case of a single disk failure by providing parity bits. Tape backup is not required. Journaling file systems are used to allow rapid recovery from crashes. The data analysis strategy is to encapsulate data and CPU processing power. Analysis for a particular part of a data set takes place on the PC where the data resides. The network is only used to put results together. They explore three methods of moving data between sites: internet transfers, hot-pluggable IDE disks in FireWire cases, and DVD-R disks.

  10. CHALLENGES OF SOFTWARE QUALITY ASSURANCE AND TESTING

    Directory of Open Access Journals (Sweden)

    Md.Shahadat Hossain

    2018-02-01

    Uncertainty exists in software companies all over the world. The software quality problem is a leading issue for the software industry, and it has persisted for 40 or 50 years; companies suffer, and some close, because of it. In these circumstances, it is important to address and remove its root cause; otherwise, the industry's economic losses will increase day by day. This paper identifies some vital challenges of software quality assurance and testing that software industries have been facing. The research focused on several small and medium software companies around the world. This paper presents different categories of challenges along with the responsible stakeholders. The research finds that testing tools are available, testing elements are available, and the testing process has improved, but software still faces testing challenges. The bottleneck challenges are identified and explained in this paper, where software engineers have scope to improve and overcome them. This paper suggests a systematic approach to solving the problem.

  11. Software Testing An ISEB Intermediate Certificate

    CERN Document Server

    Hambling, Brian

    2009-01-01

    Covering testing fundamentals, reviews, testing and risk, test management and test analysis, this book helps newly qualified software testers to learn the skills and techniques to take them to the next level. Written by leading authors in the field, this is the only official textbook of the ISEB Intermediate Certificate in Software Testing.

  12. Module Testing Techniques for Nuclear Safety Critical Software Using LDRA Testing Tool

    International Nuclear Information System (INIS)

    Moon, Kwon-Ki; Kim, Do-Yeon; Chang, Hoon-Seon; Chang, Young-Woo; Yun, Jae-Hee; Park, Jee-Duck; Kim, Jae-Hack

    2006-01-01

    The safety critical software in the I and C systems of nuclear power plants requires high functional integrity and reliability. To achieve those requirement goals, the safety critical software should be verified and tested according to related codes and standards through verification and validation (V and V) activities. The safety critical software testing is performed at various stages during the development of the software, and is generally classified as three major activities: module testing, system integration testing, and system validation testing. Module testing involves the evaluation of module level functions of hardware and software. System integration testing investigates the characteristics of a collection of modules and aims at establishing their correct interactions. System validation testing demonstrates that the complete system satisfies its functional requirements. In order to generate reliable software and reduce high maintenance cost, it is important that software testing is carried out at module level. Module testing for the nuclear safety critical software has rarely been performed by formal and proven testing tools because of its various constraints. LDRA testing tool is a widely used and proven tool set that provides powerful source code testing and analysis facilities for the V and V of general purpose software and safety critical software. Use of the tool set is indispensable where software is required to be reliable and as error-free as possible, and its use brings in substantial time and cost savings, and efficiency

  13. Software safety analysis application in installation phase

    International Nuclear Information System (INIS)

    Huang, H. W.; Yih, S.; Wang, L. H.; Liao, B. C.; Lin, J. M.; Kao, T. M.

    2010-01-01

    This work performed a software safety analysis (SSA) in the installation phase of the Lungmen nuclear power plant (LMNPP) in Taiwan, under the cooperation of INER and TPC. The US Nuclear Regulatory Commission (USNRC) requires licensees to perform software safety analysis (SSA) and software verification and validation (SV and V) in each phase of the software development life cycle, per Branch Technical Position (BTP) 7-14. In this work, 37 safety grade digital instrumentation and control (I and C) systems were analyzed by Failure Mode and Effects Analysis (FMEA), as suggested by IEEE Standard 7-4.3.2-2003. During the installation phase, skew tests for the safety grade network and point-to-point tests were performed. The FMEA showed that all single failure modes can be resolved by redundant means. Most of the common mode failures can be resolved by operator manual actions. (authors)

  14. A control system verifier using automated reasoning software

    International Nuclear Information System (INIS)

    Smith, D.E.; Seeman, S.E.

    1985-08-01

    An on-line, automated reasoning software system for verifying the actions of other software or human control systems has been developed. It was demonstrated by verifying the actions of an automated procedure generation system. The verifier uses an interactive theorem prover as its inference engine, with the rules included as logical axioms. Operation of the verifier is generally transparent except when the verifier disagrees with the actions of the monitored software. Testing with an automated procedure generation system demonstrates the successful application of automated reasoning software for verification of logical actions in a diverse, redundant manner. A higher degree of confidence may be placed in the verified actions of the combined system.

  15. Light Duty Utility Arm Software Test Plan

    International Nuclear Information System (INIS)

    Kiebel, G.R.

    1995-01-01

    This plan describes how validation testing of the software will be implemented for the integrated control and data acquisition system of the Light Duty Utility Arm System (LDUA). The purpose of LDUA software validation testing is to demonstrate and document that the LDUA software meets its software requirements specification.

  16. Recent trends on Software Verification and Validation Testing

    International Nuclear Information System (INIS)

    Kim, Hyungtae; Jeong, Choongheui

    2013-01-01

    Verification and Validation (V and V) include the analysis, evaluation, review, inspection, assessment, and testing of products; testing in particular is an important method for verifying and validating software. Software V and V testing covers test planning through execution. IEEE Std. 1012 is a standard on software V and V. Recently, IEEE Std. 1012-2012 was published. This standard is a major revision of IEEE Std. 1012-2004, which covered only software V and V: it expands the scope of the V and V processes to include system and hardware as well as software, and it describes the scope of V and V testing according to integrity level. In addition, the independent V and V requirements related to software V and V testing in IEEE Std. 7-4.3.2-2010 have been revised. This paper surveys recent trends in software V and V testing by reviewing IEEE Std. 1012-2012 and IEEE Std. 7-4.3.2-2010. There are no major changes to software V and V testing activities and tasks in IEEE Std. 1012-2012 compared with IEEE Std. 1012-2004, but the positions on the responsibility to perform software V and V testing have changed, and IEEE Std. 7-4.3.2-2010 newly describes positions on this responsibility. However, the positions of these two standards on V and V testing differ. For integrity levels 3 and 4, IEEE Std. 1012-2012 basically requires that the V and V organization conduct all V and V testing tasks, such as test plan, test design, test case, and test procedure, except test execution. If V and V testing is conducted by an organization other than the V and V organization, the results of that testing shall be analyzed by the V and V organization. For safety-related software, IEEE Std. 7-4.3.2-2010 requires that test procedures and reports be independently verified by an alternate organization, regardless of who writes the procedures and/or conducts the tests.

  17. A new model for the redundancy allocation problem with component mixing and mixed redundancy strategy

    International Nuclear Information System (INIS)

    Gholinezhad, Hadi; Zeinal Hamadani, Ali

    2017-01-01

    This paper develops a new model for redundancy allocation problem. In this paper, like many recent papers, the choice of the redundancy strategy is considered as a decision variable. But, in our model each subsystem can exploit both active and cold-standby strategies simultaneously. Moreover, the model allows for component mixing such that components of different types may be used in each subsystem. The problem, therefore, boils down to determining the types of components, redundancy levels, and number of active and cold-standby units of each type for each subsystem to maximize system reliability by considering such constraints as available budget, weight, and space. Since RAP belongs to the NP-hard class of optimization problems, a genetic algorithm (GA) is developed for solving the problem. Finally, the performance of the proposed algorithm is evaluated by applying it to a well-known test problem from the literature with relatively satisfactory results. - Highlights: • A new model for the redundancy allocation problem in series–parallel systems is proposed. • The redundancy strategy of each subsystem is considered as a decision variable and can be active, cold-standby or mixed. • Component mixing is allowed, in other words components of any subsystem can be non-identical. • A genetic algorithm is developed for solving the problem. • Computational experiments demonstrate that the new model leads to interesting results.
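
    As a rough illustration of how a genetic algorithm attacks such a problem (the instance, encoding, and parameters below are invented for illustration, not taken from the paper, and the sketch simplifies to pure active redundancy), a chromosome can assign each subsystem a component type and a redundancy level, with infeasible designs penalized:

        import random

        random.seed(1)
        # Per subsystem: candidate component types as (reliability, cost); shared budget.
        TYPES = [[(0.90, 2), (0.95, 3)], [(0.85, 1), (0.93, 2)], [(0.92, 2), (0.97, 4)]]
        BUDGET = 25

        def fitness(chrom):
            # chrom: one (type index, redundancy level) pair per subsystem.
            cost = sum(TYPES[i][t][1] * n for i, (t, n) in enumerate(chrom))
            if cost > BUDGET:
                return 0.0  # infeasible design
            rel = 1.0
            for i, (t, n) in enumerate(chrom):
                rel *= 1.0 - (1.0 - TYPES[i][t][0]) ** n  # active redundancy
            return rel

        def random_chrom():
            return [(random.randrange(len(TYPES[i])), random.randint(1, 4))
                    for i in range(len(TYPES))]

        def mutate(chrom):
            c = list(chrom)
            i = random.randrange(len(c))
            c[i] = (random.randrange(len(TYPES[i])), random.randint(1, 4))
            return c

        pop = [random_chrom() for _ in range(40)]
        for _ in range(200):  # evolve: keep the fitter half, refill with mutants
            pop.sort(key=fitness, reverse=True)
            pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(20)]
        best = max(pop, key=fitness)
        print(best, fitness(best))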

  18. Creating and Testing Simulation Software

    Science.gov (United States)

    Heinich, Christina M.

    2013-01-01

    The goal of this project is to learn about the software development process, specifically the process to test and fix components of the software. The paper will cover the techniques of testing code, and the benefits of using one style of testing over another. It will also discuss the overall software design and development lifecycle, and how code testing plays an integral role in it. Coding is notorious for always needing to be debugged due to coding errors or faulty program design. Writing tests either before or during program creation that cover all aspects of the code provides a relatively easy way to locate and fix errors, which will in turn decrease the necessity to fix a program after it is released for common use. The backdrop for this paper is the Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI), a project whose goal is to simulate a launch using simulated models of the ground systems and the connections between them and the control room. The simulations will be used for training and to ensure that all possible outcomes and complications are prepared for before the actual launch day. The code being tested is the Programmable Logic Controller Interface (PLCIF) code, the component responsible for transferring the information from the models to the model Programmable Logic Controllers (PLCs), basic computers that are used for very simple tasks.

  19. Reliability–redundancy allocation problem considering optimal redundancy strategy using parallel genetic algorithm

    International Nuclear Information System (INIS)

    Kim, Heungseob; Kim, Pansoo

    2017-01-01

    To maximize the reliability of a system, the traditional reliability–redundancy allocation problem (RRAP) determines the component reliability and level of redundancy for each subsystem. This paper proposes an advanced RRAP that also considers the optimal redundancy strategy, either active or cold standby. In addition, new examples are presented for it. Furthermore, the exact reliability function for a cold standby redundant subsystem with an imperfect detector/switch is suggested, and is expected to replace the previous approximating model that has been used in most related studies. A parallel genetic algorithm for solving the RRAP as a mixed-integer nonlinear programming model is presented, and its performance is compared with those of previous studies by using numerical examples on three benchmark problems. - Highlights: • Optimal strategy is proposed to solve reliability redundancy allocation problem. • The redundancy strategy uses parallel genetic algorithm. • Improved reliability function for a cold standby subsystem is suggested. • Proposed redundancy strategy enhances the system reliability.
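
    For orientation, the classical constant-failure-rate results behind this comparison (standard textbook formulas, not the paper's imperfect-switch model): with n identical components of failure rate \lambda, active (parallel) redundancy and cold standby with a perfect switch give

        R_{\text{active}}(t) = 1 - \left(1 - e^{-\lambda t}\right)^{n}
        R_{\text{cold}}(t)   = e^{-\lambda t} \sum_{k=0}^{n-1} \frac{(\lambda t)^{k}}{k!}

    Cold standby is the stronger of the two under a perfect switch because standby units do not age before switch-in; an imperfect detector/switch, as modeled in the paper, erodes this advantage and can reverse it.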

  20. Simulation software support (S3) system a software testing and debugging tool

    International Nuclear Information System (INIS)

    Burgess, D.C.; Mahjouri, F.S.

    1990-01-01

    The largest percentage of technical effort in the software development process is accounted for by debugging and testing. It is not unusual for a software development organization to spend over 50% of the total project effort on testing. In the extreme, testing of human-rated software (e.g., nuclear reactor monitoring, training simulators) can cost three to five times as much as all other software engineering steps combined. The Simulation Software Support (S3) System, developed by the Link-Miles Simulation Corporation, is ideally suited for real-time simulation applications which involve a large database with models programmed in FORTRAN. This paper focuses on the testing elements of the S3 system. System support software utilities are provided which enable the loading and execution of modules in the development environment. These elements include the Linking/Loader (LLD) for dynamically linking program modules and loading them into memory, and the interactive executive (IEXEC) for controlling the execution of the modules. Features of the Interactive Symbolic Debugger (SD) and the Real Time Executive (RTEXEC) to support unit and integrated testing are explored.

  1. 15 CFR 995.27 - Format validation software testing.

    Science.gov (United States)

    2010-01-01

    ... of NOAA ENC Products § 995.27 Format validation software testing. Tests shall be performed verifying... specification. These tests may be combined with testing of the conversion software. (15 CFR Part 995, Commerce and Foreign Trade, 2010)

  2. Application of automated reasoning software: procedure generation system verifier

    International Nuclear Information System (INIS)

    Smith, D.E.; Seeman, S.E.

    1984-09-01

    An on-line, automated reasoning software system for verifying the actions of other software or human control systems has been developed. It was demonstrated by verifying the actions of an automated procedure generation system. The verifier uses an interactive theorem prover as its inference engine, with the rules included as logic axioms. Operation of the verifier is generally transparent except when the verifier disagrees with the actions of the monitored software. Testing with an automated procedure generation system demonstrates the successful application of automated reasoning software for verification of logical actions in a diverse, redundant manner. A higher degree of confidence may be placed in the verified actions gathered by the combined system.

  3. Ethernet redundancy

    Energy Technology Data Exchange (ETDEWEB)

    Burak, K. [Invensys Process Systems, M/S C42-2B, 33 Commercial Street, Foxboro, MA 02035 (United States)

    2006-07-01

    We describe the Ethernet systems and their evolution: LAN segmentation, dual networks, network loops, network redundancy and redundant network access. Ethernet (IEEE 802.3) is an open standard with no licensing fees and its specifications are freely available. As a result, it is the most popular data link protocol in use. It is important that the network be redundant, and standard Ethernet protocols like RSTP (IEEE 802.1w) provide the fast network fault detection and recovery times that are required today. As Ethernet continues to evolve, network redundancy is and will be a mixture of technology standards, so it is very important that both end-stations and networking devices be Ethernet (IEEE 802.3) compliant. Then, when new technologies, such as the IEEE 802.1aq Shortest Path Bridging protocol, come to market, they can be easily deployed in the network without worry.

  4. Ethernet redundancy

    International Nuclear Information System (INIS)

    Burak, K.

    2006-01-01

    We describe the Ethernet systems and their evolution: LAN segmentation, dual networks, network loops, network redundancy and redundant network access. Ethernet (IEEE 802.3) is an open standard with no licensing fees and its specifications are freely available. As a result, it is the most popular data link protocol in use. It is important that the network be redundant, and standard Ethernet protocols like RSTP (IEEE 802.1w) provide the fast network fault detection and recovery times that are required today. As Ethernet continues to evolve, network redundancy is and will be a mixture of technology standards, so it is very important that both end-stations and networking devices be Ethernet (IEEE 802.3) compliant. Then, when new technologies, such as the IEEE 802.1aq Shortest Path Bridging protocol, come to market, they can be easily deployed in the network without worry.

  5. Lessons Learned in Software Testing A Context-Driven Approach

    CERN Document Server

    Kaner, Cem; Pettichord, Bret

    2008-01-01

    Decades of software testing experience condensed into the most important lessons learned. The world's leading software testing experts lend you their wisdom and years of experience to help you avoid the most common mistakes in testing software. Each lesson is an assertion related to software testing, followed by an explanation or example that shows you the how, when, and why of the testing lesson. More than just tips, tricks, and pitfalls to avoid, Lessons Learned in Software Testing speeds you through the critical testing phase of the software development project without the extensive trial and error.

  6. Simulation-based Testing of Control Software

    Energy Technology Data Exchange (ETDEWEB)

    Ozmen, Ozgur [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nutaro, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Sanyal, Jibonananda [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Olama, Mohammed M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-02-10

    It is impossible to adequately test complex software by examining its operation in a physical prototype of the system monitored. Adequate test coverage can require millions of test cases, and the cost of equipment prototypes combined with the real-time constraints of testing with them makes it infeasible to sample more than a small number of these tests. Model based testing seeks to avoid this problem by allowing for large numbers of relatively inexpensive virtual prototypes that operate in simulation time at a speed limited only by the available computing resources. In this report, we describe how a computer system emulator can be used as part of a model based testing environment; specifically, we show that a complete software stack - including operating system and application software - can be deployed within a simulated environment, and that these simulations can proceed as fast as possible. To illustrate this approach to model based testing, we describe how it is being used to test several building control systems that act to coordinate air conditioning loads for the purpose of reducing peak demand. These tests involve the use of ADEVS (A Discrete Event System Simulator) and QEMU (Quick Emulator) to host the operational software within the simulation, and a building model developed with the Modelica programming language using the Buildings Library and packaged as an FMU (Functional Mock-up Unit) that serves as the virtual test environment.
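
    The lock-step pattern the report describes can be caricatured in a few lines (everything here is a hypothetical stand-in: a toy thermal model in place of the Modelica/FMU building, a thermostat function in place of the QEMU-hosted control software):

        def plant_step(temp, cooling_on, dt):
            # Toy room model: temperature drifts toward 30 C, cooling pulls it down.
            drift = (30.0 - temp) * 0.01
            return temp + (drift - (0.5 if cooling_on else 0.0)) * dt

        def controller(temp, setpoint=22.0, band=1.0):
            # Stand-in for the control software under test.
            return temp > setpoint + band

        temp, t, dt = 28.0, 0.0, 1.0
        while t < 3600.0:            # one simulated hour, independent of wall-clock time
            temp = plant_step(temp, controller(temp), dt)
            t += dt
        assert 20.0 < temp < 24.0    # test oracle: temperature settles near the setpoint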

  7. Injecting Errors for Testing Built-In Test Software

    Science.gov (United States)

    Gender, Thomas K.; Chow, James

    2010-01-01

    Two algorithms have been conceived to enable automated, thorough testing of built-in test (BIT) software. The first algorithm applies to BIT routines that define pass/fail criteria based on values of data read from such hardware devices as memories, input ports, or registers. This algorithm simulates the effects of errors in a device under test by (1) intercepting data from the device and (2) performing AND operations between the data and the data mask specific to the device. This operation yields values not expected by the BIT routine. This algorithm entails very small, permanent instrumentation of the software under test (SUT) for performing the AND operations. The second algorithm applies to BIT programs that provide services to users' application programs via commands or callable interfaces, and requires a capability for test-driver software to read and write the memory used in execution of the SUT. This algorithm identifies all SUT code execution addresses where errors are to be injected, then temporarily replaces the code at those addresses with small test code sequences to inject latent severe errors, then determines whether, as desired, the SUT detects the errors and recovers.
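
    The first algorithm's masking idea reduces to a one-line interception (a toy sketch with hypothetical values, not the actual instrumentation):

        DEVICE_MASK = 0xFFFE              # hypothetical device-specific mask: clears bit 0

        def read_register(inject_fault=False):
            raw = 0xABCD                  # stand-in for a real hardware register read
            return raw & DEVICE_MASK if inject_fault else raw

        def bit_check():
            # BIT pass/fail criterion: the register must read back its expected value.
            return read_register(inject_fault=True) == 0xABCD

        assert bit_check() is False       # the injected error is caught by the BIT routine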

  8. Program Helps Design Tests Of Developmental Software

    Science.gov (United States)

    Hops, Jonathan

    1994-01-01

    A computer program called "A Formal Test Representation Language and Tool for Functional Test Designs" (TRL) provides an automatic software tool and formal language used to implement the category-partition method and produce specifications of test cases in the testing phase of software development. The category-partition method is useful in defining the inputs, outputs, and purpose of the test-design phase, and combines the benefits of choosing normal cases with those of cases having error-exposing properties. Traceability is maintained quite easily by creating a test design for each objective in the test plan. The effort to transform test cases into procedures is simplified by use of the automatic software tool to create cases based on the test design. The method enables rapid elimination of undesired test cases from consideration and facilitates review of test designs by peer groups. Written in C language.

  9. General software design for multisensor data fusion

    Science.gov (United States)

    Zhang, Junliang; Zhao, Yuming

    1999-03-01

    In this paper a general method of software design for multisensor data fusion is discussed in detail, which adopts object-oriented technology under the UNIX operating system. The software for multisensor data fusion is divided into six functional modules: data collection, database management, GIS, target display and alarming, data simulation, etc. Furthermore, the primary function, the components and some realization methods of each module are given. The interfaces among these functional modules are discussed. The data exchange among the functional modules is performed by interprocess communication (IPC), including message queues, semaphores and shared memory. Thus, each functional module is executed independently, which reduces the dependence among functional modules and helps software programming and testing. This software for multisensor data fusion is designed as a hierarchical structure using the inheritance character of classes. Each functional module is abstracted and encapsulated through a class structure, which avoids software redundancy and enhances readability.
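
    The module-per-process pattern with IPC can be sketched as follows (using Python's multiprocessing queue as a portable stand-in for the message queues, semaphores and shared memory named in the paper):

        from multiprocessing import Process, Queue

        def collector(q):
            # Data-collection module: publishes sensor readings independently.
            for reading in [("radar", 1.2), ("ir", 3.4)]:
                q.put(reading)
            q.put(None)  # sentinel: end of data

        def fusion(q):
            # Fusion/display module: consumes readings as they arrive.
            while (item := q.get()) is not None:
                print("fusing", item)

        if __name__ == "__main__":
            q = Queue()
            procs = [Process(target=collector, args=(q,)), Process(target=fusion, args=(q,))]
            for p in procs:
                p.start()
            for p in procs:
                p.join()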

  10. Radiation-Tolerance Assessment of a Redundant Wireless Device

    Science.gov (United States)

    Huang, Q.; Jiang, J.

    2018-01-01

    This paper presents a method to evaluate radiation tolerance, without physical tests, for a commercial off-the-shelf (COTS)-based monitoring device for high level radiation fields, such as those found in post-accident conditions in a nuclear power plant (NPP). The paper specifically describes the analysis of the radiation environment in a severe accident, radiation damage in electronics, and the redundant solution used to prolong the life of the system, as well as the evaluation method for radiation protection and the analysis method for system reliability. As a case study, a wireless monitoring device with redundant and diversified channels is evaluated using the developed method. The study results and system assessment data show that, under the given radiation conditions, the redundant device is more reliable and more robust than non-redundant devices. The developed redundant wireless monitoring device can therefore be applied in such conditions (up to 10 Mrad (Si)) during a severe accident in an NPP.

  11. Development of Safety Grade PLC (POSAFE-Q) and Performance Test Results

    International Nuclear Information System (INIS)

    Kim, Chang Hwoi; Park, Won Man; Choi, Jong Gyun; Lee, Dong Young; No, Young Hun; Song, Seung Hwan

    2006-01-01

    The safety grade PLC (POSAFE-Q) is being developed in the Korea Nuclear Instrumentation and Control System (KNICS) R and D project. The PLC satisfies Safety Class 1E, Quality Class 1, and Seismic Category I. The software, such as the RTOS and firmware, is being developed according to the safety critical software life cycle. In particular, formal methods are applied to the design of the SRS (Software Requirements Specification) and the SDS (Software Design Specification) to make them error-free. The POSAFE-Q has several modules, such as the processor module, input and output modules, communication modules, redundant processor module, redundant power modules, etc. To verify its function and performance, several tests such as CT, IT and ST were performed. In addition, equipment qualification tests for environment, EMI and EMC, and seismic conditions were performed. All tests satisfied the requirements and specifications for a safety grade PLC and the criteria for safety systems in nuclear power plants.

  12. Development of Safety Grade PLC (POSAFE-Q) and Performance Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Chang Hwoi; Park, Won Man; Choi, Jong Gyun; Lee, Dong Young [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of); No, Young Hun; Song, Seung Hwan [POSCON, Seoul (Korea, Republic of)

    2006-07-01

    The safety grade PLC (POSAFE-Q) is being developed in the Korea Nuclear Instrumentation and Control System (KNICS) R and D project. The PLC satisfies Safety Class 1E, Quality Class 1, and Seismic Category I. The software, such as the RTOS and firmware, is being developed according to the safety critical software life cycle. In particular, formal methods are applied to the design of the SRS (Software Requirements Specification) and the SDS (Software Design Specification) to make them error-free. The POSAFE-Q has several modules, such as the processor module, input and output modules, communication modules, redundant processor module, redundant power modules, etc. To verify its function and performance, several tests such as CT, IT and ST were performed. In addition, equipment qualification tests for environment, EMI and EMC, and seismic conditions were performed. All tests satisfied the requirements and specifications for a safety grade PLC and the criteria for safety systems in nuclear power plants.

  13. Palpebral redundancy from hypothyroidism.

    Science.gov (United States)

    Wortsman, J; Wavak, P

    1980-01-01

    A patient is described with disabling palpebral edema. Primary hypothyroidism had been previously diagnosed and treated. Testing of thyroid function revealed persistence of the hypothyroidism. Treatment with L-thyroxine produced normalization of the biochemical parameters and resolution of the palpebral edema. The search for hypothyroidism in patients with palpebral redundancy is emphasized.

  14. The proposal of a novel software testing framework

    OpenAIRE

    Ahmad, Munib; Bajaber, Fuad; Qureshi, M. Rizwan Jameel

    2014-01-01

    Software testing is normally used to check the validity of a program. The test oracle performs an important role in software testing. The focus in this research is on performing class-level testing by introducing a testing framework. A technique is developed to generate test oracles for specification-based software testing using the Vienna Development Method (VDM++) formal language. A three-stage translation process, from VDM++ specifications of container classes to C++ test oracle classes, is described in this paper.

  15. The NOvA software testing framework

    International Nuclear Information System (INIS)

    Tamsett, M; Group, C

    2015-01-01

    The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study νe appearance in a νμ beam. NOvA has already produced more than one million Monte Carlo and detector generated files amounting to more than 1 PB in size. This data is divided between a number of parallel streams such as far and near detector beam spills, cosmic ray backgrounds, a number of data-driven triggers and over 20 different Monte Carlo configurations. Each of these data streams must be processed through the appropriate steps of the rapidly evolving, multi-tiered, interdependent NOvA software framework. In total there are more than 12 individual software tiers, each of which performs a different function and can be configured differently depending on the input stream. In order to regularly test and validate that all of these software stages are working correctly, NOvA has designed a powerful, modular testing framework that enables detailed validation and benchmarking to be performed in a fast, efficient and accessible way with minimal expert knowledge. The core of this system is a novel series of Python modules which wrap, monitor and handle the underlying C++ software framework and then report the results to a slick front-end web-based interface. This interface utilises modern, cross-platform visualisation libraries to render the test results in a meaningful way. They are fast and flexible, allowing for the easy addition of new tests and datasets. In total, upwards of 14 individual streams are regularly tested, amounting to over 70 individual software processes and producing over 25 GB of output files. The rigour enforced through this flexible testing framework enables NOvA to rapidly verify configurations, results and software and thus ensure that data is available for physics analysis in a timely and robust manner. (paper)
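
    The wrap-monitor-report pattern at the core of such a framework can be sketched in a few lines of Python (hypothetical names; the real NOvA modules drive the experiment's C++ tiers and feed a web dashboard):

        import subprocess, time

        def run_tier(name, cmd):
            # Wrap one software tier: run it, time it, capture output for reporting.
            start = time.time()
            result = subprocess.run(cmd, capture_output=True, text=True)
            return {"tier": name, "ok": result.returncode == 0,
                    "seconds": round(time.time() - start, 2),
                    "log_tail": result.stdout[-200:]}

        report = [run_tier("demo-tier", ["echo", "simulated tier output"])]
        print(report)  # in the real framework this would populate the web front-end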

  16. Software Testing An ISTQB-ISEB Foundation Guide

    CERN Document Server

    Hambling, Brian; Morgan, Peter; Samaroo, Angelina; Thompson, Geoff; Williams, Peter

    2010-01-01

    This practical guide provides insight into software testing, explaining the basics of the testing process and how to perform effective tests. It provides an overview of different techniques and how to apply them. It is the bestselling official textbook of the ISTQB - ISEB Foundation Certificate in Software Testing, updated to the 2010 syllabus.

  17. Hardware for Accelerating N-Modular Redundant Systems for High-Reliability Computing

    Science.gov (United States)

    Dobbs, Carl, Sr.

    2012-01-01

    A hardware unit has been designed that reduces the cost, in terms of performance and power consumption, of implementing N-modular redundancy (NMR) in a multiprocessor device. The innovation monitors transactions to memory and calculates a form of sumcheck on-the-fly, thereby relieving the processors of calculating the sumcheck in software.
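
    For contrast, the software burden such hardware removes is the classic N-modular voting/checking step, sketched here in Python (a simple majority voter, not the sumcheck logic of the design itself):

        from collections import Counter

        def vote(*replica_outputs):
            # NMR: accept the value produced by a majority of the replicas.
            value, count = Counter(replica_outputs).most_common(1)[0]
            if count <= len(replica_outputs) // 2:
                raise RuntimeError("no majority - uncorrectable disagreement")
            return value

        assert vote(42, 42, 7) == 42  # one faulty replica is out-voted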

  18. Methods and characteristics of assembly language software testing

    International Nuclear Information System (INIS)

    Wang Lingfang

    2001-01-01

    Single-chip microcontrollers are widely used in control and testing products in industrial control and national defence embedded control systems. Faults in the source programs can make the whole system unreliable and even cause fatal results. Therefore, software testing is a necessary measure to reduce mistakes and to improve the quality of the software. The paper presents the development of software testing, introduces the distinctions between assembly language testing and high level language testing, and discusses the essential flow and methods of software testing in detail.

  19. Flexible Procurement of Services with Uncertain Durations using Redundancy

    OpenAIRE

    Stein, S; Gerding, E; Rogers, A; Larson, K; Jennings, NR

    2009-01-01

    Emerging service-oriented technologies allow software agents to automatically procure distributed services to complete complex tasks. However, in many application scenarios, service providers demand financial remuneration, execution times are uncertain and consumers have deadlines for their tasks. In this paper, we address these issues by developing a novel approach that dynamically procures multiple, redundant services over time, in order to ensure success by the deadline. Specifically, we f...

  20. Finite test sets development method for test execution of safety critical software

    International Nuclear Information System (INIS)

    Shin, Sung Min; Kim, Hee Eun; Kang, Hyun Gook; Lee, Sung Jiun

    2014-01-01

    The V and V method has been utilized for safety critical software, while SRGM has difficulties because of the lack of failure occurrence data in the development phase. For safety critical software, however, failure data cannot be gathered after installation in a real plant when we consider the severe consequences. Therefore, to complement the V and V method, a test-based method needs to be developed. Some studies on test-based reliability quantification methods for safety critical software have been conducted in the nuclear field. These studies provide useful guidance on generating test sets. An important concept of the guidance is that the test sets represent 'trajectories' (a series of successive values for the input variables of a program that occur during the operation of the software over time) in the space of inputs to the software. Actually, the inputs to the software depend on the state of the plant at that time, and these inputs form a new internal state of the software by changing the values of some variables. In other words, the internal state of the software at a specific time depends on the history of past inputs. Here the internal state of the software which can be changed by past inputs is named the Context of Software (CoS). In a certain CoS, a software failure occurs when a fault is triggered by some inputs. To cover the failure occurrence mechanism of a software, preceding research insists that the inputs should be in trajectory form. However, in this approach, there are two critical problems. One is the length of the trajectory input. The input trajectory should be long enough to cover the failure mechanism, but the required length is not clear. What is worse, to cover some accident scenarios, one set of inputs should represent dozens of hours of successive values. The other problem is the number of tests needed. To satisfy a target reliability with a reasonable confidence level, a very large number of test sets are required. Development of this number of test sets is a herculean task.

  1. Writing executable assertions to test flight software

    Science.gov (United States)

    Mahmood, A.; Andrews, D. M.; Mccluskey, E. J.

    1984-01-01

    An executable assertion is a logical statement about the variables or a block of code. If there is no error during execution, the assertion statement results in a true value. Executable assertions can be used for dynamic testing of software: they can be employed for validation during the design phase, and for exception handling and error detection during the operation phase. The present investigation is concerned with the problem of writing executable assertions, taking into account the use of assertions for testing flight software. The digital flight control system and the flight control software are discussed. The considered system provides autopilot and flight director modes of operation for automatic and manual control of the aircraft during all phases of flight. Attention is given to techniques for writing and using assertions to test flight software, an experimental setup to test flight software, and language features to support efficient use of assertions.
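
    A minimal illustration of an executable assertion (hypothetical flight-style variables; real flight code would use the target language's own assertion facilities):

        def update_altitude(alt_ft, climb_rate_fps, dt_s):
            new_alt = alt_ft + climb_rate_fps * dt_s
            # Executable assertions: statements about the variables that must be
            # true after this block; a false value signals an error at run time.
            assert new_alt >= 0.0, "altitude below ground"
            assert abs(new_alt - alt_ft) <= 500.0 * dt_s, "implausible altitude jump"
            return new_alt

        update_altitude(10000.0, 30.0, 1.0)  # passes silently when no error occurs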

  2. Statistics of software vulnerability detection in certification testing

    Science.gov (United States)

    Barabanov, A. V.; Markov, A. S.; Tsirlov, V. L.

    2018-05-01

    The paper discusses practical aspects of introducing methods to detect software vulnerabilities in the day-to-day activities of an accredited testing laboratory. It presents the results of trialling the vulnerability detection methods as part of the study of open source software and of software that is a test object of certification tests under information security requirements, including software for communication networks. Results of the study are given, showing the distribution of identified vulnerabilities by type of attack, country of origin, programming language used in development, method of detecting the vulnerability, etc. The experience of foreign information security certification systems related to the detection of vulnerabilities in certified software is analyzed. The main conclusion of the study is the need to implement secure software development practices in the development life cycle processes. Conclusions and recommendations for testing laboratories on the implementation of vulnerability analysis methods are laid down.

  3. On Planning of FTTH Access Networks with and without Redundancy

    DEFF Research Database (Denmark)

    Riaz, M. Tahir; Haraldsson, Gustav; Gutierrez Lopez, Jose Manuel

    2010-01-01

    This paper presents a planning analysis of FTTH access networks with and without redundancy. Traditionally, access networks are planned without redundancy, mainly to lower the cost of deployment. As fiber optics provide a huge amount of capacity, more and more services are being offered on a single fiber connection. Since a single point of failure in the fiber connection can cause the deprivation of multiple services, redundancy is crucial. In this work, an automated planning model was used to test different implementation scenarios. A cost estimation is presented in terms of digging and the amount of fiber used. Three topologies, including the traditional "tree topology", were tested in combination with various passive optical technologies.

  4. The impact of the operating environment on the design of redundant configurations

    International Nuclear Information System (INIS)

    Marseguerra, M.; Padovani, E.; Zio, E.

    1999-01-01

    Safety systems are often characterized by substantial redundancy and diversification in safety critical components. In principle, such redundancy and diversification can bring benefits when compared to single-component systems. However, it has also been recognized that the evaluation of these benefits should take into account that redundancies cannot be founded, in practice, on the assumption of complete independence, so that the resulting risk profile is strongly dominated by dependent failures. It is therefore mandatory that the effects of common cause failures be estimated in any probabilistic safety assessment (PSA). Recently, in the Hughes model for hardware failures and in the Eckhardt and Lee models for software failures, it was proposed that the stressfulness of the operating environment affects the probability that a particular type of component will fail. Thus, dependence of component failure behaviors can arise indirectly through the variability of the environment, which can directly affect the success of a redundant configuration. In this paper we investigate the impact of indirect component dependence by introducing a probability distribution which describes the variability of the environment. We show that the variance of the distribution of the number, or times, of system failures can give an indication of the presence of the environment. Further, the impact of the environment is shown to affect the reliability and the design of redundant configurations.
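
    The indirect-dependence mechanism is easy to reproduce numerically (illustrative numbers only): each trial draws one shared environment stress that raises the failure probability of both channels of a 1-out-of-2 system, so joint failures occur far more often than the independence assumption predicts:

        import random

        random.seed(0)
        TRIALS = 100_000

        def failure_prob(stress):
            # Component failure probability grows with the shared stress level.
            return 0.01 + 0.4 * stress

        joint = 0
        for _ in range(TRIALS):
            stress = random.random() ** 3   # occasional highly stressful environments
            p = failure_prob(stress)
            a = random.random() < p         # both redundant channels experience the
            b = random.random() < p         # SAME environment, hence correlated failures
            joint += a and b

        p_mean = 0.01 + 0.4 * 0.25          # E[stress] = E[U^3] = 1/4
        print("observed 1oo2 failure prob :", joint / TRIALS)   # ~0.025
        print("independence prediction    :", p_mean ** 2)      # ~0.012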

  5. Finite test sets development method for test execution of safety critical software

    International Nuclear Information System (INIS)

    El-Bordany Ayman; Yun, Won Young

    2014-01-01

    The system reads inputs, computes new states, and updates outputs in each scan cycle. The Korea Nuclear Instrumentation and Control System (KNICS) project has recently developed a fully digitalized Reactor Protection System (RPS) based on PLD. As a digital system, this RPS is equipped with dedicated software. The reliability of this software is crucial to NPP safety, since its malfunction may cause irreversible consequences and affect the whole system as a Common Cause Failure (CCF). To guarantee the reliability of the whole system, the reliability of this software needs to be quantified. There are three representative methods for software reliability quantification, namely the Verification and Validation (V and V) quality-based method, the Software Reliability Growth Model (SRGM), and the test-based method. An important concept of the guidance is that the test sets represent 'trajectories' (a series of successive values for the input variables of a program that occur during the operation of the software over time) in the space of inputs to the software. Actually, the inputs to the software depend on the state of the plant at that time, and these inputs form a new internal state of the software by changing the values of some variables. In other words, the internal state of the software at a specific time depends on the history of past inputs. Here the internal state of the software which can be changed by past inputs is named the Context of Software (CoS). In a certain CoS, a software failure occurs when a fault is triggered by some inputs. To cover the failure occurrence mechanism of a software, preceding research insists that the inputs should be in trajectory form. However, in this approach, there are two critical problems. One is the length of the trajectory input. The input trajectory should be long enough to cover the failure mechanism, but the required length is not clear. What is worse, to cover some accident scenarios, one set of inputs should represent dozens of hours of successive values.

  6. Prediction of software operational reliability using testing environment factor

    International Nuclear Information System (INIS)

    Jung, Hoan Sung

    1995-02-01

    Software reliability is especially important to customers these days. The need to quantify the software reliability of safety-critical systems has received very special attention, and reliability is rated as one of software's most important attributes. Since software is an intellectual product of human activity, and since it is logically complex, failures are inevitable. No standard models have been established to prove the correctness and to estimate the reliability of software systems by analysis and/or testing. For many years, research has focused on the quantification of software reliability, and many models have been developed to quantify it. Most software reliability models estimate reliability from the failure data collected during testing, assuming that the test environment well represents the operational profile. The user's interest, however, is in operational reliability rather than test reliability. Experience shows that operational reliability is higher than test reliability. With the assumption that the difference in reliability results from the change of environment, a testing environment factor comprising an aging factor and a coverage factor is defined in this work to predict the ultimate operational reliability from the failure data. This is done by incorporating test environments applied beyond the operational profile into the testing environment factor; test reliability can also be estimated with this approach without any model change. The application results are close to the actual data. The approach used in this thesis is expected to be applicable to ultra high reliability software systems that are used in nuclear power plants, airplanes, and other safety-critical applications.

  7. Automated ECU software tests on hardware-in-the-loop test benches; Automatisierte ECU-Software-Tests an Hardware-in-the-Loop-Pruefstaenden

    Energy Technology Data Exchange (ETDEWEB)

    Voegl, R. [AVL List GmbH, Graz (Austria). Abteilung Kalibrierung Ottomotoren; Duerager, Ch. [AVL List GmbH, Graz (Austria). Abteilung Kalibriermethodik; Beer, W.; Martini, E. [AVL List GmbH, Graz (Austria)

    2005-08-01

    Due to the continuous increase in the complexity of engine applications, AVL List decided to develop methods for automated ECU software commissioning on HiL test benches. This required intensive cooperation among the departments for calibration methodology, calibration and electrical/electronic engineering. The result is a practice-oriented collection of methods that significantly increases test coverage for a new software version without extending the commissioning time. (orig.)

  8. Introduction to Lean Canvas Transformation Models and Metrics in Software Testing

    Directory of Open Access Journals (Sweden)

    Nidagundi Padmaraj

    2016-05-01

    Full Text Available Software plays a key role nowadays in all fields, from simple to cutting-edge technologies, and most technological devices now run on software. Software development verification and validation have become very important for producing high-quality software that meets business stakeholders' requirements. Different software development methodologies have given a new dimension to software testing. In traditional waterfall software development, testing comes toward the end and begins with resource planning: a test plan is designed and test criteria are defined for acceptance testing. In this process most of the test plan is well documented, which leads to time-consuming processes. For modern software development methodologies such as agile, where long test processes and heavy documentation are not followed strictly due to the short iterations of software development and testing, lean canvas transformation models can be a solution. This paper provides a new dimension for exploring the possibilities of adopting lean transformation models and metrics in the software test plan, to simplify the test process for further use of these test metrics on the canvas.

  9. Software diversity: way to enhance safety?

    International Nuclear Information System (INIS)

    Dahll, G.; Bishop, P.

    1990-01-01

    The topic of the paper is the use of diversely produced programs to enhance the safety of computer-based systems applied in safety-critical areas. The paper starts with a survey of scientific investigations on the impact of software redundancy made at various institutions around the world. Main emphasis will, however, be put on the PODS/STEM projects, which have been performed at the OECD Halden Project in cooperation with the Technical Research Center of Finland, the Safety and Reliability Directorate, AEA Technology, UK, and the Central Electricity Research Laboratory (now National Power Technology and Environment Centre), UK. In these projects, three program versions were made independently by three different teams, all based on the same specification. The three programs were tested back-to-back with a large amount of test data. The experience and results from this process were carefully logged and used for further analysis. Various strategies for test data selection were compared with respect to fault finding strategies, as well as to the branch and statement coverage of the tested programs. The assumption of independence of failures in diversely produced programs was investigated. A particularly interesting effect, namely failure masking due to program structure, was revealed. Static analysis techniques, software measures, and software reliability estimates were also studied. (author)
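
    Back-to-back testing of diverse versions, as used in PODS/STEM, amounts to running independently written implementations on the same inputs and flagging any disagreement (a sketch with two trivial stand-in versions, one deliberately seeded with a fault):

        import random

        def version_a(x):
            return x * x                          # team A's implementation of the spec

        def version_b(x):
            return x ** 2 if x >= 0 else -x * x   # team B's - faulty for x < 0

        random.seed(3)
        for _ in range(1000):                     # back-to-back run over random test data
            x = random.uniform(-100.0, 100.0)
            if version_a(x) != version_b(x):
                print("divergence found at x =", x)
                break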

  10. Software test attacks to break mobile and embedded devices

    CERN Document Server

    Hagar, Jon Duncan

    2013-01-01

    Address Errors before Users Find Them Using a mix-and-match approach, Software Test Attacks to Break Mobile and Embedded Devices presents an attack basis for testing mobile and embedded systems. Designed for testers working in the ever-expanding world of ""smart"" devices driven by software, the book focuses on attack-based testing that can be used by individuals and teams. The numerous test attacks show you when a software product does not work (i.e., has bugs) and provide you with information about the software product under test. The book guides you step by step starting with the basics. It

  11. Redundancies in Data and their Effect on the Evaluation of Recommendation Systems

    DEFF Research Database (Denmark)

    Basaran, Daniel; Ntoutsi, Eirini; Zimek, Arthur

    2017-01-01

    A collection of datasets crawled from Amazon, "Amazon reviews", is popular in the evaluation of recommendation systems. These datasets, however, contain redundancies (duplicated recommendations for variants of certain items). These redundancies went unnoticed in earlier use of the datasets and thus, to a certain extent, led to wrong conclusions in the evaluation of algorithms tested on them. We analyze the nature and amount of these redundancies and their impact on the evaluation of recommendation methods. While the general and obvious conclusion is that redundancies should be removed...

  12. Flight test of a resident backup software system

    Science.gov (United States)

    Deets, Dwain A.; Lock, Wilton P.; Megna, Vincent A.

    1987-01-01

    A new fault-tolerant system software concept employing the primary digital computers as host for the backup software portion has been implemented and flight tested in the F-8 digital fly-by-wire airplane. The system was implemented in such a way that essentially no transients occurred in transferring from primary to backup software. This was accomplished without a significant increase in the complexity of the backup software. The primary digital system was frame synchronized, which provided several advantages in implementing the resident backup software system. Since the time of the flight tests, two other flight vehicle programs have made a commitment to incorporate resident backup software similar in nature to the system described here.

  13. Software testing for evolutionary iterative rapid prototyping

    OpenAIRE

    Davis, Edward V., Jr.

    1990-01-01

    Approved for public release; distribution unlimited. Rapid prototyping is emerging as a promising software development paradigm. It provides a systematic and automatable means of developing a software system under circumstances where initial requirements are not well known or where requirements change frequently during development. Providing high software quality assurance requires sufficient software testing. The unique nature of evolutionary iterative prototyping is not well-suited for ...

  14. Have the Software Testing a Future?

    Directory of Open Access Journals (Sweden)

    Juan A. Godoy

    2012-06-01

    Full Text Available Software testing is headed toward a dark future, with greater political isolation from management, less funding, and poorer overall quality. The hopes raised by software quality theory and the new testing technologies of the 1990s have been usurped by development "tastes" focused on ideas such as "Agile", "Object Oriented", "Cloud" and $0.99 "Mobile" applications. The new languages and development methods are designed to allow developers to "throw" code faster, not to improve versions, maintenance, testing, traceability or auditing. The costs of maintenance and development will increase, budgets for testing will fall, and more projects will fail. The future of testing is in shadow. This article analyzes this situation.

  15. Integration testing through reusing representative unit test cases for high-confidence medical software.

    Science.gov (United States)

    Shin, Youngsul; Choi, Yunja; Lee, Woo Jin

    2013-06-01

    As medical software gets larger, more complex, and more connected with other devices, finding faults in integrated software modules gets more difficult and time consuming. Existing integration testing typically takes a black-box approach, which treats the target software as a black box and selects test cases without considering the internal behavior of each software module. Though it can be cost-effective, this black-box approach cannot thoroughly test interaction behavior among integrated modules and might leave critical faults undetected, which should not happen in safety-critical systems such as medical software. This work anticipates that information on internal behavior is necessary even for integration testing to define thorough test cases for critical software, and proposes a new integration testing method that reuses test cases used for unit testing. The goal is to provide a cost-effective method to detect subtle interaction faults at the integration testing phase by reusing the knowledge obtained from the unit testing phase. The suggested approach notes that the test cases for unit testing include knowledge on the internal behavior of each unit, and extracts test cases for integration testing from the unit test cases for a given test criterion. The extracted representative test cases are connected with functions under test using the state domain, and a single test sequence to cover the test cases is produced. By means of reusing unit test cases, the tester has effective test cases to examine diverse execution paths and find interaction faults without analyzing complex modules. The produced test sequence can have test coverage as high as the unit testing coverage, and its length is close to the length of optimal test sequences. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Mars Science Laboratory Flight Software Internal Testing

    Science.gov (United States)

    Jones, Justin D.; Lam, Danny

    2011-01-01

    The Mars Science Laboratory (MSL) mission is sending the rover Curiosity to Mars and is therefore physically and technically complex. During my stay, I have assisted the MSL Flight Software (FSW) team in implementing functional test scripts to ensure that the FSW performs to the best of its abilities. There are a large number of FSW requirements that have been written up for implementation; however, I have only been assigned a few sections of these requirements. There are many stages within testing; one of the early stages is FSW Internal Testing (FIT). The FIT team can accomplish this with simulation software and the MSL Test Automation Kit (MTAK). MTAK has the ability to integrate with the Software Simulation Equipment (SSE) and the Mission Processing and Control System (MPCS) software, which makes it a powerful tool within the MSL FSW development process. The MSL team must ensure that the rover accomplishes all stages of the mission successfully. Due to the natural complexity of this project there is a strong emphasis on testing, as failure is not an option. The entire mission could be jeopardized if something were overlooked.

  17. A study on design and testing of software module of safety software

    International Nuclear Information System (INIS)

    Sohn, Se Do; Seong, Poong Hyun

    2000-01-01

    The design criteria of the software module were based on the complexity and the cohesion of the module. The ease of detecting a fault in the software module can be an additional candidate for the module design criteria. The module test coverage criteria and test case generation are reviewed from the aspects of module testability and ease of fault detection. One of the methods is producing numerical results as output in addition to the logical outputs. With modules designed for high testability, test case generation and test coverage can be made more effective.
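
    The numerical-output idea can be made concrete (a hypothetical trip-logic example, not the paper's code): exposing the computed margin alongside the boolean verdict lets a test detect faults that a coincidentally correct true/false answer would mask:

        def overpressure_trip(pressure_kpa, setpoint_kpa=15500.0):
            # Return the logical decision AND the numerical margin behind it,
            # so tests can check the arithmetic even when the boolean agrees.
            margin = pressure_kpa - setpoint_kpa
            return margin > 0.0, margin

        tripped, margin = overpressure_trip(15480.0)
        assert not tripped and abs(margin + 20.0) < 1e-9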

  18. Automated Software Testing : A Study of the State of Practice

    OpenAIRE

    Rafi, Dudekula Mohammad; Reddy, Kiran Moses Katam

    2012-01-01

    Context: Software testing is expensive, labor intensive and consumes a lot of time in the software development life cycle. There has always been a need in software testing to decrease the testing time. This has also led to a focus on Automated Software Testing (AST), because using automated testing, with specific tools, this effort can be dramatically reduced and the costs related with testing can decrease [11]. Manual Testing (MT) requires a lot of effort and hard work, if we measure in terms of person ...

  19. Prediction of software operational reliability using testing environment factors

    International Nuclear Information System (INIS)

    Jung, Hoan Sung; Seong, Poong Hyun

    1995-01-01

    A number of software reliability models have been developed to estimate and to predict software reliability. However, there are no established standard models to quantify software reliability. Most models estimate the quality of software in reliability figures such as remaining faults, failure rate, or mean time to next failure at the testing phase, and they consider them ultimate indicators of software reliability. Experience shows that there is a large gap between predicted reliability during development and reliability measured during operation, which means that predicted reliability, or so-called test reliability, is not operational reliability. Customers prefer operational reliability to test reliability. In this study, we propose a method that predicts operational reliability rather than test reliability by introducing the testing environment factor that quantifies the changes in environments.

  20. A Framework of the Use of Information in Software Testing

    Science.gov (United States)

    Kaveh, Payman

    2010-01-01

    With the increasing role that software systems play in our daily lives, software quality has become extremely important. Software quality is impacted by the efficiency of the software testing process. There are a growing number of software testing methodologies, models, and initiatives to satisfy the need to improve software quality. The main…

  1. Introduction to adoption of lean canvas in software test architecture design

    Directory of Open Access Journals (Sweden)

    Padmaraj Nidagundi

    2017-01-01

    Full Text Available The growth of software-dependent businesses, as well as the use of electronic devices in daily life, brings new challenges: the software is required to work error-free all the time, and to achieve this goal it needs to be sufficiently and effectively tested during the various development phases. Most software development companies make great efforts in testing; even so, it is difficult to reach the error-free software goal. Different software development methodologies (e.g. traditional waterfall, agile) brought in a new dimension for both development and testing, introducing new technologies and tools. In software test automation, the test architecture design plays a key role in managing written test cases and effectively executing them. Having a more effective software test automation architecture design in the test process saves resources and effort and reduces technical debt. This paper describes the new dimension and possibilities of using lean canvas in the design of the software test architecture.

  2. Acceptance test report MICON software exhaust fan control

    International Nuclear Information System (INIS)

    Keck, R.D.

    1998-01-01

    This test procedure specifies instructions for acceptance testing of software for exhaust fan control under Project ESPT (Energy Savings Performance Contract). The software controls the operation of two emergency exhaust fans when there is a power failure. This report details the results of acceptance testing for the MICON software upgrades. One of the modifications is that only one of the emergency fans operates at any given time: if the operating fan shuts off or fails, the other fan will start and the failed fan will be stopped.
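
    The failover rule under test reduces to a few lines of logic (a hypothetical sketch of the behaviour described, not the MICON code itself):

        def control_step(running, failed):
            # running: index of the fan currently commanded on (0 or 1)
            # failed: set of fan indices currently reporting a fault
            if running in failed:
                standby = 1 - running
                if standby not in failed:
                    return standby   # start the standby fan, stop the failed one
            return running

        assert control_step(0, set()) == 0   # healthy fan keeps running alone
        assert control_step(0, {0}) == 1     # failure triggers switch-over to fan 1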

  3. Testing methodology of embedded software in digital plant protection system

    International Nuclear Information System (INIS)

    Seong, Ah Young; Choi, Bong Joo; Lee, Na Young; Hwang, Il Soon

    2001-01-01

    It is necessary to assure the reliability of software in order to digitalize the RPS (Reactor Protection System). Since an RPS malfunction can cause fatal damage in accident cases, it is classified as safety class 1E. Therefore we propose an effective testing methodology to assure the reliability of embedded software in the DPPS (Digital Plant Protection System). To test the embedded software in the DPPS effectively, our methodology consists of two steps. The first is the re-engineering step, which extracts classes from the structural source program; the second is the level-of-testing step, which is composed of unit testing, integration testing and system testing. At each testing step we test the embedded software with selected test cases after the test item identification step. Using this testing methodology, we can test the embedded software effectively, reducing cost and time

  4. Developing a TTCN-3 Test Harness for Legacy Software

    DEFF Research Database (Denmark)

    Okika, Joseph C.; Ravn, Anders Peter; Siddalingaiah, Lokesh

    2006-01-01

    We describe a prototype test harness for an embedded system which is the control software for a modern marine diesel engine. The operations of such control software require complete certification. We adopt the Testing and Test Control Notation (TTCN-3) to define test cases for this purpose. The main challenge in developing the test harness is to interface a generic test driver to the legacy software and provide a suitable interface for test engineers. The main contribution of this paper is a demonstration of a suitable design for such a test harness. It includes: a TTCN-3 test driver in C++, the legacy...

  5. How Redundant Are Redundant Color Adjectives? An Efficiency-Based Analysis of Color Overspecification

    OpenAIRE

    Rubio-Fernández, Paula

    2016-01-01

    Color adjectives tend to be used redundantly in referential communication. I propose that redundant color adjectives are often intended to exploit a color contrast in the visual context and hence facilitate object identification, despite not being necessary to establish unique reference. Two language-production experiments investigated two types of factors that may affect the use of redundant color adjectives: factors related to the efficiency of color in the visual context and factors relate...

  6. Technique for unit testing of safety software verification and validation

    International Nuclear Information System (INIS)

    Li Duo; Zhang Liangju; Feng Junting

    2008-01-01

    The key issue arising from digitalization of the reactor protection system for a nuclear power plant is how to carry out verification and validation (V and V) to demonstrate and confirm that the software performing reactor safety functions is safe and reliable. One of the most important processes for software V and V is unit testing, which verifies and validates the software coding, based on the concept design, for consistency, correctness and completeness during software development. The paper presents a preliminary study on the technique for unit testing in safety software V and V, focusing on such aspects as how to confirm test completeness, how to establish a test platform, how to develop test cases and how to carry out unit testing. The technique discussed here was successfully used in the unit testing of the safety software of a digital reactor protection system. (authors)

  7. Development of a programming model for radiation-resistant software

    International Nuclear Information System (INIS)

    Eichhorn, G.; Piercey, R.B.

    1984-01-01

    The adverse effects of ionizing radiation on microelectronic systems include cumulative dosage effects, single-event upsets (SEU's) and latch-up. Most frequent, especially when the radiation environment includes heavy ions, are SEU's. Unfortunately SEU's are difficult to detect since they can be read (in RAM or ROM) as valid addresses. They can however be handled in software by proper techniques. The authors refer to their method as MRS - Maximally Redundant Software. The MRS programming model which the authors are developing uses multiply redundant boot blocks, majority voting, periodic refresh, and error recovery techniques to minimize the deleterious effects of SEU's. 1 figure
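
    The majority-voting core of such a scheme can be sketched as follows, assuming critical data are held in three redundant copies; the names and the two-of-three policy shown here are illustrative, not taken from the paper:

        # Minimal sketch of majority voting with periodic refresh (scrubbing)
        # over redundant copies of a critical value. Names are illustrative.

        from collections import Counter

        def voted_read(copies):
            """Return the majority value among redundant copies (2-of-3 vote)."""
            value, count = Counter(copies).most_common(1)[0]
            if count < 2:
                raise RuntimeError("no majority: unrecoverable corruption")
            return value

        def scrub(copies):
            """Periodic refresh: rewrite all copies with the voted value,
            clearing a single-event upset before a second upset accumulates."""
            good = voted_read(copies)
            return [good] * len(copies)

        mem = [0x5A, 0x5A, 0x7A]          # one copy corrupted by an SEU
        assert voted_read(mem) == 0x5A
        mem = scrub(mem)                  # refresh restores full redundancy
        assert mem == [0x5A, 0x5A, 0x5A]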

  8. Common Data Acquisition Systems (DAS) Software Development for Rocket Propulsion Test (RPT) Test Facilities

    Science.gov (United States)

    Hebert, Phillip W., Sr.; Davis, Dawn M.; Turowski, Mark P.; Holladay, Wendy T.; Hughes, Mark S.

    2012-01-01

    The advent of the commercial space launch industry and NASA's more recent resumption of operation of Stennis Space Center's large test facilities after thirty years of contractor control resulted in a need for a non-proprietary data acquisition systems (DAS) software to support government and commercial testing. The software is designed for modularity and adaptability to minimize the software development effort for current and future data systems. An additional benefit of the software's architecture is its ability to easily migrate to other testing facilities thus providing future commonality across Stennis. Adapting the software to other Rocket Propulsion Test (RPT) Centers such as MSFC, White Sands, and Plumbrook Station would provide additional commonality and help reduce testing costs for NASA. Ultimately, the software provides the government with unlimited rights and guarantees privacy of data to commercial entities. The project engaged all RPT Centers and NASA's Independent Verification & Validation facility to enhance product quality. The design consists of a translation layer which provides the transparency of the software application layers to underlying hardware regardless of test facility location and a flexible and easily accessible database. This presentation addresses system technical design, issues encountered, and the status of Stennis development and deployment.

  9. Functional Testing Protocols for Commercial Building Efficiency Baseline Modeling Software

    Energy Technology Data Exchange (ETDEWEB)

    Jump, David; Price, Phillip N.; Granderson, Jessica; Sohn, Michael

    2013-09-06

    This document describes procedures for testing and validating proprietary baseline energy modeling software accuracy in predicting energy use over the period of interest, such as a month or a year. The procedures are designed according to the methodology used for public domain baselining software in another LBNL report that was (like the present report) prepared for Pacific Gas and Electric Company: "Commercial Building Energy Baseline Modeling Software: Performance Metrics and Method Testing with Open Source Models and Implications for Proprietary Software Testing Protocols" (referred to here as the "Model Analysis Report"). The test procedure focuses on the quality of the software's predictions rather than on the specific algorithms used to predict energy use. In this way the software vendor is not required to divulge or share proprietary information about how their software works, while enabling stakeholders to assess its performance.

  10. A Cause-Consequence Chart of a Redundant Protection System

    DEFF Research Database (Denmark)

    Nielsen, Dan Sandvik; Platz, O.; Runge, B.

    1975-01-01

    A cause-consequence chart is applied in analysing failures of a redundant protection system (a core spray system in a nuclear power plant). It is shown how the diagram provides a basis for calculating two probability measures for malfunctioning of the protection system. The test policy of components is taken into account. The possibility of using parameter variation as a basis for the choice of test policy is indicated.

  11. Testing digital safety system software with a testability measure based on a software fault tree

    International Nuclear Information System (INIS)

    Sohn, Se Do; Hyun Seong, Poong

    2006-01-01

    Using predeveloped software, a digital safety system is designed that meets the quality standards of a safety system. To demonstrate the quality, the design process and operating history of the product are reviewed along with configuration management practices. The application software of the safety system is developed in accordance with the planned life cycle. Testing, which is a major phase that takes significant time in the overall life cycle, can be optimized if the testability of the software can be evaluated. The proposed testability measure of the software is based on the entropy of the importance of basic statements and the failure probability from a software fault tree. To calculate testability, a fault tree is used in the analysis of the source code. With a quantitative measure of testability, testing can be optimized. The proposed testability can also be used to demonstrate whether test cases based on uniform partitions, such as branch coverage criteria, result in homogeneous partitions, which are known to be more effective than random testing. In this paper, the testability measure is calculated for the modules of a nuclear power plant's safety software. Module testing with branch coverage criteria required fewer test cases when the module had higher testability. The result shows that the testability measure can be used to evaluate whether partitions have homogeneous characteristics
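
    As a rough illustration of the entropy ingredient of such a measure (the paper combines statement importance from a software fault tree with failure probability; the sketch below computes only the Shannon entropy of normalized importances, an expository simplification):

        # Sketch of the entropy part of an entropy-based testability measure.
        # Importance values would come from a software fault tree analysis;
        # the numbers below are invented for illustration.

        import math

        def importance_entropy(importances):
            """Shannon entropy of normalized basic-statement importances."""
            total = sum(importances)
            probs = [i / total for i in importances if i > 0]
            return -sum(p * math.log2(p) for p in probs)

        uniform = [0.25, 0.25, 0.25, 0.25]   # homogeneous importance partition
        skewed  = [0.97, 0.01, 0.01, 0.01]   # a few statements dominate
        print(importance_entropy(uniform))   # 2.0 bits
        print(importance_entropy(skewed))    # ~0.24 bits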

  12. Artificial intelligence and expert systems in-flight software testing

    Science.gov (United States)

    Demasie, M. P.; Muratore, J. F.

    1991-01-01

    The authors discuss the introduction of advanced information systems technologies such as artificial intelligence, expert systems, and advanced human-computer interfaces directly into Space Shuttle software engineering. The reconfiguration automation project (RAP) was initiated to coordinate this move towards 1990s software technology. The idea behind RAP is to automate several phases of the flight software testing procedure and to introduce AI and ES into space shuttle flight software testing. In the first phase of RAP, conventional tools to automate regression testing have already been developed or acquired. There are currently three tools in use.

  13. The Development of Synchronization Function for Triple Redundancy System Based on SCADE

    Directory of Open Access Journals (Sweden)

    Moupeng

    2015-07-01

    Full Text Available Redundancy is an effective approach to improving the reliability and security of a flight control system, and the synchronization function of a redundancy system is the key technology of redundancy management. The flight control computer synchronization model is developed by a graphical modeling method in the SCADE development environment; automatic code generation is used to produce highly reliable embedded real-time code for the synchronization function, omitting the code test process and shortening the development cycle. In practical application, the program accomplishes functional synchronization and lays a solid foundation for the redundancy system.

  14. Working memory capacity and redundant information processing efficiency.

    Science.gov (United States)

    Endres, Michael J; Houpt, Joseph W; Donkin, Chris; Finn, Peter R

    2015-01-01

    Working memory capacity (WMC) is typically measured by the amount of task-relevant information an individual can keep in mind while resisting distraction or interference from task-irrelevant information. The current research investigated the extent to which differences in WMC were associated with performance on a novel redundant memory probes (RMP) task that systematically varied the amount of to-be-remembered (targets) and to-be-ignored (distractor) information. The RMP task was designed to both facilitate and inhibit working memory search processes, as evidenced by differences in accuracy, response time, and Linear Ballistic Accumulator (LBA) model estimates of information processing efficiency. Participants (N = 170) completed standard intelligence tests and dual-span WMC tasks, along with the RMP task. As expected, accuracy, response-time, and LBA model results indicated memory search and retrieval processes were facilitated under redundant-target conditions, but also inhibited under mixed target/distractor and redundant-distractor conditions. Repeated measures analyses also indicated that, while individuals classified as high (n = 85) and low (n = 85) WMC did not differ in the magnitude of redundancy effects, groups did differ in the efficiency of memory search and retrieval processes overall. Results suggest that redundant information reliably facilitates and inhibits the efficiency or speed of working memory search, and these effects are independent of more general limits and individual differences in the capacity or space of working memory.

  15. Redundant actuator development study. [flight control systems for supersonic transport aircraft

    Science.gov (United States)

    Ryder, D. R.

    1973-01-01

    Current and past supersonic transport configurations are reviewed to assess redundancy requirements for future airplane control systems. Secondary actuators used in stability augmentation systems will probably be the most critical actuator application and require the highest level of redundancy. Two methods of actuator redundancy mechanization have been recommended for further study. Math models of the recommended systems have been developed for use in future computer simulations. A long range plan has been formulated for actuator hardware development and testing in conjunction with the NASA Flight Simulator for Advanced Aircraft.

  16. A review of software project testing

    Directory of Open Access Journals (Sweden)

    Jose Calvo-Manzano Villalón

    2016-03-01

    Full Text Available In this article a review of software project testing based on a project taxonomy is established, allowing the development team or testing personnel to identify the tests to which the project must be subjected for validation. The taxonomy is focused on classifying software projects according to their technology. To establish the taxonomy, a development method comprising 5 phases was applied. The developed taxonomy comprises 10 categories and 35 subcategories and was validated by a group of information technology (IT) managers and professionals in the field of IT through the use of a survey. The results obtained from the survey were subjected to the Mann-Whitney U test, which indicates that the taxonomy is valid. The taxonomy can be implemented in development organizations with or without a testing team, providing a classification for technology projects.

  17. Development of a flight software testing methodology

    Science.gov (United States)

    Mccluskey, E. J.; Andrews, D. M.

    1985-01-01

    The research to develop a testing methodology for flight software is described. An experiment was conducted in using assertions to dynamically test digital flight control software. The experiment showed that 87% of typical errors introduced into the program would be detected by assertions. Detailed analysis of the test data showed that the number of assertions needed to detect those errors could be reduced to a minimal set. The analysis also revealed that the most effective assertions tested program parameters that provided greater indirect (collateral) testing of other parameters. In addition, a prototype watchdog task system was built to evaluate the effectiveness of executing assertions in parallel by using the multitasking features of Ada.
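
    The flavor of such executable assertions can be sketched as follows; the control law, parameter names, and bounds here are hypothetical, not taken from the study:

        # Illustrative executable assertions of the kind embedded in flight
        # control code. All names and limits are invented for the example.

        def update_pitch(pitch_cmd_deg, airspeed_kts):
            # Direct range assertions on inputs
            assert -25.0 <= pitch_cmd_deg <= 25.0, "pitch command out of range"
            assert 80.0 <= airspeed_kts <= 600.0, "airspeed out of range"

            elevator_deflection = 0.4 * pitch_cmd_deg   # simplified control law

            # Assertion on a derived parameter: because the deflection is
            # computed from the pitch command, this check also indirectly
            # (collaterally) tests the command path.
            assert -10.0 <= elevator_deflection <= 10.0, "deflection out of range"
            return elevator_deflection

        update_pitch(5.0, 250.0)   # passes; a corrupted input trips an assertion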

  18. Flat H Redundant Frangible Joint Development

    Science.gov (United States)

    Brown, Chris

    2016-01-01

    Orion and Commercial Crew Program (CCP) partners have chosen to use frangible joints for certain separation events. The joints currently available are zero-failure tolerant and will be used in mission safety applications. The goal is to further develop a NASA-designed redundant frangible joint that will lower flight risk and increase reliability. FY16 testing revealed a successful design in subscale straight test specimens that gained efficiency and supports Orion load requirements. Approach/Innovation: A design constraint is that the redundant joint must fit within the current Orion architecture, without the need for additional vehicle modification. This limitation required a design that changed the orientation of the expanding tube assemblies (XTAs), rotating them 90 deg from the standard joint configuration. The change is not trivial and affects the fracture mechanism and structural load paths. To address these changes, the design incorporates cantilevered arms on the break plate. The shock transmission and expansion of the XTA applies force to these arms and creates a prying motion to push the plate walls outward to the point of structural failure at the notched section. The 2014 test design revealed that parts could slip during functioning, wasting valuable energy needed to separate the structure with only a single XTA functioning. Dual XTA functioning fully separated the assembly, showing that a discrepancy can be backed up with redundancy. Work on other fully redundant systems outside NASA is limited to a few patents that have not been subjected to functionality testing. Design changes to prevent unwanted slippage (with ICA funding in 2015) showed success with a single XTA. The main goal for FY 2016 was to send the new Flat H RFJ to WSTF, where single XTA test failures occurred back in 2014. The plan was to gain efficiency in this design by separating the Flat H RFJ with thicker ligaments with dimensions baselined in 2014. Other modifications included geometry

  19. Cassini's Test Methodology for Flight Software Verification and Operations

    Science.gov (United States)

    Wang, Eric; Brown, Jay

    2007-01-01

    The Cassini spacecraft was launched on 15 October 1997 on a Titan IV-B launch vehicle. The spacecraft is comprised of various subsystems, including the Attitude and Articulation Control Subsystem (AACS). The AACS Flight Software (FSW) and its development has been an ongoing effort, from the design, development and finally operations. As planned, major modifications to certain FSW functions were designed, tested, verified and uploaded during the cruise phase of the mission. Each flight software upload involved extensive verification testing. A standardized FSW testing methodology was used to verify the integrity of the flight software. This paper summarizes the flight software testing methodology used for verifying FSW from pre-launch through the prime mission, with an emphasis on flight experience testing during the first 2.5 years of the prime mission (July 2004 through January 2007).

  20. 78 FR 47011 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Science.gov (United States)

    2013-08-02

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software... revised regulatory guide (RG), revision 1 of RG 1.171, "Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants." This RG endorses American National Standards...

  1. 77 FR 50722 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Science.gov (United States)

    2012-08-22

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software...) is issuing for public comment draft regulatory guide (DG), DG-1208, "Software Unit Testing for Digital Computer Software used in Safety Systems of Nuclear Power Plants." The DG-1208 is proposed...

  2. A Method to Select Test Input Cases for Safety-critical Software

    International Nuclear Information System (INIS)

    Kim, Heeeun; Kang, Hyungook; Son, Hanseong

    2013-01-01

    This paper proposes a new testing methodology for effective and realistic quantification of RPS software failure probability. Software failure probability quantification is an important factor in digital system safety assessment. In this study, the method for software test case generation is briefly described. The test cases generated by this method reflect the characteristics of safety-critical software and past inputs. Furthermore, the number of test cases can be reduced while it remains possible to perform exhaustive testing. Aspects of the software itself can also be reflected in the failure data, so the final failure data can include failures of the software itself as well as external influences. Software reliability is generally accepted as the key factor in software quality since it quantifies software failures, which can make a powerful system inoperative. In the KNITS (Korea Nuclear Instrumentation and Control Systems) project, the software for the fully digitalized reactor protection system (RPS) was developed under a strict procedure including unit testing and coverage measurement. Black box testing is one type of verification and validation (V and V), in which given input values are entered and the resulting output values are compared against the expected output values. Programmable logic controllers (PLCs) were used in implementing critical systems, and function block diagram (FBD) is a commonly used implementation language for PLCs

  3. Managing the Testing Process Practical Tools and Techniques for Managing Hardware and Software Testing

    CERN Document Server

    Black, Rex

    2011-01-01

    New edition of one of the most influential books on managing software and hardware testing In this new edition of his top-selling book, Rex Black walks you through the steps necessary to manage rigorous testing programs of hardware and software. The preeminent expert in his field, Mr. Black draws upon years of experience as president of both the International and American Software Testing Qualifications boards to offer this extensive resource of all the standards, methods, and tools you'll need. The book covers core testing concepts and thoroughly examines the best test management practices

  4. Automated Search-Based Robustness Testing for Autonomous Vehicle Software

    Directory of Open Access Journals (Sweden)

    Kevin M. Betts

    2016-01-01

    Full Text Available Autonomous systems must successfully operate in complex time-varying spatial environments even when dealing with system faults that may occur during a mission. Consequently, evaluating the robustness, or ability to operate correctly under unexpected conditions, of autonomous vehicle control software is an increasingly important issue in software testing. New methods to automatically generate test cases for robustness testing of autonomous vehicle control software in closed-loop simulation are needed. Search-based testing techniques were used to automatically generate test cases, consisting of initial conditions and fault sequences, intended to challenge the control software more than test cases generated using current methods. Two different search-based testing methods, genetic algorithms and surrogate-based optimization, were used to generate test cases for a simulated unmanned aerial vehicle attempting to fly through an entryway. The effectiveness of the search-based methods in generating challenging test cases was compared to both a truth reference (full combinatorial testing) and the method most commonly used today (Monte Carlo testing). The search-based testing techniques demonstrated better performance than Monte Carlo testing for both of the test case generation performance metrics: (1) finding the single most challenging test case and (2) finding the set of fifty test cases with the highest mean degree of challenge.
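
    A genetic algorithm for this kind of search can be sketched in a few lines; the "degree of challenge" function below is an invented stand-in for running the closed-loop simulation, and all parameters are illustrative:

        # Minimal genetic-algorithm sketch of search-based test generation.
        # challenge() stands in for scoring a test case via simulation.

        import random

        def challenge(tc):                     # invented fitness surrogate
            wind, fault_time = tc
            return wind * 2.0 + max(0.0, 10.0 - fault_time)

        def mutate(tc, rate=0.3):
            wind, fault_time = tc
            if random.random() < rate:
                wind = min(30.0, max(0.0, wind + random.gauss(0, 2)))
            if random.random() < rate:
                fault_time = min(60.0, max(0.0, fault_time + random.gauss(0, 5)))
            return (wind, fault_time)

        pop = [(random.uniform(0, 30), random.uniform(0, 60)) for _ in range(20)]
        for _ in range(50):                    # evolve toward harder test cases
            pop.sort(key=challenge, reverse=True)
            parents = pop[:10]                 # keep the most challenging half
            pop = parents + [mutate(random.choice(parents)) for _ in range(10)]

        print(max(pop, key=challenge))         # most challenging case found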

  5. A review on software testing approaches for cloud applications

    Directory of Open Access Journals (Sweden)

    Tamanna Siddiqui

    2016-09-01

    Full Text Available Cloud computing has emerged as the latest computing paradigm and touches several distinct research areas, including software testing. Testing cloud applications has unique characteristics that call for more recent testing techniques. Cloud-based testing helps reduce the need for hardware and software resources and provides an adaptable and valuable platform. Testing within the cloud platform is easily manageable based on new test models and criteria. A prioritization approach can be made responsive to build better relationships between test cases, which are clustered according to priority level. Resources can be used properly by applying load-balancing algorithms, and the cloud promises maximum usage of existing resources. Security, however, remains a primary problem in the cloud. At present, organizations are increasingly interested in deploying and making use of ready-made business applications with a short time to market. Limited capital budgets for software planning and on-premise deployments, along with the swift progression of the cloud, are the reasons for this interest in on-demand, SaaS-based business applications. In this paper different approaches that help extend the cloud environment are discussed, along with a study of several well-known software testing approaches.

  6. Redundancy of Redundancy in Justifications of Verdicts of Polish The Constitutional Tribuna

    Directory of Open Access Journals (Sweden)

    Jan Winczorek

    2016-09-01

    Full Text Available The results of an empirical study of 150 justifications of verdicts of the Polish Constitutional Tribunal (CT) are discussed. CT justifies its decisions mostly with authoritative references to previous decisions and other doxa-type arguments. It thus does not convince the audience of a decision's validity, but rather documents it. Further, the methodology changes depending on features of the case. The results are analysed using a conceptual framework of sociological systems theory. It is shown that CT's justification methodology ignores the redundancy (excess of references and dependencies) of the legal system, finding redundancy redundant. This is a risky strategy of decision-making, enabling political influence.

  7. Automation of Flight Software Regression Testing

    Science.gov (United States)

    Tashakkor, Scott B.

    2016-01-01

    NASA is developing the Space Launch System (SLS) to be a heavy lift launch vehicle supporting human and scientific exploration beyond earth orbit. SLS will have a common core stage, an upper stage, and different permutations of boosters and fairings to perform various crewed or cargo missions. Marshall Space Flight Center (MSFC) is writing the Flight Software (FSW) that will operate the SLS launch vehicle. The FSW is developed in an incremental manner based on "Agile" software techniques. As the FSW is incrementally developed, testing the functionality of the code needs to be performed continually to ensure that the integrity of the software is maintained. Manually testing the functionality on an ever-growing set of requirements and features is not an efficient solution and therefore needs to be done automatically to ensure testing is comprehensive. To support test automation, a framework for a regression test harness has been developed and used on SLS FSW. The test harness provides a modular design approach that can compile or read in the required information specified by the developer of the test. The modularity provides independence between groups of tests and the ability to add and remove tests without disturbing others. This provides the SLS FSW team a time-saving feature that is essential to meeting SLS Program technical and programmatic requirements. During development of SLS FSW, this technique has proved to be a useful tool to ensure all requirements have been tested, and that desired functionality is maintained, as changes occur. It also provides a mechanism for developers to check functionality of the code that they have developed. With this system, automation of regression testing is accomplished through a scheduling tool and/or commit hooks. Key advantages of this test harness capability include execution support for multiple independent test cases, the ability for developers to specify precisely what they are testing and how, the ability to add

  8. Testing existing software for safety-related applications. Revision 7.1

    International Nuclear Information System (INIS)

    Scott, J.A.; Lawrence, J.D.

    1995-12-01

    The increasing use of commercial off-the-shelf (COTS) software products in digital safety-critical applications is raising concerns about the safety, reliability, and quality of these products. One of the factors involved in addressing these concerns is product testing. A tester's knowledge of the software product will vary, depending on the information available from the product vendor. In some cases, complete source listings, program structures, and other information from the software development may be available. In other cases, only the complete hardware/software package may exist, with the tester having no knowledge of the internal structure of the software. The type of testing that can be used will depend on the information available to the tester. This report describes six different types of testing, which differ in the information used to create the tests, the results that may be obtained, and the limitations of the test types. An Annex contains background information on types of faults encountered in testing, and a Glossary of pertinent terms is also included. This study is pertinent for safety-related software at reactors

  9. Overview of software development at the parabolic dish test site

    Science.gov (United States)

    Miyazono, C. K.

    1985-01-01

    The development history of the data acquisition and data analysis software is discussed. The software development occurred between 1978 and 1984 in support of solar energy module testing at the Jet Propulsion Laboratory's Parabolic Dish Test Site, located within Edwards Test Station. The development went through incremental stages, starting with a simple single-user BASIC set of programs, and progressing to the relatively complex multi-user FORTRAN system that was used until the termination of the project. Additional software in support of testing is discussed including software in support of a meteorological subsystem and the Test Bed Concentrator Control Console interface. Conclusions and recommendations for further development are discussed.

  10. How redundant are redundant color adjectives? An efficiency-based analysis of color overspecification

    Directory of Open Access Journals (Sweden)

    Paula Rubio-Fernández

    2016-02-01

    Full Text Available Color adjectives tend to be used redundantly in referential communication. I propose that redundant color adjectives are often intended to exploit a color contrast in the visual context and hence facilitate object identification, despite not being necessary to establish unique reference. Two language-production experiments investigated two types of factors that may affect the use of redundant color adjectives: factors related to the efficiency of color in the visual context and factors related to the semantic category of the noun. The results of Experiment 1 confirmed that people produce redundant color adjectives when color may facilitate object recognition; e.g., they do so more often in polychrome displays than in monochrome displays, and more often in English (pre-nominal position) than in Spanish (post-nominal position). Redundant color adjectives are also used when color is a central property of the object category; e.g., people referred to the color of clothes more often than to the color of geometrical figures (Experiment 1), and they overspecified atypical colors more often than variable and stereotypical colors (Experiment 2). These results are relevant for pragmatic models of referential communication based on Gricean pragmatics and informativeness. An alternative analysis is proposed, which focuses on the efficiency and pertinence of color in a given referential situation.

  11. Toxicity Estimation Software Tool (TEST)

    Science.gov (United States)

    The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical c...

  12. Motion control of musculoskeletal systems with redundancy.

    Science.gov (United States)

    Park, Hyunjoo; Durand, Dominique M

    2008-12-01

    Motion control of musculoskeletal systems for functional electrical stimulation (FES) is a challenging problem due to the inherent complexity of the systems. These include being highly nonlinear, strongly coupled, time-varying, time-delayed, and redundant. The redundancy in particular makes it difficult to find an inverse model of the system for control purposes. We have developed a control system for multiple input multiple output (MIMO) redundant musculoskeletal systems with little prior information. The proposed method separates the steady-state properties from the dynamic properties. The dynamic control uses a steady-state inverse model and is implemented with both a PID controller for disturbance rejection and an artificial neural network (ANN) feedforward controller for fast trajectory tracking. A mechanism to control the sum of the muscle excitation levels is also included. To test the performance of the proposed control system, a two degree of freedom ankle-subtalar joint model with eight muscles was used. The simulation results show that separation of steady-state and dynamic control allow small output tracking errors for different reference trajectories such as pseudo-step, sinusoidal and filtered random signals. The proposed control method also demonstrated robustness against system parameter and controller parameter variations. A possible application of this control algorithm is FES control using multiple contact cuff electrodes where mathematical modeling is not feasible and the redundancy makes the control of dynamic movement difficult.
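
    The separation of steady-state and dynamic control can be sketched as follows; the inverse model and plant below are made-up linear toys standing in for the musculoskeletal system, not the ankle-subtalar model from the paper:

        # Sketch of feedforward (steady-state inverse model) plus PID feedback.
        # All gains and models are invented for illustration.

        class PID:
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral, self.prev_err = 0.0, 0.0

            def step(self, err):
                self.integral += err * self.dt
                deriv = (err - self.prev_err) / self.dt
                self.prev_err = err
                return self.kp * err + self.ki * self.integral + self.kd * deriv

        def inverse_model(angle_ref):
            """Steady-state inverse: excitation needed to hold a joint angle."""
            return 0.05 * angle_ref            # assumed linear for illustration

        pid = PID(kp=0.8, ki=0.4, kd=0.05, dt=0.01)
        angle, angle_ref = 0.0, 10.0
        for _ in range(1000):
            u = inverse_model(angle_ref) + pid.step(angle_ref - angle)
            angle += 0.01 * (u * 20.0 - angle)   # toy first-order plant response
        print(round(angle, 2))                   # settles near the 10-degree target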

  13. Automation software for a materials testing laboratory

    Science.gov (United States)

    Mcgaw, Michael A.; Bonacuse, Peter J.

    1990-01-01

    The software environment in use at the NASA-Lewis Research Center's High Temperature Fatigue and Structures Laboratory is reviewed. This software environment is aimed at supporting the tasks involved in performing materials behavior research. The features and capabilities of the approach to specifying a materials test include static and dynamic control mode switching, enabling multimode test control; dynamic alteration of the control waveform based upon events occurring in the response variables; precise control over the nature of both command waveform generation and data acquisition; and the nesting of waveform/data acquisition strategies so that material history dependencies may be explored. To eliminate repetitive tasks in the conventional research process, a communications network software system is established which provides file interchange and remote console capabilities.

  14. Software engineers and nuclear engineers: teaming up to do testing

    International Nuclear Information System (INIS)

    Kelly, D.; Cote, N.; Shepard, T.

    2007-01-01

    The software engineering community has traditionally paid little attention to the specific needs of engineers and scientists who develop their own software. Recently there has been increased recognition that specific software engineering techniques need to be found for this group of developers. In this case study, a software engineering group teamed with a nuclear engineering group to develop a software testing strategy. This work examines the types of testing that proved to be useful and examines what each discipline brings to the table to improve the quality of the software product. (author)

  15. Prediction of safety critical software operational reliability from test reliability using testing environment factors

    International Nuclear Information System (INIS)

    Jung, Hoan Sung; Seong, Poong Hyun

    1999-01-01

    It has been a critical issue to predict the safety critical software reliability in nuclear engineering area. For many years, many researches have focused on the quantification of software reliability and there have been many models developed to quantify software reliability. Most software reliability models estimate the reliability with the failure data collected during the test assuming that the test environments well represent the operation profile. User's interest is however on the operational reliability rather than on the test reliability. The experiences show that the operational reliability is higher than the test reliability. With the assumption that the difference in reliability results from the change of environment, from testing to operation, testing environment factors comprising the aging factor and the coverage factor are developed in this paper and used to predict the ultimate operational reliability with the failure data in testing phase. It is by incorporating test environments applied beyond the operational profile into testing environment factors. The application results show that the proposed method can estimate the operational reliability accurately. (Author). 14 refs., 1 tab., 1 fig
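
    A minimal illustrative form of this idea (an assumption for exposition, not the paper's exact derivation) scales the failure rate observed in testing by a combined environment factor:

        \lambda_{op} = \frac{\lambda_{test}}{E}, \qquad
        E = E_{aging} \cdot E_{coverage}, \qquad
        R_{op}(t) = e^{-\lambda_{op} t}

    With E > 1 when testing exercises the software beyond the operational profile, the predicted operational failure rate is lower than the test failure rate, consistent with the observation that operational reliability exceeds test reliability.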

  16. Techniques to maximize software reliability in radiation fields

    International Nuclear Information System (INIS)

    Eichhorn, G.; Piercey, R.B.

    1986-01-01

    Microprocessor system failures due to memory corruption by single event upsets (SEUs) and/or latch-up in RAM or ROM memory are common in environments where there is high radiation flux. Traditional methods to harden microcomputer systems against SEUs and memory latch-up have usually involved expensive large scale hardware redundancy. Such systems offer higher reliability, but they tend to be more complex and non-standard. At the Space Astronomy Laboratory the authors have developed general programming techniques for producing software which is resistant to such memory failures. These techniques, which may be applied to standard off-the-shelf hardware, as well as custom designs, include an implementation of the Maximally Redundant Software (MRS) model, error detection algorithms, and memory verification and management

  17. Language as an information system: redundancy and optimization

    Directory of Open Access Journals (Sweden)

    Irina Mikhaylovna Nekipelova

    2015-11-01

    Full Text Available The paper is devoted to research on the language system as an information system. The distinguishing feature of any natural living language system is the redundancy of elements in its structure. Redundancy, which breaks the universality characteristic of artificial information systems, makes language mobile in time and in space. Two types of informational redundancy should be distinguished: language redundancy, where informational overlap of language units occurs within the system, and speech redundancy, where information is condensed at the syntagmatic level. Language redundancy is potential and speech redundancy is actual. In general, it should be noted that language redundancy is necessary for language: by complicating the relationships between language units, it creates situations of choice in the language, leading to disorder of the language system, increasing entropy and, as a result, the appearance of information that the language system may or may not accept. Language redundancy is one of the reasons for the growth of information in language. In addition, informational redundancy in language is one of the factors of language system development.

  18. Test rig overview for validation and reliability testing of shutdown system software

    International Nuclear Information System (INIS)

    Zhao, M.; McDonald, A.; Dick, P.

    2007-01-01

    The test rig for Validation and Reliability Testing of shutdown system software has been upgraded from the AECL Windows-based test rig previously used for CANDU6 stations. It includes a Virtual Trip Computer, which is a software simulation of the functional specification of the trip computer, and a real-time trip computer simulator in a separate chassis, which is used during the preparation of trip computer test cases before the actual trip computers are available. This allows preparation work for Validation and Reliability Testing to be performed in advance of delivery of actual trip computers to maintain a project schedule. (author)

  19. Software/firmware design specification for 10-MWe solar-thermal central-receiver pilot plant

    Energy Technology Data Exchange (ETDEWEB)

    Ladewig, T.D.

    1981-03-01

    The software and firmware employed for the operation of the Barstow Solar Pilot Plant are completely described. The systems allow operator control of up to 2048 heliostats, and include the capability of operator-commanded control, graphic displays, status displays, alarm generation, system redundancy, and interfaces to the Operational Control System, the Data Acquisition System, and the Beam Characterization System. The requirements are decomposed into eleven software modules for execution in the Heliostat Array Controller computer, one firmware module for execution in the Heliostat Field Controller microprocessor, and one firmware module for execution in the Heliostat Controller microprocessor. The design of the modules to satisfy requirements, the interfaces between the computers, the software system structure, and the computers in which the software and firmware will execute are detailed. The testing sequence for validation of the software/firmware is described. (LEW)

  20. The Effects of Race Conditions When Implementing Single-Source Redundant Clock Trees in Triple Modular Redundant Synchronous Architectures

    Science.gov (United States)

    Berg, Melanie D.; Kim, Hak S.; Phan, Anthony M.; Seidleck, Christina M.; Label, Kenneth A.; Pellish, Jonathan A.; Campola, Michael J.

    2016-01-01

    We present the challenges that arise when using redundant clock domains due to their time-skew. Radiation data show that a single clock domain provides an improved triple modular redundant (TMR) scheme over redundant clocks.

  1. Modeling Student Software Testing Processes: Attitudes, Behaviors, Interventions, and Their Effects

    Science.gov (United States)

    Buffardi, Kevin John

    2014-01-01

    Effective software testing identifies potential bugs and helps correct them, producing more reliable and maintainable software. As software development processes have evolved, incremental testing techniques have grown in popularity, particularly with introduction of test-driven development (TDD). However, many programmers struggle to adopt TDD's…

  2. Prediction of software operational reliability using testing environment factors

    International Nuclear Information System (INIS)

    Jung, Hoan Sung; Seong, Poong Hyun

    1995-01-01

    For many years, many researches have focused on the quantification of software reliability and there are many models developed to quantify software reliability. Most software reliability models estimate the reliability with the failure data collected during the test assuming that the test environments well represent the operation profile. The experiences show that the operational reliability is higher than the test reliability. User's interest is on the operational reliability rather than on the test reliability, however. With the assumption that the difference in reliability results from the change of environment, testing environment factors comprising the aging factor and the coverage factor are defined in this study to predict the ultimate operational reliability with the failure data. It is by incorporating test environments applied beyond the operational profile into testing environment factors. The application results are close to the actual data

  3. Software testing using Visual Studio 2012

    CERN Document Server

    Subashni, S

    2013-01-01

    We will be setting up a sample test scenario, then we'll walk through the features available to deploy tests.This book is for developers and testers who want to get to grips with Visual Studio 2012 and Test Manager for all testing activities and managing tests and results in Team Foundation Server. It requires a minimal understanding of testing practices and the software development life cycle; also, some coding skills would help in customizing and updating the code generated from the web UI testing.

  4. Development of DCC software dynamic test facility: past and future

    International Nuclear Information System (INIS)

    McDonald, A.M.; Thai, N.D.; Buijs, W.J.

    1996-01-01

    This paper describes a test facility for future dynamic testing of DCC software used in the control computers of CANDU nuclear power stations. It is a network of three computers: the DCC emulator, the dynamic CANDU plant simulator and the testing computer. Shared network files are used for input/output data exchange between computers. The DCC emulator runs directly on the binary image of the DCC software. The dynamic CANDU plant simulator accepts control signals from the DCC emulator and returns realistic plant behaviour. The testing computer accepts test scripts written in AECL Test Language. Both dynamic and static tests may be performed on the DCC software to verify control program outputs and dynamic responses. (author)

  5. Determination of the number of software tests using probabilistic safety assessment

    International Nuclear Information System (INIS)

    Kang, H. K.; Seong, T. Y.; Lee, K. Y.

    2000-01-01

    The broader usage of digital equipment in nuclear power plants gives rise to safety concerns about software. Field testing should be performed before the software is used in critical applications, because it is well known that software shows non-linear responses when applied to different target systems in different environments. In the case of safety-critical applications, the test results usually contain zero failures, and a sufficient number of tests is hard to determine. In this paper, we suggest a method to determine the number of software tests without failure using probabilistic safety assessment. From the result of the probabilistic safety assessment of the total system, the desirable unavailability of the software is calculated and the number of tests is determined
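
    For context, a standard zero-failure test-count relation shows how an unavailability target derived from a PSA can translate into a required number of failure-free tests; the paper's exact derivation may differ from this textbook form:

        # Standard zero-failure relation: smallest N of failure-free tests that
        # demonstrates failure probability p_target at a given confidence.
        # Shown as an illustration, not necessarily the paper's method.

        import math

        def tests_for_zero_failures(p_target, confidence):
            """Smallest N with (1 - p_target)**N <= 1 - confidence."""
            return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_target))

        # e.g. demonstrating 1e-3 failure probability per demand at 95% confidence
        print(tests_for_zero_failures(1e-3, 0.95))   # ~2995 failure-free tests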

  6. Software Testing and Verification in Climate Model Development

    Science.gov (United States)

    Clune, Thomas L.; Rood, RIchard B.

    2011-01-01

    Over the past 30 years most climate models have grown from relatively simple representations of a few atmospheric processes to a complex multi-disciplinary system. Computer infrastructure over that period has gone from punch card mainframes to modern parallel clusters. Model implementations have become complex, brittle, and increasingly difficult to extend and maintain. Existing verification processes for model implementations rely almost exclusively upon some combination of detailed analysis of output from full climate simulations and system-level regression tests. In addition to being quite costly in terms of developer time and computing resources, these testing methodologies are limited in terms of the types of defects that can be detected, isolated and diagnosed. Mitigating these weaknesses of coarse-grained testing with finer-grained "unit" tests has been perceived as cumbersome and counter-productive. In the commercial software sector, recent advances in tools and methodology have led to a renaissance for systematic fine-grained testing. We discuss the availability of analogous tools for scientific software and examine benefits that similar testing methodologies could bring to climate modeling software. We describe the unique challenges faced when testing complex numerical algorithms and suggest techniques to minimize and/or eliminate the difficulties.
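
    As an illustration of the finer-grained testing advocated here, a unit test can pin down a property of a single numerical kernel instead of diffing whole-simulation output; the diffusion step below is invented for the example:

        # Fine-grained unit tests of a toy numerical kernel: checking a
        # conservation property and a fixed point in isolation.

        import unittest

        def diffuse(field, alpha=0.1):
            """One explicit diffusion step on a periodic 1-D field."""
            n = len(field)
            return [field[i] + alpha * (field[(i - 1) % n] - 2 * field[i]
                                        + field[(i + 1) % n]) for i in range(n)]

        class DiffusionTest(unittest.TestCase):
            def test_conserves_total(self):
                field = [0.0, 1.0, 4.0, 2.0, 0.5]
                self.assertAlmostEqual(sum(diffuse(field)), sum(field), places=12)

            def test_uniform_field_is_fixed_point(self):
                self.assertEqual(diffuse([3.0] * 4), [3.0] * 4)

        if __name__ == "__main__":
            unittest.main()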

  7. Mars Science Laboratory Flight Software Boot Robustness Testing Project Report

    Science.gov (United States)

    Roth, Brian

    2011-01-01

    On the surface of Mars, the Mars Science Laboratory will boot up its flight computers every morning, having charged the batteries through the night. This boot process is complicated, critical, and affected by numerous hardware states that can be difficult to test. The hardware test beds do not facilitate testing a long duration of back-to-back unmanned automated tests, and although the software simulation has provided the necessary functionality and fidelity for this boot testing, there has not been support for the full flexibility necessary for this task. Therefore to perform this testing a framework has been built around the software simulation that supports running automated tests loading a variety of starting configurations for software and hardware states. This implementation has been tested against the nominal cases to validate the methodology, and support for configuring off-nominal cases is ongoing. The implication of this testing is that the introduction of input configurations that have so far proved difficult to test may reveal boot scenarios worth higher fidelity investigation, and in other cases increase confidence in the robustness of the flight software boot process.
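
    The framework idea can be sketched as a configuration sweep driving the simulation; run_boot_simulation and the state fields below are hypothetical placeholders, not the MSL interfaces:

        # Sketch of automated boot testing over many starting configurations.
        # The simulator call and state fields are invented placeholders.

        import itertools

        def run_boot_simulation(config):
            """Placeholder for launching the simulator with a given state."""
            return {"booted": True, "log": f"boot with {config}"}

        battery_levels = ["full", "low"]
        prime_computer = ["A", "B"]
        clock_state    = ["valid", "corrupted"]

        results = []
        for battery, prime, clock in itertools.product(
                battery_levels, prime_computer, clock_state):
            config = {"battery": battery, "prime": prime, "clock": clock}
            outcome = run_boot_simulation(config)
            results.append((config, outcome["booted"]))

        failures = [cfg for cfg, ok in results if not ok]
        print(f"{len(results)} boots attempted, {len(failures)} failures")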

  8. Fuzzy modeling of analytical redundancy for sensor failure detection

    International Nuclear Information System (INIS)

    Tsai, T.M.; Chou, H.P.

    1991-01-01

    Failure detection and isolation (FDI) in dynamic systems may be accomplished by testing the consistency of the system via analytically redundant relations. The redundant relation is basically a mathematical model relating system inputs and dissimilar sensor outputs from which information is extracted and subsequently examined for the presence of failure signatures. Performance of the approach is often jeopardized by inherent modeling error and noise interference. To mitigate such effects, techniques such as Kalman filtering, auto-regression-moving-average (ARMA) modeling in conjunction with probability tests are often employed. These conventional techniques treat the stochastic nature of uncertainties in a deterministic manner to generate best-estimated model and sensor outputs by minimizing uncertainties. In this paper, the authors present a different approach by treating the effect of uncertainties with fuzzy numbers. Coefficients in redundant relations derived from first-principle physical models are considered as fuzzy parameters and on-line updated according to system behaviors. Failure detection is accomplished by examining the possibility that a sensor signal occurred in an estimated fuzzy domain. To facilitate failure isolation, individual FDI monitors are designed for each interested sensor
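
    One way to picture the fuzzy consistency test is sketched below, under assumed triangular membership and an invented alarm threshold; this is an expository simplification, not the paper's formulation:

        # Sketch of a fuzzy consistency check: the redundant relation yields a
        # triangular fuzzy estimate, and a sensor is flagged when the
        # possibility of its reading lying in that fuzzy domain is low.

        def possibility(x, center, spread):
            """Triangular membership: 1 at center, 0 beyond center +/- spread."""
            return max(0.0, 1.0 - abs(x - center) / spread)

        def check_sensor(reading, fuzzy_estimate, threshold=0.3):
            center, spread = fuzzy_estimate
            poss = possibility(reading, center, spread)
            return ("OK" if poss >= threshold else "SUSPECT", round(poss, 2))

        estimate = (100.0, 5.0)               # relation predicts ~100 +/- 5
        print(check_sensor(101.2, estimate))  # ('OK', 0.76)
        print(check_sensor(109.0, estimate))  # ('SUSPECT', 0.0)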

  9. Neural redundancy applied to the parity space for signal validation

    International Nuclear Information System (INIS)

    Mol, Antonio Carlos de Abreu; Pereira, Claudio Marcio Nascimento Abreu; Martinez, Aquilino Senra

    2005-01-01

    The objective of signal validation is to provide more reliable information from plant sensor data. The method presented in this work introduces the concept of neural redundancy and applies it to the parity space method [1] to overcome an inherent deficiency of that method: the determination of the best estimate of the redundant measures when they are inconsistent. The concept of neural redundancy consists of calculating a redundancy through neural networks based on the time series of the state variable itself. Neural networks, dynamically trained with the time series, estimate the current value of the measure itself, which is used as a referee for the redundant measures in the parity space. For this purpose the neural network should be able to supply the neural redundancy in real time, with a maximum error corresponding to the group deviation. The historical series should be long enough to allow estimation of the next value during transients and, at the same time, optimized to facilitate retraining of the neural network at each acquisition. In order to reproduce the tendency of the time series even under accident conditions, the dynamic training of the neural network privileges the recent points of the time series. Tests with simulated data from a nuclear plant demonstrated that this method, applied to the parity space method, improves the signal validation process. (author)

  10. Neural redundancy applied to the parity space for signal validation

    Energy Technology Data Exchange (ETDEWEB)

    Mol, Antonio Carlos de Abreu; Pereira, Claudio Marcio Nascimento Abreu [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil)]. E-mail: cmnap@ien.gov.br; Martinez, Aquilino Senra [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia]. E-mail: aquilino@lmp.br

    2005-07-01

    The objective of signal validation is to provide more reliable information from plant sensor data. The method presented in this work introduces the concept of neural redundancy and applies it to the parity space method [1] to overcome an inherent deficiency of that method: the determination of the best estimate of the redundant measures when they are inconsistent. The concept of neural redundancy consists of calculating a redundancy through neural networks based on the time series of the state variable itself. Neural networks, dynamically trained with the time series, estimate the current value of the measure itself, which is used as a referee for the redundant measures in the parity space. For this purpose the neural network should be able to supply the neural redundancy in real time, with a maximum error corresponding to the group deviation. The historical series should be long enough to allow estimation of the next value during transients and, at the same time, optimized to facilitate retraining of the neural network at each acquisition. In order to reproduce the tendency of the time series even under accident conditions, the dynamic training of the neural network privileges the recent points of the time series. Tests with simulated data from a nuclear plant demonstrated that this method, applied to the parity space method, improves the signal validation process. (author)
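
    The prediction step can be sketched with a simple weighted autoregression standing in for the dynamically retrained neural network; the weighting of recent points mirrors the training emphasis described above, and all numbers are illustrative:

        # Stand-in sketch for the neural redundancy idea: predict the next
        # value of a state variable from its own recent time series, with
        # recent samples weighted most heavily. A least-squares autoregression
        # replaces the paper's retrained neural network for simplicity.

        import numpy as np

        def predict_next(series, order=3, decay=0.9):
            """Weighted AR fit over a sliding window; newest samples weigh most."""
            x = np.array([series[i:i + order] for i in range(len(series) - order)])
            y = np.array(series[order:])
            w = decay ** np.arange(len(y))[::-1]      # recent points privileged
            coef, *_ = np.linalg.lstsq(x * w[:, None], y * w, rcond=None)
            return float(np.dot(series[-order:], coef))

        signal = [100.0, 100.4, 100.9, 101.5, 102.2, 103.0]   # ramping measurement
        neural_redundancy = predict_next(signal)
        print(round(neural_redundancy, 2))  # estimate used as referee in parity space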

  11. Path generation algorithm for UML graphic modeling of aerospace test software

    Science.gov (United States)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Chen, Chao

    2018-03-01

    Traditional aerospace software testing engineers rely on their own work experience and on communication with software developers to describe the software under test and to write test cases by hand, which is time-consuming, inefficient, and prone to gaps. Using the high-reliability model-based testing (MBT) tools developed by our company, a single modeling pass can automatically generate test case documents, which is efficient and accurate. A UML model that accurately describes a process must express the paths by which behavior is reached. Existing path generation algorithms are either too simple, unable to combine branch paths and loops into complete paths, or too cumbersome, generating complicated path arrangements that are meaningless and superfluous for aerospace software testing. Drawing on our aerospace project experience, we developed a tailored path generation algorithm for UML graphical descriptions of aerospace test software.
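
    A path generation step of this kind can be sketched as bounded depth-first enumeration over a control-flow-style graph; the graph and loop bound below are invented for illustration and are not the paper's algorithm:

        # Sketch of path enumeration with a loop bound: each edge is taken at
        # most loop_bound times per path, so looping models still yield
        # finitely many meaningful paths.

        def enumerate_paths(graph, start, end, loop_bound=2):
            paths, stack = [], [(start, [start], {})]
            while stack:
                node, path, edge_counts = stack.pop()
                if node == end:
                    paths.append(path)
                    continue
                for succ in graph.get(node, []):
                    edge = (node, succ)
                    count = edge_counts.get(edge, 0)
                    if count < loop_bound:       # revisit loop edges sparingly
                        counts = dict(edge_counts)
                        counts[edge] = count + 1
                        stack.append((succ, path + [succ], counts))
            return paths

        # entry -> branch -> (loop body) -> exit
        cfg = {"entry": ["branch"], "branch": ["loop", "exit"], "loop": ["branch"]}
        for p in enumerate_paths(cfg, "entry", "exit"):
            print(" -> ".join(p))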

  12. IHE cross-enterprise document sharing for imaging: interoperability testing software

    Directory of Open Access Journals (Sweden)

    Renaud Bérubé

    2010-09-01

    Full Text Available Abstract Background With the deployment of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. Results In this paper we describe software that is used to test systems involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross Enterprise Document Sharing for Imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the elected design solutions. Conclusions EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for an easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specification ambiguities, or to resolve implementation difficulties.

  13. Design of LabVIEW based test system software for MDC electronics

    International Nuclear Information System (INIS)

    Xue Lin; Huazhong Normal Univ., Wuhan; Huang Guangming; Zhang Hongyu; Jiang Xiaoshan; Sheng Huayi; Zhuang Baoan

    2006-01-01

    This paper presents the design of the test system software for the MDC electronics. The highly modular software, developed in LabVIEW and VC++ 6.0, has been applied in hardware debugging and performance testing. LabVIEW and its DLL calling mechanism are introduced briefly. The testing functions of the software, as well as its user interfaces, are described in detail. (authors)

  14. EQ3/6 software test and verification report 9/94

    International Nuclear Information System (INIS)

    Kishi, T.

    1996-02-01

    This document is the Software Test and Verification Report (STVR) for the EQ3/6 suite of codes as stipulated in the Individual Software Plan for Initial Qualification of EQ3/6 (ISP-NF-07, Revision 1, 11/25/92). The software codes, EQPT, EQ3NR, EQ6, and the software library EQLIB constitute the EQ3/6 software package. This software test and verification project for EQ3/6 was started under the requirements of the LLNL Yucca Mountain Project Software Quality Assurance Plan (SQAP), Revision 0, December 14, 1989, but QP 3.2, Revision 2, June 21, 1994 is now the operative controlling procedure. This is a ''V and V'' report in the language of QP 3.2, Revision 2. Because the author of this report does not have a background in geochemistry, other technical sources were consulted in order to acquire some familiarity with geochemistry and the terminology involved, and to review comparable computational methods, especially geochemical aqueous speciation-solubility calculations. The software for the EQ3/6 package consists of approximately 47,000 lines of FORTRAN77 source code and runs on nine platforms ranging from workstations to supercomputers. The physical control of the EQ3/6 software package and documentation is on a SUN SPARC station. Walkthroughs of each principal software package, EQPT, EQ3NR, and EQ6, were conducted in order to understand the computational procedures involved, to determine any commonality in procedures, and then to establish a plan for the test and verification of EQ3/6. It became evident that all three codes depended upon solving an n x n matrix equation by the Newton-Raphson method. Thus, a great deal of emphasis on the test and verification of this procedure was carried out on the first code in the software package, EQPT.

  15. EQ3/6 software test and verification report 9/94

    Energy Technology Data Exchange (ETDEWEB)

    Kishi, T.

    1996-02-01

    This document is the Software Test and Verification Report (STVR) for the EQ3/6 suite of codes as stipulated in the Individual Software Plan for Initial Qualification of EQ3/6 (ISP-NF-07, Revision 1, 11/25/92). The software codes, EQPT, EQ3NR, EQ6, and the software library EQLIB constitute the EQ3/6 software package. This software test and verification project for EQ3/6 was started under the requirements of the LLNL Yucca Mountain Project Software Quality Assurance Plan (SQAP), Revision 0, December 14, 1989, but QP 3.2, Revision 2, June 21, 1994 is now the operative controlling procedure. This is a ``V and V`` report in the language of QP 3.2, Revision 2. Because the author of this report does not have a background in geochemistry, other technical sources were consulted in order to acquire some familiarity with geochemistry and the terminology involved, and to review comparable computational methods, especially geochemical aqueous speciation-solubility calculations. The software for the EQ3/6 package consists of approximately 47,000 lines of FORTRAN77 source code and runs on nine platforms ranging from workstations to supercomputers. The physical control of the EQ3/6 software package and documentation is on a SUN SPARC station. Walkthroughs of each principal software package, EQPT, EQ3NR, and EQ6, were conducted in order to understand the computational procedures involved, to determine any commonality in procedures, and then to establish a plan for the test and verification of EQ3/6. It became evident that all three codes depended upon solving an n x n matrix equation by the Newton-Raphson method. Thus, a great deal of emphasis on the test and verification of this procedure was carried out on the first code in the software package, EQPT.

  16. Development of a test rig and its application for validation and reliability testing of safety-critical software

    Energy Technology Data Exchange (ETDEWEB)

    Thai, N D; McDonald, A M [Atomic Energy of Canada Ltd., Mississauga, ON (Canada)

    1996-12-31

    This paper describes a versatile test rig developed by AECL for functional testing of safety-critical software used in the process trip computers of the Wolsong CANDU stations. The description covers the hardware and software aspects of the test rig, the test language and its interpreter, and other major testing software utilities such as the test oracle, sampler and profiler. The paper also discusses the application of the rig in the final stages of testing of the process trip computer software, namely validation and reliability tests. It shows how random test cases are generated, test scripts prepared and automatically run on the test rig. The versatility of the rig is further demonstrated in other types of testing such as sub-system tests, verification of the test oracle, testing of newly-developed test script, self-test and calibration. (author). 5 tabs., 10 figs.

  17. Development of a test rig and its application for validation and reliability testing of safety-critical software

    International Nuclear Information System (INIS)

    Thai, N.D.; McDonald, A.M.

    1995-01-01

    This paper describes a versatile test rig developed by AECL for functional testing of safety-critical software used in the process trip computers of the Wolsong CANDU stations. The description covers the hardware and software aspects of the test rig, the test language and its interpreter, and other major testing software utilities such as the test oracle, sampler and profiler. The paper also discusses the application of the rig in the final stages of testing of the process trip computer software, namely validation and reliability tests. It shows how random test cases are generated, test scripts prepared and automatically run on the test rig. The versatility of the rig is further demonstrated in other types of testing such as sub-system tests, verification of the test oracle, testing of newly-developed test script, self-test and calibration. (author). 5 tabs., 10 figs

  18. Leveraging the wisdom of the crowd in software testing

    CERN Document Server

    Sharma, Mukesh

    2015-01-01

    Its scale, flexibility, cost effectiveness, and fast turnaround are just a few reasons why crowdsourced testing has received so much attention lately. While there are a few online resources that explain what crowdsourced testing is all about, there's been a need for a book that covers best practices, case studies, and the future of this technique.Filling this need, Leveraging the Wisdom of the Crowd in Software Testing shows you how to leverage the wisdom of the crowd in your software testing process. Its comprehensive coverage includes the history of crowdsourcing and crowdsourced testing, im

  19. Acceptance Test Plan for ANSYS Software

    International Nuclear Information System (INIS)

    CREA, B.A.

    2000-01-01

    This plan governs the acceptance testing of the ANSYS software (Full Mechanical Release 5.5) for use on Project Hanford Management Contract (PHMC) computer systems (either UNIX or Microsoft Windows/NT). There are two phases to the acceptance testing covered by this test plan: program execution in accordance with the guidance provided in installation manuals; and ensuring that the results of the execution are consistent with the expected physical behavior of the system being modeled.

  20. A theoretical basis for the analysis of multiversion software subject to coincident errors

    Science.gov (United States)

    Eckhardt, D. E., Jr.; Lee, L. D.

    1985-01-01

    Fundamental to the development of redundant software techniques (known as fault-tolerant software) is an understanding of the impact of multiple joint occurrences of errors, referred to here as coincident errors. A theoretical basis for the study of redundant software is developed which: (1) provides a probabilistic framework for empirically evaluating the effectiveness of a general multiversion strategy when component versions are subject to coincident errors, and (2) permits an analytical study of the effects of these errors. An intensity function, called the intensity of coincident errors, has a central role in this analysis. This function describes the propensity of programmers to introduce design faults in such a way that software components fail together when executing in the application environment. A condition under which a multiversion system is a better strategy than relying on a single version is given.
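
    The Eckhardt-Lee framing lends itself to a small simulation. Below is a minimal sketch (ours, not from the paper) of how a non-constant intensity of coincident errors degrades the benefit of majority voting: versions fail independently only conditional on the input, so rare hard inputs make them fail together. The intensity function and all numeric values are illustrative assumptions.

    import random

    # Hypothetical intensity function: the probability that a randomly chosen
    # version fails on input x.  It is deliberately non-constant, so failures
    # of independently developed versions become correlated ("coincident")
    # across the input space.
    def intensity(x):
        return 0.001 + 0.2 * (x > 0.95)   # rare hard inputs raise failure odds

    def majority_fails(n_versions, theta, rng):
        # Conditional on the input, versions fail independently with prob. theta.
        fails = sum(rng.random() < theta for _ in range(n_versions))
        return fails > n_versions // 2

    def estimate_failure_prob(n_versions, trials=200_000, seed=1):
        rng = random.Random(seed)
        failures = 0
        for _ in range(trials):
            x = rng.random()              # draw an input from the usage profile
            if majority_fails(n_versions, intensity(x), rng):
                failures += 1
        return failures / trials

    if __name__ == "__main__":
        print("1 version :", estimate_failure_prob(1))
        print("3 versions:", estimate_failure_prob(3))

    With a constant intensity, the three-version estimate would track the independent binomial prediction; concentrating failures on rare hard inputs is what erodes that gain.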

  1. Reliability optimization of series–parallel systems with mixed redundancy strategy in subsystems

    International Nuclear Information System (INIS)

    Abouei Ardakan, Mostafa; Zeinal Hamadani, Ali

    2014-01-01

    Traditionally in the redundancy allocation problem (RAP), it is assumed that redundant components are used according to a predefined active or standby strategy. Recently, some studies have considered situations in which both active and standby strategies can be used in the same system. However, these studies assume that the redundancy strategy for each subsystem is either active or standby, and determine the best strategy for each subsystem using a suitable mathematical model. As an extension of this assumption, a novel strategy is introduced that combines the traditional active and standby strategies. The new strategy is called the mixed strategy and uses both active and cold-standby units in one subsystem simultaneously. The problem is therefore to determine the component type, redundancy level, and number of active and cold-standby units for each subsystem in order to maximize system reliability. To make the model more practical, the problem is formulated with imperfect switching of cold-standby redundant components and a k-Erlang time-to-failure (TTF) distribution. As the optimization of RAP belongs to the NP-hard class of problems, a genetic algorithm (GA) is developed. The new strategy and proposed GA are applied to a well-known test problem from the literature, leading to interesting results. - Highlights: • In this paper the redundancy allocation problem (RAP) for a series–parallel system is considered. • Traditionally there are two main strategies for redundant components, namely active and standby. • In this paper a new redundancy strategy, called the “mixed” redundancy strategy, is introduced. • Computational experiments demonstrate that implementing the new strategy leads to interesting results.
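
    To make the mixed strategy concrete, here is a small Monte Carlo sketch (our illustration, not the authors' model or GA) comparing active, cold-standby, and mixed configurations of one subsystem under imperfect switching and k-Erlang times to failure. All parameter values are assumptions.

    import random

    # Monte Carlo comparison of active, cold-standby, and mixed redundancy for
    # a single subsystem.  Assumed parameters: k-Erlang time to failure (shape
    # K, rate LAM), mission time T, switch success probability RHO per
    # cold-standby activation.
    K, LAM, T, RHO = 2, 0.5, 10.0, 0.95

    def ttf(rng):
        return rng.gammavariate(K, 1.0 / LAM)   # Erlang(K, rate LAM) draw

    def survives(n_active, n_standby, rng):
        # Active units run in parallel from t = 0; the subsystem is up while
        # any of them runs.  When the last active unit dies, cold spares are
        # switched in one at a time (each switch succeeds with probability RHO).
        t = max(ttf(rng) for _ in range(n_active))
        for _ in range(n_standby):
            if t >= T:
                return True
            if rng.random() > RHO:               # switch failed: spare is lost
                continue
            t += ttf(rng)                        # cold spare starts as good as new
        return t >= T

    def reliability(n_active, n_standby, trials=100_000, seed=7):
        rng = random.Random(seed)
        return sum(survives(n_active, n_standby, rng) for _ in range(trials)) / trials

    if __name__ == "__main__":
        print("pure active  (3 active + 0 spare):", reliability(3, 0))
        print("pure standby (1 active + 2 spare):", reliability(1, 2))
        print("mixed        (2 active + 1 spare):", reliability(2, 1))

    The ranking of the three configurations shifts with the switch success probability RHO; that trade-off, together with component type and redundancy level, is what the authors' GA searches over.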

  2. The theory of diversity and redundancy in information system security : LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Mayo, Jackson R. (Sandia National Laboratories, Livermore, CA); Torgerson, Mark Dolan; Walker, Andrea Mae; Armstrong, Robert C. (Sandia National Laboratories, Livermore, CA); Allan, Benjamin A. (Sandia National Laboratories, Livermore, CA); Pierson, Lyndon George

    2010-10-01

    The goal of this research was to explore first principles associated with mixing of diverse implementations in a redundant fashion to increase the security and/or reliability of information systems. Inspired by basic results in computer science on the undecidable behavior of programs and by previous work on fault tolerance in hardware and software, we have investigated the problem and solution space for addressing potentially unknown and unknowable vulnerabilities via ensembles of implementations. We have obtained theoretical results on the degree of security and reliability benefits from particular diverse system designs, and mapped promising approaches for generating and measuring diversity. We have also empirically studied some vulnerabilities in common implementations of the Linux operating system and demonstrated the potential for diversity to mitigate these vulnerabilities. Our results provide foundational insights for further research on diversity and redundancy approaches for information systems.

  3. The development of test software for the inadequate core cooling monitoring system

    International Nuclear Information System (INIS)

    Lee, Soon Sung.

    1996-06-01

    The test software, including the ICCMS simulator, which is necessary for dynamic testing of the ICCMS software in a PWR, has been developed. The developed dynamic test software consists of the module test simulator, the integration test simulator, and the test result analyser. The simulator was programmed in C according to the same algorithm requirements as the FORTRAN version of the ICCMS software, and is also used for the Factory Acceptance Test (FAT). The simulator can additionally serve as a training tool for reactor operators and as a development tool for performance improvement of the system. (author). 4 tabs., 8 figs., 11 refs

  4. Small-scale fixed wing airplane software verification flight test

    Science.gov (United States)

    Miller, Natasha R.

    The increased demand for micro Unmanned Air Vehicles (UAV), driven by military requirements, commercial use, and academia, is creating a need for the ability to quickly and accurately conduct low Reynolds number aircraft design. There exist several open-source software programs, free or inexpensive, that can be used for large-scale aircraft design, but few software programs target the realm of low Reynolds number flight. XFLR5 is an open-source, free-to-download software program that attempts to take into consideration the viscous effects that occur at low Reynolds number in airfoil design, 3D wing design, and 3D airplane design. An off-the-shelf, remote-control airplane was used as a test bed, modeled in XFLR5, and then compared to flight test data. Flight testing focused on the stability modes of the 3D plane, specifically the phugoid mode. Design and execution of the flight tests were accomplished for the RC airplane using methodology from full-scale military airplane test procedures. Results from flight test were not conclusive in determining the accuracy of the XFLR5 software program. There were several sources of uncertainty that did not allow for a full analysis of the flight test results. An off-the-shelf drone autopilot was used as a data collection device for flight testing. The precision and accuracy of the autopilot are unknown. Potential future work should investigate flight test methods for small-scale UAV flight.

  5. What We Know about Software Test Maturity and Test Process Improvement

    NARCIS (Netherlands)

    Garousi, Vahid; Felderer, Michael; Hacaloglu, Tuna

    2018-01-01

    In many companies, software testing practices and processes are far from mature and are usually conducted in an ad hoc fashion. Such immature practices lead to negative outcomes - for example, testing that doesn't detect all the defects or that incurs cost and schedule overruns. To conduct test

  6. Mediated Instruction and Redundancy Remediation in Sciences in ...

    African Journals Online (AJOL)

    The data were analyzed using t-test statistics. Data analysis revealed that the use of mediated instruction significantly removed redundancy for science students; the use of mediated instruction also influenced the academic achievement of science students in secondary schools. Some of the recommendations include that science ...

  7. Performance testing of 3D point cloud software

    Science.gov (United States)

    Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.

    2013-10-01

    LiDAR systems have been used widely in recent years for many applications in the engineering field: civil engineering, cultural heritage, mining, industry, and environmental engineering. One of the most important limitations of this technology is the large computational requirement involved in data processing, especially for large mobile LiDAR datasets. Several software solutions for data management are available in the market, including open-source suites; however, users often lack methodologies to verify their performance properly. In this work a methodology for LiDAR software performance testing is presented and four different suites are studied: QT Modeler, VR Mesh, AutoCAD Civil 3D, and the Point Cloud Library running in software developed at the University of Vigo (SITEGI). The software based on the Point Cloud Library shows better results in point cloud loading time and CPU usage. However, it is not as strong as the commercial suites in the working set and commit size tests.
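
    The measurements named here (loading time, CPU usage, working set, commit size) can be scripted along the following lines. This is a minimal sketch assuming the third-party psutil library; load_point_cloud and the file path are placeholders for the suite and dataset under test, not names from the article.

    import os
    import time

    import psutil  # third-party process-metrics library (assumed available)

    # Sample loading time, CPU usage, working set, and commit size around a
    # point-cloud load performed in this process.
    def measure_load(load_point_cloud, path):
        proc = psutil.Process(os.getpid())
        proc.cpu_percent(interval=None)           # prime the per-process counter
        t0 = time.perf_counter()
        cloud = load_point_cloud(path)
        elapsed = time.perf_counter() - t0
        cpu = proc.cpu_percent(interval=None)     # % CPU since the priming call
        mem = proc.memory_info()
        return cloud, {
            "load_seconds": elapsed,
            "cpu_percent": cpu,
            "working_set_mb": mem.rss / 2**20,    # resident set ~ working set
            "commit_mb": mem.vms / 2**20,         # virtual size ~ commit charge
        }

    # Example (hypothetical loader): cloud, stats = measure_load(read_las, "scan.las")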

  8. Performance testing of 3D point cloud software

    Directory of Open Access Journals (Sweden)

    M. Varela-González

    2013-10-01

    Full Text Available LiDAR systems have been used widely in recent years for many applications in the engineering field: civil engineering, cultural heritage, mining, industry, and environmental engineering. One of the most important limitations of this technology is the large computational requirement involved in data processing, especially for large mobile LiDAR datasets. Several software solutions for data management are available in the market, including open-source suites; however, users often lack methodologies to verify their performance properly. In this work a methodology for LiDAR software performance testing is presented and four different suites are studied: QT Modeler, VR Mesh, AutoCAD Civil 3D, and the Point Cloud Library running in software developed at the University of Vigo (SITEGI). The software based on the Point Cloud Library shows better results in point cloud loading time and CPU usage. However, it is not as strong as the commercial suites in the working set and commit size tests.

  9. Human genetics of infectious diseases: Unique insights into immunological redundancy.

    Science.gov (United States)

    Casanova, Jean-Laurent; Abel, Laurent

    2018-04-01

    For almost any given human-tropic virus, bacterium, fungus, or parasite, the clinical outcome of primary infection is enormously variable, ranging from asymptomatic to lethal infection. This variability has long been thought to be largely determined by the germline genetics of the human host, and this is increasingly being demonstrated to be the case. The number and diversity of known inborn errors of immunity is continually increasing, and we focus here on autosomal and X-linked recessive traits underlying complete deficiencies of the encoded protein. Schematically, four types of infectious phenotype have been observed in individuals with such deficiencies, each providing information about the redundancy of the corresponding human gene, in terms of host defense in natural conditions. The lack of a protein can confer vulnerability to a broad range of microbes in most, if not all patients, through the disruption of a key immunological component. In such cases, the gene concerned is of low redundancy. However, the lack of a protein may also confer vulnerability to a narrow range of microbes, sometimes a single pathogen, and not necessarily in all patients. In such cases, the gene concerned is highly redundant. Conversely, the deficiency may be apparently neutral, conferring no detectable predisposition to infection in any individual. In such cases, the gene concerned is completely redundant. Finally, the lack of a protein may, paradoxically, be advantageous to the host, conferring resistance to one or more infections. In such cases, the gene is considered to display beneficial redundancy. These findings reflect the current state of evolution of humans and microbes, and should not be considered predictive of redundancy, or of a lack of redundancy, in the distant future. Nevertheless, these observations are of potential interest to present-day biologists testing immunological hypotheses experimentally and physicians managing patients with immunological or infectious

  10. Creating a simulation model of software testing using Simulink package

    Directory of Open Access Journals (Sweden)

    V. M. Dubovoi

    2016-12-01

    Full Text Available The determination of a model of the software testing process that allows prediction of both the whole process and its specific stages is a pressing problem for the IT industry. The article focuses on solving this problem. The aim of the article is to predict the time and improve the quality of software testing. The analysis of the software testing process shows that it can be classed among branched cyclic technological processes, because it is cyclical with decision-making on control operations. The investigation builds on the authors' previous work and a software testing process method based on a Markov model. The proposed method enables prediction for each software module, which leads to better decision-making in each controlled suboperation of the whole process. A Simulink simulation model shows the implementation and verification of the results of the proposed technique. The results of the research have been practically implemented in the IT industry.
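
    A Markov model of a branched cyclic testing process can be simulated in a few lines. The sketch below is our illustration, not the authors' Simulink model: it estimates the expected time per test case when failed tests loop back through a fix stage. States, transition probabilities, and durations are assumed.

    import random

    # Illustrative Markov model of a branched, cyclic testing process: from
    # "test" a case either passes, or fails and loops back through "fix".
    TRANSITIONS = {
        "design": [("test", 1.0)],
        "test":   [("pass", 0.7), ("fix", 0.3)],
        "fix":    [("test", 1.0)],
    }
    DURATION = {"design": 2.0, "test": 1.0, "fix": 3.0}  # hours per visit

    def simulate(rng):
        state, elapsed = "design", 0.0
        while state != "pass":
            elapsed += DURATION[state]
            choices, weights = zip(*TRANSITIONS[state])
            state = rng.choices(choices, weights)[0]
        return elapsed

    def expected_duration(trials=100_000, seed=3):
        rng = random.Random(seed)
        return sum(simulate(rng) for _ in range(trials)) / trials

    if __name__ == "__main__":
        print(f"expected time per test case: {expected_duration():.2f} h")

    Replacing the toy transition table with stage data measured for a concrete module yields per-module predictions of the kind the article describes.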

  11. Common Data Acquisition Systems (DAS) Software Development for Rocket Propulsion Test (RPT) Test Facilities - A General Overview

    Science.gov (United States)

    Hebert, Phillip W., Sr.; Hughes, Mark S.; Davis, Dawn M.; Turowski, Mark P.; Holladay, Wendy T.; Marshall, Peggy L.; Duncan, Michael E.; Morris, Jon A.; Franzl, Richard W.

    2012-01-01

    The advent of the commercial space launch industry and NASA's more recent resumption of operation of Stennis Space Center's large test facilities, after thirty years of contractor control, resulted in a need for non-proprietary data acquisition system (DAS) software to support government and commercial testing. The software is designed for modularity and adaptability to minimize the software development effort for current and future data systems. An additional benefit of the software's architecture is its ability to easily migrate to other testing facilities, thus providing future commonality across Stennis. Adapting the software to other Rocket Propulsion Test (RPT) Centers such as MSFC, White Sands, and Plumbrook Station would provide additional commonality and help reduce testing costs for NASA. Ultimately, the software provides the government with unlimited rights and guarantees privacy of data to commercial entities. The project engaged all RPT Centers and NASA's Independent Verification & Validation facility to enhance product quality. The design consists of a translation layer, which provides transparency of the software application layers to the underlying hardware regardless of test facility location, and a flexible and easily accessible database. This presentation addresses the system's technical design, issues encountered, and the status of Stennis' development and deployment.

  12. Florida alternative NTCIP testing software (ANTS) for actuated signal controllers.

    Science.gov (United States)

    2009-01-01

    The scope of this research project included the development of a software tool to test devices for NTCIP compliance. The Florida Alternative NTCIP Testing Software (ANTS) was developed by the research team due to limitations found w...

  13. Rules of thumb to increase the software quality through testing

    Science.gov (United States)

    Buttu, M.; Bartolini, M.; Migoni, C.; Orlati, A.; Poppi, S.; Righini, S.

    2016-07-01

    Software maintenance typically requires 40-80% of the overall project costs, and this considerable variability mostly depends on the software's internal quality: the more the software is designed and implemented to constantly welcome new changes, the lower the maintenance costs will be. Internal quality is typically enforced through testing, which in turn also affects the development and maintenance costs. This is the reason why testing methodologies have become a major concern for any company that builds - or is involved in building - software. Although there is no testing approach that suits all contexts, we infer some general guidelines learned during the development of the Italian Single-dish COntrol System (DISCOS), a project aimed at producing the control software for the three INAF radio telescopes (the Medicina and Noto dishes, and the newly-built SRT). These guidelines concern both the development and the maintenance phases, and their ultimate goal is to maximize the DISCOS software quality through a Behavior-Driven Development (BDD) workflow alongside a continuous delivery pipeline. We consider different topics and patterns; they involve the proper apportionment of the tests (from end-to-end to low-level tests), the choice between hardware simulators and mockers, why and how to apply TDD and dependency injection to increase the test coverage, the emerging technologies available for test isolation, bug fixing, how to protect the system from changes in external resources (firmware updating, hardware substitution, etc.) and, eventually, how to accomplish BDD starting from functional tests and going through integration and unit tests. We discuss the pros and cons of each solution and point out the motivations of our choices, either as general rules or narrowed to the context of the DISCOS project.
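
    As an illustration of the dependency-injection point above, the sketch below shows how injecting a hardware interface lets a unit test replace the real device with a fake. The class and method names are ours, invented for the example, not DISCOS code.

    # The controller receives its servo interface instead of constructing it,
    # so a unit test can inject a fake and run without hardware.
    class FakeServo:
        def __init__(self):
            self.commanded = []
        def move_to(self, az, el):
            self.commanded.append((az, el))

    class AntennaController:
        def __init__(self, servo):
            self.servo = servo          # injected dependency (real or fake)
        def track(self, az, el):
            az %= 360.0                 # normalize before commanding hardware
            self.servo.move_to(az, el)

    def test_track_normalizes_azimuth():
        servo = FakeServo()
        AntennaController(servo).track(370.0, 45.0)
        assert servo.commanded == [(10.0, 45.0)]

    if __name__ == "__main__":
        test_track_normalizes_azimuth()
        print("ok")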

  14. Claire, a simulation and testing tool for critical softwares

    International Nuclear Information System (INIS)

    Gassino, J.; Henry, J.Y.

    1996-01-01

    The needs of the CEA and IPSN (Institute of Nuclear Protection and Safety) concerning the testing of critical software have led to the development of the CLAIRE tool, which is able to test software without modification. This tool allows the user to graphically model the system and its environment, and to include in the model components which observe, but do not modify, the behaviour of the system to be tested. The executable codes are integrated in the model. The tool uses target machine simulators (microprocessors). The technique used (event simulation) allows actions to be associated with events such as the execution of an instruction, access to a variable, etc. The simulation results are exploited using graphics, state-search, and test coverage measurement tools. In particular, this tool can assist in the evaluation of critical software with pre-existing components. (J.S.)

  15. Nonlinear Redundancy Analysis. Research Report 88-1.

    Science.gov (United States)

    van der Burg, Eeke; de Leeuw, Jan

    A non-linear version of redundancy analysis is introduced. The technique is called REDUNDALS. It is implemented within the computer program for canonical correlation analysis called CANALS. The REDUNDALS algorithm is of the alternating least squares (ALS) type. The technique is defined as minimization of a squared distance between criterion…

  16. A Method to Select Software Test Cases in Consideration of Past Input Sequence

    International Nuclear Information System (INIS)

    Kim, Hee Eun; Kim, Bo Gyung; Kang, Hyun Gook

    2015-01-01

    In the Korea Nuclear I and C Systems (KNICS) project, the software for the fully digitalized reactor protection system (RPS) was developed under a strict procedure. Even though the behavior of the software is deterministic, the randomness of the input sequence produces probabilistic behavior of the software. A software failure occurs when some inputs to the software occur and interact with the internal state of the digital system to trigger a fault that was introduced into the software during the software lifecycle. In this paper, a method to select the test set for software failure probability estimation is suggested. This test set reflects the past input sequence of the software, covering all possible cases. In this study, the method to select test cases for software failure probability quantification was suggested. To obtain the profile of paired state variables, the relationships of the variables need to be considered. The effect of input from the human operator also has to be considered. As an example, the test set of the PZR-PR-Lo-Trip logic was examined. This method provides a framework for selecting test cases of safety-critical software.

  17. Imaging Sensor Flight and Test Equipment Software

    Science.gov (United States)

    Freestone, Kathleen; Simeone, Louis; Robertson, Byran; Frankford, Maytha; Trice, David; Wallace, Kevin; Wilkerson, DeLisa

    2007-01-01

    The Lightning Imaging Sensor (LIS) is one of the components onboard the Tropical Rainfall Measuring Mission (TRMM) satellite, and was designed to detect and locate lightning over the tropics. The LIS flight code was developed to run on a single onboard digital signal processor, and has operated the LIS instrument since 1997 when the TRMM satellite was launched. The software provides controller functions to the LIS Real-Time Event Processor (RTEP) and onboard heaters, collects the lightning event data from the RTEP, compresses and formats the data for downlink to the satellite, collects housekeeping data and formats the data for downlink to the satellite, provides command processing and interface to the spacecraft communications and data bus, and provides watchdog functions for error detection. The Special Test Equipment (STE) software was designed to operate specific test equipment used to support the LIS hardware through development, calibration, qualification, and integration with the TRMM spacecraft. The STE software provides the capability to control instrument activation, commanding (including both data formatting and user interfacing), data collection, decompression, and display and image simulation. The LIS STE code was developed for the DOS operating system in the C programming language. Because of the many unique data formats implemented by the flight instrument, the STE software was required to comprehend the same formats, and translate them for the test operator. The hardware interfaces to the LIS instrument using both commercial and custom computer boards, requiring that the STE code integrate this variety into a working system. In addition, the requirement to provide RTEP test capability dictated the need to provide simulations of background image data with short-duration lightning transients superimposed. This led to the development of unique code used to control the location, intensity, and variation above background for simulated lightning strikes

  18. In-flight performance optimization for rotorcraft with redundant controls

    Science.gov (United States)

    Ozdemir, Gurbuz Taha

    A conventional helicopter has limits on performance at high speeds because of limitations of the main rotor, such as compressibility issues on the advancing side or stall issues on the retreating side. Auxiliary lift and thrust components have been suggested to improve the performance of the helicopter substantially by reducing the loading on the main rotor. Such a configuration is called a compound rotorcraft. Rotor speed can also be varied to improve helicopter performance. In addition to improved performance, a compound rotorcraft and variable RPM can provide a much larger degree of control redundancy. This additional redundancy gives the opportunity to further enhance performance and handling qualities. A flight control system is designed to perform in-flight optimization of redundant control effectors on a compound rotorcraft in order to minimize the power required and extend range. This "Fly to Optimal" (FTO) control law is tested in simulation using the GENHEL model. Models of the UH-60, of a compound version of the UH-60A with lifting wing and vectored thrust ducted propeller (VTDP), and of a generic compound version of the UH-60A with lifting wing and propeller were developed and tested in simulation. A model-following dynamic inversion controller is implemented for inner-loop control of roll, pitch, yaw, heave, and rotor RPM. An outer-loop controller regulates airspeed and flight path during optimization. A Golden Section search method was used to find the optimal rotor RPM on a conventional helicopter, where the single redundant control effector is rotor RPM. The FTO builds on the Adaptive Performance Optimization (APO) method of Gilyard, which performs low-frequency sweeps on a redundant control for a fixed-wing aircraft. A method based on the APO method was used to optimize trim on a compound rotorcraft with several redundant control effectors. The controller can be used to optimize rotor RPM and compound control effectors through flight test or simulations in order to
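
    Since the abstract names a Golden Section search over a single effector (rotor RPM), here is a minimal sketch of that search. The power_required curve is a made-up stand-in for the simulation model, and the bounds and tolerance are assumptions.

    import math

    # Notional bowl-shaped power-required curve; in practice each evaluation
    # would be a trimmed simulation or flight-test point.
    def power_required(rpm):
        return 0.002 * (rpm - 230.0) ** 2 + 1500.0

    def golden_section_min(f, lo, hi, tol=0.5):
        inv_phi = (math.sqrt(5.0) - 1.0) / 2.0     # 1/phi ~ 0.618
        a, b = lo, hi
        c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
        fc, fd = f(c), f(d)
        while b - a > tol:
            if fc < fd:                 # minimum lies in [a, d]
                b, d, fd = d, c, fc
                c = b - inv_phi * (b - a)
                fc = f(c)
            else:                       # minimum lies in [c, b]
                a, c, fc = c, d, fd
                d = a + inv_phi * (b - a)
                fd = f(d)
        return (a + b) / 2.0

    if __name__ == "__main__":
        best = golden_section_min(power_required, 180.0, 280.0)
        print(f"optimal RPM ~ {best:.1f}, power ~ {power_required(best):.1f}")

    Golden-section search needs no derivatives and costs only one new function evaluation per iteration, which suits an objective that is itself a trim simulation or a flight-test sweep.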

  19. Information theory and artificial grammar learning: inferring grammaticality from redundancy.

    Science.gov (United States)

    Jamieson, Randall K; Nevzorova, Uliana; Lee, Graham; Mewhort, D J K

    2016-03-01

    In artificial grammar learning experiments, participants study strings of letters constructed using a grammar and then sort novel grammatical test exemplars from novel ungrammatical ones. The ability to distinguish grammatical from ungrammatical strings is often taken as evidence that the participants have induced the rules of the grammar. We show that judgements of grammaticality are predicted by the local redundancy of the test strings, not by grammaticality itself. The prediction holds in a transfer test in which test strings involve different letters than the training strings. Local redundancy is usually confounded with grammaticality in stimuli widely used in the literature. The confounding explains why the ability to distinguish grammatical from ungrammatical strings has popularized the idea that participants have induced the rules of the grammar, when they have not. We discuss the judgement of grammaticality task in terms of attribute substitution and pattern goodness. When asked to judge grammaticality (an inaccessible attribute), participants answer an easier question about pattern goodness (an accessible attribute).
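
    One simple way to operationalize "local redundancy" is bigram familiarity relative to the training strings. The sketch below is our illustration of that idea, with made-up strings rather than the authors' stimuli.

    from collections import Counter

    # Score a test string by how familiar its bigrams are, given the training
    # strings; the score needs no knowledge of the grammar's rules.
    TRAIN = ["MTTVT", "MTVRXT", "VXVRXT", "VXXVT"]

    def bigrams(s):
        return [s[i:i + 2] for i in range(len(s) - 1)]

    table = Counter(b for s in TRAIN for b in bigrams(s))

    def redundancy_score(s):
        grams = bigrams(s)
        return sum(table[b] for b in grams) / len(grams)  # mean bigram familiarity

    for test in ["MTVT", "MRVX"]:
        print(test, redundancy_score(test))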

  20. Software test and validation of wireless sensor nodes used in nuclear power plant

    International Nuclear Information System (INIS)

    Deng Changjian; Chen Dongyi; Zhang Heng

    2015-01-01

    The software test and validation of wireless sensor nodes is one of the key approaches to improving or guaranteeing the reliability of wireless network applications in nuclear power plants (NPPs). First, to validate the software test, some concepts are defined quantitatively, for example the robustness of software, the reliability of software, and the security of software. Then the development tools and simulators for discrete-event-driven operating systems are compared, in order to present a robustness, reliability, and security software test approach based on input-output functions. Some simple preliminary test results are given to show that different development software can obtain almost the same measurement and communication results, although the software of a special application may differ from a normal application. (author)

  1. Balancing technical and regulatory concerns related to testing and control of performance assessment software

    International Nuclear Information System (INIS)

    Seitz, R.R.; Matthews, S.D.; Kostelnik, K.M.

    1990-01-01

    What activities are required to assure that a performance assessment (PA) computer code operates as intended? Answers to this question will vary depending on the individual's area of expertise. Different perspectives on testing and control of PA software are discussed, based on interpretations of the testing and control process by the different parties involved. This discussion leads into the presentation of a general approach to software testing and control that addresses regulatory requirements. Finally, the need for balance between regulatory and scientific concerns is illustrated through lessons learned in previous implementations of software testing and control programs. Configuration control and software testing are required to provide assurance that a computer code performs as intended. Configuration control provides traceability and reproducibility of results produced with PA software and provides a system to assure that users have access to the current version of the software. Software testing is conducted to assure that the computer code has been written properly, that solution techniques have been properly implemented, and that the software is capable of representing the behavior of the specific system to be modeled. Comprehensive software testing includes: software analysis, verification testing, benchmark testing, and site-specific calibration/validation testing.

  2. Controlatron Neutron Tube Test Suite Software Manual - Operation Manual (V2.2)

    CERN Document Server

    Noel, W P; Hertrich, R J; Martinez, M L; Wallace, D L

    2002-01-01

    The Controlatron Software Suite is a custom-built application to perform automated testing of Controlatron neutron tubes. The software package was designed to allow users to design tests and to run a series of test suites on a tube. The data are output to ASCII files of a pre-defined format for data analysis and viewing with the Controlatron Data Viewer Application. This manual covers the operation of the Controlatron Test Suite Software and includes a brief discussion of state machine theory, as a state machine is the functional basis of the software.

  3. TMACS test procedure TP012: Panalarm software bridge

    International Nuclear Information System (INIS)

    Washburn, S.J.

    1994-01-01

    This Test Procedure addresses the testing of the functionality of the Tank Monitor and Control System (TMACS) Panalarm bridge software. The features to be tested are: Bridge Initialization Options; Bridge Communication; Bridge Performance; Testing Checksum Errors; and Testing Command Reject Errors. Only the first three could be tested; the last two have been deferred to a later date

  4. Southern California Seismic Network: New Design and Implementation of Redundant and Reliable Real-time Data Acquisition Systems

    Science.gov (United States)

    Saleh, T.; Rico, H.; Solanki, K.; Hauksson, E.; Friberg, P.

    2005-12-01

    The Southern California Seismic Network (SCSN) handles more than 2500 high-data-rate channels from more than 380 seismic stations distributed across southern California. These data are imported in real time from dataloggers, earthworm hubs, and partner networks. The SCSN also exports data to eight different partner networks. Both the imported and exported data are critical for emergency response and scientific research. Previous data acquisition systems were complex and difficult to operate, because they grew in an ad hoc fashion to meet the increasing needs for distributing real-time waveform data. To maximize reliability and redundancy, we apply best-practice methods from computer science in implementing the software and hardware configurations for import, export, and acquisition of real-time seismic data. Our approach makes use of failover software designs, methods for dividing labor diligently amongst the network nodes, and state-of-the-art networking redundancy technologies. To facilitate maintenance and daily operations we seek to provide some separation between major functions such as data import, export, acquisition, archiving, real-time processing, and alarming. As an example, we make waveform import and export functions independent by operating them on separate servers. Similarly, two independent servers provide waveform export, allowing data recipients to implement their own redundancy. Data import is handled differently, using one primary server and a live backup server. These data import servers run failover software that allows automatic role switching from primary to shadow in case of failure. As in the classic earthworm design, all the acquired waveform data are broadcast onto a private network, which allows multiple machines to acquire and process the data. As we separate data import and export from acquisition, we are also working on new approaches to separate real-time processing and rapid, reliable archiving of real-time data
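
    The primary/shadow role switching described here follows a familiar heartbeat pattern. The sketch below is a conceptual illustration (ours, not SCSN code); the timeout and class names are assumed.

    import time

    # The shadow importer watches heartbeats from the primary and promotes
    # itself when the primary goes silent for too long.
    HEARTBEAT_TIMEOUT = 5.0  # seconds without a heartbeat before role switch

    class ShadowImporter:
        def __init__(self, now=time.monotonic):
            self.now = now
            self.last_beat = now()
            self.role = "shadow"

        def on_heartbeat(self):
            self.last_beat = self.now()

        def poll(self):
            # Called periodically; returns the node's current role.
            if self.role == "shadow" and self.now() - self.last_beat > HEARTBEAT_TIMEOUT:
                self.role = "primary"   # take over waveform import
            return self.role

    # Example: call on_heartbeat() whenever the primary's beat arrives and
    # poll() about once a second from the shadow's main loop.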

  5. Real-Time Extended Interface Automata for Software Testing Cases Generation

    Science.gov (United States)

    Yang, Shunkun; Xu, Jiaqi; Man, Tianlong; Liu, Bin

    2014-01-01

    Testing and verification of the interfaces between software components are particularly important due to the large number of complex interactions, which requires traditional modeling languages to overcome their existing shortcomings in temporal information description and in controlling software testing inputs. This paper presents the real-time extended interface automata (RTEIA), which add a clearer and more detailed temporal information description through the use of time words. We also establish an input interface automaton for every input, in order to flexibly solve the problems of input control and interface coverage when applied in the software testing field. Detailed definitions of the RTEIA and the test case generation algorithm are provided in this paper. The feasibility and efficiency of this method have been verified in the testing of one real aircraft braking system. PMID:24892080

  6. Formal Testing of Correspondence Carrying Software

    NARCIS (Netherlands)

    Bujorianu, M.C.; Bujorianu, L.M.; Maharaj, S.

    2008-01-01

    Nowadays formal software development is characterised by the use of a multitude of formal specification languages. Test case generation from formal specifications depends in general on the specific language, and, moreover, there are competing methods for each language. There is a need for a generic approach to

  7. Safety review on unit testing of safety system software of nuclear power plant

    International Nuclear Information System (INIS)

    Liu Le; Zhang Qi

    2013-01-01

    Software unit testing has an important place in the testing of safety system software of nuclear power plants, and in the wider scope of the verification and validation. It is a comprehensive, systematic process, and its documentation shall meet the related requirements. When reviewing software unit testing, attention should be paid to the coverage of software safety requirements, the coverage of software internal structure, and the independence of the work. (authors)

  8. Prototype Software for Automated Structural Analysis of Systems

    DEFF Research Database (Denmark)

    Jørgensen, A.; Izadi-Zamanabadi, Roozbeh; Kristensen, M.

    2004-01-01

    In this paper we present a prototype software tool that is developed to analyse the structural model of automated systems in order to identify redundant information that is hence utilized for Fault Detection and Isolation (FDI) purposes. The dedicated algorithms in this software tool use a tri-partite graph that represents the structural model of the system. A component-based approach has been used to address issues such as system complexity and reconfigurability possibilities.

  9. Sustainable Modular Adaptive Redundancy Technique Emphasizing Partial Reconfiguration for Reduced Power Consumption

    Directory of Open Access Journals (Sweden)

    R. Al-Haddad

    2011-01-01

    Full Text Available As reconfigurable devices' capacities and the complexity of applications that use them increase, the need for self-reliance of deployed systems becomes increasingly prominent. Organic computing paradigms have been proposed for fault-tolerant systems because they promote behaviors that allow complex digital systems to adapt and survive in demanding environments. In this paper, we develop a sustainable modular adaptive redundancy technique (SMART composed of a two-layered organic system. The hardware layer is implemented on a Xilinx Virtex-4 Field Programmable Gate Array (FPGA to provide self-repair using a novel approach called reconfigurable adaptive redundancy system (RARS. The software layer supervises the organic activities on the FPGA and extends the self-healing capabilities through application-independent, intrinsic, and evolutionary repair techniques that leverage the benefits of dynamic partial reconfiguration (PR. SMART was evaluated using a Sobel edge-detection application and was shown to tolerate stressful sequences of injected transient and permanent faults while reducing dynamic power consumption by 30% compared to conventional triple modular redundancy (TMR techniques, with nominal impact on the fault-tolerance capabilities. Moreover, PR is employed to keep the system on line while under repair and also to reduce repair time. Experiments have shown a 27.48% decrease in repair time when PR is employed compared to the full bitstream configuration case.

  10. Potential Errors and Test Assessment in Software Product Line Engineering

    Directory of Open Access Journals (Sweden)

    Hartmut Lackner

    2015-04-01

    Full Text Available Software product lines (SPL) are a method for the development of variant-rich software systems. Compared to non-variable systems, testing SPLs is extensive due to the increasing number of possible products. Different approaches exist for testing SPLs, but there is less research on assessing the quality of these tests by means of error detection capability. Such test assessment is based on error injection into a correct version of the system under test. However, to our knowledge, potential errors in SPL engineering have never been systematically identified before. This article presents an overview of existing paradigms for specifying software product lines and the errors that can occur during the respective specification processes. For the assessment of test quality, we apply mutation testing techniques to SPL engineering and implement the identified errors as mutation operators. This allows us to run existing tests against defective products for the purpose of test assessment. From the results, we draw conclusions about the error-proneness of the surveyed SPL design paradigms and how the quality of SPL tests can be improved.
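
    Mutation-based test assessment of this kind reduces to a small loop: apply an operator, rebuild the product, rerun the tests, and count kills. The sketch below is a toy illustration of that loop (ours, not the article's operator catalogue); the feature logic and the operator are made up.

    # Toy product with one feature condition; the mutation operator flips it.
    ORIGINAL = (
        "def price(base, premium):\n"
        "    return base + (20 if premium else 0)\n"
    )

    def mutate_feature_condition(src):
        # One illustrative SPL-style operator: negate the feature guard.
        return src.replace("if premium", "if not premium")

    def suite_passes(src):
        ns = {}
        exec(src, ns)                      # build the product under test
        price = ns["price"]
        return price(100, True) == 120 and price(100, False) == 100

    if __name__ == "__main__":
        assert suite_passes(ORIGINAL)      # tests pass on the correct product
        killed = not suite_passes(mutate_feature_condition(ORIGINAL))
        print("mutant killed:", killed)    # True -> the suite detects this error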

  11. Real-Time Extended Interface Automata for Software Testing Cases Generation

    Directory of Open Access Journals (Sweden)

    Shunkun Yang

    2014-01-01

    Full Text Available Testing and verification of the interfaces between software components are particularly important due to the large number of complex interactions, which requires traditional modeling languages to overcome their existing shortcomings in temporal information description and in controlling software testing inputs. This paper presents the real-time extended interface automata (RTEIA), which add a clearer and more detailed temporal information description through the use of time words. We also establish an input interface automaton for every input, in order to flexibly solve the problems of input control and interface coverage when applied in the software testing field. Detailed definitions of the RTEIA and the test case generation algorithm are provided in this paper. The feasibility and efficiency of this method have been verified in the testing of one real aircraft braking system.

  12. Software Unit Testing during the Development of Digital Reactor Protection System of HTR-PM

    International Nuclear Information System (INIS)

    Guo Chao; Xiong Huasheng; Li Duo; Zhou Shuqiao; Li Jianghai

    2014-01-01

    The Reactor Protection System (RPS) of the High Temperature Gas-Cooled Reactor - Pebble-bed Module (HTR-PM) is the first digital RPS designed and to be operated in a Nuclear Power Plant (NPP) in China, and its development process has received a great deal of attention around the world. As a 1E-level safety system, the RPS has to be designed and developed following a series of nuclear laws and technical disciplines, including software verification and validation (software V&V). The software V&V process demonstrates whether all stages of the software development are performed correctly, completely, accurately, and consistently, and whether the results of each stage are testable. Software testing is one of the most significant and time-consuming efforts during software V&V. In this paper, we give a comprehensive introduction to the software unit testing during the development of the RPS in HTR-PM. We first introduce the objectives of the testing for our project in the aspects of static testing, black-box testing, and white-box testing. Then the testing techniques, including static testing and dynamic testing, are explained, and the testing strategy we employed is also introduced. We then introduce the principles of the three coverage criteria we used: statement coverage, branch coverage, and modified condition/decision coverage. For 1E-level safety software, test coverage is mandated to reach 100%. We then discuss the details of safety software testing during software development in HTR-PM, including the organization, methods and tools, testing stages, and testing reports. The test results and experiences are shared, and finally we draw conclusions about the unit testing process. The introduction in this paper can contribute to improving the process of unit testing and software development for other digital instrumentation and control systems in NPPs. (author)
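
    The difference between branch coverage and modified condition/decision coverage (MC/DC) named above is easiest to see on a small predicate. The trip condition and test vectors below are our illustration, not the HTR-PM logic.

    # A two-condition trip decision.
    def trip(pressure_high, level_low):
        return pressure_high or level_low

    # Branch coverage only needs the decision to evaluate both True and False:
    branch_tests = [(True, True), (False, False)]

    # MC/DC additionally requires each condition to independently flip the
    # decision while the other is held fixed:
    mcdc_tests = [
        (True,  False),   # pressure_high alone makes the decision True
        (False, True),    # level_low alone makes the decision True
        (False, False),   # both False -> decision False
    ]

    for vec in mcdc_tests:
        print(vec, "->", trip(*vec))

    For n independent conditions, MC/DC can be achieved with as few as n + 1 vectors, whereas exhausting all condition combinations would need 2^n.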

  13. Using Fuzz Testing for Searching Software Vulnerabilities

    Directory of Open Access Journals (Sweden)

    Bogdan Leonidovich Kozirsky

    2014-12-01

    Full Text Available This article deals with fuzz testing (fuzzing), a software testing and vulnerability searching technique based on feeding programs random input data and analyzing their behavior afterwards. The basics of implementing a cmdline argument fuzzer, an environment variable fuzzer, and a syscall fuzzer in any UNIX-like OS have been closely investigated.
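
    A command-line argument fuzzer of the kind described can be only a few lines. This is a minimal sketch under our own assumptions (the TARGET path, timeout, and argument lengths are placeholders), not the article's implementation.

    import random
    import subprocess

    TARGET = "./target_program"   # placeholder path to the program under test

    def random_arg(rng, max_len=256):
        n = rng.randrange(1, max_len)
        # Arguments may not contain NUL bytes, so draw from 1..255.
        return bytes(rng.randrange(1, 256) for _ in range(n))

    def fuzz(iterations=100, seed=None):
        rng = random.Random(seed)
        for i in range(iterations):
            arg = random_arg(rng)
            try:
                proc = subprocess.run([TARGET, arg], capture_output=True, timeout=5)
            except (subprocess.TimeoutExpired, FileNotFoundError) as exc:
                print(f"[{i}] no result: {exc!r}")
                continue
            if proc.returncode < 0:     # killed by a signal, e.g. SIGSEGV = -11
                print(f"[{i}] crash (signal {-proc.returncode}) on {arg[:16]!r}...")

    if __name__ == "__main__":
        fuzz()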

  14. Software Testing Requires Variability

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    2003-01-01

    Software variability is the ability of a software system or artefact to be changed, customized or configured for use in a particular context. Variability in software systems is important from a number of perspectives. Some perspectives rightly receive much attention due to their direct economic...... impact in software production. As is also apparent from the call for papers these perspectives focus on qualities such as reuse, adaptability, and maintainability....

  15. Pragmatic Software Testing Becoming an Effective and Efficient Test Professional

    CERN Document Server

    Black, Rex

    2011-01-01

    A hands-on guide to testing techniques that deliver reliable software and systems. Testing even a simple system can quickly turn into a potentially infinite task. Faced with tight costs and schedules, testers need to have a toolkit of practical techniques combined with hands-on experience and the right strategies in order to complete a successful project. World-renowned testing expert Rex Black provides you with the proven methods and concepts that test professionals must know. He presents you with the fundamental techniques for testing and clearly shows you how to select and apply successful st

  16. CATS, continuous automated testing of seismological, hydroacoustic, and infrasound (SHI) processing software.

    Science.gov (United States)

    Brouwer, Albert; Brown, David; Tomuta, Elena

    2017-04-01

    To detect nuclear explosions, waveform data from over 240 SHI stations world-wide flows into the International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO), located in Vienna, Austria. A complex pipeline of software applications processes this data in numerous ways to form event hypotheses. The software codebase comprises over 2 million lines of code, reflects decades of development, and is subject to frequent enhancement and revision. Since processing must run continuously and reliably, software changes are subjected to thorough testing before being put into production. To overcome the limitations and cost of manual testing, the Continuous Automated Testing System (CATS) has been created. CATS provides an isolated replica of the IDC processing environment, and is able to build and test different versions of the pipeline software directly from code repositories that are placed under strict configuration control. Test jobs are scheduled automatically when code repository commits are made. Regressions are reported. We present the CATS design choices and test methods. Particular attention is paid to how the system accommodates the individual testing of strongly interacting software components that lack test instrumentation.

  17. Redundancy in Nigerian Business Organizations: Alternatives ...

    African Journals Online (AJOL)

    This theoretical discourse examined the incidence of work redundancy in Nigerian organizations so as to offer alternative options. Certainly, some redundancy exercises may be necessary for the survival of the organizations, but certain variables may influence employees' reactions to the exercises and thus influence the ...

  18. DYNAMIC PROGRAMMING APPROACH TO TESTING RESOURCE ALLOCATION PROBLEM FOR MODULAR SOFTWARE

    Directory of Open Access Journals (Sweden)

    P.K. Kapur

    2003-02-01

    Full Text Available The testing phase of software begins with module testing. During this period modules are tested independently to remove the maximum possible number of faults within a specified time limit or testing resource budget. This gives rise to some interesting optimization problems, which are discussed in this paper. Two optimization models are proposed for the optimal allocation of testing resources among the modules of a software system. In the first model, we maximize the total fault removal subject to a budgetary constraint. In the second model, an additional constraint representing an aspiration level for fault removal in each module of the software is added. These models are solved using a dynamic programming technique. The methods are illustrated through numerical examples.
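
    The first model is essentially a resource-constrained maximization that dynamic programming handles stage by stage (module by module). The sketch below is our illustration of that scheme with an assumed exponential (SRGM-style) fault-removal curve and made-up module parameters, not the paper's formulation or data.

    import math

    A = [34, 25, 12]          # expected initial faults per module (assumed)
    B = [0.10, 0.15, 0.08]    # per-unit fault detection rates (assumed)
    W = 40                    # total testing-resource budget, in unit steps

    def removed(i, w):
        # Expected faults removed in module i given w resource units.
        return A[i] * (1.0 - math.exp(-B[i] * w))

    def allocate():
        # best[w] = (total removed, allocation) over the modules seen so far.
        best = {0: (0.0, [])}
        for i in range(len(A)):
            nxt = {}
            for w_used, (val, alloc) in best.items():
                for w_i in range(W - w_used + 1):
                    key = w_used + w_i
                    cand = (val + removed(i, w_i), alloc + [w_i])
                    if key not in nxt or cand[0] > nxt[key][0]:
                        nxt[key] = cand
            best = nxt
        return max(best.values())

    if __name__ == "__main__":
        total, alloc = allocate()
        print(f"allocation {alloc} removes ~{total:.1f} faults")

    The second model's aspiration levels could be handled by starting each module's inner loop at the smallest w_i that meets its fault-removal target instead of at zero.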

  19. Gas characterization system software acceptance test procedure

    International Nuclear Information System (INIS)

    Vo, C.V.

    1996-01-01

    This document details the Software Acceptance Testing of gas characterization systems. The gas characterization systems will be used to monitor the vapor spaces of waste tanks known to contain measurable concentrations of flammable gases

  20. Analysis and implementation of software testing in an agile development methodology

    OpenAIRE

    Pinheiro, Sérgio Agostinho Machado

    2015-01-01

    Master's dissertation in Systems Engineering. This dissertation presents the study and implementation of software testing in agile development. Software testing is increasingly important for companies that develop software, owing to the natural evolution of customer requirements. Faced with the need to meet customer expectations, F3M Information Systems, SA felt it should improve its testing practices. Based on the ...

  1. Prototype Software for Automated Structural Analysis of Systems

    DEFF Research Database (Denmark)

    Jørgensen, A.; Izadi-Zamanabadi, Roozbeh; Kristensen, M.

    2004-01-01

    In this paper we present a prototype software tool developed to analyse the structural model of automated systems in order to identify redundant information, which is then utilized for fault detection and isolation (FDI) purposes. The dedicated algorithms in this software tool use a tri-partite graph that represents the structural model of the system. A component-based approach has been used to address issues such as system complexity and reconfigurability possibilities.

  2. Learners misperceive the benefits of redundant text in multimedia learning.

    Science.gov (United States)

    Fenesi, Barbara; Kim, Joseph A

    2014-01-01

    Research on metacognition has consistently demonstrated that learners fail to endorse instructional designs that produce benefits to memory, and often prefer designs that actually impair comprehension. Unlike previous studies in which learners were only exposed to a single multimedia design, the current study used a within-subjects approach to examine whether exposure to both redundant-text and non-redundant-text multimedia presentations improved learners' metacognitive judgments about presentation styles that promote better understanding. A redundant-text multimedia presentation containing narration paired with verbatim on-screen text (Redundant) was contrasted with two non-redundant-text multimedia presentations: (1) narration paired with images and minimal text (Complementary) or (2) narration paired with minimal text (Sparse). Learners watched presentation pairs of either Redundant + Complementary or Redundant + Sparse. Results demonstrate that the Complementary and Sparse presentations produced the highest overall performance on the final comprehension assessment, but the Redundant presentation produced the highest perceived understanding and engagement ratings. These findings suggest that learners misperceive the benefits of redundant text, even after direct exposure to a non-redundant, effective presentation.

  3. NASA Data Acquisition System Software Development for Rocket Propulsion Test Facilities

    Science.gov (United States)

    Herbert, Phillip W., Sr.; Elliot, Alex C.; Graves, Andrew R.

    2015-01-01

    Current NASA propulsion test facilities include Stennis Space Center in Mississippi, Marshall Space Flight Center in Alabama, Plum Brook Station in Ohio, and White Sands Test Facility in New Mexico. Within and across these centers, a diverse set of data acquisition systems exists with different hardware and software platforms. The NASA Data Acquisition System (NDAS) is a software suite designed to operate and control many critical aspects of rocket engine testing. The software suite combines real-time data visualization, data recording to a variety of formats, short-term and long-term acquisition system calibration capabilities, test stand configuration control, and a variety of data post-processing capabilities. Additionally, data stream conversion functions exist to translate test facility data streams to and from downstream systems, including engine customer systems. The primary design goals for NDAS are flexibility, extensibility, and modularity. Providing a common user interface for a variety of hardware platforms helps drive consistency and error reduction during testing. In addition, with an understanding that test facilities have different requirements and setups, the software is designed to be modular. One engine program may require real-time displays and data recording; others may require more complex data stream conversion, measurement filtering, or test stand configuration management. The NDAS suite allows test facilities to choose which components to use based on their specific needs. The NDAS code is primarily written in LabVIEW, a graphical, data-flow driven language. Although LabVIEW is a general-purpose programming language, large-scale software development in it is relatively rare compared to more commonly used languages. The NDAS software suite also makes extensive use of a new, advanced development framework called the Actor Framework. The Actor Framework provides a level of code reuse and extensibility that has previously been difficult

  4. Quantum redundancies and local realism

    International Nuclear Information System (INIS)

    Horodecki, R.; Horodecki, P.

    1994-01-01

    The basic properties of quantum redundancies are presented. The previous definitions of the informationally coherent quantum (ICQ) system are generalized in terms of the redundancies. The ICQ systems are also considered in the context of local realism in terms of the information integrity factor η. The classical region η ⩽ 1/2 for the two classes of mixed, nonfactorizable states admitting the local hidden variable model is found.

  5. Gas characterization system software acceptance test report

    International Nuclear Information System (INIS)

    Vo, C.V.

    1996-01-01

    This document details the results of software acceptance testing of gas characterization systems. The gas characterization systems will be used to monitor the vapor spaces of waste tanks known to contain measurable concentrations of flammable gases

  6. Redundancy in Nigerian Business Organizations: Alternatives (Pp ...

    African Journals Online (AJOL)

    FIRST LADY

    Redundancy in Nigerian Business Organizations: Alternatives (Pp. ... When business downturns ... The galloping pace of information technologies is a harbinger of profound ... Redundant staff in public departments can also be retained as.

  7. Fabrication and Testing of Durable Redundant and Fluted-Core Joints for Composite Sandwich Structures

    Science.gov (United States)

    Lin, Shih-Yung; Splinter, Scott C.; Tarkenton, Chris; Paddock, David A.; Smeltzer, Stanley S.; Ghose, Sayata; Guzman, Juan C.; Stukus, Donald J.; McCarville, Douglas A.

    2013-01-01

    The development of durable bonded joint technology for assembling composite structures is an essential component of future space technologies. While NASA is working toward providing an entirely new capability for human space exploration beyond low Earth orbit, the objective of this project is to design, fabricate, analyze, and test a NASA-patented durable redundant joint (DRJ) and a NASA/Boeing co-designed fluted-core joint (FCJ). The potential applications include a wide range of sandwich structures for NASA's future launch vehicles. Three types of joints were studied: the splice joint (SJ, as baseline), the DRJ, and the FCJ. Tests included tension, after-impact tension, and compression. Teflon strips were used at the joint area to increase failure strength by shifting stress concentration to a less sensitive area. Test results were compared to those of pristine coupons fabricated using the same methods. Tensile test results indicated that the DRJ design was stiffer, stronger, and more impact resistant than the other designs. The drawbacks of the DRJ design were extra mass and complex fabrication processes. The FCJ was lighter than the DRJ but less impact resistant. With barely visible but detectable impact damage, all three joints showed no sign of tensile strength reduction. No compression test was conducted on any impact-damaged sample due to limited scope and resources. Failure modes and damage propagation were also studied to support progressive damage modeling of the SJ and the DRJ.

  8. Redundant correlation effect on personalized recommendation

    Science.gov (United States)

    Qiu, Tian; Han, Teng-Yue; Zhong, Li-Xin; Zhang, Zi-Ke; Chen, Guang

    2014-02-01

    The high-order redundant correlation effect is investigated for a hybrid algorithm of heat conduction and mass diffusion (HHM), through both heat conduction biased (HCB) and mass diffusion biased (MDB) correlation redundancy elimination processes. The HCB and MDB algorithms do not introduce any additional tunable parameters, but keep the simple character of the original HHM. Based on two empirical datasets, the Netflix and MovieLens, the HCB and MDB are found to show better recommendation accuracy for both the overall objects and the cold objects than the HHM algorithm. Our work suggests that properly eliminating the high-order redundant correlations can provide a simple and effective approach to accurate recommendation.
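
    The base hybrid score that the HCB/MDB variants build on can be sketched as follows; this follows the standard hybrid heat-conduction/mass-diffusion formulation, while the redundancy-elimination steps themselves are specific to the paper and are not reproduced. The toy ratings matrix and the λ value are assumptions.

```python
# A minimal numpy sketch of a hybrid heat-conduction/mass-diffusion score
# on a user-object bipartite network. lam = 1 recovers pure mass diffusion,
# lam = 0 pure heat conduction; lam = 0.5 is an arbitrary illustrative choice.
import numpy as np

A = np.array([[1, 1, 0, 0],      # users x objects adjacency (toy data)
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)
k_user = A.sum(axis=1)           # user degrees
k_obj = A.sum(axis=0)            # object degrees

def hybrid_scores(A, lam):
    """W[a, b] = sum_u A[u,a]*A[u,b]/k_u / (k_a**(1-lam) * k_b**lam)."""
    overlap = (A / k_user[:, None]).T @ A
    W = overlap / (k_obj[:, None] ** (1 - lam) * k_obj[None, :] ** lam)
    return A @ W.T               # score of each object for each user

scores = hybrid_scores(A, lam=0.5)
scores[A > 0] = -np.inf          # mask objects the user already collected
print(np.argsort(-scores, axis=1))   # per-user recommendation ranking
```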

  9. Formal Verification of Digital Protection Logic and Automatic Testing Software

    Energy Technology Data Exchange (ETDEWEB)

    Cha, S. D.; Ha, J. S.; Seo, J. S. [KAIST, Daejeon (Korea, Republic of)

    2008-06-15

    - Technical aspect • Digital I and C software is intended to be safe and reliable, and the project results help such software to acquire a license. The software verification techniques resulting from this project can be used for digital NPP (nuclear power plant) I and C in the future. • This research presents meaningful verification results for digital protection logic and suggests an I and C software testing strategy. These results apply to the verification of nuclear fusion devices, accelerators, nuclear waste management, and nuclear medical devices that require dependable software and highly reliable controllers. Moreover, they can be used for military, medical, or aerospace-related software. - Economical and industrial aspect • Since the safety of digital I and C software is highly important, it is essential for the software to be verified, but verification and licence acquisition for digital I and C software carry a high cost. This project benefits the domestic economy by using the verification and testing techniques introduced here instead of foreign ones. • The operation rate of NPPs will rise when NPP safety-critical software is verified with an intelligent V and V tool. It is expected that this software will substitute for safety-critical software that wholly depends on foreign suppliers. Consequently, the result of this project has high commercial value, and recognition of the software development work can spread to industrial circles. - Social and cultural aspect • People expect nuclear power generation to help relieve environmental problems because it does not emit as much harmful air pollution as other forms of power generation. To give society more trust in nuclear power generation, we should convince people that NPPs are highly safe systems. From that point of view, we can present highly reliable I and C systems proven by intelligent V and V techniques as evidence

  10. N + 1 redundancy on ATCA instrumentation for Nuclear Fusion

    Energy Technology Data Exchange (ETDEWEB)

    Correia, Miguel, E-mail: miguelfc@ipfn.ist.utl.pt [Associação EURATOM/IST, Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico – Universidade Técnica de Lisboa, Lisboa (Portugal); Sousa, Jorge; Rodrigues, António P.; Batista, António J.N.; Combo, Álvaro; Carvalho, Bernardo B.; Santos, Bruno; Carvalho, Paulo F.; Gonçalves, Bruno [Associação EURATOM/IST, Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico – Universidade Técnica de Lisboa, Lisboa (Portugal); Correia, Carlos M.B.A. [Centro de Instrumentação, Departamento de Física, Universidade de Coimbra, Coimbra (Portugal); Varandas, Carlos A.F. [Associação EURATOM/IST, Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico – Universidade Técnica de Lisboa, Lisboa (Portugal)

    2013-10-15

    Highlights: • In Nuclear Fusion, demanding security and high-availability requirements call for redundancy to be available. • The ATCA standard features desirable redundancy capabilities for Fusion instrumentation. • The developed control and data acquisition hardware modules support additional redundancy schemes. • Implementation of N + 1 redundancy of host processor and I/O data modules. Abstract: The role of redundancy in control and data acquisition systems has gained significant importance in the case of Nuclear Fusion, as demanding security and high-availability requirements call for redundancy to be available. IPFN's control and data acquisition system hardware is based on an Advanced Telecommunications Computing Architecture (ATCA) set of I/O (DAC/ADC endpoints) and data/timing switch modules, which handle data and timing from all I/O endpoints. Modules communicate through Peripheral Component Interconnect Express (PCIe), established over the ATCA backplane and controlled by one or more external hosts. The developed hardware modules were designed to take advantage of the ATCA specification's redundancy features, namely at the hardware management level, including support of: (i) multiple host operation with N + 1 redundancy, in which a designated failover host takes over data previously assigned to a suddenly malfunctioning host, and (ii) N + 1 redundancy of I/O and data/timing switch modules. This paper briefly describes IPFN's control and data acquisition system, which is being developed for the ITER fast plant system controller (FPSC), and analyses the hardware implementation of its supported redundancy features.

  11. Processing bimodal stimulus information under alcohol: is there a risk to being redundant?

    Science.gov (United States)

    Fillmore, Mark T

    2010-10-01

    The impairing effects of alcohol are especially pronounced in environments that involve dividing attention across two or more stimuli. However, studies in cognitive psychology have identified circumstances in which the presentation of multiple stimuli can actually facilitate performance. The "redundant signal effect" (RSE) refers to the observation that individuals respond more quickly when information is presented as redundant, bimodal stimuli (e.g., aurally and visually), rather than as a single stimulus presented to either modality alone. The present study tested the hypothesis that the response facilitation attributed to the RSE could reduce the degree to which alcohol slows information processing. Two experiments are reported. Experiment 1 demonstrated the validity of a reaction time model of the RSE by showing that adults (N = 15) responded more quickly to redundant, bimodal stimuli (visual + aural) than to either stimulus presented individually. Experiment 2 used the RSE model to test the reaction time performance of 20 adults following three alcohol doses (0.0 g/kg, 0.45 g/kg, and 0.65 g/kg). Results showed that alcohol slowed reaction time in a general dose-dependent manner in all three stimulus conditions, with the reaction time (RT) speed advantage of the redundant signal being maintained even under the highest dose of alcohol. Evidence for an RT advantage to bimodal stimuli under alcohol challenges the general assumption that alcohol impairment is intensified in multistimulus environments. The current study provides a useful model to investigate how drug effects on behavior might be altered in contexts that involve redundant response signals.

  12. Performance testing of LiDAR exploitation software

    Science.gov (United States)

    Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.

    2013-04-01

    Mobile LiDAR systems have been used widely in recent years for many applications in the field of geoscience. One of the most important limitations of this technology is the large computational requirement involved in data processing. Several software solutions for data processing are available on the market, but users are often unaware of methodologies for verifying their performance accurately. In this work a methodology for LiDAR software performance testing is presented and six different suites are studied: QT Modeler, AutoCAD Civil 3D, Mars 7, Fledermaus, Carlson and TopoDOT (all of them in x64). Results show that QT Modeler, TopoDOT and AutoCAD Civil 3D allow the loading of large datasets, while Fledermaus, Mars 7 and Carlson do not achieve this level of performance. AutoCAD Civil 3D needs a long loading time in comparison with the most powerful suites, QT Modeler and TopoDOT. The Carlson suite shows the poorest results among all the software under study: point clouds larger than 5 million points cannot be loaded, and loading time is very long in comparison with the other suites, even for the smaller datasets. AutoCAD Civil 3D, Carlson and TopoDOT use more threads than the other suites, QT Modeler, Mars 7 and Fledermaus.

  13. Experimental analysis of specification language impact on NPP software diversity

    International Nuclear Information System (INIS)

    Yoo, Chang Sik; Seong, Poong Hyun

    1998-01-01

    When redundancy and diversity are applied in an NPP digital computer system, diversification of the system software may be a critical point for overall system dependability. As a means of enhancing software diversity, specification language diversity is suggested in this study. We set up a simple hypothesis for the impact of specification language on common errors, and an experiment based on an NPP protection system application was performed. The experiment showed that this hypothesis could be justified and that specification language diversity is effective in overcoming the software common-mode failure problem

  14. Increasing The Dexterity Of Redundant Robots

    Science.gov (United States)

    Seraji, Homayoun

    1990-01-01

    Redundant coordinates used to define additional tasks. Configuration control emerging as effective way to control motions of robot having more degrees of freedom than necessary to define trajectory of end effector and/or of object to be manipulated. Extra or redundant degrees of freedom used to give robot humanlike dexterity and versatility.

  15. Quantifying Net Synergy/Redundancy of Spontaneous Variability Regulation via Predictability and Transfer Entropy Decomposition Frameworks.

    Science.gov (United States)

    Porta, Alberto; Bari, Vlasta; De Maria, Beatrice; Takahashi, Anielle C M; Guzzetti, Stefano; Colombo, Riccardo; Catai, Aparecida M; Raimondi, Ferdinando; Faes, Luca

    2017-11-01

    Objective: Indexes assessing the balance between redundancy and synergy were hypothesized to be helpful in characterizing cardiovascular control from spontaneous beat-to-beat variations of heart period (HP), systolic arterial pressure (SAP), and respiration (R). Methods: Net redundancy/synergy indexes were derived according to predictability and transfer entropy decomposition strategies via a multivariate linear regression approach. Indexes were tested in two protocols inducing modifications of the cardiovascular regulation via baroreflex loading/unloading (i.e., head-down tilt at -25° and graded head-up tilt at 15°, 30°, 45°, 60°, 75°, and 90°, respectively). The net redundancy/synergy of SAP and R to HP and of HP and R to SAP were estimated over stationary sequences of 256 successive values. Results: We found that: 1) regardless of the target (i.e., HP or SAP) redundancy was prevalent over synergy and this prevalence was independent of type and magnitude of the baroreflex challenge; 2) the prevalence of redundancy disappeared when decoupling inputs from output via a surrogate approach; 3) net redundancy was under autonomic control given that it varied in proportion to the vagal withdrawal during graded head-up tilt; and 4) conclusions held regardless of the decomposition strategy. Conclusion: Net redundancy indexes can monitor changes of cardiovascular control from a perspective completely different from that provided by more traditional univariate and multivariate methods. Significance: Net redundancy measures might provide a practical tool to quantify the reservoir of effective cardiovascular regulatory mechanisms sharing causal influences over a target variable.
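
    One common way to quantify net redundancy/synergy is the interaction information I(X1;Y) + I(X2;Y) − I(X1,X2;Y), positive when redundancy prevails and negative under synergy. The sketch below evaluates it under a linear-Gaussian model, in the spirit of (but not identical to) the paper's multivariate linear regression framework; the synthetic series are stand-ins for the HP, SAP, and R sequences.

```python
# A hedged linear-Gaussian sketch of a net redundancy/synergy index.
import numpy as np

rng = np.random.default_rng(0)
n = 256                                        # sequence length used in the paper
x1 = rng.standard_normal(n)                    # synthetic driver 1 (e.g., SAP)
x2 = 0.8 * x1 + 0.6 * rng.standard_normal(n)   # driver 2, correlated with x1 (e.g., R)
y = x1 + x2 + 0.5 * rng.standard_normal(n)     # target series (e.g., HP)

def mi_linear(X, y):
    """I(X;Y) in nats under a linear-Gaussian model:
    0.5 * log(var(y) / var(residual of y regressed on X))."""
    M = np.column_stack([np.ones(len(y))] + list(X))
    beta, *_ = np.linalg.lstsq(M, y, rcond=None)
    return 0.5 * np.log(np.var(y) / np.var(y - M @ beta))

net = mi_linear([x1], y) + mi_linear([x2], y) - mi_linear([x1, x2], y)
print(f"net redundancy index: {net:.3f} nats (>0 here, since x1 and x2 overlap)")
```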

  16. Factors to Consider When Implementing Automated Software Testing

    Science.gov (United States)

    2016-11-10

    development and integration is a continuous process throughout the acquisition life cycle. Automated Software Testing can improve testing capabilities...requires a lab, conference room, or both, and whether it should be located in-house or at an external facility. 2. Ensure space is adequate to support the team

  17. Subsystem software for TSTA [Tritium Systems Test Assembly

    International Nuclear Information System (INIS)

    Mann, L.W.; Claborn, G.W.; Nielson, C.W.

    1987-01-01

    The Subsystem Control Software at the Tritium Systems Test Assembly (TSTA) must control sophisticated chemical processes through the physical operation of valves, motor controllers, gas sampling devices, thermocouples, pressure transducers, and similar devices. Such control software has to be capable of passing stringent quality assurance (QA) criteria to provide for the safe handling of significant amounts of tritium on a routine basis. Since many of the chemical processes and physical components are experimental, the control software has to be flexible enough to allow for a trial-and-error learning curve, but must still protect the environment and personnel from exposure to unsafe levels of radiation. The software at TSTA is implemented in several levels, as described in a preceding paper in these proceedings, on which this paper relies for background. The top level is the Subsystem Control level

  18. Redundant interferometric calibration as a complex optimization problem

    Science.gov (United States)

    Grobler, T. L.; Bernardi, G.; Kenyon, J. S.; Parsons, A. R.; Smirnov, O. M.

    2018-05-01

    Observations of the redshifted 21 cm line from the epoch of reionization have recently motivated the construction of low-frequency radio arrays with highly redundant configurations. These configurations provide an alternative calibration strategy - `redundant calibration' - and boost sensitivity on specific spatial scales. In this paper, we formulate calibration of redundant interferometric arrays as a complex optimization problem. We solve this optimization problem via the Levenberg-Marquardt algorithm. This calibration approach is more robust to initial conditions than current algorithms and, by leveraging an approximate matrix inversion, allows for further optimization and an efficient implementation (`redundant STEFCAL'). We also investigated using the preconditioned conjugate gradient method as an alternative to the approximate matrix inverse, but found that its computational performance is not competitive with respect to `redundant STEFCAL'. The efficient implementation of this new algorithm is made publicly available.
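
    The least-squares formulation can be sketched with a generic Levenberg-Marquardt solver: per-antenna complex gains and one true visibility per redundant baseline group are fitted jointly to the observed visibilities. The east-west antenna layout, noise level, and the use of scipy.optimize.least_squares (rather than the paper's 'redundant STEFCAL' implementation) are illustrative assumptions.

```python
# A minimal sketch of redundant calibration as nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
n_ant, baselines = 6, []
for i in range(n_ant):
    for j in range(i + 1, n_ant):
        baselines.append((i, j, j - i - 1))   # 1-D array: group index = separation
n_grp = max(g for _, _, g in baselines) + 1

g_true = 1 + 0.1 * (rng.standard_normal(n_ant) + 1j * rng.standard_normal(n_ant))
y_true = rng.standard_normal(n_grp) + 1j * rng.standard_normal(n_grp)
v_obs = np.array([g_true[i] * np.conj(g_true[j]) * y_true[g] for i, j, g in baselines])
v_obs += 0.01 * (rng.standard_normal(len(v_obs)) + 1j * rng.standard_normal(len(v_obs)))

def unpack(p):
    g = p[:n_ant] + 1j * p[n_ant:2 * n_ant]
    y = p[2 * n_ant:2 * n_ant + n_grp] + 1j * p[2 * n_ant + n_grp:]
    return g, y

def residuals(p):
    g, y = unpack(p)
    model = np.array([g[i] * np.conj(g[j]) * y[k] for i, j, k in baselines])
    r = v_obs - model
    return np.concatenate([r.real, r.imag])   # LM needs a real residual vector

p0 = np.concatenate([np.ones(n_ant), np.zeros(n_ant), np.ones(n_grp), np.zeros(n_grp)])
sol = least_squares(residuals, p0, method="lm")   # Levenberg-Marquardt
g_est, _ = unpack(sol.x)
# Gains are recovered only up to the amplitude/phase degeneracies inherent
# to redundant calibration, so compare a degeneracy-free quantity instead:
print(abs(g_est[0] / g_est[1]), abs(g_true[0] / g_true[1]))
```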

  19. On Redundancy in Describing Linguistic Systems

    Directory of Open Access Journals (Sweden)

    Vladimir Borissov Pericliev

    2015-12-01

    Full Text Available The notion of a system of linguistic elements figures prominently in most post-Saussurean linguistics up to the present. A “system” is the network of contrastive (or distinctive) features each element in the system bears to the remaining elements. The meaning (valeur) of each element in the system is the set of features that are necessary and jointly sufficient to distinguish this element from all others. The paper addresses the problem of “redundancy”, i.e. the occurrence of features that are not strictly necessary in describing an element in a system. Redundancy is shown to smuggle its way into descriptions of linguistic systems, an infelicitous practice illustrated with some examples from the literature (e.g. the classical phonemic analysis of Russian by Cherry, Halle, and Jakobson, 1953). The logic and psychology of the occurrence of redundancy are briefly sketched, and it is shown that, in addition to some other problems, redundancy leads to a huge and unresolvable ambiguity in descriptions of linguistic systems (the Buridan's ass problem).

  20. Command and Data Handling Flight Software test framework: A Radiation Belt Storm Probes practice

    Science.gov (United States)

    Hill, T. A.; Reid, W. M.; Wortman, K. A.

    During the Radiation Belt Storm Probes (RBSP) mission, a test framework was developed by the Embedded Applications Group in the Space Department at the Johns Hopkins Applied Physics Laboratory (APL). The test framework is implemented for verification of the Command and Data Handling (C&DH) Flight Software. The RBSP C&DH Flight Software consists of applications developed for use with Goddard Space Flight Center's core Flight Executive (cFE) architecture. The test framework's initial concept originated with tests developed for verification of the Autonomy rules that execute within the Autonomy Engine application of the RBSP C&DH Flight Software. The test framework was adopted and expanded for system and requirements verification of the RBSP C&DH Flight Software. During the evolution of the RBSP C&DH Flight Software test framework design, a set of script conventions and a script library were developed. The script conventions and library eased integration of system and requirements verification tests into a comprehensive automated test suite. The comprehensive test suite is currently being used to verify releases of the RBSP C&DH Flight Software. In addition to providing the details and benefits of the test framework, the discussion will include several lessons learned throughout the verification process of the RBSP C&DH Flight Software. Our next mission, Solar Probe Plus (SPP), will use the cFE architecture for the C&DH Flight Software. SPP also plans to use the same ground system as RBSP. Many of the RBSP C&DH Flight Software applications are reusable on the SPP mission; therefore there is potential for test design and test framework reuse for system and requirements verification.

  1. Optimal redundant systems for works with random processing time

    International Nuclear Information System (INIS)

    Chen, M.; Nakagawa, T.

    2013-01-01

    This paper studies the optimal redundant policies for a manufacturing system processing jobs with random working times. The redundant units of the parallel systems and standby systems are subject to stochastic failures during the continuous production process. First, a job consisting of only one work is considered for both redundant systems, and the expected cost functions are obtained. Next, each redundant system with a random number of units is assumed for a single work. The expected cost functions and the optimal expected numbers of units are derived for the redundant systems. Subsequently, the production processes of N tandem works are introduced for parallel and standby systems, and the expected cost functions are also summarized. Finally, the number of works is estimated by a Poisson distribution for the parallel and standby systems. Numerical examples are given to demonstrate the optimization problems of redundant systems
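
    As an illustrative cost model in the spirit of the paper (not the authors' exact formulation), consider n parallel units with exponential lifetimes that must keep at least one unit alive until a single work with an exponentially distributed processing time finishes. The unit cost, failure penalty, and both rates below are assumptions.

```python
# Optimal number of parallel redundant units under an assumed cost model.
from math import comb

def p_complete(n, lam, mu):
    """P(job finishes before all n parallel units fail), with unit lifetimes
    ~ Exp(lam) and processing time T ~ Exp(mu). Closed form follows from
    E[(1 - exp(-lam*T))**n] expanded by the binomial theorem."""
    return sum(comb(n, k) * (-1) ** (k + 1) * mu / (mu + k * lam)
               for k in range(1, n + 1))

def expected_cost(n, lam=1.0, mu=2.0, c_unit=1.0, c_fail=25.0):
    return c_unit * n + c_fail * (1.0 - p_complete(n, lam, mu))

for n in range(1, 6):
    print(n, round(expected_cost(n), 3))
print("optimal number of units:", min(range(1, 15), key=expected_cost))
```

    A quick sanity check on the formula: for n = 1 it reduces to P(T < L) = mu/(mu + lam), the standard result for two competing exponentials.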

  2. Integrating Testing into Software Engineering Courses Supported by a Collaborative Learning Environment

    Science.gov (United States)

    Clarke, Peter J.; Davis, Debra; King, Tariq M.; Pava, Jairo; Jones, Edward L.

    2014-01-01

    As software becomes more ubiquitous and complex, the cost of software bugs continues to grow at a staggering rate. To remedy this situation, there needs to be major improvement in the knowledge and application of software validation techniques. Although there are several software validation techniques, software testing continues to be one of the…

  3. TEST (Toxicity Estimation Software Tool) Ver 4.1

    Science.gov (United States)

    The Toxicity Estimation Software Tool (T.E.S.T.) has been developed to allow users to easily estimate toxicity and physical properties using a variety of QSAR methodologies. T.E.S.T allows a user to estimate toxicity without requiring any external programs. Users can input a chem...

  4. Operability test procedure for TRUSAF assayer software upgrade

    International Nuclear Information System (INIS)

    Cejka, C.C.

    1995-01-01

    This OTP is to be used to ensure the operability of the Transuranic Waste Assay System (TRUWAS). The system was upgraded and requires a retest to assure satisfactory operation. The upgrade consists of an AST 486 computer to replace the IBM-PC/XT, and a software upgrade (CNEUT). The software calculations are performed in the same manner as in the previous system (NEUT); however, the new software is written in C Assembly Language. CNEUT is easier to use and far more powerful than the previous program. The TRUWAS is used to verify the TRU content of waste packages sent for storage in the Transuranic Storage and Assay Facility (TRUSAF). The TRUSAF is part of Westinghouse Hanford's certification program for waste to be shipped to the Waste Isolation Pilot Plant (WIPP) in New Mexico. The Transuranic Waste Assayer uses a combination passive-active neutron interrogation system to determine the TRU content of 55-gallon waste drums. The system consists of a shielded assay chamber; a deuterium-tritium neutron generator; helium-3 proportional counters; a drum handling system; electronics including a preamplifier, amplifier, and discriminator for each of the counter packages; and an AST 486 computer/printer system for data acquisition and analysis. The system can detect TRU levels down to 10 nCi/g in the waste matrix. The equipment to be tested comprises: the assay chamber door; the drum turntable and automatic loading platform; interlocks; the assayer software; and the IBM computer/printer software. The objective of the test is to verify that the system is operational with the AST 486 computer, that the software used in the new computer system correctly calculates TRU levels, and that the new computer system is capable of storing and retrieving data

  5. EMERIS: an advanced information system for a materials testing reactor

    International Nuclear Information System (INIS)

    Adorjan, F.; Buerger, L.; Lux, I.; Mesko, L.; Szabo, K.; Vegh, J.; Ivanov, V.V.; Mozhaev, A.A.; Yakovlev, V.V.

    1990-06-01

    The basic features of the Information System (EMERIS) for the Materials Testing Reactor (MR) of IAE, Moscow, are outlined. The purpose of the system is to support reactor and experimental test loop operators with a flexible, fully computerized and user-friendly tool for the acquisition, analysis, archiving and presentation of data obtained during operation of the experimental facility. High availability of EMERIS services is ensured by redundant hardware and software components and by an automatic configuration procedure. A novel software feature of the system is the automatic disturbance analysis package, which aims to discover the primary causes of irregularities occurring in the process.

  6. System Quality Management in Software Testing Laboratory that Chooses Accreditation

    Directory of Open Access Journals (Sweden)

    Yanet Brito R.

    2013-12-01

    Full Text Available The evaluation of software products will reach full maturity when executed under a scheme that provides third-party certification. For the certification to be valid, the independent laboratory must be accredited for that function, using internationally recognized standards. This poses a challenge for the Industrial Software Testing Laboratory (LIPS), responsible for testing the products developed by the Cuban software industry: to define strategies that permit it to offer services of a high level of quality. It is therefore necessary to establish a quality management system according to NC-ISO/IEC 17025:2006 to continuously improve the operational capacity and technical competence of the laboratory, with a view to future accreditation of the tests performed. This article discusses the process defined at LIPS for the implementation of a quality management system, based on current standards and trends, as a necessary step towards accreditation of the tests performed.

  7. A Redundancy Mechanism Design for Hall-Based Electronic Current Transformers

    Directory of Open Access Journals (Sweden)

    Kun-Long Chen

    2017-03-01

    Full Text Available Traditional current transformers (CTs) suffer from DC and AC saturation and remanent magnetization in many industrial applications. Moreover, the drawbacks of traditional CTs, such as closed iron cores, bulky volume, and heavy weight, further limit the development of intelligent power protection systems. To compensate for these drawbacks, we proposed a novel current measurement method using Hall sensors, called the Hall-effect current transformer (HCT). Existing commercial Hall sensors are electronic components, so the reliability of the HCT is normally worse than that of the traditional CT. Our study therefore proposes a redundancy mechanism for the HCT to strengthen its reliability. With multiple sensor modules, the method can also improve the accuracy of the HCT. Additionally, the proposed redundancy-mechanism monitoring system provides condition-based maintenance for the HCT. We verify our method with both simulations and an experimental test. The results demonstrate that the proposed HCT with a redundancy mechanism can almost achieve Class 0.2 for measuring CTs according to IEC Standard 60044-8.
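
    The abstract does not spell out the voting logic, so the sketch below shows one plausible form of such a redundancy mechanism: median fusion of several Hall modules, with modules that drift from the vote flagged for condition-based maintenance. The tolerance and readings are illustrative assumptions.

```python
# Median-voting fusion of redundant Hall sensor modules (illustrative).
import statistics

def fuse(readings, tol=0.05):
    """readings: per-module current measurements (amperes).
    Returns (fused value, indices of modules flagged as suspect)."""
    vote = statistics.median(readings)
    suspect = [i for i, r in enumerate(readings)
               if abs(r - vote) > tol * max(abs(vote), 1e-9)]
    return vote, suspect

value, flagged = fuse([100.1, 99.9, 100.0, 93.2])   # module 3 is drifting
print(f"fused current: {value:.2f} A, suspect modules: {flagged}")
```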

  8. Development of a smart-antenna test-bed, demonstrating software defined digital beamforming

    NARCIS (Netherlands)

    Kluwer, T.; Slump, Cornelis H.; Schiphorst, Roelof; Hoeksema, F.W.

    2001-01-01

    This paper describes a smart-antenna test-bed consisting of 'commercial off-the-shelf' (COTS) hardware and software defined radio components. The use of software radio components enables a flexible platform for implementing and testing mobile communication systems as real-world systems. The test-bed is

  9. V & V Within Reuse-Based Software Engineering

    Science.gov (United States)

    Addy, Edward A.

    1996-01-01

    Verification and validation (V&V) is used to increase the level of assurance of critical software, particularly that of safety-critical and mission critical software. This paper describes the working group's success in identifying V&V tasks that could be performed in the domain engineering and transition levels of reuse-based software engineering. The primary motivation for V&V at the domain level is to provide assurance that the domain requirements are correct and that the domain artifacts correctly implement the domain requirements. A secondary motivation is the possible elimination of redundant V&V activities at the application level. The group also considered the criteria and motivation for performing V&V in domain engineering.

  10. Simulator of a fail detector system for redundant sensors

    International Nuclear Information System (INIS)

    Assumpcao Filho, E.O.; Nakata, H.

    1990-01-01

    A failure detection and isolation system (FDI) simulation program has been developed for IBM-PC microcomputers. The program, based on the sequential likelihood ratio testing method developed by A. Wald, was implemented with the Monte Carlo technique. The calculated failure detection rate compared favorably with wind-tunnel experiments using redundant temperature sensors. (author)
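
    Wald's sequential probability ratio test, on which the simulated FDI system is based, can be sketched for a single redundant-sensor residual as follows. The bias size, noise level, and error rates are illustrative assumptions, not values from the study.

```python
# Wald SPRT on a sensor residual: H0 ~ N(0, sigma) (healthy),
# H1 ~ N(bias, sigma) (failed sensor).
import numpy as np

def sprt(residuals, bias, sigma, alpha=0.01, beta=0.01):
    """Returns ('fail' | 'healthy' | 'undecided', samples used)."""
    upper = np.log((1 - beta) / alpha)      # crossing -> declare failure
    lower = np.log(beta / (1 - alpha))      # crossing -> declare healthy
    llr = 0.0
    for n, r in enumerate(residuals, start=1):
        llr += (bias / sigma**2) * (r - bias / 2.0)   # Gaussian LLR increment
        if llr >= upper:
            return "fail", n
        if llr <= lower:
            return "healthy", n
    return "undecided", len(residuals)

rng = np.random.default_rng(2)
faulty = rng.normal(0.5, 1.0, size=200)     # residual of a biased sensor
print(sprt(faulty, bias=0.5, sigma=1.0))
```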

  11. Software Development and Testing Approach and Challenges in a distributed HEP Collaboration

    CERN Document Server

    Burckhart-Chromek, Doris

    2007-01-01

    In developing the ATLAS [1] Trigger and Data Acquisition (TDAQ) software, the team is applying the iterative waterfall model, evolutionary process management, formal software inspection, and lightweight review techniques. The long preparation phase, together with a geographically widespread development team, required that the standard techniques be adapted to this HEP environment. The testing process is receiving special attention. Unit tests and check targets in nightly project builds form the basis for the subsequent software project release testing. The integrated software is then run on computing farms that give further opportunities for gaining experience, fault finding, and acquiring ideas for improvement. Dedicated tests on a farm of up to 1000 nodes address the large-scale aspect of the project. Integration test activities on the experimental site include the special purpose-built event readout hardware. Deployment in detector commissioning starts the countdown towards running the final ATLAS experiment. T...

  12. Wellbore inertial navigation system (WINS) software development and test results

    Energy Technology Data Exchange (ETDEWEB)

    Wardlaw, R. Jr.

    1982-09-01

    The structure and operation of the real-time software developed for the Wellbore Inertial Navigation System (WINS) application are described. The procedure and results of a field test held in a 7000-ft well in the Nevada Test Site are discussed. Calibration and instrumentation error compensation are outlined, as are design improvement areas requiring further test and development. Notes on Kalman filtering and complete program listings of the real-time software are included in the Appendices. Reference is made to a companion document which describes the downhole instrumentation package.

  13. A Modular Approach to Redundant Robot Control

    International Nuclear Information System (INIS)

    Anderson, R.J.

    1997-12-01

    This paper describes a modular approach for computing redundant robot kinematics. First, some conventional redundant control methods are presented and shown to be 'passive control laws', i.e. they can be represented by a network consisting of passive elements. These networks are then put into modular form by applying scattering operator techniques. Additional subnetwork modules can then be added to further shape the motion. Modules for obstacle detection, joint limit avoidance, proximity sensing, and for imposing nonlinear velocity constraints are presented. The resulting redundant robot control system is modular, flexible and robust
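
    A typical example of the 'conventional redundant control methods' such work starts from is the damped pseudoinverse with a null-space projection for a secondary objective (here, joint-limit avoidance). The sketch below is generic, with an arbitrary Jacobian, gains, and joint limits as assumptions; it is not the paper's scattering-operator formulation.

```python
# Redundancy resolution for a 7-DOF arm: primary task via damped
# pseudoinverse, secondary objective projected into the null space.
import numpy as np

rng = np.random.default_rng(3)
J = rng.standard_normal((3, 7))          # task Jacobian (toy values)
xdot = np.array([0.1, 0.0, -0.05])       # commanded end-effector velocity
q = rng.uniform(-1.0, 1.0, 7)            # current joint angles
q_mid = np.zeros(7)                      # middle of the joint range

damp = 0.01
J_pinv = J.T @ np.linalg.inv(J @ J.T + damp**2 * np.eye(3))  # damped pseudoinverse
qdot_task = J_pinv @ xdot                                    # primary task motion
qdot_null = -0.5 * (q - q_mid)                               # push joints toward mid-range
qdot = qdot_task + (np.eye(7) - J_pinv @ J) @ qdot_null      # null-space projection

print("task-space error of the command:", np.linalg.norm(J @ qdot - xdot))
```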

  14. Using Knowledge Management to Revise Software-Testing Processes

    Science.gov (United States)

    Nogeste, Kersti; Walker, Derek H. T.

    2006-01-01

    Purpose: This paper aims to use a knowledge management (KM) approach to effectively revise a utility retailer's software testing process. This paper presents a case study of how the utility organisation's customer services IT production support group improved their test planning skills through applying the American Productivity and Quality Center…

  15. Redundant and fault-tolerant algorithms for real-time measurement and control systems for weapon equipment.

    Science.gov (United States)

    Li, Dan; Hu, Xiaoguang

    2017-03-01

    Because of the high-availability requirements of weapon equipment, an in-depth study has been conducted on the real-time fault tolerance of the widely applied Compact PCI (CPCI) bus measurement and control system. A redundancy design method that uses heartbeat detection to connect the primary and alternate devices has been developed. To address the low successful-execution rate and the relatively large waste of time slices in the primary version of the task software, an improved algorithm for real-time fault-tolerant scheduling is proposed based on the Basic Checking available time Elimination idle time (BCE) algorithm, applying a single-neuron self-adaptive proportion-sum-differential (PSD) controller. The experimental validation results indicate that this system has excellent redundancy and fault tolerance, and that the newly developed method can effectively improve system availability.
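
    A minimal sketch of heartbeat-based primary/alternate switch-over, with simulated time in place of the CPCI hardware; the timeout value and the two-device setup are illustrative assumptions.

```python
# Heartbeat-monitored failover between a primary and an alternate device.
class RedundantPair:
    def __init__(self, timeout=0.5):
        self.timeout = timeout
        self.active = "primary"
        self.last_beat = 0.0

    def heartbeat(self, now):
        """Called whenever the active device emits a heartbeat."""
        self.last_beat = now

    def supervise(self, now):
        """Called periodically; switches over if heartbeats stop."""
        if self.active == "primary" and now - self.last_beat > self.timeout:
            self.active = "alternate"     # failover: alternate takes control
        return self.active

pair = RedundantPair()
for t in [0.1, 0.2, 0.3]:
    pair.heartbeat(t)                     # primary alive
print(pair.supervise(0.4))                # -> primary (last beat 0.1 s ago)
print(pair.supervise(1.0))                # no beats since t=0.3 -> alternate
```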

  16. Reliability Analysis Multiple Redundancy Controller for Nuclear Safety Systems

    International Nuclear Information System (INIS)

    Son, Gwangseop; Kim, Donghoon; Son, Choulwoong

    2013-01-01

    This paper describes the architecture of a multiple redundancy controller (MRC) for nuclear safety systems. The MRC is configured for multiple modular redundancy (MMR), composed of dual modular redundancy (DMR) and triple modular redundancy (TMR). The architecture of the MRC is briefly described, a Markov model is developed for it, and the reliability and Mean Time To Failure (MTTF) are analyzed based on the model. From the reliability analyses of the MRC, it is obtained that the failure rate of each module in the MRC should be less than 2 × 10⁻⁴/hour and that the average MTTF increase rate with respect to FCF increment, i.e. ΔMTTF/ΔFCF, is 4 months/0.1
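
    The Markov-model calculation behind such MTTF figures can be sketched for a single TMR channel. The two-state transient chain and the assumption of perfect voting are simplifications; the per-module failure rate is set to the bound quoted above.

```python
# Markov reliability model for one TMR channel: state 0 = 3 modules healthy,
# state 1 = 2 healthy (still operational via voting), otherwise failed.
import numpy as np

lam = 2e-4                              # module failure rate (per hour)
Q = np.array([[-3 * lam, 3 * lam],      # transient generator: 0 -> 1 at 3*lam
              [0.0,     -2 * lam]])     # 1 -> system failure at 2*lam
mttf = np.array([1.0, 0.0]) @ np.linalg.inv(-Q) @ np.ones(2)
print(f"MTTF = {mttf:.0f} h "
      f"(analytically 1/(3*lam) + 1/(2*lam) = {1/(3*lam) + 1/(2*lam):.0f} h)")
```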

  17. Multi-objective reliability optimization of series-parallel systems with a choice of redundancy strategies

    International Nuclear Information System (INIS)

    Safari, Jalal

    2012-01-01

    This paper proposes a variant of the Non-dominated Sorting Genetic Algorithm (NSGA-II) to solve a novel mathematical model for multi-objective redundancy allocation problems (MORAP). Most research on the redundancy allocation problem (RAP) has focused on single-objective optimization, while there has been some limited research addressing multi-objective optimization. Also, all mathematical multi-objective models of the general RAP assume that the type of redundancy strategy for each subsystem is predetermined and known a priori. In general, active redundancy has traditionally received greater attention; however, in practice both active and cold-standby redundancies may be used within a particular system design. The choice of redundancy strategy then becomes an additional decision variable. Thus, the proposed model and solution method are to select the best redundancy strategy, type of components, and levels of redundancy for each subsystem that maximize the system reliability and minimize total system cost under system-level constraints. This problem belongs to the NP-hard class. This paper presents a second-generation Multiple-Objective Evolutionary Algorithm (MOEA), named NSGA-II, to find the best solution for the given problem. The proposed algorithm demonstrates the ability to identify a set of optimal solutions (Pareto front), which provides the Decision Maker (DM) with a complete picture of the optimal solution space. After finding the Pareto front, a procedure is used to select the best solution from the Pareto front. Finally, the advantages of the presented multi-objective model and of the proposed algorithm are illustrated by solving test problems taken from the literature, and the robustness of the proposed NSGA-II is discussed.
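
    The Pareto-front notion at the heart of NSGA-II can be sketched with a plain nondominated filter over candidate redundancy designs scored by (reliability, cost). The candidate set is an illustrative assumption, and the full NSGA-II machinery (crowding distance, selection, variation) is omitted.

```python
# Nondominated (Pareto) filtering of candidate redundancy designs:
# maximize reliability, minimize cost.
def dominates(a, b):
    """a, b = (reliability, cost)."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_front(designs):
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other is not d)]

candidates = {                      # design label -> (reliability, cost)
    "2x active":       (0.990, 10.0),
    "3x active":       (0.999, 15.0),
    "2x cold-standby": (0.994, 12.0),
    "overbuilt":       (0.993, 18.0),   # dominated by "2x cold-standby"
}
front = pareto_front(list(candidates.values()))
print([name for name, score in candidates.items() if score in front])
```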

  18. The 1st VIPEX Software Testing Report (Version 3.1)

    International Nuclear Information System (INIS)

    Choi, Sun Yeong; Jung, Woo Sik; Seo, Jae Seung

    2009-09-01

    The purposes of this report are (1) to perform a Verification and Validation (V and V) test for the VIPEX (Vital-area Identification Package EXpert) software and (2) to improve software quality through the V and V test. The VIPEX was developed at the Korea Atomic Energy Research Institute (KAERI) for the Vital Area Identification (VAI) of nuclear power plants. The distributed version of the VIPEX is 3.1. The VIPEX was revised based on the first V and V test, and a second V and V test will be performed. We performed the following tasks for the first V and V test on the Windows XP and VISTA operating systems: testing basic functions, including fault tree editing, and writing formal reports

  19. Applications of Logic Coverage Criteria and Logic Mutation to Software Testing

    Science.gov (United States)

    Kaminski, Garrett K.

    2011-01-01

    Logic is an important component of software. Thus, software logic testing has enjoyed significant research over a period of decades, with renewed interest in the last several years. One approach to detecting logic faults is to create and execute tests that satisfy logic coverage criteria. Another approach to detecting faults is to perform mutation…

  20. The Birth and Death of Redundancy in Decoherence and Quantum Darwinism

    Science.gov (United States)

    Riedel, Charles; Zurek, Wojciech; Zwolak, Michael

    2012-02-01

    Understanding the quantum-classical transition and the identification of a preferred classical domain through quantum Darwinism is based on recognizing high-redundancy states as both ubiquitous and exceptional. They are produced ubiquitously during decoherence, as has been demonstrated by the recent identification of very general conditions under which high-redundancy states develop. They are exceptional in that high-redundancy states occupy a very narrow corner of the global Hilbert space; states selected at random are overwhelmingly likely to exhibit zero redundancy. In this letter, we examine the conditions and time scales for the transition from high-redundancy states to zero-redundancy states in many-body dynamics. We identify sufficient conditions for the development of redundancy from product states and show that the destruction of redundancy can be accomplished even with highly constrained interactions.

  1. Software testing an ISTQB-BCS certified tester foundation guide

    CERN Document Server

    Hambling, Brian; Samaroo, Angelina; Thompson, Geoff; Williams, Peter; Hambling, Brian

    2015-01-01

    This practical guide provides insight into software testing, explaining the basics of the testing process and how to perform effective tests. It provides an overview of different techniques and how to apply them. It is the best-selling official textbook of the ISTQB-BCS Certified Tester Foundation Level.

  2. [Confirming the Utility of RAISUS Antifungal Susceptibility Testing by New-Software].

    Science.gov (United States)

    Ono, Tomoko; Suematsu, Hiroyuki; Sawamura, Haruki; Yamagishi, Yuka; Mikamo, Hiroshige

    2017-08-15

    Clinical and Laboratory Standards Institute (CLSI) methods for susceptibility testing of yeasts are used in Japan. The methods, however, have some disadvantages: 1) readings at 24 and 48 h; 2) the use of an unclear scale, approximately 50% inhibition, to determine MICs; and 3) the handling of trailing growth and paradoxical effects. These make it difficult to test the susceptibility of yeasts. The old RAISUS software, the Ver. 6.0 series, resolved problems 1) and 2) but did not resolve problem 3). Recently, the new RAISUS software, the Ver. 7.0 series, resolved problem 3). We confirmed whether using the new software settled all of these issues. Eighty-four Candida isolates from Aichi Medical University were used in this study. We compared the MICs obtained using the RAISUS antifungal susceptibility test for yeasts, RSMY1, with those obtained using ASTY. The concordance rates (within four-fold of the MICs) between ASTY and RSMY1 with the new software were more than 90%, except for miconazole (MCZ). The rate for MCZ was low, but the MICs obtained using CLSI methods and the Yeast-like Fungus DP 'EIKEN' method, E-DP, were equivalent to those of RSMY1 with the new software. The frequency of skip effects on RSMY1 with the new software markedly decreased relative to the old software. In cases showing trailing growth, the new RAISUS software made it possible to choose the correct MICs and to display a trailing-growth flag on the result screen. The new RAISUS software enhances usability and the accuracy of MICs. Using an automatic instrument to determine MICs is useful for obtaining objective results easily.

  3. A rule-based software test data generator

    Science.gov (United States)

    Deason, William H.; Brown, David B.; Chang, Kai-Hsiung; Cross, James H., II

    1991-01-01

    Rule-based software test data generation is proposed as an alternative to either path/predicate analysis or random data generation. A prototype rule-based test data generator for Ada programs is constructed and compared to a random test data generator. Four Ada procedures are used in the comparison. Approximately 2000 rule-based test cases and 100,000 randomly generated test cases are automatically generated and executed. The success of the two methods is compared using standard coverage metrics. Simple statistical tests are performed, showing that even the primitive rule-based test data generation prototype is significantly better than random data generation. This result demonstrates that rule-based test data generation is feasible and shows great promise in assisting test engineers, especially when the rule base is developed further.
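
    The random half of such a comparison is easy to sketch: generate random inputs for an instrumented unit under test and measure branch coverage. The triangle classifier and input ranges are illustrative assumptions (the study used Ada procedures; Python is used here purely for illustration).

```python
# Random test data generation with a simple branch-coverage measure.
import random

covered = set()

def classify_triangle(a, b, c):
    """Unit under test, instrumented by recording which branch fired."""
    if a + b <= c or b + c <= a or a + c <= b:
        covered.add("not a triangle")
    elif a == b == c:
        covered.add("equilateral")
    elif a == b or b == c or a == c:
        covered.add("isosceles")
    else:
        covered.add("scalene")

random.seed(4)
for _ in range(2000):                          # random test data generation
    classify_triangle(random.randint(1, 10),
                      random.randint(1, 10),
                      random.randint(1, 10))
print(f"branches covered ({len(covered)}/4): {sorted(covered)}")
```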

  4. Fuzzy/Neural Software Estimates Costs of Rocket-Engine Tests

    Science.gov (United States)

    Douglas, Freddie; Bourgeois, Edit Kaminsky

    2005-01-01

    The Highly Accurate Cost Estimating Model (HACEM) is a software system for estimating the costs of testing rocket engines and components at Stennis Space Center. HACEM is built on a foundation of adaptive-network-based fuzzy inference systems (ANFIS) a hybrid software concept that combines the adaptive capabilities of neural networks with the ease of development and additional benefits of fuzzy-logic-based systems. In ANFIS, fuzzy inference systems are trained by use of neural networks. HACEM includes selectable subsystems that utilize various numbers and types of inputs, various numbers of fuzzy membership functions, and various input-preprocessing techniques. The inputs to HACEM are parameters of specific tests or series of tests. These parameters include test type (component or engine test), number and duration of tests, and thrust level(s) (in the case of engine tests). The ANFIS in HACEM are trained by use of sets of these parameters, along with costs of past tests. Thereafter, the user feeds HACEM a simple input text file that contains the parameters of a planned test or series of tests, the user selects the desired HACEM subsystem, and the subsystem processes the parameters into an estimate of cost(s).

  5. Large scale and performance tests of the ATLAS online software

    International Nuclear Information System (INIS)

    Alexandrov; Kotov, V.; Mineev, M.; Roumiantsev, V.; Wolters, H.; Amorim, A.; Pedro, L.; Ribeiro, A.; Badescu, E.; Caprini, M.; Burckhart-Chromek, D.; Dobson, M.; Jones, R.; Kazarov, A.; Kolos, S.; Liko, D.; Lucio, L.; Mapelli, L.; Nassiakou, M.; Schweiger, D.; Soloviev, I.; Hart, R.; Ryabov, Y.; Moneta, L.

    2001-01-01

    One of the sub-systems of the Trigger/DAQ system of the future ATLAS experiment is the Online Software system. It encompasses the functionality needed to configure, control and monitor the DAQ. Its architecture is based on a component structure described in the ATLAS Trigger/DAQ technical proposal. Regular integration tests ensure its smooth operation in test beam setups during its evolutionary development towards the final ATLAS online system, and feedback is returned into the development process. Studies of the system behavior have been performed on a set of up to 111 PCs, a configuration approaching the final size. Large-scale and performance tests of the integrated system were performed on this setup, with emphasis on investigating the inter-dependence of the components and the performance of the communication software. Of particular interest were the run control state transitions in various configurations of the run control hierarchy. For the purpose of the tests, the software from other Trigger/DAQ sub-systems was emulated. We present a brief overview of the online system structure, its components, and the large-scale integration tests and their results

  6. Irreducible Tests for Space Mission Sequencing Software

    Science.gov (United States)

    Ferguson, Lisa

    2012-01-01

    As missions extend further into space, the modeling and simulation of their every action and instruction becomes critical. The greater the distance between Earth and the spacecraft, the smaller the window for communication becomes. Therefore, by modeling and simulating the planned operations, the most efficient sequence of commands can be sent to the spacecraft. The Space Mission Sequencing Software is being developed as the next generation of sequencing software to ensure the most efficient communication with interplanetary and deep space mission spacecraft. Aside from efficiency, the software also checks that communication during a specified time is even possible, meaning that no planet or moon is preventing reception of a signal from Earth and that two opposing commands are not being given simultaneously. In this way, the software not only models the proposed instructions to the spacecraft, but also validates the commands. To ensure that all spacecraft communications are sequenced properly, a timeline is used to structure the data. The created timelines are immutable: once data is assigned to a timeline, it shall never be deleted or renamed. This prevents the need for storing and filing the timelines for use by other programs. Several types of timelines can be created to accommodate different types of communications (activities, measurements, commands, states, events). Each of these timeline types requires specific parameters, and all have options for additional parameters if needed. With so many combinations of parameters available, the robustness and stability of the software are a necessity. Therefore a baseline must be established to ensure the full functionality of the software, and it is here that the irreducible tests come into use.
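
    The immutability rule described above suggests a value-semantics data structure: every "change" yields a new timeline value and nothing is ever mutated, deleted, or renamed. A minimal sketch, with field names and timeline kinds assumed from the abstract rather than taken from the actual software:

```python
# Immutable timeline: appending a record returns a NEW timeline value.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Timeline:
    name: str
    kind: str                               # activity, measurement, command, ...
    records: Tuple[tuple, ...] = ()         # (time, value) pairs, ordered

    def assign(self, time, value):
        """Return a new timeline with the record appended (no in-place edits)."""
        return Timeline(self.name, self.kind, self.records + ((time, value),))

t0 = Timeline("downlink-window", "event")
t1 = t0.assign(100.0, "open").assign(160.0, "close")
print(t0.records)    # () -- the original is untouched
print(t1.records)    # ((100.0, 'open'), (160.0, 'close'))
```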

  7. Testing of the assisting software for radiologists analysing head CT images: lessons learned.

    Science.gov (United States)

    Martynov, Petr; Mitropolskii, Nikolai; Kukkola, Katri; Gretsch, Monika; Koivisto, Vesa-Matti; Lindgren, Ilkka; Saunavaara, Jani; Reponen, Jarmo; Mäkynen, Anssi

    2017-12-11

    The objective was to assess a plan for user testing and evaluation of assisting software developed for radiologists. The test plan was assessed in experimental testing, in which users performed reporting on head computed tomography studies with the aid of the software developed. The user testing included usability tests, questionnaires, and interviews. In addition, search relevance was assessed on the basis of user opinions. The testing demonstrated weaknesses in the initial plan and enabled improvements. Results showed that the software has an acceptable usability level, but some minor fixes are needed before larger-scale pilot testing. The research also proved that it is possible even for radiologists with under a year's experience to perform reporting of non-obvious cases when assisted by the software developed. Due to the small number of test users, it was impossible to assess effects on diagnosis quality. The results of the tests performed showed that the designed test plan is useful, and answers to the key research questions should be forthcoming after testing with more radiologists. The preliminary testing revealed opportunities to improve the test plan and flow, thereby illustrating that arranging preliminary test sessions prior to any complex scenarios is beneficial.

  8. Application of a path sensitizing method on automated generation of test specifications for control software

    International Nuclear Information System (INIS)

    Morimoto, Yuuichi; Fukuda, Mitsuko

    1995-01-01

    An automated generation method for test specifications has been developed for sequential control software in plant control equipment. Sequential control software can be represented as sequential circuits, and the control software implemented in control equipment is designed from these circuit diagrams. In logic tests of VLSIs, path sensitizing methods are widely used to generate test specifications, but such methods generate test specifications for a single time instant only and so cannot be directly applied to sequential control software. The basic idea of the proposed method is as follows. Specifications of each logic operator in the diagrams are defined in the software design process. Therefore, test specifications for each operator in the control software can be determined from these specifications, and the validity of the software can be judged by inspecting all of the operators in the logic circuit diagrams. Candidates for sensitized paths, along which test data for each operator propagate, can be generated by the path sensitizing method. To confirm the feasibility of the method, it was experimentally applied to control software in digital control equipment. The program generated test specifications exactly, and the feasibility of the method was confirmed.
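
    Path sensitization itself can be sketched on a toy combinational circuit: an input sensitizes a path to the output if, for some assignment of the remaining inputs, toggling that input toggles the output. The circuit and the brute-force search below are illustrative assumptions (production tools use implication-based algorithms such as the D-algorithm rather than enumeration).

```python
# Brute-force path sensitization on a toy combinational circuit.
from itertools import product

def circuit(a, b, c):
    """out = (a AND b) OR (NOT c) -- toy stand-in for a control-logic net."""
    return (a and b) or (not c)

def sensitizing_tests(target, names=("a", "b", "c")):
    """Find assignments of the other inputs that propagate target's value."""
    others = [n for n in names if n != target]
    tests = []
    for vals in product([False, True], repeat=len(others)):
        env = dict(zip(others, vals))
        lo = circuit(**{**env, target: False})
        hi = circuit(**{**env, target: True})
        if lo != hi:                          # output toggles => path sensitized
            tests.append({**env, target: "0->1"})
    return tests

for inp in ("a", "b", "c"):
    print(inp, sensitizing_tests(inp))
```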

  9. Prioritising Redundant Network Component for HOWBAN Survivability Using FMEA

    Directory of Open Access Journals (Sweden)

    Cheong Loong Chan

    2017-01-01

    Full Text Available Deploying redundant components is the ubiquitous approach to improving the reliability and survivability of a hybrid optical wireless broadband access network (HOWBAN). Much work has been done to study the cost and impact of deploying redundant components in the network, but no formal tools have been used to support the evaluation and the decision to prioritise the deployment of redundant facilities in the network. In this paper we show how the FMEA (Failure Mode and Effect Analysis) technique can be adapted to identify the critical segment in the network and prioritise the redundant components to be deployed to ensure network survivability. Our results showed that priority must be given to redundancy that mitigates grid power outages, particularly in less developed countries, which are poised for rapid expansion in broadband services.
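
    The abstract does not reproduce the authors' ratings, but classic FMEA prioritization reduces to ranking failure modes by the Risk Priority Number, RPN = severity × occurrence × detection, each scored on a 1-10 scale. A minimal sketch with invented scores for HOWBAN-style failure modes:

```python
# Classic FMEA: Risk Priority Number = severity * occurrence * detection.
# The scores below are invented for illustration; the paper's actual
# ratings for HOWBAN segments are not reproduced in the record.
failure_modes = [
    ("grid power outage",             9, 7, 3),
    ("fiber cut (optical back-haul)", 8, 3, 4),
    ("wireless gateway failure",      6, 4, 5),
]

ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for name, s, o, d in ranked:
    print(f"RPN={s*o*d:4d}  {name}")
```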

  10. Software Sub-system in Loading Automatic Test System for the Measurement of Power Line Filters

    Directory of Open Access Journals (Sweden)

    Yu Bo

    2017-01-01

    Full Text Available The loading automatic test system for the measurement of power line filters is in urgent demand, so the software sub-system of the whole test system was proposed. Methods: the test system was structured on a virtual instrument framework consisting of a lower and an upper computer, and a top-down design approach was adopted for the system and its modules, according to the measurement principle of the test system. Results: the software sub-system, including the human-machine interface, data analysis and processing software, an expert system, communication software, and control software in the lower computer, was designed and integrated into the entire test system. Conclusion: this sub-system provides a friendly software platform for the whole test system and has advantages such as strong functionality, high performance, and low cost. It not only raises the test efficiency of EMI filters but also incorporates several innovations.

  11. Reliability optimization of a redundant system with failure dependencies

    Energy Technology Data Exchange (ETDEWEB)

    Yu Haiyang [Institute Charles Delaunay (ICD, FRE CNRS 2848), Troyes University of Technology, Rue Marie Curie, BP 2060, 10010 Troyes (France)]. E-mail: Haiyang.YU@utt.fr; Chu Chengbin [Institute Charles Delaunay (ICD, FRE CNRS 2848), Troyes University of Technology, Rue Marie Curie, BP 2060, 10010 Troyes (France); Management School, Hefei University of Technology, 193 Tunxi Road, Hefei (China); Chatelet, Eric [Institute Charles Delaunay (ICD, FRE CNRS 2848), Troyes University of Technology, Rue Marie Curie, BP 2060, 10010 Troyes (France); Yalaoui, Farouk [Institute Charles Delaunay (ICD, FRE CNRS 2848), Troyes University of Technology, Rue Marie Curie, BP 2060, 10010 Troyes (France)

    2007-12-15

    In a multi-component system, the failure of one component can reduce the system reliability in two aspects: loss of the reliability contribution of this failed component, and the reconfiguration of the system, e.g., the redistribution of the system loading. The system reconfiguration can be triggered by the component failures as well as by adding redundancies. Hence, dependency is essential for the design of a multi-component system. In this paper, we study the design of a redundant system with the consideration of a specific kind of failure dependency, i.e., the redundant dependency. The dependence function is introduced to quantify the redundant dependency. With the dependence function, the redundant dependencies are further classified as independence, weak, linear, and strong dependencies. In addition, this classification is useful in that it facilitates the optimization resolution of the system design. Finally, an example is presented to illustrate the concept of redundant dependency and its application in system design. This paper thus conveys the significance of failure dependencies in the reliability optimization of systems.
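
    The paper's dependence function is not reproduced in the record. As a stand-in, the following sketch uses the well-known beta-factor common-cause model to show how failure dependency erodes the benefit of a redundant pair, which is the qualitative point the abstract makes:

```python
import math

def pair_reliability(rate, beta, t):
    """Reliability of a 1-out-of-2 redundant pair under the beta-factor
    common-cause model: a fraction `beta` of each component's failure
    rate is a shared cause that fails both components at once."""
    indep = rate * (1 - beta)      # independent part of the failure rate
    common = rate * beta           # common-cause part, fails both at once
    r = math.exp(-indep * t)       # one component survives its indep. failures
    # System survives if the common cause has not fired and at least
    # one component has survived its independent failures.
    return math.exp(-common * t) * (1 - (1 - r) ** 2)

for beta in (0.0, 0.1, 0.3):       # beta = 0 recovers full independence
    print(beta, round(pair_reliability(rate=1e-3, beta=beta, t=1000), 4))
```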

  12. Reliability optimization of a redundant system with failure dependencies

    International Nuclear Information System (INIS)

    Yu Haiyang; Chu Chengbin; Chatelet, Eric; Yalaoui, Farouk

    2007-01-01

    In a multi-component system, the failure of one component can reduce the system reliability in two aspects: loss of the reliability contribution of this failed component, and the reconfiguration of the system, e.g., the redistribution of the system loading. The system reconfiguration can be triggered by the component failures as well as by adding redundancies. Hence, dependency is essential for the design of a multi-component system. In this paper, we study the design of a redundant system with the consideration of a specific kind of failure dependency, i.e., the redundant dependency. The dependence function is introduced to quantify the redundant dependency. With the dependence function, the redundant dependencies are further classified as independence, weak, linear, and strong dependencies. In addition, this classification is useful in that it facilitates the optimization resolution of the system design. Finally, an example is presented to illustrate the concept of redundant dependency and its application in system design. This paper thus conveys the significance of failure dependencies in the reliability optimization of systems

  13. Software architecture for the ORNL large-coil test facility data system

    International Nuclear Information System (INIS)

    Blair, E.T.; Baylor, L.R.

    1986-01-01

    The VAX-based data-acquisition system for the International Fusion Superconducting Magnet Test Facility (IFSMTF) at Oak Ridge National Laboratory (ORNL) is a second-generation system that evolved from a PDP-11/60-based system used during the initial phase of facility testing. The VAX-based software represents a layered implementation that provides integrated access to all of the data sources within the system, decoupling end-user data retrieval from various front-end data sources through a combination of software architecture and instrumentation data bases. Independent VAX processes manage the various front-end data sources, each being responsible for controlling, monitoring, acquiring, and disposing data and control parameters for access from the data retrieval software. This paper describes the software architecture and the functionality incorporated into the various layers of the data system

  14. Software architecture for the ORNL large coil test facility data system

    International Nuclear Information System (INIS)

    Blair, E.T.; Baylor, L.R.

    1986-01-01

    The VAX-based data acquisition system for the International Fusion Superconducting Magnet Test Facility (IFSMTF) at Oak Ridge National Laboratory (ORNL) is a second-generation system that evolved from a PDP-11/60-based system used during the initial phase of facility testing. The VAX-based software represents a layered implementation that provides integrated access to all of the data sources within the system, decoupling end-user data retrieval from various front-end data sources through a combination of software architecture and instrumentation data bases. Independent VAX processes manage the various front-end data sources, each being responsible for controlling, monitoring, acquiring and disposing data and control parameters for access from the data retrieval software. This paper describes the software architecture and the functionality incorporated into the various layers of the data system

  15. Development of Hardware and Software for Automated Ultrasonic Testing

    International Nuclear Information System (INIS)

    Choi, Sung Nam; Lee, Hee Jong; Yang, Seung Ok

    2012-01-01

    Nondestructive testing (NDT) for the construction and operation of NPPs plays an important role in confirming the integrity of the NPPs. In particular, automated ultrasonic testing (AUT) is one of the primary nondestructive examination methods for in-service inspection of the welded parts of major components in NPPs. AUT is a reliable nondestructive testing method because its data are saved and can be reviewed by other examiners. Korea Hydro and Nuclear Power-Central Research Institute (KHNP-CRI) has developed an automated ultrasonic testing (AUT) system based on a high speed pulser-receiver. In combination with the designed software and hardware architecture, this new system permits user configurations for a wide range of user-specific applications through fully automated inspections using compact portable systems with up to eight channels. This paper gives an overview of the hardware (H/W) and software (S/W) of the AUT system used to inspect welds in NPPs

  16. Exploiting Redundancy in an OFDM SDR Receiver

    Directory of Open Access Journals (Sweden)

    Tomas Palenik

    2009-01-01

    Full Text Available A common OFDM system contains the redundancy necessary to mitigate interblock interference and allows computationally effective single-tap frequency-domain equalization in the receiver. Assuming the system implements an outer error-correcting code and channel state information is available in the receiver, we show that it is possible to understand cyclic prefix insertion as a weak inner ECC encoding and to exploit the introduced redundancy to slightly improve the error performance of such a system. In this paper, an easy-to-implement modification to an existing SDR OFDM receiver is presented. This modification enables the utilization of prefix redundancy while preserving full compatibility with existing OFDM-based communication standards.
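
    The redundancy being exploited is concrete: the cyclic prefix is a noisy repetition of the OFDM symbol tail, so a receiver that knows this can combine the two observations. A minimal numpy sketch of that idea under an ideal (no-ISI) channel, far simpler than the paper's decoder but showing where the redundancy sits:

```python
import numpy as np

rng = np.random.default_rng(0)
N, CP = 64, 16                      # subcarriers and cyclic-prefix length

X = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], N)    # QPSK on each subcarrier
x = np.fft.ifft(X)                               # time-domain OFDM symbol
tx = np.concatenate([x[-CP:], x])                # cyclic prefix insertion

noise = rng.normal(0, .1, N + CP) + 1j * rng.normal(0, .1, N + CP)
rx = tx + noise                                  # ideal channel + AWGN

# The CP samples are noisy copies of the symbol tail: averaging the two
# observations of each repeated sample halves the noise variance there,
# a simple view of the redundancy the paper treats as a weak inner code.
naive_tail = rx[CP:][-CP:]
combined_tail = 0.5 * (rx[:CP] + rx[CP:][-CP:])
print(np.mean(np.abs(naive_tail - x[-CP:]) ** 2),
      np.mean(np.abs(combined_tail - x[-CP:]) ** 2))
```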

  17. Does plant species richness guarantee the resilience of local medical systems? A perspective from utilitarian redundancy.

    Directory of Open Access Journals (Sweden)

    Flávia Rosa Santoro

    Full Text Available Resilience is related to the ability of a system to adjust to disturbances. The Utilitarian Redundancy Model has emerged as a tool for investigating the resilience of local medical systems. The model determines the use of species richness for the same therapeutic function as a facilitator of the maintenance of these systems. However, predictions generated from this model have not yet been tested, and a lack of variables exists for deeper analyses of resilience. This study aims to address gaps in the Utilitarian Redundancy Model and to investigate the resilience of two medical systems in the Brazilian semi-arid zone. As a local illness is not always perceived in the same way that biomedicine recognizes, the term "therapeutic targets" is used for perceived illnesses. Semi-structured interviews with local experts were conducted using the free-listing technique to collect data on known medicinal plants, usage preferences, use of redundant species, characteristics of therapeutic targets, and the perceived severity for each target. Additionally, participatory workshops were conducted to determine the frequency of targets. The medical systems showed high species richness but low levels of species redundancy. However, if redundancy was present, it was the primary factor responsible for the maintenance of system functions. Species richness was positively associated with therapeutic target frequencies and negatively related to target severity. Moreover, information about redundant species seems to be largely idiosyncratic; this finding raises questions about the importance of redundancy for resilience. We stress the Utilitarian Redundancy Model as an interesting tool to be used in studies of resilience, but we emphasize that it must consider the distribution of redundancy in terms of the treatment of important illnesses and the sharing of information. This study has identified aspects of the higher and lower vulnerabilities of medical systems, adding

  18. The heuristic value of redundancy models of aging.

    Science.gov (United States)

    Boonekamp, Jelle J; Briga, Michael; Verhulst, Simon

    2015-11-01

    Molecular studies of aging aim to unravel the cause(s) of aging bottom-up, but linking these mechanisms to organismal level processes remains a challenge. We propose that complementary top-down data-directed modelling of organismal level empirical findings may contribute to developing these links. To this end, we explore the heuristic value of redundancy models of aging to develop a deeper insight into the mechanisms causing variation in senescence and lifespan. We start by showing (i) how different redundancy model parameters affect projected aging and mortality, and (ii) how variation in redundancy model parameters relates to variation in parameters of the Gompertz equation. Lifestyle changes or medical interventions during life can modify mortality rate, and we investigate (iii) how interventions that change specific redundancy parameters within the model affect subsequent mortality and actuarial senescence. Lastly, as an example of data-directed modelling and the insights that can be gained from this, (iv) we fit a redundancy model to mortality patterns observed by Mair et al. (2003; Science 301: 1731-1733) in Drosophila that were subjected to dietary restriction and temperature manipulations. Mair et al. found that dietary restriction instantaneously reduced mortality rate without affecting aging, while temperature manipulations had more transient effects on mortality rate and did affect aging. We show that after adjusting model parameters the redundancy model describes both effects well, and a comparison of the parameter values yields a deeper insight in the mechanisms causing these contrasting effects. We see replacement of the redundancy model parameters by more detailed sub-models of these parameters as a next step in linking demographic patterns to underlying molecular mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.
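
    A generic redundancy model of this kind (in the spirit of reliability-theory models of aging, not the authors' fitted model) can be simulated in a few lines: an organism is a series system of q blocks, each containing n redundant elements with a constant element failure rate; mortality then rises with age even though no element 'ages'. All parameter values below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
q, n, k, organisms = 10, 5, 0.02, 200_000   # blocks, elements/block, rate

element_failures = rng.exponential(1 / k, size=(organisms, q, n))
block_death = element_failures.max(axis=2)  # block dies when all n elements fail
lifespan = block_death.min(axis=1)          # organism dies at first block death

dt = 10.0
for age in np.arange(0, 100, dt):
    at_risk = (lifespan > age).sum()
    deaths = ((lifespan > age) & (lifespan <= age + dt)).sum()
    if at_risk:
        # Mortality rate rises roughly exponentially (Gompertz-like) with age.
        print(f"age {age:5.0f}: mortality rate {deaths / (at_risk * dt):.5f}")
```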

  19. A Description of the Software Element of the NASA EME Flight Tests

    Science.gov (United States)

    Koppen, Sandra V.

    1996-01-01

    In support of NASA's Fly-By-Light/Power-By-Wire (FBL/PBW) program, a series of flight tests was conducted by NASA Langley Research Center in February 1995. The NASA Boeing 757 was flown past known RF transmitters to measure both external and internal radiated fields. The aircraft was instrumented with strategically located sensors for acquiring data on shielding effectiveness and internal coupling. The data are intended to support computational and statistical modeling codes used to predict internal field levels of an electromagnetic environment (EME) on aircraft. The software was an integral part of the flight tests, as well as of the data reduction process. The software, which provided flight test instrument control, data acquisition, and a user interface, executes on a Hewlett Packard (HP) 300 series workstation and uses HP VEE test development software and the C programming language. Software tools were developed for data processing and analysis, and to provide a database organized by frequency bands, test runs, and sensors. This paper describes the data acquisition system on board the aircraft and concentrates on the software portion. Hardware and software interfaces are illustrated and discussed. Particular attention is given to data acquisition and data format. The data reduction process is discussed in detail to provide insight into the characteristics, quality, and limitations of the data. An analysis of obstacles encountered during the data reduction process is presented.

  20. Redundancy and Reliability for an HPC Data Centre

    OpenAIRE

    Erhan Yılmaz

    2012-01-01

    Defining a level of redundancy is a strategic question when planning a new data centre, as it will directly impact the entire design of the building as well as the construction and operational costs. It will also affect how to integrate future extension plans into the design. Redundancy is also a key strategic issue when upgrading or retrofitting an existing facility. Redundancy is a central strategic question to any business that relies on data centres for its operation. In th...

  1. Study of evaluation techniques of software testing and V and V in Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Youn, Cheong; Baek, Y. W.; Kim, H. C.; Shin, C. Y.; Park, N. J. [Chungnam Nationl Univ., Taejon (Korea, Republic of)

    2000-03-15

    Activities to assure software safety and quality must be based on an established software development process for digitalized nuclear power plants; in particular, software testing and verification and validation (V and V) must be studied. For this purpose, methodologies and tools that can improve software quality are evaluated, and software testing and V and V techniques that can be applied to the software life cycle are investigated. This study establishes a guideline that can assure the software safety and reliability requirements of digitalized nuclear plant systems and that can be used as a guidebook on the software development process by software development organizations seeking to assure software quality.

  2. Power, Avionics and Software - Phase 1.0:. [Subsystem Integration Test Report

    Science.gov (United States)

    Ivancic, William D.; Sands, Obed S.; Bakula, Casey J.; Oldham, Daniel R.; Wright, Ted; Bradish, Martin A.; Klebau, Joseph M.

    2014-01-01

    This report describes Power, Avionics and Software (PAS) 1.0 subsystem integration testing and test results that occurred in August and September of 2013. This report covers the capabilities of each PAS assembly to meet integration test objectives for non-safety critical, non-flight, non-human-rated hardware and software development. This test report is the outcome of the first integration of the PAS subsystem and is meant to provide data for subsequent designs, development and testing of the future PAS subsystems. The two main objectives were to assess the ability of the PAS assemblies to exchange messages and to perform audio testing of both inbound and outbound channels. This report describes each test performed, defines the test, the data, and provides conclusions and recommendations.

  3. Flexible test automation a software framework for easily developing measurement applications

    CERN Document Server

    Arpaia, Pasquale; De Matteis, Ernesto

    2014-01-01

    In laboratory management of an industrial test division, a test laboratory, or a research center, one of the main activities is producing suitable software for automatic benches by satisfying a given set of requirements. This activity is particularly costly and burdensome when test requirements are variable over time. If the batches of objects are small and frequent, the activity of measurement automation becomes predominant with respect to test execution. Flexible Test Automation presents the development of a software framework as a useful solution to this need. The framework supports the user in producing measurement applications for a wide range of requirements with low effort and development time.

  4. Taking advantage of ground data systems attributes to achieve quality results in testing software

    Science.gov (United States)

    Sigman, Clayton B.; Koslosky, John T.; Hageman, Barbara H.

    1994-01-01

    During the software development life cycle process, basic testing starts with the development team. At the end of the development process, an acceptance test is performed for the user to ensure that the deliverable is acceptable. Ideally, the delivery is an operational product with zero defects. However, the goal of zero defects is normally not achieved but is successful to various degrees. With the emphasis on building low cost ground support systems while maintaining a quality product, a key element in the test process is simulator capability. This paper reviews the Transportable Payload Operations Control Center (TPOCC) Advanced Spacecraft Simulator (TASS) test tool that is used in the acceptance test process for unmanned satellite operations control centers. The TASS is designed to support the development, test and operational environments of the Goddard Space Flight Center (GSFC) operations control centers. The TASS uses the same basic architecture as the operations control center. This architecture is characterized by its use of distributed processing, industry standards, commercial off-the-shelf (COTS) hardware and software components, and reusable software. The TASS uses much of the same TPOCC architecture and reusable software that the operations control center developer uses. The TASS also makes use of reusable simulator software in the mission specific versions of the TASS. Very little new software needs to be developed, mainly mission specific telemetry communication and command processing software. By taking advantage of the ground data system attributes, successful software reuse for operational systems provides the opportunity to extend the reuse concept into the test area. Consistency in test approach is a major step in achieving quality results.

  5. Usability Testing for Developing Effective Interactive Multimedia Software: Concepts, Dimensions, and Procedures

    Directory of Open Access Journals (Sweden)

    Sung Heum Lee

    1999-04-01

    Full Text Available Usability testing is a dynamic process that can be used throughout the process of developing interactive multimedia software. The purpose of usability testing is to find problems and make recommendations to improve the utility of a product during its design and development. For developing effective interactive multimedia software, dimensions of usability testing were classified into the general categories of: learnability; performance effectiveness; flexibility; error tolerance and system integrity; and user satisfaction. In the process of usability testing, evaluation experts consider the nature of users and tasks, tradeoffs supported by the iterative design paradigm, and real world constraints to effectively evaluate and improve interactive multimedia software. Different methods address different purposes and involve a combination of user and usability testing; however, usability practitioners follow the seven general procedures of usability testing for effective multimedia development. As the knowledge about usability testing grows, evaluation experts will be able to choose more effective and efficient methods and techniques that are appropriate to their goals.

  6. Sensory redundancy management: The development of a design methodology for determining threshold values through a statistical analysis of sensor output data

    Science.gov (United States)

    Scalzo, F.

    1983-01-01

    Sensor redundancy management (SRM) requires a system that will detect failures and reconfigure the avionics accordingly. A probability density function for determining false alarm rates was generated using an algorithmic approach. Microcomputer software was developed to print out tables of values for the cumulative probability of being in the domain of failure, system reliability, and the false alarm probability given that a signal is in the domain of failure. The microcomputer software was applied to the sensor output data for various AFTI F-16 flights and sensor parameters. Practical recommendations for further research were made.
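
    The report's actual noise distributions and thresholds are not given, so the following sketch only illustrates the kind of table such software would print: assuming Gaussian sensor noise, the false alarm probability is the tail probability beyond the threshold:

```python
# Hedged sketch of threshold selection from sensor-output statistics,
# assuming Gaussian noise; the report's distributions and AFTI F-16
# parameters are not reproduced in the record.
import math

def tail_prob(threshold, mu, sigma):
    # P(sensor output exceeds threshold) for N(mu, sigma^2) noise.
    return 0.5 * math.erfc((threshold - mu) / (sigma * math.sqrt(2)))

mu, sigma = 0.0, 1.0
print(f"{'thr':>5} {'P(false alarm)':>15}")
for k in range(1, 6):
    thr = mu + k * sigma
    print(f"{thr:5.1f} {tail_prob(thr, mu, sigma):15.3e}")
```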

  7. ES 1010 software for testing CAMAC modules

    International Nuclear Information System (INIS)

    Ableev, V.G.; Basiladze, S.G.; Zaporozhets, S.A.; Piskunov, N.M.; Ryabtsov, V.D.; Sitnik, I.M.; Strokovskij, E.A.; Sharov, V.I.

    1977-01-01

    Test programs for digital and analog-digital CAMAC modules applied in physical experiments are described. Algorithms for testing, data acquisition, processing, and data control were written in the FORTRAN-4 language. ASSEMBLER ES 1010 subroutines were used for data acquisition and CAMAC module control. This allowed one to take advantage of a high-level language for data processing and display, as well as for interfacing with the CAMAC hardware. The software enables considerable improvement in the adjustment of CAMAC modules and in obtaining their operational characteristics

  8. Software Verification and Validation Test Report for the HEPA filter Differential Pressure Fan Interlock System

    International Nuclear Information System (INIS)

    ERMI, A.M.

    2000-01-01

    The HEPA Filter Differential Pressure Fan Interlock System PLC ladder logic software was tested using a Software Verification and Validation (V and V) Test Plan as required by the ''Computer Software Quality Assurance Requirements''. The purpose of this document is to report on the results of the software qualification

  9. Optimizing infrastructure for software testing using virtualization

    International Nuclear Information System (INIS)

    Khalid, O.; Shaikh, A.; Copy, B.

    2012-01-01

    Virtualization technology and cloud computing have brought a paradigm shift in the way we utilize, deploy and manage computer resources. They allow fast deployment of multiple operating systems as containers on physical machines, which can be either discarded after use or check-pointed for later re-deployment. At the European Organization for Nuclear Research (CERN), we have been using virtualization technology to quickly set up virtual machines for our developers with pre-configured software to enable them to quickly test/deploy a new version of a software patch for a given application. This paper reports on the techniques that have been used to set up a private cloud on commodity hardware and also presents the optimization techniques we used to remove deployment-specific performance bottlenecks. (authors)

  10. Self-Healing Networks: Redundancy and Structure

    Science.gov (United States)

    Quattrociocchi, Walter; Caldarelli, Guido; Scala, Antonio

    2014-01-01

    We introduce the concept of self-healing in the field of complex networks modelling; in particular, self-healing capabilities are implemented through distributed communication protocols that exploit redundant links to recover the connectivity of the system. We then analyze the effect of the level of redundancy on the resilience to multiple failures; in particular, we measure the fraction of nodes still served for increasing levels of network damage. Finally, we study the effects of redundancy under different connectivity patterns—from planar grids, to small-world, up to scale-free networks—on healing performance. Small-world topologies show that introducing some long-range connections in planar grids greatly enhances the resilience to multiple failures, with performance comparable to the case of the most resilient (and least realistic) scale-free structures. Obvious applications of self-healing are in the important field of infrastructural networks like gas, power, water and oil distribution systems. PMID:24533065
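
    A minimal sketch of the kind of measurement described, using networkx (the paper's own simulator and protocols are not reproduced): compare the fraction of nodes still connected to a source after random failures, on a planar grid versus the same grid with a few long-range redundant links:

```python
import random
import networkx as nx

random.seed(0)

def served_fraction(g, source, n_failures):
    # Remove random nodes and measure the share still reachable from source.
    g = g.copy()
    victims = random.sample([n for n in g if n != source], n_failures)
    g.remove_nodes_from(victims)
    reachable = nx.node_connected_component(g, source) if source in g else set()
    return len(reachable) / g.number_of_nodes()

grid = nx.grid_2d_graph(10, 10)
smallworld = grid.copy()
nodes = list(grid)
for _ in range(20):                 # add 20 random long-range shortcuts
    u, v = random.sample(nodes, 2)
    smallworld.add_edge(u, v)

src = (0, 0)
print("grid       :", served_fraction(grid, src, 15))
print("small-world:", served_fraction(smallworld, src, 15))
```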

  11. Motion compensation via redundant-wavelet multihypothesis.

    Science.gov (United States)

    Fowler, James E; Cui, Suxia; Wang, Yonghui

    2006-10-01

    Multihypothesis motion compensation has been widely used in video coding with previous attention focused on techniques employing predictions that are diverse spatially or temporally. In this paper, the multihypothesis concept is extended into the transform domain by using a redundant wavelet transform to produce multiple predictions that are diverse in transform phase. The corresponding multiple-phase inverse transform implicitly combines the phase-diverse predictions into a single spatial-domain prediction for motion compensation. The performance advantage of this redundant-wavelet-multihypothesis approach is investigated analytically, invoking the fact that the multiple-phase inverse involves a projection that significantly reduces the power of a dense-motion residual modeled as additive noise. The analysis shows that redundant-wavelet multihypothesis is capable of up to a 7-dB reduction in prediction-residual variance over an equivalent single-phase, single-hypothesis approach. Experimental results substantiate the performance advantage for a block-based implementation.

  12. Modular, Autonomous Command and Data Handling Software with Built-In Simulation and Test

    Science.gov (United States)

    Cuseo, John

    2012-01-01

    The spacecraft system that plays the greatest role throughout the program lifecycle is the Command and Data Handling System (C&DH), along with the associated algorithms and software. The C&DH takes on this role as cost driver because it is the brains of the spacecraft and is the element of the system that is primarily responsible for the integration and interoperability of all spacecraft subsystems. During design and development, many activities associated with mission design, system engineering, and subsystem development result in products that are directly supported by the C&DH, such as interfaces, algorithms, flight software (FSW), and parameter sets. A modular system architecture has been developed that provides a means for rapid spacecraft assembly, test, and integration. This modular C&DH software architecture, which can be targeted and adapted to a wide variety of spacecraft architectures, payloads, and mission requirements, eliminates the current practice of rewriting the spacecraft software and test environment for every mission. This software allows mission-specific software and algorithms to be rapidly integrated and tested, significantly decreasing the time involved in the software development cycle. Additionally, the FSW includes an Onboard Dynamic Simulation System (ODySSy) that allows the C&DH software to support rapid integration and test. With this solution, the C&DH software capabilities will encompass all phases of the spacecraft lifecycle. ODySSy is an on-board simulation capability built directly into the FSW that provides dynamic built-in test capabilities as soon as the FSW image is loaded onto the processor. It includes a six-degree-of-freedom, high-fidelity simulation that allows complete closed-loop and hardware-in-the-loop testing of a spacecraft in a ground processing environment without any additional external stimuli. ODySSy can intercept and modify sensor inputs using mathematical sensor models, and can intercept and respond to actuator

  13. Measuring Software Test Verification for Complex Workpieces based on Virtual Gear Measuring Instrument

    Directory of Open Access Journals (Sweden)

    Yin Peili

    2017-08-01

    Full Text Available Validity and correctness test verification of measuring software has been a thorny issue hindering the development of the Gear Measuring Instrument (GMI). The main reason is that the software itself is difficult to separate from the rest of the measurement system for independent evaluation. This paper presents a Virtual Gear Measuring Instrument (VGMI) to independently validate the measuring software. The triangular patch model with accurately controlled precision was taken as the virtual workpiece and a universal collision detection model was established. The whole process simulation of workpiece measurement is implemented by the VGMI replacing a GMI, and the measuring software is tested in the proposed virtual environment. Taking an involute profile measurement procedure as an example, the validity of the software is evaluated based on the simulation results; meanwhile, experiments using the same measuring software are carried out on an involute master on a GMI. The experimental results indicate consistency of the tooth profile deviation and calibration results, thus verifying the accuracy of the gear measuring system, which includes the measurement procedures. It is shown that the VGMI can be applied in the validation of measuring software, providing a new, ideal platform for testing complex workpiece-measuring software without calibrated artifacts.

  14. Unit Testing for the Application Control Language (ACL) Software

    Science.gov (United States)

    Heinich, Christina Marie

    2014-01-01

    In the software development process, code needs to be tested before it can be packaged for release, in order to make sure the program actually does what it is supposed to do, as well as to check how the program deals with errors and edge cases (such as negative or very large numbers). One of the major parts of the testing process is unit testing, where specific units of the code are tested to make sure each individual part works. This project is about unit testing many different components of the ACL software and fixing any errors encountered. To do this, mocks of other objects need to be created, and every line of code needs to be exercised to make sure every case is accounted for. Mocks are important because they give direct control of the environment a unit lives in, instead of attempting to work with the entire program. This makes it easier to achieve the second goal of exercising every line of code.
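
    A generic sketch of the mocking pattern described (the ACL code itself is not shown in this record), using Python's unittest.mock: the collaborator is replaced by a mock so both the normal path and the error path of the unit can be exercised in a controlled environment:

```python
import unittest
from unittest.mock import Mock

def average_telemetry(source):
    values = source.read_all()            # collaborator we want to mock
    if not values:
        raise ValueError("no telemetry")  # edge case to exercise
    return sum(values) / len(values)

class AverageTelemetryTest(unittest.TestCase):
    def test_normal_case(self):
        src = Mock()
        src.read_all.return_value = [1.0, 2.0, 3.0]
        self.assertAlmostEqual(average_telemetry(src), 2.0)
        src.read_all.assert_called_once()

    def test_empty_case(self):
        src = Mock()
        src.read_all.return_value = []
        with self.assertRaises(ValueError):
            average_telemetry(src)

if __name__ == "__main__":
    unittest.main()
```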

  15. A study of operational and testing reliability in software reliability analysis

    International Nuclear Information System (INIS)

    Yang, B.; Xie, M.

    2000-01-01

    Software reliability is an important aspect of any complex equipment today. Software reliability is usually estimated based on reliability models such as nonhomogeneous Poisson process (NHPP) models. A software system improves during the testing phase, while it normally does not change in the operational phase. Depending on whether the reliability is to be predicted for the testing phase or the operational phase, different measures should be used. In this paper, two different reliability concepts, namely the operational reliability and the testing reliability, are clarified and studied in detail. These concepts have been mixed up or even misused in some existing literature. Using different reliability concepts will lead to different reliability values, which in turn lead to different reliability-based decisions. The difference between the estimated reliabilities is studied and the effect on the optimal release time is investigated
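
    The distinction is easy to state with the standard Goel-Okumoto NHPP model, whose mean value function is m(t) = a(1 - e^(-bt)): testing reliability lets the failure intensity keep decaying during the mission, while operational reliability freezes the intensity at the release time. A sketch with invented parameters:

```python
import math

a, b = 100.0, 0.05          # expected total faults, detection rate (invented)
m = lambda t: a * (1 - math.exp(-b * t))        # mean value function
lam = lambda t: a * b * math.exp(-b * t)        # failure intensity

def testing_reliability(x, t):
    # Debugging continues: intensity keeps decaying during the mission.
    return math.exp(-(m(t + x) - m(t)))

def operational_reliability(x, t):
    # Code frozen at release time t: intensity stays at lambda(t).
    return math.exp(-lam(t) * x)

t, x = 60.0, 10.0           # release after 60 time units, 10-unit mission
print(testing_reliability(x, t), operational_reliability(x, t))
```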

  16. Frameworks for Performing on Cloud Automated Software Testing Using Swarm Intelligence Algorithm: Brief Survey

    Directory of Open Access Journals (Sweden)

    Mohammad Hossain

    2018-04-01

    Full Text Available This paper surveys cloud-based automated software testing tools that are able to perform black-box testing, white-box testing, as well as unit and integration testing as a whole. We discuss a few of the available automated software testing frameworks on the cloud. These frameworks are found to be more efficient and cost-effective because they execute test suites over a distributed cloud infrastructure. One framework's effectiveness was attributed to having a module that accepts manual test cases from users and prioritizes them accordingly. Software testing, in general, accounts for as much as 50% of the total effort of a software development project. To lessen this effort, one of the frameworks discussed in this paper used swarm intelligence algorithms: the Ant Colony Algorithm for complete path coverage to minimize time, and Bee Colony Optimization (BCO) for regression testing to ensure backward compatibility.

  17. Detection of sensor failures in nuclear plants using analytic redundancy

    International Nuclear Information System (INIS)

    Kitamura, M.

    1980-01-01

    A method for on-line, nonperturbative detection and identification of sensor failures in nuclear power plants was studied to determine its feasibility. This method is called analytic redundancy, or functional redundancy. Sensor failure has traditionally been detected by comparing multiple signals from redundant sensors, such as in two-out-of-three logic. In analytic redundancy, with the help of an assumed model of the physical system, the signals from a set of sensors are processed to reproduce the signals from all system sensors
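
    A minimal sketch of the residual-generation idea behind analytic redundancy (the plant model and thresholds here are invented): a physical relation among dissimilar sensors, here total flow = branch1 + branch2, yields a residual whose excursion beyond a noise-based threshold flags a failed sensor without any duplicate hardware:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
branch1 = 5.0 + rng.normal(0, 0.05, n)
branch2 = 3.0 + rng.normal(0, 0.05, n)
total = branch1 + branch2 + rng.normal(0, 0.05, n)

total[120:] += 1.5            # inject a bias failure in the total-flow sensor

residual = total - (branch1 + branch2)     # analytic-redundancy residual
threshold = 4 * 0.05 * np.sqrt(3)          # ~4 sigma of the residual noise
alarms = np.where(np.abs(residual) > threshold)[0]
print("first alarm at sample:", alarms[0] if alarms.size else None)
```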

  18. System support software for TSTA [Tritium Systems Test Assembly

    International Nuclear Information System (INIS)

    Claborn, G.W.; Mann, L.W.; Nielson, C.W.

    1987-10-01

    The fact that Tritium Systems Test Assembly (TSTA) is an experimental facility makes it impossible and undesirable to try to forecast the exact software requirements. Thus the software had to be written in a manner that would allow modifications without compromising the safety requirements imposed by the handling of tritium. This suggested a multi-level approach to the software. In this approach (much like the ISO network model) each level is isolated from the level below and above by cleanly defined interfaces. For example, the subsystem support level interfaces with the subsystem hardware through the software support level. Routines in the software support level provide operations like ''OPEN VALVE'' and ''CLOSE VALVE'' to the subsystem level. This isolates the subsystem level from the actual hardware. This is advantageous because changes can occur in any level without the need for propagating the change to any other level. The TSTA control system consists of the hardware level, the data conversion level, the operator interface level, and the subsystem process level. These levels are described

  19. A New Biobjective Model to Optimize Integrated Redundancy Allocation and Reliability-Centered Maintenance Problems in a System Using Metaheuristics

    Directory of Open Access Journals (Sweden)

    Shima MohammadZadeh Dogahe

    2015-01-01

    Full Text Available A novel integrated model is proposed to optimize the redundancy allocation problem (RAP) and the reliability-centered maintenance (RCM) problem simultaneously. A system of both repairable and nonrepairable components has been considered. In this system, electronic components are nonrepairable while mechanical components are mostly repairable. For nonrepairable components, a redundancy allocation problem is dealt with to determine the optimal redundancy strategy and the number of redundant components to be implemented in each subsystem. In addition, a maintenance scheduling problem is considered for repairable components in order to identify the best maintenance policy and optimize system reliability. Both active and cold standby redundancy strategies have been taken into account for electronic components. Also, the net present value of the secondary cost, including operational and maintenance costs, has been calculated. The problem is formulated as a biobjective mathematical programming model aiming to reach a tradeoff between system reliability and cost. Three metaheuristic algorithms are employed to solve the proposed model: the Nondominated Sorting Genetic Algorithm (NSGA-II), Multiobjective Particle Swarm Optimization (MOPSO), and the Multiobjective Firefly Algorithm (MOFA). Several test problems are solved using the mentioned algorithms to test the efficiency and effectiveness of the solution approaches, and the obtained results are analyzed.

  20. Machine Learning Approach for Software Reliability Growth Modeling with Infinite Testing Effort Function

    Directory of Open Access Journals (Sweden)

    Subburaj Ramasamy

    2017-01-01

    Full Text Available Reliability is one of the quantifiable software quality attributes. Software Reliability Growth Models (SRGMs) are used to assess the reliability achieved at different times of testing. Traditional time-based SRGMs may not be accurate enough in all situations where test effort varies with time. To overcome this lacuna, test effort was used instead of time in SRGMs. In the past, finite test effort functions were proposed, which may not be realistic since, as testing time tends to infinity, the cumulative test effort should also tend to infinity. Hence in this paper, we propose an infinite test effort function in conjunction with a classical Nonhomogeneous Poisson Process (NHPP) model. We use an Artificial Neural Network (ANN) for training the proposed model with software failure data. Here it is possible to get a large set of weights for the same model that describe the past failure data equally well. We use a machine learning approach to select the appropriate set of weights for the model that will describe both the past and the future data well. We compare the performance of the proposed model with existing models using practical software failure data sets. The proposed log-power TEF-based SRGM describes all types of failure data equally well, improves the accuracy of parameter estimation over existing TEFs, and can be used for software release time determination as well.

  1. Software architecture for the ORNL large coil test facility data system

    International Nuclear Information System (INIS)

    Blair, E.T.; Baylor, L.R.

    1986-01-01

    The VAX-based data acquisition system for the International Fusion Superconducting Magnet Test Facility (IFSMTF) at Oak Ridge National Laboratory (ORNL) is a second-generation system that evolved from a PDP-11/60-based system used during the initial phase of facility testing. The VAX-based software represents a layered implementation that provides integrated access to all of the data sources within the system, decoupling end-user data retrieval from various front-end data sources through a combination of software architecture and instrumentation data bases. Independent VAX processes manage the various front-end data sources, each being responsible for controlling, monitoring, acquiring, and disposing data and control parameters for access from the data retrieval software

  2. Reliability of redundant structures of nuclear reactor protection systems

    International Nuclear Information System (INIS)

    Vojnovic, B.

    1983-01-01

    In this paper, the reliability of various redundant structures of PWR protection systems has been analysed. Structures of reactor trip systems as well as of the systems for activation of safety devices have been presented. In all those systems redundancy is achieved by means of so-called majority voting logic ('r out of n' structures). Different redundant devices have been compared concerning the probability of occurrence of safe as well as unsafe failures. (author)
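
    The 'r out of n' reliability referred to here follows from the binomial distribution when channels fail independently with equal probability. A short sketch (the channel reliability value is illustrative):

```python
# Reliability of an "r out of n" majority-voting structure, assuming
# independent channels each with reliability p.
from math import comb

def r_out_of_n(r, n, p):
    # System works if at least r of the n channels work.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r, n + 1))

p = 0.95
for r, n in [(1, 2), (2, 3), (2, 4), (3, 4)]:
    print(f"{r}-out-of-{n}: {r_out_of_n(r, n, p):.6f}")
```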

  3. Diabetes classification using a redundancy reduction preprocessor

    Directory of Open Access Journals (Sweden)

    Áurea Celeste Ribeiro

    Full Text Available Introduction: Diabetes patients can benefit significantly from early diagnosis. Thus, accurate automated screening is becoming increasingly important due to the wide spread of the disease. Previous studies in automated screening have found a maximum accuracy of 92.6%. Methods: This work proposes a classification methodology based on efficient coding of the input data, which is carried out by decreasing input data redundancy using well-known ICA algorithms, such as FastICA, JADE and INFOMAX. The classifier used in the task to discriminate diabetics from nondiabetics is the one-class support vector machine. Classification tests were performed using noninvasive and invasive indicators. Results: The results suggest that redundancy reduction increases one-class support vector machine performance when discriminating between diabetics and nondiabetics, up to an accuracy of 98.47% when using all indicators. By using only noninvasive indicators, an accuracy of 98.28% was obtained. Conclusion: The ICA feature extraction improves the performance of the classifier on the data set because it reduces the statistical dependence of the collected data, which increases the ability of the classifier to find accurate class boundaries.
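
    The pipeline the abstract describes, ICA-based redundancy reduction followed by a one-class SVM, can be sketched with scikit-learn; the study's clinical indicators are not available here, so synthetic data stands in:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)
healthy = rng.normal(0, 1, (200, 8))        # training class (non-diabetic)
mixing = rng.normal(0, 1, (8, 8))
healthy = healthy @ mixing                  # introduce statistical dependence

ica = FastICA(n_components=8, random_state=0)
features = ica.fit_transform(healthy)       # redundancy-reduced features

clf = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(features)

new_samples = rng.normal(0, 1, (50, 8)) @ mixing + 3.0   # shifted "abnormal"
pred = clf.predict(ica.transform(new_samples))           # -1 = outlier
print("flagged as abnormal:", int((pred == -1).sum()), "of", len(pred))
```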

  4. Mining Software Repositories to Study Co-Evolution of Production & Test Code

    NARCIS (Netherlands)

    Zaidman, A.E.; Van Rompaey, B.; Demeyer, S.; Van Deursen, A.

    2008-01-01

    Preprint of paper published in: ICST 2008 - Proceedings of the International Conference on Software Testing, Verification, and Validation, 2008; doi:10.1109/ICST.2008.47 Engineering software systems is a multidisciplinary activity, whereby a number of artifacts must be created — and maintained —

  5. Estimating Rates of Fault Insertion and Test Effectiveness in Software Systems

    Science.gov (United States)

    Nikora, A.; Munson, J.

    1998-01-01

    In developing a software system, we would like to estimate the total number of faults inserted into a software system, the residual fault content of that system at any given time, and the efficacy of the testing activity in executing the code containing the newly inserted faults.

  6. Trophic redundancy reduces vulnerability to extinction cascades.

    Science.gov (United States)

    Sanders, Dirk; Thébault, Elisa; Kehoe, Rachel; Frank van Veen, F J

    2018-03-06

    Current species extinction rates are at unprecedentedly high levels. While human activities can be the direct cause of some extinctions, it is becoming increasingly clear that species extinctions themselves can be the cause of further extinctions, since species affect each other through the network of ecological interactions among them. There is concern that the simplification of ecosystems, due to the loss of species and ecological interactions, increases their vulnerability to such secondary extinctions. It is predicted that more complex food webs will be less vulnerable to secondary extinctions due to greater trophic redundancy that can buffer against the effects of species loss. Here, we demonstrate in a field experiment with replicated plant-insect communities, that the probability of secondary extinctions is indeed smaller in food webs that include trophic redundancy. Harvesting one species of parasitoid wasp led to secondary extinctions of other, indirectly linked, species at the same trophic level. This effect was markedly stronger in simple communities than for the same species within a more complex food web. We show that this is due to functional redundancy in the more complex food webs and confirm this mechanism with a food web simulation model by highlighting the importance of the presence and strength of trophic links providing redundancy to those links that were lost. Our results demonstrate that biodiversity loss, leading to a reduction in redundant interactions, can increase the vulnerability of ecosystems to secondary extinctions, which, when they occur, can then lead to further simplification and run-away extinction cascades. Copyright © 2018 the Author(s). Published by PNAS.

  7. Mobility and Position Error Analysis of a Complex Planar Mechanism with Redundant Constraints

    Science.gov (United States)

    Sun, Qipeng; Li, Gangyan

    2018-03-01

    Nowadays, mechanisms with redundant constraints are widely created and have attracted much attention for their merits. This paper analyzes the role of redundant constraints in a mechanical system. An analysis method for planar linkages with repetitive structures is proposed to obtain the number and type of constraints. According to the differences in applications and constraint characteristics, redundant constraints are divided into theoretical planar redundant constraints and space-planar redundant constraints, and a formula for calculating the number of redundant constraints together with a method for judging their type are derived. The influence of redundant constraints on mechanical performance is then analyzed for a complex mechanism with redundant constraints. Combining theoretical derivation with simulation, an analysis method for the position error of complex mechanisms with redundant constraints is put forward, pointing out how to eliminate or reduce the influence of redundant constraints.
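
    The redundancy count the abstract discusses can be illustrated with the standard Kutzbach-Gruebler mobility criterion for planar mechanisms, M = 3(n - 1) - 2*j1 - j2 + r, where r is the number of redundant constraints (the paper's own mechanism data are not reproduced in the record):

```python
def planar_mobility(n_links, j1, j2, redundant=0):
    # Kutzbach-Gruebler criterion for planar mechanisms: j1 counts lower
    # pairs (1 DOF), j2 counts higher pairs (2 DOF), `redundant` counts
    # constraints that duplicate existing ones.
    return 3 * (n_links - 1) - 2 * j1 - j2 + redundant

# Four-bar linkage: 4 links, 4 revolute (lower) pairs -> M = 1.
print(planar_mobility(4, 4, 0))
# A parallelogram linkage with an extra parallel link: formally M = 0,
# but one constraint is redundant, so the actual mobility is still 1.
print(planar_mobility(5, 6, 0), "->", planar_mobility(5, 6, 0, redundant=1))
```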

  8. Prueba del software: más que una fase en el ciclo de vida/Software testing: more than a stage in the life cycle

    Directory of Open Access Journals (Sweden)

    Edgar Serna

    2011-12-01

    Full Text Available Software testing is probably the least understood part of the software development life cycle. In this work, by means of a four-stage methodological proposal, we show why it is difficult to detect and eliminate errors, why the process of carrying out testing is complex, and why it is necessary to pay testing more attention.

  9. Interaction control of a redundant mobile manipulator

    International Nuclear Information System (INIS)

    Chung, J.H.; Velinsky, S.A.; Hess, R.A.

    1998-01-01

    This paper discusses the modeling and control of a spatial mobile manipulator that consists of a robotic manipulator mounted on a wheeled mobile platform. The Lagrange-d'Alembert formulation is used to obtain a concise description of the dynamics of the system, which is subject to nonholonomic constraints. The complexity of the model is increased by introducing kinematic redundancy, which is created when a multilinked manipulator is used. The kinematic redundancy is resolved by decomposing the mobile manipulator into two subsystems: the mobile platform and the manipulator. The redundancy resolution scheme employs a nonlinear interaction-control algorithm, which is developed and applied to coordinate the two subsystems' controllers. The subsystem controllers are independently designed, based on each subsystem's dynamic characteristics. Simulation results show the promise of the developed algorithm

  10. Data acquisition and test system software

    International Nuclear Information System (INIS)

    Bourgeois, N.A. Jr.

    1979-03-01

    Sandia Laboratories has been assigned the task by the Base and Installation Security Systems (BISS) Program Office to develop various aspects of perimeter security systems. One part of this effort involves the development of advanced signal processing techniques to reduce the false and nuisance alarms from sensor systems while improving the probability of intrusion detection. The need existed for both data acquisition hardware and software. Also, the hardware is used to implement and test the signal processing algorithms in real time. The hardware developed for this signal processing task is the Data Acquisition and Test System (DATS). The programs developed for use on DATS are described. The descriptions are taken directly from the documentation included within the source programs themselves

  11. Testing and Deployment of Software Systems (in practice)

    DEFF Research Database (Denmark)

    Nyborg, Mads; Høgh, Stig

    2014-01-01

    The CDIO concept is now well integrated into many curricula at universities around the world and it has meant an increase in the quality of engineering education. However, the main focus has been on design-build projects and less on the 'C' and 'O' part. In particular, the 'O' part of CDIO has received very little focus, since this is probably the most difficult part to implement in a university environment. Because of this observation, in 2011 we decided to launch a new elective course, 'Testing and deployment of software systems (in practice)', focusing entirely on the 'O' part in CDIO. The aim of this paper is to describe: • the unified software development process and compare this with CDIO; • the activities covering the 'O' part in software engineering; • the course structure and schedule; • the evaluations and comments received from students. The paper concludes that it is possible...

  12. Comparative efficacy and safety of different circumcisions for patients with redundant prepuce or phimosis: A network meta-analysis.

    Science.gov (United States)

    Huang, Chuiguo; Song, Pan; Xu, Changbao; Wang, Ruofan; Wei, Lei; Zhao, Xinghua

    2017-07-01

    Phimosis and redundant prepuce are defined as the inability of the foreskin to be retracted behind the glans penis in uncircumcised males. To synthesize the evidence and provide hierarchies of the different circumcisions for phimosis and redundant prepuce, we performed an overall network meta-analysis (NMA) of their comparative efficacy and safety. Electronic databases, including the PubMed, Embase, Wan Fang, VIP, CNKI and CBM databases, were searched for randomized controlled trials (RCTs) on redundant prepuce or phimosis. We conducted direct and indirect comparisons with the aggregate data drug information system (ADDIS) software. Moreover, consistency models were applied to assess the differences among the male circumcision practices, and rankings based on the probabilities of each intervention for the different endpoints were computed. Node-splitting analysis was used to test inconsistency. Eighteen RCTs with 6179 participants were included. Compared with conventional circumcision (CC), two new styles of circumcision, the disposable circumcision suture device (DCSD) and Shang Ring circumcision (SRC), provided significantly shorter operation time [DCSD: standardized mean difference (SMD) = -20.60, 95% credible interval (CI) (-23.38, -17.82); SRC: SMD = -19.16, 95% CI (-21.86, -16.52)], shorter wound healing time [DCSD: SMD = -4.19, 95% CI (-8.24, -0.04); SRC: SMD = 4.55, 95% CI (1.62, 7.57)] and better postoperative penile appearance [DCSD: odds ratio (OR) = 11.42, 95% CI (3.60, 37.68); SRC: OR = 3.85, 95% CI (1.29, 12.79)]. Additionally, DCSD showed a lower adverse event rate than the other two treatments. However, no significant difference was shown among the surgeries for the 24 h postoperative pain score. Node-splitting analysis showed that no significant inconsistency existed (P > 0.05). Based on the results of the NMA, DCSD may be the most effective and safest choice for phimosis and redundant prepuce. DCSD has the advantages of a shorter operation

  13. Inverse kinematics of redundant systems driver IKORv1.0-2.0 (full space parameterization with orientation control, platform mobility, and portability)

    Energy Technology Data Exchange (ETDEWEB)

    Hacker, C.J.; Fries, G.A.; Pin, F.G.

    1997-01-01

    Few optimization methods exist for path planning of kinematically redundant manipulators. Among these, a universal method is lacking that takes advantage of a manipulator's redundancy while satisfying possibly varying constraints and task requirements. Full Space Parameterization (FSP) is a new method that generates the entire solution space of underspecified systems of algebraic equations and then calculates the unique solution satisfying specific constraints and optimization criteria. The FSP method has been previously tested on several configurations of the redundant manipulator HERMIES-III. This report deals with the extension of the FSP driver, Inverse Kinematics On Redundant systems (IKOR), to include three-dimensional manipulation systems, possibly incorporating a mobile platform, with and without orientation control. The driver was also extended by integrating two optimized versions of the FSP solution generator as well as the ability to easily port to any manipulator. IKOR was first altered to include the ability to handle orientation control and to integrate an optimized solution generator. The resulting system was tested on a 4 degrees-of-redundancy manipulator arm and was found to successfully perform trajectories with least norm criteria while avoiding obstacles and joint limits. Next, the system was adapted and tested on a manipulator arm placed on a mobile platform, yielding 7 degrees of redundancy. After successful testing on least norm trajectories while avoiding obstacles and joint limits, IKORv1.0 was developed. The system was successfully verified using comparisons with a current industry standard, the Moore-Penrose Pseudo-Inverse. Finally, IKORv2.0 was created, which includes both the one shot and two step methods, manipulator portability, integration of a second optimized solution generator, and finally a more robust and usable code design.
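
    The record's baseline comparison, the Moore-Penrose pseudoinverse, is easy to sketch for a redundant arm: the pseudoinverse gives the least-norm joint rates, and a null-space term exploits the remaining redundancy without disturbing the end-effector. The Jacobian below is a random stand-in, not HERMIES-III's:

```python
import numpy as np

rng = np.random.default_rng(4)
J = rng.normal(size=(3, 7))         # 3 task DOF, 7 joints -> 4 redundant DOF
xdot = np.array([0.1, 0.0, -0.05])  # desired end-effector velocity

J_pinv = np.linalg.pinv(J)
qdot_min = J_pinv @ xdot                       # least-norm solution
N = np.eye(7) - J_pinv @ J                     # null-space projector
z = rng.normal(size=7)                         # secondary objective gradient
qdot = qdot_min + N @ z                        # still satisfies J qdot = xdot

print(np.allclose(J @ qdot, xdot))             # True: task unaffected
```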

  14. Application of software technology to automatic test data analysis

    Science.gov (United States)

    Stagner, J. R.

    1991-01-01

    The verification process for a major software subsystem was partially automated as part of a feasibility demonstration. The methods employed are generally useful and applicable to other types of subsystems. The effort resulted in substantial savings in test engineer analysis time and offers a method for inclusion of automatic verification as a part of regression testing.

  15. Joint optimization of redundancy level and spare part inventories

    International Nuclear Information System (INIS)

    Sleptchenko, Andrei; Heijden, Matthieu van der

    2016-01-01

    We consider a “k-out-of-N” system with different standby modes. Each of the N components consists of multiple part types. Upon failure, a component can be repaired within a certain time by replacing the failed part with a spare, if one is available. We develop both an exact and a fast approximate analysis to compute the system availability. Next, we jointly optimize the component redundancy level with the inventories of the various spare parts. We find that our approximations are very accurate and suitable for large systems. We apply our model to a case study at a public organization in Qatar, and find that we can improve the availability-to-cost ratio by reducing the redundancy level and increasing the spare part inventories. In general, high redundancy levels appear to be useful only when components are relatively cheap and part replacement times are high. - Highlights: • We analyze a redundant system (k-out-of-N) with multiple parts and spares. • We jointly optimize the redundancy level and the spare part inventories. • We develop an exact method and an approximation to evaluate the system availability. • Adding spare parts and reducing the redundancy level cuts cost by 50% in a case study. • The availability is not very sensitive to the shape of the failure time distribution.
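
    For orientation, a minimal Python sketch of the basic k-out-of-N availability building block with independent components; this is a simplification of our own that ignores the paper's spares, standby modes, and repair times, and the numbers are hypothetical.

        from math import comb

        def k_out_of_n_availability(n, k, a):
            # System is up when at least k of the n components are up;
            # each component is up independently with probability a.
            return sum(comb(n, m) * a**m * (1 - a)**(n - m)
                       for m in range(k, n + 1))

        # Hypothetical numbers: 4-out-of-6 system, component availability 0.95
        print(k_out_of_n_availability(6, 4, 0.95))   # ~0.998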

  16. Tests of Event Filter Configuration Software

    CERN Multimedia

    Wickens, F.J.

    TDAQ - Tests of Event Filter configuration software. Within Trigger/DAQ a major consideration is how well the performance of the system components scales in going from the small set-ups used for development work to the final system with many hundreds of processors and links. In the case of the Event Filter, which performs the final stage of on-line event selection, plus on-line calibrations and monitoring, more than a thousand processors are envisaged. These processors will be divided into sub-farms; most will be remote from the detector and some could even be at institutes far from CERN. As part of the on-line system it is important that the software in the sub-farms can be reconfigured rapidly as runs start and stop, and that the system be fault tolerant. The flow of data inside a sub-farm involves many processes, for distribution and collection of results in addition to those for event processing itself. Supervision code written in Java has been developed to manage the processes within a cluster, with XML f...

  17. Western aeronautical test range real-time graphics software package MAGIC

    Science.gov (United States)

    Malone, Jacqueline C.; Moore, Archie L.

    1988-01-01

    The master graphics interactive console (MAGIC) software package used on the Western Aeronautical Test Range (WATR) of the NASA Ames Research Center is described. MAGIC is a resident real-time research tool available to flight researchers and scientists in the NASA mission control centers of the WATR at the Dryden Flight Research Facility at Edwards, California. The hardware configuration and capabilities of the real-time software package are also discussed.

  18. REDUNDANT ARRAY CONFIGURATIONS FOR 21 cm COSMOLOGY

    Energy Technology Data Exchange (ETDEWEB)

    Dillon, Joshua S.; Parsons, Aaron R., E-mail: jsdillon@berkeley.edu [Department of Astronomy, UC Berkeley, Berkeley, CA (United States)

    2016-08-01

    Realizing the potential of 21 cm tomography to statistically probe the intergalactic medium before and during the Epoch of Reionization requires large telescopes and precise control of systematics. Next-generation telescopes are now being designed and built to meet these challenges, drawing lessons from first-generation experiments that showed the benefits of densely packed, highly redundant arrays—in which the same mode on the sky is sampled by many antenna pairs—for achieving high sensitivity, precise calibration, and robust foreground mitigation. In this work, we focus on the Hydrogen Epoch of Reionization Array (HERA) as an interferometer with a dense, redundant core designed following these lessons to be optimized for 21 cm cosmology. We show how modestly supplementing or modifying a compact design like HERA’s can still deliver high sensitivity while enhancing strategies for calibration and foreground mitigation. In particular, we compare the imaging capability of several array configurations, both instantaneously (to address instrumental and ionospheric effects) and with rotation synthesis (for foreground removal). We also examine the effects that configuration has on calibratability using instantaneous redundancy. We find that improved imaging with sub-aperture sampling via “off-grid” antennas and increased angular resolution via far-flung “outrigger” antennas is possible with a redundantly calibratable array configuration.
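
    A small Python sketch (ours, using a hypothetical 4x4 square grid rather than HERA's hexagonal layout) of what instantaneous redundancy means for an array: counting how many antenna pairs sample each baseline vector, since identical baselines measure the same sky mode.

        from collections import Counter
        from itertools import combinations

        positions = [(x, y) for x in range(4) for y in range(4)]
        baselines = Counter()
        for (x1, y1), (x2, y2) in combinations(positions, 2):
            baselines[(x2 - x1, y2 - y1)] += 1   # same vector = same sky mode

        print(len(positions) * (len(positions) - 1) // 2)  # 120 antenna pairs...
        print(len(baselines))                    # ...but only 24 unique baselines
        print(baselines[(1, 0)])                 # 12-fold redundant east-west spacing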

  19. SSE software test management STM capability: Using STM in the Ground Systems Development Environment (GSDE)

    Science.gov (United States)

    Church, Victor E.; Long, D.; Hartenstein, Ray; Perez-Davila, Alfredo

    1992-01-01

    This report is one of a series discussing configuration management (CM) topics for Space Station ground systems software development. It provides a description of the Software Support Environment (SSE)-developed Software Test Management (STM) capability, and discusses the possible use of this capability for management of developed software during testing performed on target platforms. This is intended to supplement the formal documentation of STM provided by the SSE Project. How STM can be used to integrate contractor CM and formal CM for software before delivery to operations is described. STM provides a level of control that is flexible enough to support integration and debugging, but sufficiently rigorous to ensure the integrity of the testing process.

  20. Optimal integration and test plans for software releases of lithographic systems

    NARCIS (Netherlands)

    Boumen, R.; Jong, de I.S.M.; Mortel - Fronczak, van de J.M.; Rooda, J.E.

    2007-01-01

    This paper describes a method to determine the optimal integration and test plan for embedded systems software releases. The method consists of four steps: 1)describe the integration and test problem in an integration and test model which is introduced in this paper, 2) determine possible test

  1. A software prototype development of human system interfaces for human factors engineering validation tests of SMART MCR

    International Nuclear Information System (INIS)

    Lim, Jong Tae; Han, Kwan Ho; Yang, Seung Won

    2011-02-01

    An integrated system validation test bed for human factors engineering (HFE) validation testing is being developed. The goal of this study is to develop a software prototype for HFE validation of the SMART MCR design. To achieve this, prototype specifications for the software were first developed. Software prototypes of the alarm reduction logic system, the Plant Protection System, the ESF-CCS, the Elastic Tile Alarm Indication, and EID-based HSIs were then implemented in code. Test procedures for the software prototypes were established to verify the completeness of the implemented code. Careful software testing was carried out according to these test procedures, and the results were documented

  2. Analysis and Design of Offset QPSK Using Redundant Filter Banks

    International Nuclear Information System (INIS)

    Fernandez-Vazquez, Alfonso; Jovanovic-Dolecek, Gordana

    2013-01-01

    This paper considers the analysis and design of OQPSK digital modulation. We first establish the discrete-time formulation, which allows us to find the equivalent redundant filter banks. It is well known that redundant filter banks are related to redundant transformations in frame theory. According to frame theory, redundant transformations and the corresponding representations are not unique. We show that the solution to the pulse shaping problem is therefore not unique, and we use this property to minimize the effect of the channel noise on the reconstructed symbol stream. We evaluate the performance of the digital communication system using numerical examples.
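
    A minimal discrete-time OQPSK sketch in Python, using rectangular pulses only and none of the paper's filter-bank or frame-theoretic machinery, to show the defining half-symbol offset of the quadrature branch:

        import numpy as np

        def oqpsk_baseband(bits, sps=8):
            bits = np.asarray(bits)
            i_sym = 2 * bits[0::2] - 1        # in-phase symbol levels +/-1
            q_sym = 2 * bits[1::2] - 1        # quadrature symbol levels +/-1
            i_wave = np.repeat(i_sym, sps)    # rectangular pulse shaping
            q_wave = np.repeat(q_sym, sps)
            pad = np.zeros(sps // 2)
            return (np.concatenate([i_wave, pad])
                    + 1j * np.concatenate([pad, q_wave]))  # Q delayed half a symbol

        s = oqpsk_baseband([0, 1, 1, 1, 0, 0, 1, 0])
        # I and Q never switch at the same instant, so phase jumps stay at 90 deg or less.
        assert not np.any((np.diff(s.real) != 0) & (np.diff(s.imag) != 0))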

  3. Redundancy for electric motors in spacecraft applications

    Science.gov (United States)

    Smith, Robert J.; Flew, Alastair R.

    1986-01-01

    The parts of electric motors which should be duplicated in order to provide maximum reliability in spacecraft application are identified. Various common types of redundancy are described. The advantages and disadvantages of each are noted. The principal types are illustrated by reference to specific examples. For each example, constructional details, basic performance data and failure modes are described, together with a discussion of the suitability of particular redundancy techniques to motor types.

  4. Dynamic Control of Kinematically Redundant Robotic Manipulators

    Directory of Open Access Journals (Sweden)

    Erling Lunde

    1987-07-01

    Several methods for task space control of kinematically redundant manipulators have been proposed in the literature. Most of these methods are based on a kinematic analysis of the manipulator. In this paper we propose a control algorithm in which we are especially concerned with the manipulator dynamics. The algorithm is particularly well suited for the class of redundant manipulators consisting of a relatively small manipulator mounted on a larger positioning part.

  5. Interactions between facial emotion and identity in face processing: evidence based on redundancy gains.

    Science.gov (United States)

    Yankouskaya, Alla; Booth, David A; Humphreys, Glyn

    2012-11-01

    Interactions between the processing of emotion expression and form-based information from faces (facial identity) were investigated using the redundant-target paradigm, in which we specifically tested whether identity and emotional expression are integrated in a superadditive manner (Miller, Cognitive Psychology 14:247-279, 1982). In Experiments 1 and 2, participants performed emotion and face identity judgments on faces with sad or angry emotional expressions. Responses to redundant targets were faster than responses to either single target when a universal emotion was conveyed, and performance violated the predictions from a model assuming independent processing of emotion and face identity. Experiment 4 showed that these effects were not modulated by varying interstimulus and nontarget contingencies, and Experiment 5 demonstrated that the redundancy gains were eliminated when faces were inverted. Taken together, these results suggest that the identification of emotion and facial identity interact in face processing.
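
    A toy Python sketch of the superadditivity test cited above, Miller's race-model inequality: under independent parallel processing, the redundant-target response-time CDF can never exceed the sum of the single-target CDFs. The reaction times below are hypothetical, not the study's data.

        import numpy as np

        def ecdf(rts, t):
            return np.searchsorted(np.sort(rts), t, side="right") / len(rts)

        rt_emotion   = np.array([520, 540, 560, 580, 600.])   # ms, hypothetical
        rt_identity  = np.array([530, 550, 570, 590, 610.])
        rt_redundant = np.array([470, 490, 500, 515, 530.])

        for t in (500, 550, 600):
            bound = ecdf(rt_emotion, t) + ecdf(rt_identity, t)
            if ecdf(rt_redundant, t) > bound:
                print(f"race model violated at t = {t} ms")  # evidence for coactivation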

  6. Study of redundant Models in reliability prediction of HXMT's HES

    International Nuclear Information System (INIS)

    Wang Jinming; Liu Congzhan; Zhang Zhi; Ji Jianfeng

    2010-01-01

    First, two redundant equipment structures for HXMT's HES are proposed: block backup and dual-system cold redundancy. Reliability is then predicted using the parts count method, and the two proposals are compared and analyzed. It is concluded that a higher reliability and longer service life can be obtained by adopting the block-backup redundant equipment structure. (authors)
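
    For intuition only, a Python sketch comparing the textbook reliability formulas for a powered parallel pair and an ideal cold-standby pair; the failure rate is assumed, and these idealized formulas ignore switching failures and block-level granularity, which is where the report's actual comparison between block backup and dual-system cold redundancy plays out.

        import numpy as np

        lam, t = 1e-5, 8760.0            # assumed constant failure rate (1/h), one year
        r = np.exp(-lam * t)             # single-unit reliability

        hot_parallel = 1 - (1 - r)**2    # two powered units in parallel
        cold_standby = np.exp(-lam * t) * (1 + lam * t)  # ideal standby, perfect switch
        print(hot_parallel, cold_standby)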

  7. 3D seismic denoising based on a low-redundancy curvelet transform

    International Nuclear Information System (INIS)

    Cao, Jingjie; Zhao, Jingtao; Hu, Zhiying

    2015-01-01

    Contamination of seismic signal with noise is one of the main challenges during seismic data processing. Several methods exist for eliminating different types of noises, but optimal random noise attenuation remains difficult. Based on multi-scale, multi-directional locality of curvelet transform, the curvelet thresholding method is a relatively new method for random noise elimination. However, the high redundancy of a 3D curvelet transform makes its computational time and memory for massive data processing costly. To improve the efficiency of the curvelet thresholding denoising, a low-redundancy curvelet transform was introduced. The redundancy of the low-redundancy curvelet transform is approximately one-quarter of the original transform and the tightness of the original transform is also kept, thus the low-redundancy curvelet transform calls for less memory and computational resource compared with the original one. Numerical results on 3D synthetic and field data demonstrate that the low-redundancy curvelet denoising consumes one-quarter of the CPU time compared with the original curvelet transform using iterative thresholding denoising when comparable results are obtained. Thus, the low-redundancy curvelet transform is a good candidate for massive seismic denoising. (paper)

  8. Coherent network detection of gravitational waves: the redundancy veto

    International Nuclear Information System (INIS)

    Wen Linqing; Schutz, Bernard F

    2005-01-01

    A network of gravitational wave detectors is called redundant if, given the direction to a source, the strain induced by a gravitational wave in one or more of the detectors can be fully expressed in terms of the strain induced in others in the network. Because gravitational waves have only two polarizations, any network of three or more differently oriented interferometers with similar observing bands is redundant. The three-armed LISA space interferometer has three outputs that are redundant at low frequencies. The two aligned LIGO interferometers at Hanford WA are redundant, and the LIGO detector at Livingston LA is nearly redundant with either of the Hanford detectors. Redundant networks have a powerful veto against spurious noise, a linear combination of the detector outputs that contains no gravitational wave signal. For LISA, this 'null' output is known as the Sagnac mode, and its use in discriminating between detector noise and a cosmological gravitational wave background is well understood. But the usefulness of the null veto for ground-based detector networks has been ignored until now. We show that it should make it possible to discriminate in a model-independent way between real gravitational waves and accidentally coincident non-Gaussian noise 'events' in redundant networks of two or more broadband detectors. It has been shown that with three detectors, the null output can even be used to locate the direction to the source, and then two other linear combinations of detector outputs give the optimal 'coherent' reconstruction of the two polarization components of the signal. We discuss briefly the implementation of such a detection strategy in realistic networks, where signals are weak, detector calibration is a significant uncertainty, and the various detectors may have different (but overlapping) observing bands

  9. The cellular robustness by genetic redundancy in budding yeast.

    Directory of Open Access Journals (Sweden)

    Jingjing Li

    2010-11-01

    The frequent dispensability of duplicated genes in budding yeast is heralded as a hallmark of genetic robustness contributed by genetic redundancy. However, theoretical predictions suggest such backup by redundancy is evolutionarily unstable, and the extent of genetic robustness contributed from redundancy remains controversial. It is anticipated that, to achieve mutual buffering, the duplicated paralogs must at least share some functional overlap. However, counter-intuitively, several recent studies reported little functional redundancy between these buffering duplicates. The large yeast genetic interactions released recently allowed us to address these issues on a genome-wide scale. We herein characterized the synthetic genetic interactions for ∼500 pairs of yeast duplicated genes originated from either whole-genome duplication (WGD) or small-scale duplication (SSD) events. We established that functional redundancy between duplicates is a pre-requisite and thus is highly predictive of their backup capacity. This observation was particularly pronounced with the use of a newly introduced metric in scoring functional overlap between paralogs on the basis of gene ontology annotations. Even though mutual buffering was observed to be prevalent among duplicated genes, we showed that the observed backup capacity is largely an evolutionarily transient state. The loss of backup capacity generally follows a neutral mode, with the buffering strength decreasing in proportion to divergence time, and the vast majority of the paralogs have already lost their backup capacity. These observations validated previous theoretic predictions about instability of genetic redundancy. However, departing from the general neutral mode, intriguingly, our analysis revealed the presence of natural selection in stabilizing functional overlap between SSD pairs. These selected pairs, both WGD and SSD, tend to have decelerated functional evolution, have higher propensities of co

  10. Redundant nerve roots of the cauda equina : MR findings

    International Nuclear Information System (INIS)

    Oh, Kyu Hyen; Lee, Jung Man; Jung, Hak Young; Lee, Young Hwan; Sung, Nak Kwan; Chung, Duck Soo; Kim, Ok Dong; Lee, Sang Kwon; Suh, Kyung Jin

    1997-01-01

    To evaluate the MR findings of redundant nerve roots (RNR) of the cauda equina, 17 patients with RNR were studied; eight were men and nine were women, and their ages ranged from 46 to 82 (mean 63) years. Diagnoses were established on the basis of T2-weighted sagittal and coronal MRI, which showed a tortuous or coiled configuration of the nerve roots of the cauda equina. MR findings were reviewed for location, magnitude, and signal intensity of redundant nerve roots, and the relationship between the magnitude of redundancy and the severity of lumbar spinal canal stenosis (LSCS) was evaluated. In all 17 patients, MR showed moderate or severe LSCS caused by herniation or bulging of an intervertebral disc, osteophytes from the vertebral body or facet joint, thickening of the ligamentum flavum, degenerative spondylolisthesis, or a combination of these. T2-weighted sagittal and coronal MR images clearly showed the location of RNR of the cauda equina; in 16 patients (94%), these were seen above the level of constriction of the spinal canal, and in one case, they were observed below the level of constriction. T2-weighted axial images showed the thecal sac filled with numerous nerve roots. The magnitude of RNR was mild in six cases (35%), moderate in five cases (30%), and severe in six cases (35%). Compared with normal nerve roots, the RNR signal on T2-weighted images was iso-intense. All patients with severe redundancy showed severe LSCS, but not all cases with severe LSCS showed severe redundancy. Redundant nerve roots of the cauda equina were seen in relatively older patients with moderate or severe LSCS, and T2-weighted MR images were accurate in identifying redundancy of nerve roots and evaluating its magnitude and location

  11. Compliant behaviour of redundant robot arm - experiments with null-space

    Directory of Open Access Journals (Sweden)

    Petrović Petar B.

    2015-01-01

    This paper presents theoretical and experimental aspects of using the Jacobian null space in kinematically redundant robots to achieve kinetostatically consistent control of their compliant behaviour. When the stiffness of the robot endpoint is dominantly influenced by the compliance of the robot joints, the generalized stiffness matrix can be mapped into joint space using an appropriate congruent transformation. The actuation stiffness matrix achieved by this transformation is generally non-diagonal. Off-diagonal elements of the actuation matrix can be generated by redundant actuation only (polyarticular actuators), but such actuation is very difficult to realize in practical technical systems. The approach to solving this problem proposed in this paper is based on the use of kinematic redundancy and the null space of the Jacobian matrix. The developed analytical model was evaluated numerically on a minimal redundant robot with one redundant d.o.f. and experimentally on a 7 d.o.f. Yaskawa SIA 10F robot arm. [Project of the Ministry of Science of the Republic of Serbia, no. TR35007]
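
    A minimal Python sketch (hypothetical Jacobian and stiffness values) of the two ingredients named in the abstract: the congruent mapping of a Cartesian stiffness into joint space, and the Jacobian null-space projector that a redundant arm can exploit.

        import numpy as np

        rng = np.random.default_rng(1)
        J = rng.standard_normal((6, 7))        # hypothetical 6x7 Jacobian: 1 redundant d.o.f.
        Kx = np.diag([500.0, 500.0, 500.0, 50.0, 50.0, 50.0])  # assumed endpoint stiffness

        Kq = J.T @ Kx @ J                      # congruent transformation; generally non-diagonal
        N = np.eye(7) - np.linalg.pinv(J) @ J  # null-space projector of the Jacobian
        print(np.allclose(J @ N, 0))           # joint motions N @ z leave the endpoint at rest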

  12. Temporal information partitioning: Characterizing synergy, uniqueness, and redundancy in interacting environmental variables

    Science.gov (United States)

    Goodwell, Allison E.; Kumar, Praveen

    2017-07-01

    Information theoretic measures can be used to identify nonlinear interactions between source and target variables through reductions in uncertainty. In information partitioning, multivariate mutual information is decomposed into synergistic, unique, and redundant components. Synergy is information shared only when sources influence a target together, uniqueness is information only provided by one source, and redundancy is overlapping shared information from multiple sources. While this partitioning has been applied to provide insights into complex dependencies, several proposed partitioning methods overestimate redundant information and omit a component of unique information because they do not account for source dependencies. Additionally, information partitioning has only been applied to time-series data in a limited context, using basic pdf estimation techniques or a Gaussian assumption. We develop a Rescaled Redundancy measure (Rs) to solve the source dependency issue, and present Gaussian, autoregressive, and chaotic test cases to demonstrate its advantages over existing techniques in the presence of noise, various source correlations, and different types of interactions. This study constitutes the first rigorous application of information partitioning to environmental time-series data, and addresses how noise, pdf estimation technique, or source dependencies can influence detected measures. We illustrate how our techniques can unravel the complex nature of forcing and feedback within an ecohydrologic system with an application to 1 min environmental signals of air temperature, relative humidity, and windspeed. The methods presented here are applicable to the study of a broad range of complex systems composed of interacting variables.
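
    A small Python sketch (the textbook XOR example, not the paper's Rescaled Redundancy measure) of the quantities being partitioned: the single-source and joint mutual informations from which synergy, uniqueness, and redundancy are derived.

        import numpy as np

        def H(p):
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        def mutual_info(pxy):                  # I(X;Y) from a 2-D joint pmf table
            return H(pxy.sum(1)) + H(pxy.sum(0)) - H(pxy.ravel())

        # Joint pmf p(x1, x2, y) with y = XOR(x1, x2) and uniform binary inputs.
        p = np.zeros((2, 2, 2))
        for x1 in (0, 1):
            for x2 in (0, 1):
                p[x1, x2, x1 ^ x2] = 0.25

        print(mutual_info(p.sum(1)))           # I(X1;Y) = 0 bits
        print(mutual_info(p.sum(0)))           # I(X2;Y) = 0 bits
        print(mutual_info(p.reshape(4, 2)))    # I({X1,X2};Y) = 1 bit: purely synergistic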

  13. Maximization of learning speed in the motor cortex due to neuronal redundancy.

    Directory of Open Access Journals (Sweden)

    Ken Takiyama

    2012-01-01

    Many redundancies play functional roles in motor control and motor learning. For example, kinematic and muscle redundancies contribute to stabilizing posture and impedance control, respectively. Another redundancy is the number of neurons themselves; there are overwhelmingly more neurons than muscles, and many combinations of neural activation can generate identical muscle activity. The functional roles of this neuronal redundancy remain unknown. Analysis of a redundant neural network model makes it possible to investigate these functional roles while varying the number of model neurons and holding constant the number of output units. Our analysis reveals that learning speed reaches its maximum value if and only if the model includes sufficient neuronal redundancy. This analytical result does not depend on whether the distribution of the preferred direction is uniform or skewed bimodal, both of which have been reported in neurophysiological studies. Neuronal redundancy maximizes learning speed even if the neural network model includes recurrent connections, a nonlinear activation function, or nonlinear muscle units. Furthermore, our results do not rely on the shape of the generalization function. The results of this study suggest that one of the functional roles of neuronal redundancy is to maximize learning speed.

  14. SLAC FASTBUS Snoop Module: test results and support software

    International Nuclear Information System (INIS)

    Gustavson, D.B.; Walz, H.V.

    1985-09-01

    The development of a diagnostic module for FASTBUS has been completed. The Snoop Module is designed to reside on a Crate Segment and provide high-speed diagnostic monitoring and testing capabilities. Final hardware details and testing of production prototype modules are reported. Features of software under development for a stand-alone single Snoop diagnostic system and Multi-Snoop networks will be discussed. 3 refs., 2 figs

  15. Fuzzy Mutual Information Based min-Redundancy and Max-Relevance Heterogeneous Feature Selection

    Directory of Open Access Journals (Sweden)

    Daren Yu

    2011-08-01

    Feature selection is an important preprocessing step in pattern classification and machine learning, and mutual information is widely used to measure the relevance between features and decision. However, it is difficult to directly calculate the relevance between continuous or fuzzy features using mutual information. In this paper we introduce fuzzy information entropy and fuzzy mutual information for computing the relevance between numerical or fuzzy features and decision. The relationship between fuzzy information entropy and differential entropy is also discussed. Moreover, we combine fuzzy mutual information with the "min-Redundancy-Max-Relevance", "Max-Dependency" and "min-Redundancy-Max-Dependency" algorithms. The performance and stability of the proposed algorithms are tested on benchmark data sets. Experimental results show the proposed algorithms are effective and stable.
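
    A compact Python sketch of the classical (crisp, discrete) greedy min-Redundancy-Max-Relevance selection; the paper's fuzzy-entropy variant would replace the mutual-information estimator below with its fuzzy counterpart. The toy data are hypothetical.

        import numpy as np

        def mi(x, y):
            # Mutual information (bits) between two discrete 1-D arrays.
            joint = np.histogram2d(x, y, bins=(np.unique(x).size, np.unique(y).size))[0]
            p = joint / joint.sum()
            px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
            nz = p > 0
            return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

        def mrmr(X, y, k):
            # Greedy selection: maximize relevance mi(f, y) minus the mean
            # redundancy with the features already selected.
            selected, remaining = [], list(range(X.shape[1]))
            while len(selected) < k:
                def score(f):
                    red = (np.mean([mi(X[:, f], X[:, s]) for s in selected])
                           if selected else 0.0)
                    return mi(X[:, f], y) - red
                best = max(remaining, key=score)
                selected.append(best)
                remaining.remove(best)
            return selected

        # Toy usage: feature 1 duplicates feature 0, so after picking one of
        # them the redundancy penalty steers the second pick to feature 2.
        rng = np.random.default_rng(3)
        X = rng.integers(0, 2, size=(500, 4))
        X[:, 1] = X[:, 0]
        y = X[:, 0] | X[:, 2]
        print(mrmr(X, y, 2))      # typically [0, 2]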

  16. Beautiful Testing Leading Professionals Reveal How They Improve Software

    CERN Document Server

    Goucher, Adam

    2009-01-01

    Successful software depends as much on scrupulous testing as it does on solid architecture or elegant code. But testing is not a routine process, it's a constant exploration of methods and an evolution of good ideas. Beautiful Testing offers 23 essays from 27 leading testers and developers that illustrate the qualities and techniques that make testing an art. Through personal anecdotes, you'll learn how each of these professionals developed beautiful ways of testing a wide range of products -- valuable knowledge that you can apply to your own projects. Here's a sample of what you'll find i

  17. EFFICIENCY OF REDUNDANT QUERY EXECUTION IN MULTI-CHANNEL SERVICE SYSTEMS

    Directory of Open Access Journals (Sweden)

    V. A. Bogatyrev

    2016-03-01

    Subject of Research. The paper analyzes the effectiveness of redundant queries under unreliable computation in computer systems represented by multi-channel queuing systems with a common queue. The objective of the research is to increase the efficiency of request service by executing redundant copies of requests on different devices of a multi-channel system under conditions of computational unreliability. Redundant service of a request requires error-free execution of at least one of its copies. Method. The average time spent in the system with and without redundant requests is estimated using a simple M/M/n queuing model. The presented evaluation of the average waiting time with redundant queries is an upper bound, since it ignores the reduction of the average waiting time that results from the spread of completion times across the different devices. The integrated efficiency of redundant request service is defined by a multiplicative index that accounts for the infallibility of calculations and the average time margin with respect to the maximum tolerated service delay. The estimate of error-free computation with redundant queries requires faultless execution of at least one copy of the request. Main Results. We show that request redundancy improves system efficiency at low request rates (load). We define the boundaries of expediency (efficiency) for redundant request service. We show that efficiency can be increased by adaptively changing the redundancy multiplicity depending on the intensity of the request flow. We find that the choice of service discipline in information service systems is largely determined by
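
    A small Python sketch of the kind of estimate described, using the standard Erlang-C formulas for the mean sojourn time in an M/M/n queue; the rates are assumed, and the redundant-copy case is approximated by inflating the arrival rate, which is exactly why it is an upper bound.

        from math import factorial

        def erlang_c(n, rho):
            # Probability that an arriving request must queue (M/M/n, rho = lam/mu < n).
            s = sum(rho**k / factorial(k) for k in range(n))
            tail = rho**n / (factorial(n) * (1 - rho / n))
            return tail / (s + tail)

        def mean_sojourn(lam, mu, n):
            # Mean time in system W = Wq + 1/mu for the M/M/n queue.
            return erlang_c(n, lam / mu) / (n * mu - lam) + 1 / mu

        lam, mu, n, r = 2.0, 1.0, 6, 2       # assumed rates and redundancy multiplicity
        print(mean_sojourn(lam, mu, n))      # each request executed once
        print(mean_sojourn(r * lam, mu, n))  # r copies per request (upper bound on W)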

  18. Availability analysis of safety grade multiple redundant controller used in advanced nuclear safety systems

    International Nuclear Information System (INIS)

    Son, Kwang Seop; Kim, Dong Hoon; Park, Gee Yong; Kang, Hyun Gook

    2018-01-01

    Highlights: •The multiple redundant controller, SPLC, is configured as a combination of DMR and TMR architectures. •We construct the Markov model of the SPLC using the concept of the system unavailability rate. •To satisfy the availability requirement of a safety grade controller, the fault coverage factor (FCF) should be ≥0.8 and the MTTR of each module should be ≤100 h when the FCF is 0.9. •The availability of the SPLC is better than that of a PLC with the iTMR architecture; however, it is poorer than iTMR when the off-line test and inspection are considered, on the assumption that the MTTR of each module is ≤200 h. -- Abstract: We analyze the availability of the Safety Programmable Logic Controller (SPLC), which has multiple redundant architectures. In the SPLC, the input/output and processor modules are configured as triple modular redundancy (TMR), and the backplane bus, power and communication modules are configured as dual modular redundancy (DMR). The voting logics for the redundant architectures are based on forwarding error detection, meaning that the receivers perform the voting logics based on the status information of the transmitters. To analyze the availability of the SPLC, we construct a Markov model and simplify it by adopting the system unavailability rate. The results show that the fault coverage factor should be ≥0.8 and the Mean Time To Repair (MTTR) should be ≤100 h in order to satisfy the requirement that the availability of a safety grade PLC be ≥0.995. We also evaluate the availability of the SPLC in comparison to other PLCs, such as the simplex, processor DMR (pDMR) and independent TMR (iTMR) PLCs used in existing nuclear safety systems. The availability of the SPLC is higher than those of the simplex and pDMR PLCs, but is lower than that of the iTMR for one month, which is the period of the off-line test and inspection. This is because the number of redundant modules used in a PLC contributes more to increasing the availability than the number of fault masking methods such as voting logics used
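
    A minimal Python sketch of the Markov steady-state computation behind such availability figures; this is a two-state toy of our own, far simpler than the paper's full SPLC model, and the rates are assumed.

        import numpy as np

        lam, mu = 1e-4, 1e-2                 # assumed failure/repair rates (1/h), MTTR = 100 h
        Q = np.array([[-lam,  lam],
                      [  mu,  -mu]])         # generator of the up/down Markov chain
        A = np.vstack([Q.T, np.ones(2)])     # solve pi @ Q = 0 together with sum(pi) = 1
        pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)[0]
        print(pi[0], mu / (lam + mu))        # steady-state availability; both ~0.990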

  19. HITCal: a software tool for analysis of video head impulse test responses.

    Science.gov (United States)

    Rey-Martinez, Jorge; Batuecas-Caletrio, Angel; Matiño, Eusebi; Perez Fernandez, Nicolás

    2015-09-01

    The developed software (HITCal) may be a useful tool in the analysis and measurement of the saccadic video head impulse test (vHIT) responses and with the experience obtained during its use the authors suggest that HITCal is an excellent method for enhanced exploration of vHIT outputs. To develop a (software) method to analyze and explore the vHIT responses, mainly saccades. HITCal was written using a computational development program; the function to access a vHIT file was programmed; extended head impulse exploration and measurement tools were created and an automated saccade analysis was developed using an experimental algorithm. For pre-release HITCal laboratory tests, a database of head impulse tests (HITs) was created with the data collected retrospectively in three reference centers. This HITs database was evaluated by humans and was also computed with HITCal. The authors have successfully built HITCal and it has been released as open source software; the developed software was fully operative and all the proposed characteristics were incorporated in the released version. The automated saccades algorithm implemented in HITCal has good concordance with the assessment by human observers (Cohen's kappa coefficient = 0.7).

  20. Software framework developed for the slice test of the ATLAS endcap muon trigger system

    CERN Document Server

    Komatsu, S; Ishida, Y; Tanaka, K; Hasuko, K; Kano, H; Matsumoto, Y; Yakamura, Y; Sakamoto, H; Ikeno, M; Nakayoshi, K; Sasaki, O; Yasu, Y; Hasegawa, Y; Totsuka, M; Tsuji, S; Maeno, T; Ichimiya, R; Kurashige, H

    2002-01-01

    Sliced system tests of the ATLAS end cap muon level 1 trigger system were performed separately in 2001 and 2002. We developed our own software framework for property and run control for the 2001 slice test. The system is written entirely in C++; multi-PC control is accomplished using CORBA. We then restructured the software system on top of the ATLAS online software framework and used it for the 2002 slice test. In this report we discuss the two systems in detail, with emphasis on module property configuration and run control. (8 refs).

  1. Optimization of robustness of interdependent network controllability by redundant design.

    Directory of Open Access Journals (Sweden)

    Zenghu Zhang

    Controllability of complex networks has been a hot topic in recent years. Real networks are often coupled together as interdependent networks. The cascading process in interdependent networks, including interdependent failure and overload failure, degrades the robustness of controllability for the whole network. Therefore, optimizing the robustness of interdependent network controllability is of great importance in complex network research. In this paper, based on a model of interdependent networks constructed first, we determine the cascading process under different proportions of node attacks. Then, the structural controllability of interdependent networks is measured by the minimum set of driver nodes. Furthermore, we propose a parameter which can be obtained from the structure and minimum driver set of interdependent networks under different proportions of node attacks, and we analyze the robustness of interdependent network controllability. Finally, we optimize the robustness of interdependent network controllability by redundant design, including node backup and redundant edge backup, and improve the redundant design by proposing different strategies according to their cost. Comparative strategies of redundant design are evaluated to find the best one. Results show that node backup and redundant edge backup can indeed decrease the number of nodes suffering failure and improve the robustness of controllability. Considering the cost of redundant design, we should choose BBS (betweenness-based strategy) or DBS (degree-based strategy) for node backup and HDF (high-degree-first) for redundant edge backup. Overall, our proposed strategies are feasible and effective at improving the robustness of interdependent network controllability.

  2. Reactor protection system software test-case selection based on input-profile considering concurrent events and uncertainties

    International Nuclear Information System (INIS)

    Khalaquzzaman, M.; Lee, Seung Jun; Cho, Jaehyun; Jung, Wondea

    2016-01-01

    Recently, input-profile-based testing for safety critical software has been proposed for determining the number of test cases and quantifying the failure probability of the software. The input-profile of reactor protection system (RPS) software is the set of inputs that cause activation of the system for emergency shutdown of a reactor. This paper presents a method to determine the input-profile of RPS software that considers concurrent events/transients. A deviation of a process parameter value begins with an event and increases owing to concurrent multi-events, depending on the correlation of process parameters and the severity of incidents. A case of reactor trip caused by feedwater loss and main steam line break is simulated and analyzed to determine the RPS software input-profile and estimate the number of test cases. Different sizes of main steam line breaks (e.g., small, medium, large break) with total loss of feedwater supply are considered in constructing the input-profile. The uncertainties of the simulation related to input-profile-based software testing are also included. Our study is expected to provide an option for determining test cases and quantifying RPS software failure probability. (author)

  3. Parameter identifiability and redundancy: theoretical considerations.

    Directory of Open Access Journals (Sweden)

    Mark P Little

    BACKGROUND: Models for complex biological systems may involve a large number of parameters. It may well be that some of these parameters cannot be derived from observed data via regression techniques. Such parameters are said to be unidentifiable, the remaining parameters being identifiable. Closely related to this idea is that of redundancy, that a set of parameters can be expressed in terms of some smaller set. Before data are analysed it is critical to determine which model parameters are identifiable or redundant to avoid ill-defined and poorly convergent regression. METHODOLOGY/PRINCIPAL FINDINGS: In this paper we outline general considerations on parameter identifiability, and introduce the notions of weak local identifiability and gradient weak local identifiability. These are based on local properties of the likelihood, in particular the rank of the Hessian matrix. We relate these to the notions of parameter identifiability and redundancy previously introduced by Rothenberg (Econometrica 39 (1971) 577-591) and Catchpole and Morgan (Biometrika 84 (1997) 187-196). Within the widely used exponential family, parameter irredundancy, local identifiability, gradient weak local identifiability and weak local identifiability are shown to be largely equivalent. We consider applications to a recently developed class of cancer models of Little and Wright (Math Biosciences 183 (2003) 111-134) and Little et al. (J Theoret Biol 254 (2008) 229-238) that generalize a large number of other recently used quasi-biological cancer models. CONCLUSIONS/SIGNIFICANCE: We have shown that the previously developed concepts of parameter local identifiability and redundancy are closely related to the apparently weaker properties of weak local identifiability and gradient weak local identifiability--within the widely used exponential family these concepts largely coincide.
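
    A short Python sketch (a toy model of our own choosing, not one of the paper's cancer models) of the rank test that underlies these notions: parameters entering the model only through their product produce a rank-deficient sensitivity matrix, flagging redundancy.

        import numpy as np

        def model(theta, t):
            a, b, c = theta
            return (a * b) * np.exp(-c * t)   # a and b enter only through their product

        t = np.linspace(0.0, 1.0, 20)
        theta0, eps = np.array([1.0, 2.0, 0.5]), 1e-6
        # Sensitivity matrix S_ij = d model(t_i) / d theta_j by central differences.
        S = np.column_stack([
            (model(theta0 + eps * e, t) - model(theta0 - eps * e, t)) / (2 * eps)
            for e in np.eye(3)
        ])
        print(np.linalg.matrix_rank(S))       # 2 < 3: the pair (a, b) is redundant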

  4. Input relegation control for gross motion of a kinematically redundant manipulator

    Energy Technology Data Exchange (ETDEWEB)

    Unseren, M.A.

    1992-10-01

    This report proposes a method for resolving the kinematic redundancy of a serial link manipulator moving in a three-dimensional workspace. The underspecified problem of solving for the joint velocities based on the classical kinematic velocity model is transformed into a well-specified problem. This is accomplished by augmenting the original model with additional equations which relate a new vector variable quantifying the redundant degrees of freedom (DOF) to the joint velocities. The resulting augmented system yields a well specified solution for the joint velocities. Methods for selecting the redundant DOF quantifying variable and the transformation matrix relating it to the joint velocities are presented so as to obtain a minimum Euclidean norm solution for the joint velocities. The approach is also applied to the problem of resolving the kinematic redundancy at the acceleration level. Upon resolving the kinematic redundancy, a rigid body dynamical model governing the gross motion of the manipulator is derived. A control architecture is suggested which according to the model, decouples the Cartesian space DOF and the redundant DOF.
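
    A minimal Python sketch of the augmentation idea (with hypothetical matrices, not the report's particular choice of augmenting rows): stacking extra rows onto the Jacobian turns the underspecified velocity model into a square, uniquely solvable system.

        import numpy as np

        rng = np.random.default_rng(2)
        J = rng.standard_normal((3, 5))       # 3-DOF task, 5 joints: 2 redundant DOF
        G = rng.standard_normal((2, 5))       # added rows quantifying the redundant DOF
        xdot = np.array([0.10, -0.20, 0.05])  # task-space velocity command
        vdot = np.zeros(2)                    # e.g. hold the redundant DOF stationary

        qdot = np.linalg.solve(np.vstack([J, G]), np.concatenate([xdot, vdot]))
        print(np.allclose(J @ qdot, xdot))    # True: task equation satisfied exactly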

  5. The HIE-ISOLDE alignment and monitoring system software and test mock up

    CERN Document Server

    Kautzmann, G; Kadi, Y; Leclercq, Y; Waniorek, S; Williams, L

    2012-01-01

    For the HIE-ISOLDE project a superconducting linac will be built at CERN in the ISOLDE facility area. The linac will be based on the creation and installation of two high-β and four low-β cryomodules: each high-β cryomodule contains five high-β superconducting cavities and one superconducting solenoid; each low-β cryomodule contains six low-β superconducting cavities and two superconducting solenoids. An alignment and monitoring system for the RF cavities and solenoids placed inside the cryomodules is needed to reach the optimum linac working conditions. The alignment system is based on opto-electronics, optics and precise mechanical instrumentation. The geometrical frame configuration, the data acquisition and the 3D adjustment will be managed using a dedicated software application. In parallel to the software development, an alignment system test mock-up has been built for software validation and dimensional tests. This paper will present the software concept and the development status, and then will describe...

  6. Parallel-Processing Test Bed For Simulation Software

    Science.gov (United States)

    Blech, Richard; Cole, Gary; Townsend, Scott

    1996-01-01

    Second-generation Hypercluster computing system is multiprocessor test bed for research on parallel algorithms for simulation in fluid dynamics, electromagnetics, chemistry, and other fields with large computational requirements but relatively low input/output requirements. Built from standard, off-the-shelf hardware readily upgraded as improved technology becomes available. System used for experiments with such parallel-processing concepts as message-passing algorithms, debugging software tools, and computational steering. First-generation Hypercluster system described in "Hypercluster Parallel Processor" (LEW-15283).

  7. Reliability Analysis and Calibration of Partial Safety Factors for Redundant Structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    1998-01-01

    Redundancy is important to include in the design and analysis of structural systems. In most codes of practice redundancy is not directly taken into account. In the paper various definitions of a deterministic and reliability based redundancy measure are reviewed. It is described how redundancy can be included in the safety system and how partial safety factors can be calibrated. An example is presented illustrating how redundancy is taken into account in the safety system in e.g. the Danish codes. The example shows how partial safety factors can be calibrated to comply with the safety level...

  8. Redundancy Optimization for Error Recovery in Digital Microfluidic Biochips

    DEFF Research Database (Denmark)

    Alistar, Mirela; Pop, Paul; Madsen, Jan

    2015-01-01

    Microfluidic-based biochips are replacing the conventional biochemical analyzers, and are able to integrate all the necessary functions for biochemical analysis. The digital microfluidic biochips are based on the manipulation of liquids not as a continuous flow, but as discrete droplets. Researchers have proposed approaches for the synthesis of digital microfluidic biochips, which, starting from a biochemical application and a given biochip architecture, determine the allocation, resource binding, scheduling, placement and routing of the operations in the application. Errors may occur during the execution of the application; we propose an online recovery strategy, which decides during the execution of the biochemical application the introduction of the redundancy required for fault-tolerance. We consider both time redundancy, i.e., re-executing erroneous operations, and space redundancy, i.e., creating redundant droplets...

  9. Management of redundancy in flight control systems using optimal decision theory

    Science.gov (United States)

    1981-01-01

    The problem of using redundancy that exists between dissimilar systems in aircraft flight control is addressed, that is, using the redundancy that exists between a rate gyro and an accelerometer--devices that have dissimilar outputs which are related only through the dynamics of the aircraft motion. Management of this type of redundancy requires advanced logic so that the system can monitor failure status and can reconfigure itself in the event of one or more failures. An optimal decision theory is developed in tutorial fashion for the management of sensor redundancy, and the theory is applied to two aircraft examples. The first example is the space shuttle and the second is a highly maneuvering high performance aircraft--the F8-C. The examples illustrate the redundancy management design process and the performance of the presented algorithms in failure detection and control law reconfiguration.

  10. Functional over-redundancy and high functional vulnerability in global fish faunas on tropical reefs.

    Science.gov (United States)

    Mouillot, David; Villéger, Sébastien; Parravicini, Valeriano; Kulbicki, Michel; Arias-González, Jesus Ernesto; Bender, Mariana; Chabanet, Pascale; Floeter, Sergio R; Friedlander, Alan; Vigliola, Laurent; Bellwood, David R

    2014-09-23

    When tropical systems lose species, they are often assumed to be buffered against declines in functional diversity by the ability of the species-rich biota to display high functional redundancy: i.e., a high number of species performing similar functions. We tested this hypothesis using a ninefold richness gradient in global fish faunas on tropical reefs encompassing 6,316 species distributed among 646 functional entities (FEs): i.e., unique combinations of functional traits. We found that the highest functional redundancy is located in the Central Indo-Pacific with a mean of 7.9 species per FE. However, this overall level of redundancy is disproportionately packed into few FEs, a pattern termed functional over-redundancy (FOR). For instance, the most speciose FE in the Central Indo-Pacific contains 222 species (out of 3,689) whereas 38% of FEs (180 out of 468) have no functional insurance with only one species. Surprisingly, the level of FOR is consistent across the six fish faunas, meaning that, whatever the richness, over a third of the species may still be in overrepresented FEs whereas more than one third of the FEs are left without insurance, these levels all being significantly higher than expected by chance. Thus, our study shows that, even in high-diversity systems, such as tropical reefs, functional diversity remains highly vulnerable to species loss. Although further investigations are needed to specifically address the influence of redundant vs. vulnerable FEs on ecosystem functioning, our results suggest that the promised benefits from tropical biodiversity may not be as strong as previously thought.

  11. WLS software for the Los Alamos geophysical instrumentation truck

    International Nuclear Information System (INIS)

    Ideker, C.D.; LaDelfe, C.M.

    1985-01-01

    Los Alamos National Laboratory's capabilities for special downhole geophysical well logging have increased steadily over the past few years. Software was originally developed for each individual tool as it became operational. With little or no standardization for tool software modules, software development became redundant, time consuming, and cost ineffective. With long-term use and the rapid evolution of well logging capacity in mind, Los Alamos and EG&G personnel decided to purchase a software system. The system was designed to offer: wide-range use and programming flexibility; standardized subroutines for tool module development; user-friendly operation which would reduce training time; operator error checking and alarm activation; maximum growth capacity for new tools as they are added to the inventory; and the ability to incorporate changes made to the computer operating system and hardware. The end result is a sophisticated and flexible software tool for transferring downhole geophysical measurement data to computer disk files. This paper outlines the need, design, development, and implementation of the WLS software for geophysical data acquisition. A demonstration and working examples are included in the presentation

  12. Multisensory processing in the redundant-target effect

    DEFF Research Database (Denmark)

    Gondan, Matthias; Niederhaus, Birgit; Rösler, Frank

    2005-01-01

    Participants respond more quickly to two simultaneously presented target stimuli of two different modalities (redundant targets) than would be predicted from their reaction times to the unimodal targets. To examine the neural correlates of this redundant-target effect, event-related potentials...... (ERPs) were recorded to auditory, visual, and bimodal standard and target stimuli presented at two locations (left and right of central fixation). Bimodal stimuli were combinations of two standards, two targets, or a standard and a target, presented either from the same or from different locations...

  13. Software FMEA analysis for safety-related application software

    International Nuclear Information System (INIS)

    Park, Gee-Yong; Kim, Dong Hoon; Lee, Dong Young

    2014-01-01

    Highlights: • We develop a modified FMEA analysis suited for applying to software architecture. • A template for failure modes on a specific software language is established. • A detailed-level software FMEA analysis on nuclear safety software is presented. - Abstract: A method of a software safety analysis is described in this paper for safety-related application software. The target software system is a software code installed at an Automatic Test and Interface Processor (ATIP) in a digital reactor protection system (DRPS). For the ATIP software safety analysis, at first, an overall safety or hazard analysis is performed over the software architecture and modules, and then a detailed safety analysis based on the software FMEA (Failure Modes and Effect Analysis) method is applied to the ATIP program. For an efficient analysis, the software FMEA analysis is carried out based on the so-called failure-mode template extracted from the function blocks used in the function block diagram (FBD) for the ATIP software. The software safety analysis by the software FMEA analysis, being applied to the ATIP software code, which has been integrated and passed through a very rigorous system test procedure, is proven to be able to provide very valuable results (i.e., software defects) that could not be identified during various system tests

  14. The nightly build and test system for LCG AA and LHCb software

    CERN Document Server

    Kruzelecki, K; Degaudenzi, H

    2010-01-01

    The core software stack, both from the LCG Application Area and LHCb, consists of more than 25 C++/Fortran/Python projects built for about 20 different configurations on Linux, Windows and MacOSX. To these projects one can also add about 70 external software packages (Boost, Python, Qt, CLHEP, ...) which also have to be built for the same configurations. In order to reduce the development cycle time and improve quality assurance, a framework has been developed for the daily (nightly, actually) build and test of the software. Performing the build and the tests on several configurations and platforms increases the efficiency of the unit and integration tests. Main features: - flexible and fine grained setup (full, partial build) through a web interface; - possibility to build several “slots” with different configurations; - precise and highly granular reports on a web server; - support for CMT projects (but not only) with their cross-dependencies; - scalable client-server architecture for ...

  15. Launch Control System Software Development System Automation Testing

    Science.gov (United States)

    Hwang, Andrew

    2017-01-01

    The Spaceport Command and Control System (SCCS) is the National Aeronautics and Space Administration's (NASA) launch control system for the Orion capsule and Space Launch System, the next generation manned rocket currently in development. This system requires high quality testing that will measure and test the capabilities of the system. For the past two years, the Exploration and Operations Division at Kennedy Space Center (KSC) has assigned a group including interns and full-time engineers to develop automated tests to save the project time and money. The team worked on automating the testing process for the SCCS GUI that would use streamed simulated data from the testing servers to produce data, plots, statuses, etc. to the GUI. The software used to develop automated tests included an automated testing framework and an automation library. The automated testing framework has a tabular-style syntax, which means the functionality of a line of code must have the appropriate number of tabs for the line to function as intended. The header section contains either paths to custom resources or the names of libraries being used. The automation library contains functionality to automate anything that appears on a desired screen with the use of image recognition software to detect and control GUI components. The data section contains any data values strictly created for the current testing file. The body section holds the tests that are being run. The function section can include any number of functions that may be used by the current testing file or any other file that resources it. The resources and body section are required for all test files; the data and function sections can be left empty if the data values and functions being used are from a resourced library or another file. To help equip the automation team with better tools, the Project Lead of the Automated Testing Team, Jason Kapusta, assigned the task to install and train an optical character recognition (OCR

  16. Redundant measurements for controlling errors

    International Nuclear Information System (INIS)

    Ehinger, M.H.; Crawford, J.M.; Madeen, M.L.

    1979-07-01

    Current federal regulations for nuclear materials control require consideration of operating data as part of the quality control program and limits of error propagation. Recent work at the BNFP has revealed that operating data are subject to a number of measurement problems which are very difficult to detect and even more difficult to correct in a timely manner. Thus error estimates based on operational data reflect those problems. During the FY 1978 and FY 1979 R and D demonstration runs at the BNFP, redundant measurement techniques were shown to be effective in detecting these problems to allow corrective action. The net effect is a reduction in measurement errors and a significant increase in measurement sensitivity. Results show that normal operation process control measurements, in conjunction with routine accountability measurements, are sensitive problem indicators when incorporated in a redundant measurement program

  17. Quantum Darwinism: Entanglement, branches, and the emergent classicality of redundantly stored quantum information

    International Nuclear Information System (INIS)

    Blume-Kohout, Robin; Zurek, Wojciech H.

    2006-01-01

    We lay a comprehensive foundation for the study of redundant information storage in decoherence processes. Redundancy has been proposed as a prerequisite for objectivity, the defining property of classical objects. We consider two ensembles of states for a model universe consisting of one system and many environments: the first consisting of arbitrary states, and the second consisting of 'singly branching' states consistent with a simple decoherence model. Typical states from the random ensemble do not store information about the system redundantly, but information stored in branching states has a redundancy proportional to the environment's size. We compute the specific redundancy for a wide range of model universes, and fit the results to a simple first-principles theory. Our results show that the presence of redundancy divides information about the system into three parts: classical (redundant); purely quantum; and the borderline, undifferentiated or 'nonredundant', information

  18. Quantum Darwinism: Entanglement, branches, and the emergent classicality of redundantly stored quantum information

    Science.gov (United States)

    Blume-Kohout, Robin; Zurek, Wojciech H.

    2006-06-01

    We lay a comprehensive foundation for the study of redundant information storage in decoherence processes. Redundancy has been proposed as a prerequisite for objectivity, the defining property of classical objects. We consider two ensembles of states for a model universe consisting of one system and many environments: the first consisting of arbitrary states, and the second consisting of “singly branching” states consistent with a simple decoherence model. Typical states from the random ensemble do not store information about the system redundantly, but information stored in branching states has a redundancy proportional to the environment’s size. We compute the specific redundancy for a wide range of model universes, and fit the results to a simple first-principles theory. Our results show that the presence of redundancy divides information about the system into three parts: classical (redundant); purely quantum; and the borderline, undifferentiated or “nonredundant,” information.

  19. Kinematics and control of redundant robotic arm based on dielectric elastomer actuators

    Science.gov (United States)

    Branz, Francesco; Antonello, Andrea; Carron, Andrea; Carli, Ruggero; Francesconi, Alessandro

    2015-04-01

    Soft robotics is a promising field and its application to space mechanisms could represent a breakthrough in space technologies by enabling new operative scenarios (e.g. soft manipulators, capture systems). Dielectric Elastomer Actuators have been studied intensively for a number of years and have shown several advantages that could be of key importance for space applications. Among such advantages the most notable are high conversion efficiency, distributed actuation, self-sensing capability, multi-degree-of-freedom design, light weight and low cost. The considerable potential of double cone actuators has been demonstrated in terms of performance (i.e. stroke and force/torque), ease of manufacturing and durability. In this work the kinematic, dynamic and control design of a two-joint redundant robotic arm is presented. Two double cone actuators are assembled in series to form a two-link design. Each joint has two degrees of freedom (one rotational and one translational) for a total of four. The arm is designed to move in a 2-D environment (i.e. the horizontal plane) with 4 DoF, consequently having two degrees of redundancy. The redundancy is exploited in order to minimize the joint loads. The kinematic design with redundant Jacobian inversion is presented. The selected control algorithm is described along with the results of a number of dynamic simulations that have been executed for performance verification. Finally, an experimental setup is presented, based on a flexible structure that counteracts gravity during testing in order to better emulate future zero-gravity applications.
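
    A common way to realize redundant Jacobian inversion is the pseudoinverse-plus-null-space form; the sketch below assumes that standard formulation (the paper's exact controller is not reproduced, and the Jacobian, task velocity and load gradient are illustrative):

```python
# Sketch of redundancy resolution: the pseudoinverse tracks the task-space
# velocity while the null-space projector uses the two redundant DoF for a
# secondary goal such as reducing joint loads.
import numpy as np

def redundant_velocities(J, x_dot, grad_h, k=1.0):
    """q_dot = J+ x_dot + (I - J+ J) z, with z descending a load measure h."""
    J_pinv = np.linalg.pinv(J)               # pseudoinverse of the Jacobian
    N = np.eye(J.shape[1]) - J_pinv @ J      # null-space projector
    return J_pinv @ x_dot - k * (N @ grad_h)

J = np.random.rand(2, 4)       # 2-D task, 4 joint DoF -> 2 redundant DoF
x_dot = np.array([0.05, 0.0])  # desired end-effector velocity
grad_h = np.random.rand(4)     # gradient of a joint-load cost (illustrative)
print(redundant_velocities(J, x_dot, grad_h))
```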

  20. SigmaPlot 2000, Version 6.00, SPSS Inc. Computer Software Test Plan

    Energy Technology Data Exchange (ETDEWEB)

    HURLBUT, S.T.

    2000-10-24

    SigmaPlot is a vendor software product used in conjunction with the supercritical fluid extraction Fourier transform infrared spectrometer (SFE-FTIR) system. This product converts the raw spectral data to useful area numbers. SigmaPlot will be used in conjunction with procedure ZA-565-301, ''Determination of Moisture by Supercritical Fluid Extraction and Infrared Detection.'' This test plan will be performed in conjunction with or prior to HNF-6936, ''HA-53 Supercritical Fluid Extraction System Acceptance Test Plan'', to perform analyses for water. The test will ensure that the software can be installed properly and will manipulate the analytical data correctly.

  1. The nightly build and test system for LCG AA and LHCb software

    International Nuclear Information System (INIS)

    Kruzelecki, Karol; Roiser, Stefan; Degaudenzi, Hubert

    2010-01-01

    The core software stack both from the LCG Application Area and LHCb consists of more than 25 C++/Fortran/Python projects built for about 20 different configurations on Linux, Windows and MacOSX. To these projects, one can also add about 70 external software packages (Boost, Python, Qt, CLHEP, ...) which also have to be built for the same configurations. In order to reduce the time of the development cycle and assure the quality, a framework has been developed for the daily (in fact nightly) build and test of the software. Performing the build and the tests on several configurations and platforms increases the efficiency of the unit and integration tests. Main features: - flexible and fine grained setup (full, partial build) through a web interface; - possibility to build several 'slots' with different configurations; - precise and highly granular reports on a web server; - support for CMT projects (but not only) with their cross-dependencies; - scalable client-server architecture for the control machine and its build machines; - copy of the results in a common place to allow early view of the software stack. The nightly build framework is written in Python for portability and it is easily extensible to accommodate new build procedures.

  2. Software for Displaying High-Frequency Test Data

    Science.gov (United States)

    Elmore, Jason L.

    2003-01-01

    An easy-to-use, intuitive computer program was written to satisfy a need of test operators and data requestors to quickly view and manipulate high-frequency test data recorded at the East and West Test Areas at Marshall Space Flight Center. By enabling rapid analysis, this program makes it possible to reduce times between test runs, thereby potentially reducing the overall cost of test operations. The program can be used to perform quick frequency analysis, using multiple fast-Fourier-transform windowing and amplitude options. The program can generate amplitude-versus-time plots with full zoom capabilities, frequency-component plots at specified time intervals, and waterfall plots (plots of spectral intensity versus frequency at successive small time intervals, showing the changing frequency components over time). There are options for printing of the plots and saving plot data as text files that can be imported into other application programs. The program can perform all of the aforementioned plotting and plot-data-handling functions on a relatively inexpensive computer; other software that performs the same functions requires computers with large amounts of power and memory.
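
    The waterfall computation described above (spectral intensity versus frequency over successive short time slices) can be sketched in a few lines; the window choice, sizes and sample signal below are illustrative assumptions, not the program's actual implementation:

```python
# Sketch of a waterfall/spectrogram: successive windowed FFTs over short
# time slices give spectral magnitude versus frequency at each interval.
import numpy as np

def waterfall(signal, fs, nfft=1024, hop=512):
    """Return (times, freqs, magnitudes) for a waterfall plot."""
    win = np.hanning(nfft)
    starts = range(0, len(signal) - nfft, hop)
    mags = np.array([np.abs(np.fft.rfft(signal[i:i + nfft] * win))
                     for i in starts])
    times = np.array([i / fs for i in starts])
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    return times, freqs, mags

fs = 25600.0                                   # hypothetical sample rate
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * (1000 + 2000 * t) * t)  # swept tone as test input
print(waterfall(x, fs)[2].shape)               # (frames, frequency bins)
```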

  3. Quantum Darwinism Requires an Extra-Theoretical Assumption of Encoding Redundancy

    Science.gov (United States)

    Fields, Chris

    2010-10-01

    Observers restricted to the observation of pointer states of apparatus cannot conclusively demonstrate that the pointer of an apparatus A registers the state of a system of interest S without perturbing S. Observers cannot, therefore, conclusively demonstrate that the states of a system S are redundantly encoded by pointer states of multiple independent apparatus without destroying the redundancy of encoding. The redundancy of encoding required by quantum Darwinism must, therefore, be assumed from outside the quantum-mechanical formalism and without the possibility of experimental demonstration.

  4. The software testing of PPS for Shin Ulchin nuclear power plant units 1 and 2

    International Nuclear Information System (INIS)

    Kang, Dong Pa; Park, Cheol Lak; Cho, Chang Hui; Sohn, Se Do; Baek, Seung Min

    2012-01-01

    The testing of software (S/W) is the process of analyzing a software item to detect the differences between existing and required conditions and to evaluate the features of the software item. This paper introduces the S/W testing of the Plant Protection System (PPS), a safety system which actuates the Reactor Trip (RT) and Engineered Safety Features (ESF) for Shin Ulchin Nuclear Power Plant Units 1 and 2 (SUN 1 and 2)

  5. Adaptive increase in force variance during fatigue in tasks with low redundancy.

    Science.gov (United States)

    Singh, Tarkeshwar; S K M, Varadhan; Zatsiorsky, Vladimir M; Latash, Mark L

    2010-11-26

    We tested a hypothesis that fatigue of an element (a finger) leads to an adaptive neural strategy that involves an increase in force variability in the other finger(s) and an increase in co-variation of commands to fingers to keep total force variability relatively unchanged. We tested this hypothesis using a system with small redundancy (two fingers) and a marginally redundant system (with an additional constraint related to the total moment of force produced by the fingers, unstable condition). The subjects performed isometric accurate rhythmic force production tasks by the index (I) finger and two fingers (I and middle, M) pressing together before and after a fatiguing exercise by the I finger. Fatigue led to a large increase in force variance in the I-finger task and a smaller increase in the IM-task. We quantified two components of variance in the space of hypothetical commands to fingers, finger modes. Under both stable and unstable conditions, there was a large increase in the variance component that did not affect total force and a much smaller increase in the component that did. This resulted in an increase in an index of the force-stabilizing synergy. These results indicate that marginal redundancy is sufficient to allow the central nervous system to use adaptive increase in variability to shield important variables from effects of fatigue. We offer an interpretation of these results based on a recent development of the equilibrium-point hypothesis known as the referent configuration hypothesis.
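
    The two variance components described above can be illustrated with a toy decomposition for a two-finger total-force task. The data are synthetic, and real analyses work in finger-mode space, so this is a simplified sketch of the idea rather than the authors' method:

```python
# Deviations along (1, -1) leave total force unchanged (the "good",
# synergy component); deviations along (1, 1) change total force.
import numpy as np

rng = np.random.default_rng(0)
shared = rng.normal(0.0, 0.5, 200)                 # co-varied command noise
f = np.column_stack([2.0 + shared + rng.normal(0, 0.1, 200),
                     2.0 - shared + rng.normal(0, 0.1, 200)])  # finger forces

dev = f - f.mean(axis=0)
v_good = np.var(dev @ np.array([1, -1]) / np.sqrt(2))  # no effect on total
v_bad = np.var(dev @ np.array([1, 1]) / np.sqrt(2))    # changes total force
synergy_index = (v_good - v_bad) / (v_good + v_bad)    # > 0: stabilizing
print(v_good, v_bad, synergy_index)
```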

  6. Exploration of joint redundancy but not task space variability facilitates supervised motor learning.

    Science.gov (United States)

    Singh, Puneet; Jana, Sumitash; Ghosal, Ashitava; Murthy, Aditya

    2016-12-13

    The number of joints and muscles in a human arm is more than what is required for reaching to a desired point in 3D space. Although previous studies have emphasized how such redundancy and the associated flexibility may play an important role in path planning, control of noise, and optimization of motion, whether and how redundancy might promote motor learning has not been investigated. In this work, we quantify redundancy space and investigate its significance and effect on motor learning. We propose that a larger redundancy space leads to faster learning across subjects. We observed this pattern in subjects learning novel kinematics (visuomotor adaptation) and dynamics (force-field adaptation). Interestingly, we also observed differences in the redundancy space between the dominant hand and nondominant hand that explained differences in the learning of dynamics. Taken together, these results provide support for the hypothesis that redundancy aids in motor learning and that the redundant component of motor variability is not noise.

  7. Void fraction instrument software, Version 1.2, Acceptance test report

    International Nuclear Information System (INIS)

    Gimera, M.

    1995-01-01

    This document provides the acceptance test report for the void fraction instrument software, Version 1.2. The void fraction instrument will collect data that will be used to calculate the quantity of gas trapped in waste tanks

  8. Redundancy scheme for multi-layered accelerator control system

    International Nuclear Information System (INIS)

    Chauhan, Amit; Fatnani, Pravin

    2009-01-01

    The control system for SRS Indus-2 has a three-layered architecture. There are VMEbus based stations at the lower two layers that are controlled by their respective CPU board. The 'Profibus' fieldbus standard is used for communication between these VME stations distributed in the field. There is a Profibus controller board at each station to implement the communication protocol. The mode of communication is master-slave (command-response) type. This paper proposes a scheme to implement redundancy at the lower two layers, namely Layer-2 (Supervisory Layer / Profibus-master) and Layer-3 (Equipment Unit Interface Layer / Profibus-slave). The redundancy is for both the CPU and the communication board. The scheme uses two CPU boards and two Profi controller boards at each L-3 station. This helps in decreasing any downtime resulting from faults in either the CPU or the communication boards placed in the field area. Redundancy of Profi boards provides two active communication channels between the stations that can be used in different ways, thereby increasing the availability of a communication link. Redundancy of CPU boards provides a certain level of auto fault-recovery, as one CPU remains active and the other CPU remains in standby mode, taking over control of the VMEbus in case of any fault in the main CPU. (author)
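
    A toy sketch of the active/standby CPU behaviour described above (heartbeat supervision with bus takeover); this is illustrative, not the Indus-2 implementation, and the timeout value is an assumption:

```python
# The standby CPU monitors a heartbeat from the main CPU and claims
# control of the VMEbus when the heartbeat goes stale.
import time

HEARTBEAT_TIMEOUT = 0.5  # seconds; hypothetical value

class StandbyCpu:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.active = False

    def on_heartbeat(self):
        """Called whenever the main CPU's heartbeat message arrives."""
        self.last_heartbeat = time.monotonic()

    def poll(self):
        """Periodic check; take over the bus if the main CPU went silent."""
        stale = time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT
        if not self.active and stale:
            self.active = True  # assume VMEbus mastership
            print("standby CPU assuming bus mastership")
```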

  9. Techno-Economic Assessment of Redundancy Systems for a Cogeneration Plant

    Directory of Open Access Journals (Sweden)

    Majid Mohd Amin Abd

    2014-07-01

    The use of distributed power generation has advantages as well as disadvantages. One of the disadvantages is that the plant requires a dependable redundancy system to provide back-up power during failures of its power generation equipment. This paper presents a techno-economic assessment of redundancy systems for a cogeneration plant. Three redundancy systems were investigated, using the public utility, a generator set, or a gas turbine as back-up during failures. Results from the analysis indicate that using the public utility provides technical as well as economic advantages in comparison to using a generator set or gas turbine as back-up. However, the economic advantage of the public utility depends on the frequency of failures the plant will experience, as well as on the maximum demand charge. From the break-even analysis of the studied plant, if the number of failures exceeds 3 failures per year for the case of a maximum demand charge of RM56.80, it is more economical to install a generator set as redundancy. The study will be useful for cogeneration operators evaluating the feasibility of redundancy systems.
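
    The break-even reasoning can be sketched as a one-line computation; all figures below are invented for illustration and are not the study's data:

```python
# A generator set carries a fixed annual ownership cost, while backing up
# from the public utility incurs a demand-related charge per failure event.
def breakeven_failures(genset_annual_cost, charge_per_failure):
    """Failures per year above which owning a genset becomes cheaper."""
    return genset_annual_cost / charge_per_failure

# Hypothetical numbers: RM30000/yr genset cost, RM10000 per failure event.
print(breakeven_failures(30000.0, 10000.0))  # -> 3.0 failures per year
```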

  10. Testing the race inequality

    DEFF Research Database (Denmark)

    Gondan, Matthias; Heckel, A.

    2008-01-01

    In speeded response tasks with redundant signals, parallel processing of the redundant signals is generally tested using the so-called race inequality. The race inequality states that the distribution of fast responses for a redundant stimulus never exceeds the summed distributions of fast...
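
    For reference, the race inequality tested in such studies is commonly stated (following Miller's race model inequality) as the bound below:

```latex
% Race model inequality: for every time t, the cumulative response-time
% distribution for the redundant stimulus AB is bounded by the sum of the
% single-stimulus distributions.
\[
  F_{AB}(t) \;\le\; F_{A}(t) + F_{B}(t) \qquad \text{for all } t,
\]
% where F_{AB}, F_A, F_B are the response-time distribution functions for
% the redundant stimulus and the two single stimuli.
```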

  11. Architecture and method for optimization of cloud resources used in software testing

    Directory of Open Access Journals (Sweden)

    Joana Coelho Vigário

    2016-03-01

    Nowadays systems can evolve quickly, and this growth is accompanied by, for example, the production of new features or even a change of system perspective required by the stakeholders. These conditions require software testing in order to validate the systems. Running a large battery of tests sequentially can take hours. However, tests can run faster in a distributed environment with rapid availability of pre-configured systems, such as cloud computing. There is increasing demand for automation of the entire process, including integration, build, running tests and management of cloud resources. This paper aims to demonstrate the applicability of the continuous integration (CI) practice to Information Systems, automating the build and the software testing performed in a distributed cloud-computing environment, in order to achieve optimization and elasticity of the resources provided by the cloud.

  12. Design and implementation of a software tool intended for simulation and test of real time codes

    International Nuclear Information System (INIS)

    Le Louarn, C.

    1986-09-01

    The objective of real-time software testing is to reveal processing errors and unmet functional requirements or timing constraints in a code. From the perspective of the safety analysis of nuclear power plant equipment, testing should be carried out independently of the physical process (which is generally not available), and random hardware failures must be considered. We propose here a simulation and test tool, implemented entirely in software, with extensive interactive possibilities, for testing assembly code running on a microprocessor. The OST (outil d'aide a la simulation et au Test de logiciels temps reel) simulates code execution and the behaviour of the hardware or software environment. Test execution is closely monitored and much useful information is automatically saved. After surveying methods and tools dedicated to real-time software, this thesis details the OST system. We show the internal mechanisms and objects of the system: in particular 'events' (which describe evolutions of the system under test) and mnemonics (which describe the variables). Then, we detail the interactive means available to the user for constructing the test data and the environment of the tested software. Finally, a prototype implementation is presented along with the results of the tests carried out. This demonstrates the many advantages of an automatic tool over manual investigation. In conclusion, further developments necessary to complete the final tool are reviewed [fr]

  13. Space-Based Reconfigurable Software Defined Radio Test Bed Aboard International Space Station

    Science.gov (United States)

    Reinhart, Richard C.; Lux, James P.

    2014-01-01

    The National Aeronautics and Space Administration (NASA) recently launched a new software defined radio research test bed to the International Space Station. The test bed, sponsored by the Space Communications and Navigation (SCaN) Office within NASA, is referred to as the SCaN Testbed. The SCaN Testbed is a highly capable communications system, composed of three software defined radios, integrated into a flight system, and mounted to the truss of the International Space Station. Software defined radios offer the future promise of in-flight reconfigurability, autonomy, and eventually cognitive operation. The adoption of software defined radios offers space missions a new way to develop and operate space transceivers for communications and navigation. Reconfigurable or software defined radios with communications and navigation functions implemented in software or VHDL (Very High Speed Hardware Description Language) provide the capability to change the functionality of the radio during development or after launch. The ability to change the operating characteristics of a radio through software once deployed to space offers the flexibility to adapt to new science opportunities, recover from anomalies within the science payload or communication system, and potentially reduce development cost and risk by adapting generic space platforms to meet specific mission requirements. The software defined radios on the SCaN Testbed are each compliant to NASA's Space Telecommunications Radio System (STRS) Architecture. The STRS Architecture is an open, non-proprietary architecture that defines interfaces for the connections between radio components. It provides an operating environment to abstract the communication waveform application from the underlying platform specific hardware such as digital-to-analog converters, analog-to-digital converters, oscillators, RF attenuators, automatic gain control circuits, FPGAs, general-purpose processors, etc. and the interconnections among

  14. A Multi-objective PMU Placement Method Considering Observability and Measurement Redundancy using ABC Algorithm

    Directory of Open Access Journals (Sweden)

    KULANTHAISAMY, A.

    2014-05-01

    This paper presents a Multi-objective Optimal Placement of Phasor Measurement Units (MOPP) method for large electric transmission systems. It simultaneously minimizes the number of Phasor Measurement Units (PMUs) required for complete system observability and maximizes the measurement redundancy of the system. Measurement redundancy here means the number of times a bus is monitored more than once by the PMU set. A higher level of measurement redundancy can maximize the total system observability, which is desirable for reliable power system state estimation. Therefore, simultaneous optimization of the two conflicting objectives is performed using a binary-coded Artificial Bee Colony (ABC) algorithm. The complete-observability model of the power system is formulated first, and then the single-line-loss contingency condition is added to the main model. The efficiency of the proposed method is validated on the IEEE 14, 30, 57 and 118 bus test systems. The value of the ABC algorithm is demonstrated by finding the optimal number of PMUs and their locations, and by comparing the performance with earlier works.
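
    The observability and redundancy quantities being traded off can be written as a tiny check on a bus connectivity matrix; the 5-bus topology and the candidate placement below are illustrative assumptions, not the paper's test systems:

```python
# A bus is observed if it hosts a PMU or neighbours one; redundancy counts
# how many times each bus is observed beyond what is strictly needed.
import numpy as np

# connectivity (self + neighbours) for a 5-bus chain/star: 1-2, 2-3, 3-4, 3-5
A = np.array([[1, 1, 0, 0, 0],
              [1, 1, 1, 0, 0],
              [0, 1, 1, 1, 1],
              [0, 0, 1, 1, 0],
              [0, 0, 1, 0, 1]])

x = np.array([0, 1, 1, 0, 0])       # candidate placement: PMUs at buses 2, 3

coverage = A @ x                    # times each bus is observed
observable = bool(np.all(coverage >= 1))  # complete-observability constraint
redundancy = int(coverage.sum())    # objective to maximise alongside -sum(x)
print(observable, redundancy)
```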

  15. SIFT - Design and analysis of a fault-tolerant computer for aircraft control. [Software Implemented Fault Tolerant systems

    Science.gov (United States)

    Wensley, J. H.; Lamport, L.; Goldberg, J.; Green, M. W.; Levitt, K. N.; Melliar-Smith, P. M.; Shostak, R. E.; Weinstock, C. B.

    1978-01-01

    SIFT (Software Implemented Fault Tolerance) is an ultrareliable computer for critical aircraft control applications that achieves fault tolerance by the replication of tasks among processing units. The main processing units are off-the-shelf minicomputers, with standard microcomputers serving as the interface to the I/O system. Fault isolation is achieved by using a specially designed redundant bus system to interconnect the processing units. Error detection and analysis and system reconfiguration are performed by software. Iterative tasks are redundantly executed, and the results of each iteration are voted upon before being used. Thus, any single failure in a processing unit or bus can be tolerated with triplication of tasks, and subsequent failures can be tolerated after reconfiguration. Independent execution by separate processors means that the processors need only be loosely synchronized, and a novel fault-tolerant synchronization method is described.
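
    The replicated execution and voting scheme at the core of SIFT can be illustrated with a minimal majority-vote sketch (illustrative only, not SIFT's actual voter):

```python
# Each iteration's result is computed by several processors; the majority
# value is used, masking any single faulty unit. With no majority, the
# system must reconfigure.
from collections import Counter

def vote(results):
    """Return the majority value among replicated task results."""
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority - reconfiguration required")
    return value

print(vote([42, 42, 41]))  # one faulty processor is out-voted -> 42
```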

  16. Software Considerations for Subscale Flight Testing of Experimental Control Laws

    Science.gov (United States)

    Murch, Austin M.; Cox, David E.; Cunningham, Kevin

    2009-01-01

    The NASA AirSTAR system has been designed to address the challenges associated with safe and efficient subscale flight testing of research control laws in adverse flight conditions. In this paper, software elements of this system are described, with an emphasis on components which allow for rapid prototyping and deployment of aircraft control laws. Through model-based design and automatic coding a common code-base is used for desktop analysis, piloted simulation and real-time flight control. The flight control system provides the ability to rapidly integrate and test multiple research control laws and to emulate component or sensor failures. Integrated integrity monitoring systems provide aircraft structural load protection, isolate the system from control algorithm failures, and monitor the health of telemetry streams. Finally, issues associated with software configuration management and code modularity are briefly discussed.

  17. Image Registration Using Redundant Wavelet Transforms

    National Research Council Canada - National Science Library

    Brown, Richard

    2001-01-01

    .... In our research, we present a fundamentally new wavelet-based registration algorithm utilizing redundant transforms and a masking process to suppress the adverse effects of noise and improve processing efficiency...

  18. A control method for manipulators with redundancy

    International Nuclear Information System (INIS)

    Furusho, Junji; Usui, Hiroyuki

    1989-01-01

    Redundant manipulators have more ability than nonredundant ones in many aspects, such as avoiding obstacles and avoiding singular states. In this paper, a control algorithm for redundant manipulators working in the presence of obstacles is presented. First, a measure of manipulability for robot manipulators in the presence of obstacles is defined. Then, the control algorithm for obstacle avoidance is derived using this measure of manipulability. Obstacle avoidance and the maintenance of good posture are achieved simultaneously by this algorithm. Lastly, experimental and simulation results using an eight-degree-of-freedom manipulator are shown. (author)
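
    The best-known measure of this kind is Yoshikawa's manipulability; the sketch below assumes that standard definition rather than the paper's obstacle-aware variant:

```python
# Yoshikawa's manipulability measure: w(q) = sqrt(det(J J^T)).
# It approaches zero as the arm nears a singular posture.
import numpy as np

def manipulability(J):
    """Manipulability of a posture with task Jacobian J (m x n, m <= n)."""
    return np.sqrt(np.linalg.det(J @ J.T))

J = np.random.rand(3, 8)  # 3-D task, 8-DoF arm (illustrative dimensions)
print(manipulability(J))
```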

  19. Meta-DiSc: a software for meta-analysis of test accuracy data.

    Science.gov (United States)

    Zamora, Javier; Abraira, Victor; Muriel, Alfonso; Khan, Khalid; Coomarasamy, Arri

    2006-07-12

    Systematic reviews and meta-analyses of test accuracy studies are increasingly being recognised as central in guiding clinical practice. However, there is currently no dedicated and comprehensive software for meta-analysis of diagnostic data. In this article, we present Meta-DiSc, a Windows-based, user-friendly, freely available (for academic use) software that we have developed, piloted, and validated to perform diagnostic meta-analysis. Meta-DiSc a) allows exploration of heterogeneity, with a variety of statistics including chi-square, I-squared and Spearman correlation tests, b) implements meta-regression techniques to explore the relationships between study characteristics and accuracy estimates, c) performs statistical pooling of sensitivities, specificities, likelihood ratios and diagnostic odds ratios using fixed and random effects models, both overall and in subgroups and d) produces high quality figures, including forest plots and summary receiver operating characteristic curves that can be exported for use in manuscripts for publication. All computational algorithms have been validated through comparison with different statistical tools and published meta-analyses. Meta-DiSc has a Graphical User Interface with roll-down menus, dialog boxes, and online help facilities. Meta-DiSc is a comprehensive and dedicated test accuracy meta-analysis software. It has already been used and cited in several meta-analyses published in high-ranking journals. The software is publicly available at http://www.hrc.es/investigacion/metadisc_en.htm.

  20. Space Shuttle Program Primary Avionics Software System (PASS) Success Legacy - Quality and Reliability Data

    Science.gov (United States)

    Orr, James K.; Peltier, Daryl

    2010-01-01

    This slide presentation reviews the avionics software system on board the space shuttle, with particular emphasis on quality and reliability. The Primary Avionics Software System (PASS) provides automatic and fly-by-wire control of critical shuttle systems and executes in redundant computers. Charts show the number of space shuttle flights over time, PASS's development history, and other data that point to the reliability of the system's development. The reliability of the system is also compared to predicted reliability.

  1. Piezoelectric multilayer actuator life test.

    Science.gov (United States)

    Sherrit, Stewart; Bao, Xiaoqi; Jones, Christopher M; Aldrich, Jack B; Blodget, Chad J; Moore, James D; Carson, John W; Goullioud, Renaud

    2011-04-01

    Potential NASA optical missions such as the Space Interferometry Mission require actuators for precision positioning to accuracies of the order of nanometers. Commercially available multilayer piezoelectric stack actuators are being considered for driving these precision mirror positioning mechanisms. These mechanisms have potential mission operational requirements that exceed 5 years for one mission life. To test the feasibility of using these commercial actuators for such applications and to determine their reliability and redundancy requirements, a life test study was undertaken. The nominal actuator requirements for the most critical actuators on the Space Interferometry Mission (SIM), in terms of number of cycles, were estimated from the Modulation Optics Mechanism (MOM) and Pathlength control Optics Mechanism (POM), and these requirements were used to define the study. At a nominal drive frequency of 250 Hz, one mission life is calculated to be 40 billion cycles. In this study, a set of commercial PZT stacks configured in a potential flight actuator configuration (pre-stressed to 18 MPa and bonded in flexures) were tested for up to 100 billion cycles. Each test flexure allowed for two sets of primary and redundant stacks to be mechanically connected in series. The tests were controlled using an automated software control and data acquisition system that set up the test parameters and monitored the waveform of the stack electrical current and voltage. The samples were driven between 0 and 20 V at 2000 Hz to accelerate the life test and mimic the voltage amplitude that is expected to be applied to the stacks during operation. During the life test, 10 primary stacks were driven and 10 redundant stacks, mechanically in series with the driven stacks, were open-circuited. The stroke determined from a strain gauge, the temperature and humidity in the chamber, and the temperature of each individual stack were recorded. Other properties of the stacks, including the
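
    The quoted mission-life figure follows directly from the drive frequency and duration:

```latex
% One mission life at the nominal 250 Hz drive rate over five years:
\[
  N \;=\; f\,T \;=\; 250\ \mathrm{Hz} \times 5\ \mathrm{yr}
      \times 3.15\times 10^{7}\ \mathrm{s/yr}
  \;\approx\; 3.9\times 10^{10}\ \text{cycles} \;\approx\; 40\ \text{billion},
\]
% which the test program extended to 100 billion cycles.
```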

  2. Program management aid for redundancy selection and operational guidelines

    Science.gov (United States)

    Hodge, P. W.; Davis, W. L.; Frumkin, B.

    1972-01-01

    Although this criterion was developed specifically for use on the shuttle program, it has application to many other multi-mission programs (e.g. aircraft or mechanisms). The methodology employed is directly applicable even if the tools (nomographs and equations) are for mission-peculiar cases. The redundancy selection criterion was developed to ensure that both the design and operational cost impacts (life cycle costs) were considered in the selection of the quantity of operational redundancy. These tools were developed as aids in expediting the decision process and are not intended as the automatic decision maker. This approach to redundancy selection is unique in that it enables a pseudo systems analysis to be performed on an equipment basis without waiting for all designs to be hardened.

  3. Generating an Automated Test Suite by Variable Strength Combinatorial Testing for Web Services

    Directory of Open Access Journals (Sweden)

    Yin Li

    2016-09-01

    Testing Web Services has become a focus of software engineering as an important means of assuring the quality of Web applications. Lacking a graphical interface and source code, Web services need an automated testing method, of which efficiently designing and generating the test suite is an important part. However, existing testing methods may lead to redundancy in the test suite and a decrease in fault-detecting ability, since they cannot handle scenarios where the strengths of the different interactions are not uniform. To solve this problem, a formal tree model based on WSDL is first constructed, with the actual interaction relationships of each node given sufficient consideration; then combinatorial testing is applied to generate a variable-strength combinatorial test suite using a one-test-at-a-time strategy. Finally, test cases are minimized according to constraint rules. The results show that, compared with conventional random testing, the proposed approach can detect more errors with the same number of test cases, and the generated suites are smaller than existing ones.
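
    As a minimal sketch of one-test-at-a-time generation, the uniform-strength (pairwise) special case can be implemented greedily; the factor model below is an invented illustration, not the paper's variable-strength algorithm:

```python
# Greedy one-test-at-a-time covering-array generation for pairwise
# (strength-2) interactions: keep adding the test that covers the most
# still-uncovered value pairs until every pair is covered.
from itertools import combinations, product

def pairwise_suite(domains):
    uncovered = {(i, j, a, b)
                 for i, j in combinations(range(len(domains)), 2)
                 for a in domains[i] for b in domains[j]}
    suite = []
    while uncovered:
        best, best_gain = None, -1
        for test in product(*domains):  # exhaustive scan; fine for small models
            gain = sum((i, j, test[i], test[j]) in uncovered
                       for i, j in combinations(range(len(test)), 2))
            if gain > best_gain:
                best, best_gain = test, gain
        suite.append(best)
        uncovered -= {(i, j, best[i], best[j])
                      for i, j in combinations(range(len(best)), 2)}
    return suite

# three factors with 2-3 values each, e.g. hypothetical WSDL message options
print(pairwise_suite([["GET", "POST"], ["xml", "json"], [0, 1, 2]]))
```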

  4. FIZCON, ENDF/B Cross-Sections Redundancy Check

    International Nuclear Information System (INIS)

    Dunford, Charles L.

    2007-01-01

    1 - Description of program or function: FIZCON is a program for checking that an evaluated data file has valid data and conforms to recommended procedures. Version 7.01 (April 2005): set success flag after return from beginning; fixed valid level check for an isomer; fixed subsection energy range test in ckf9; changed lower limit on potential scattering test; fixed error in j-value test when l=0 and i=0; added one more significant figure to union grid check and sum up output messages; partial fission cross sections mt=19,20,21 and 38 did not require secondary energy distributions in file 5; corrected product test for elastic scattering; moved potential scattering test to psyche. Version 7.02 (May 2005): Fixed resonance parameter sum test. 2 - Method of solution: FIZCON can recognise the difference between ENDF-6 and ENDF-5 formats and performs its tests accordingly. Some of the tests performed include: data arrays are in increasing energy order; resonance parameter widths add up to the total; Q-values are reasonable and consistent; no required sections are missing and all cover the proper energy range; secondary distributions are normalized to 1.0; energy conservation in decay spectra. Optional tests can be performed to check the redundant cross sections, and algorithms can be used to check for possible incorrect entry of data values (Deviant Point test)

  5. SWEPP gamma-ray spectrometer system software test plan and report

    International Nuclear Information System (INIS)

    Femec, D.A.

    1994-09-01

    The SWEPP Gamma-Ray Spectrometer (SGRS) System has been developed by the Radiation Measurements and Development Unit of the Idaho National Engineering Laboratory to assist in the characterization of the radiological contents of contact-handled waste containers at the Stored Waste Examination Pilot Plant (SWEPP). In addition to determining the concentrations of gamma-ray-emitting radionuclides, the software also calculates attenuation-corrected isotopic mass ratios of specific interest, and provides controls for SGRS hardware as required. This document presents the test plan and report for the data acquisition and analysis software associated with the SGRS system

  6. Test documentation for the GENII Software Version 1.485

    International Nuclear Information System (INIS)

    Rittmann, P.D.

    1994-01-01

    Version 1.485 of the GENII software was released by the PNL GENII custodian in December of 1990. At that time the WHC GENII custodian performed several tests to verify that the advertised revisions were indeed present and that these changes had not introduced errors in the calculations normally done by WHC. These tests were not documented at that time. The purpose of this document is to summarize suitable acceptance tests of GENII and compare them with a few hand calculations. The testing is not as thorough as that used by the PNL GENII Custodian, but is sufficient to establish that the GENII program appears to work correctly on WHC managed personal computers

  7. The Legacy of Space Shuttle Flight Software

    Science.gov (United States)

    Hickey, Christopher J.; Loveall, James B.; Orr, James K.; Klausman, Andrew L.

    2011-01-01

    The initial goals of the Space Shuttle Program required that the avionics and software systems blaze new trails in advancing avionics system technology. Many of the requirements placed on avionics and software were accomplished for the first time on this program. Examples include comprehensive digital fly-by-wire technology, use of a digital databus for flight critical functions, fail operational/fail safe requirements, complex automated redundancy management, and the use of a high-order software language for flight software development. In order to meet the operational and safety goals of the program, the Space Shuttle software had to be extremely high quality, reliable, robust, reconfigurable and maintainable. To achieve this, the software development team evolved a software process focused on continuous process improvement and defect elimination that consistently produced highly predictable and top quality results, providing software managers the confidence needed to sign each Certificate of Flight Readiness (COFR). This process, which has been appraised at Capability Maturity Model (CMM)/Capability Maturity Model Integration (CMMI) Level 5, has resulted in one of the lowest software defect rates in the industry. This paper will present an overview of the evolution of the Primary Avionics Software System (PASS) project and processes over thirty years, an argument for strong statistical control of software processes with examples, an overview of the success story for identifying and driving out errors before flight, a case study of the few significant software issues and how they were either identified before flight or slipped through the process onto a flight vehicle, and identification of the valuable lessons learned over the life of the project.

  8. Method and system for redundancy management of distributed and recoverable digital control system

    Science.gov (United States)

    Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)

    2012-01-01

    A method and system for redundancy management is provided for a distributed and recoverable digital control system. The method uses unique redundancy management techniques to achieve recovery and restoration of redundant elements to full operation in an asynchronous environment. The system includes a first computing unit comprising a pair of redundant computational lanes for generating redundant control commands. One or more internal monitors detect data errors in the control commands, and provide a recovery trigger to the first computing unit. A second redundant computing unit provides the same features as the first computing unit. A first actuator control unit is configured to provide blending and monitoring of the control commands from the first and second computing units, and to provide a recovery trigger to each of the first and second computing units. A second actuator control unit provides the same features as the first actuator control unit.
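
    A toy reduction of the dual-lane idea in this patent: two redundant computational lanes produce a command, a monitor compares (and here blends) them, and disagreement raises a recovery trigger. The tolerance and blending rule are assumptions made for illustration:

```python
# Cross-lane comparison: forward a blended command when the lanes agree,
# otherwise withhold the command and raise a recovery trigger.
def monitored_command(lane_a, lane_b, tolerance=1e-6):
    """Return (command, recovery_trigger) for one control frame."""
    if abs(lane_a - lane_b) <= tolerance:
        return 0.5 * (lane_a + lane_b), False  # agreement: blend, no trigger
    return None, True                          # miscompare: trigger recovery

cmd, recover = monitored_command(1.0000000, 1.0000001)
print(cmd, recover)
```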

  9. The Rapid Integration and Test Environment - A Process for Achieving Software Test Acceptance

    OpenAIRE

    Jack, Rick

    2010-01-01

    Proceedings Paper (for Acquisition Research Program) Approved for public release; distribution unlimited. The Rapid Integration and Test Environment (RITE) initiative, implemented by the Program Executive Office, Command, Control, Communications, Computers and Intelligence, Command and Control Program Office (PMW-150), was born of necessity. Existing processes for requirements definition and management, as well as those for software development, did not consistently deliver high-qualit...

  10. Testing Software Development Project Productivity Model

    Science.gov (United States)

    Lipkin, Ilya

    Software development is an increasingly influential factor in today's business environment, and a major issue affecting software development is how an organization estimates projects. If the organization underestimates cost, schedule, and quality requirements, the end results will not meet customer needs. On the other hand, if the organization overestimates these criteria, resources that could have been used more profitably will be wasted. There is no accurate model or measure available that can guide an organization in a quest for software development, with existing estimation models often underestimating software development efforts by as much as 500 to 600 percent. To address this issue, existing models are usually calibrated using local data with a small sample size, with the resulting estimates not offering improved cost analysis. This study presents a conceptual model for accurately estimating software development, based on an extensive literature review and theoretical analysis based on Sociotechnical Systems (STS) theory. The conceptual model serves as a solution to bridge organizational and technological factors and is validated using an empirical dataset provided by the DoD. Practical implications of this study allow for practitioners to concentrate on specific constructs of interest that provide the best value for the least amount of time. This study outlines key contributing constructs that are unique for Software Size E-SLOC, Man-hours Spent, and Quality of the Product, those constructs having the largest contribution to project productivity. This study discusses customer characteristics and provides a framework for a simplified project analysis for source selection evaluation and audit task reviews for the customers and suppliers. Theoretical contributions of this study provide an initial theory-based hypothesized project productivity model that can be used as a generic overall model across several application domains such as IT, Command and Control

  11. Reliability-redundancy optimization by means of a chaotic differential evolution approach

    International Nuclear Information System (INIS)

    Coelho, Leandro dos Santos

    2009-01-01

    Reliability design is related to the performance analysis of many engineering systems. Reliability-redundancy optimization problems involve the selection of components with multiple choices and redundancy levels that produce maximum benefits, subject to cost, weight, and volume constraints. Classical mathematical methods have failed in handling nonconvexities and nonsmoothness in optimization problems. As an alternative to the classical optimization approaches, meta-heuristics have been given much attention by many researchers due to their ability to find almost globally optimal solutions in reliability-redundancy optimization problems. Evolutionary algorithms (EAs) - paradigms of the evolutionary computation field - are stochastic and robust meta-heuristics useful for solving reliability-redundancy optimization problems. EAs such as genetic algorithms, evolutionary programming, evolution strategies and differential evolution are being used to find global or near-global optimal solutions. A differential evolution approach based on chaotic sequences using Lozi's map is proposed in this paper for reliability-redundancy optimization problems. The proposed method has a fast convergence rate but also maintains the diversity of the population so as to escape from local optima. An application example in reliability-redundancy optimization based on the overspeed protection system of a gas turbine is given to show its usefulness and efficiency. Simulation results show that the application of deterministic chaotic sequences instead of random sequences is a possible strategy to improve the performance of differential evolution.
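
    The chaotic ingredient can be sketched as a Lozi-map sequence standing in for the uniform random numbers inside differential evolution; the parameters and the normalisation to [0, 1] below are common choices assumed for illustration, not taken from the paper:

```python
# Lozi map: x_{n+1} = 1 - a*|x_n| + y_n,  y_{n+1} = b*x_n.
# With a = 1.7, b = 0.5 the map is chaotic; the deterministic sequence can
# replace uniform random draws in DE's mutation/crossover steps.
def lozi_sequence(n, a=1.7, b=0.5, x=0.1, y=0.1):
    vals = []
    for _ in range(n):
        x, y = 1.0 - a * abs(x) + y, b * x
        vals.append(x)
    lo, hi = min(vals), max(vals)
    return [(v - lo) / (hi - lo) for v in vals]  # map into [0, 1]

print(lozi_sequence(5))
```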

  12. Sibling rivalry: related bacterial small RNAs and their redundant and non-redundant roles.

    Science.gov (United States)

    Caswell, Clayton C; Oglesby-Sherrouse, Amanda G; Murphy, Erin R

    2014-01-01

    Small RNA molecules (sRNAs) are now recognized as key regulators controlling bacterial gene expression, as sRNAs provide a quick and efficient means of positively or negatively altering the expression of specific genes. To date, numerous sRNAs have been identified and characterized in a myriad of bacterial species, but more recently, a theme in bacterial sRNAs has emerged: the presence of more than one highly related sRNAs produced by a given bacterium, here termed sibling sRNAs. Sibling sRNAs are those that are highly similar at the nucleotide level, and while it might be expected that sibling sRNAs exert identical regulatory functions on the expression of target genes based on their high degree of relatedness, emerging evidence is demonstrating that this is not always the case. Indeed, there are several examples of bacterial sibling sRNAs with non-redundant regulatory functions, but there are also instances of apparent regulatory redundancy between sibling sRNAs. This review provides a comprehensive overview of the current knowledge of bacterial sibling sRNAs, and also discusses important questions about the significance and evolutionary implications of this emerging class of regulators.

  13. Sibling rivalry: Related bacterial small RNAs and their redundant and non-redundant roles

    Directory of Open Access Journals (Sweden)

    Clayton eCaswell

    2014-10-01

    Small RNA molecules (sRNAs) are now recognized as key regulators controlling bacterial gene expression, as sRNAs provide a quick and efficient means of positively or negatively altering the expression of specific genes. To date, numerous sRNAs have been identified and characterized in a myriad of bacterial species, but more recently, a theme in bacterial sRNAs has emerged: the presence of more than one highly related sRNAs produced by a given bacterium, here termed sibling sRNAs. Sibling sRNAs are those that are highly similar at the nucleotide level, and while it might be expected that sibling sRNAs exert identical regulatory functions on the expression of target genes based on their high degree of relatedness, emerging evidence is demonstrating that this is not always the case. Indeed, there are several examples of bacterial sibling sRNAs with non-redundant regulatory functions, but there are also instances of apparent regulatory redundancy between sibling sRNAs. This review provides a comprehensive overview of the current knowledge of bacterial sibling sRNAs, and also discusses important questions about the significance and evolutionary implications of this emerging class of regulators.

  14. CSE database: extended annotations and new recommendations for ECG software testing.

    Science.gov (United States)

    Smíšek, Radovan; Maršánová, Lucie; Němcová, Andrea; Vítek, Martin; Kozumplík, Jiří; Nováková, Marie

    2017-08-01

    Nowadays, cardiovascular diseases represent the most common cause of death in western countries. Among various examination techniques, electrocardiography (ECG) is still a highly valuable tool used for the diagnosis of many cardiovascular disorders. In order to diagnose a person based on ECG, cardiologists can use automatic diagnostic algorithms. Research in this area is still necessary. In order to compare various algorithms correctly, it is necessary to test them on standard annotated databases, such as the Common Standards for Quantitative Electrocardiography (CSE) database. According to Scopus, the CSE database is the second most cited standard database. There were two main objectives in this work. First, new diagnoses were added to the CSE database, which extended its original annotations. Second, new recommendations for diagnostic software quality estimation were established. The ECG recordings were diagnosed by five new cardiologists independently, and in total, 59 different diagnoses were found. Such a large number of diagnoses is unique, even in terms of standard databases. Based on the cardiologists' diagnoses, a four-round consensus (4R consensus) was established. Such a 4R consensus means a correct final diagnosis, which should ideally be the output of any tested classification software. The accuracy of the cardiologists' diagnoses compared with the 4R consensus was the basis for the establishment of accuracy recommendations. The accuracy was determined in terms of sensitivity = 79.20-86.81%, positive predictive value = 79.10-87.11%, and the Jaccard coefficient = 72.21-81.14%, respectively. Within these ranges, the accuracy of the software is comparable with the accuracy of cardiologists. The accuracy quantification of the correct classification is unique. Diagnostic software developers can objectively evaluate the success of their algorithm and promote its further development. The annotations and recommendations proposed in this work will allow

  15. Assessment of redundant systems with imperfect coverage by means of binary decision diagrams

    Energy Technology Data Exchange (ETDEWEB)

    Myers, Albert F. [Northrop Grumman Corporation, 1840 Century Park East, Los Angeles, CA 90067-2199 (United States)], E-mail: Al.Myers@ngc.com; Rauzy, Antoine [IML/CNRS, 163, Avenue de Luminy, 13288 Marseille Cedex 09 (France)], E-mail: arauzy@iml.univ-mrs.fr

    2008-07-15

    In this article, we study the assessment of the reliability of redundant systems with imperfect fault coverage. We define fault coverage as the ability of a system to isolate and correctly accommodate failures of redundant elements. For highly reliable systems, such as avionic and space systems, fault coverage is in general imperfect and has a significant impact on system reliability. We review here the different models of imperfect fault coverage. We propose efficient algorithms to assess them separately (as k-out-of-n selectors). We show how to implement these algorithms in a binary decision diagram engine. Finally, we report experimental results on real-life test cases that show on the one hand the importance of imperfect coverage and on the other hand the efficiency of the proposed approach.
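
    A simplified textbook form of imperfect coverage (every element failure must be covered, else the whole system fails) makes the effect easy to see; this is an assumption-laden sketch, not the authors' BDD-based algorithm:

```python
# k-out-of-n with element reliability p and coverage c: the system survives
# only if at least k elements work AND every failure was covered (isolated).
# An uncovered failure, probability (1 - c) per failed element, is fatal.
from math import comb

def kofn_imperfect_coverage(n, k, p, c):
    """Reliability of a k-out-of-n system under this simplified model."""
    return sum(comb(n, j) * p**j * ((1 - p) * c)**(n - j)
               for j in range(k, n + 1))

print(kofn_imperfect_coverage(n=3, k=2, p=0.99, c=1.0))   # perfect coverage
print(kofn_imperfect_coverage(n=3, k=2, p=0.99, c=0.95))  # imperfect coverage
```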

  16. LHCb: The nightly build and test system for LCG AA and LHCb software

    CERN Multimedia

    Kruzelecki, K; Degaudenzi, H

    2009-01-01

    The core software stack both from the LCG Application Area and LHCb consists of more than 25 C++/Fortran/Python projects built for about 20 different configurations on Linux, Windows and MacOSX. To these projects, one can also add about 20 external software packages (Boost, Python, Qt, CLHEP, ...) which also have to be built for the same configurations. In order to reduce the time of the development cycle and increase quality assurance, a framework has been developed for the daily (nightly, actually) build and test of the software. Performing the build and the tests on several configurations and platforms makes it possible to increase the efficiency of the unit and integration tests. Main features: - flexible and fine grained setup (full, partial build) through a web interface - possibility to build several "slots" with different configurations - precise and highly granular reports on a web server - support for CMT projects (but not only) with their cross-dependencies. - scalable client-server architecture for the co...

  17. SIMULATION MODEL FOR DESIGN SUPPORT OF INFOCOMM REDUNDANT SYSTEMS

    Directory of Open Access Journals (Sweden)

    V. A. Bogatyrev

    2016-09-01

    Subject of Research. The paper deals with the effectiveness of multipath transfer of request copies through the network and their redundant service, without the use of laborious analytical modeling. A model and support tools for the design of highly reliable distributed systems based on simulation modeling have been created. Method. Many variants of organizing the service and delivery of requests through the network to the query servers are formed and analyzed, including options providing redundant service and redundant delivery of request copies via the network to the servers. The choice of variants for the distribution and service of requests takes into account the criticality of queries with respect to their residence time in the system. A request is considered successful if at least one of its copies is accurately delivered to a working server that is ready to service the request received through the network, and if it is fulfilled in the set time. The efficiency analysis of redundant transmission and service of requests is based on a model built in the AnyLogic 7 simulation environment. Main Results. Simulation experiments based on the proposed models have shown the effectiveness of redundant transmission of copies of queries (packets) to the servers in a cluster through multiple paths, with redundant service of the request copies by a group of servers in the cluster. It is shown that this solution increases the probability of exact execution of at least one copy of the request within the required time. We have carried out an efficiency evaluation of the destruction of outdated request copies in the queues of network nodes and the cluster. We have also analyzed options for the network implementation of multipath transfer of request copies to the servers in the cluster over disjoint paths, possibly differing in the number of their constituent nodes. Practical Relevance. The proposed simulation models can be used when selecting the optimal
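
    The basic gain from sending k copies over (approximately) independent paths is captured by a standard expression; timeliness then depends on queueing effects, which the simulation model handles:

```latex
% If each copy reaches a working server intact with probability p and k
% copies are sent over independent paths, at least one copy is delivered
% with probability
\[
  P_{\mathrm{deliver}} \;=\; 1 - (1 - p)^{k}.
\]
```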

  18. Redundancy Elimination in DTN via ACK Mechanism

    Directory of Open Access Journals (Sweden)

    Xiqing Zhang

    2015-08-01

    Traditional routing protocols for delay tolerant networks (DTN) usually take the strategy of spreading multiple copies of one message through the network. When one copy reaches the destination, the transmission of the other copies not only wastes bandwidth but also deprives other messages of opportunities for transmission. This paper proposes a mechanism to eliminate the redundant copies: by adding an acknowledgement field to the packet header to delete redundant copies, it reduces the network overhead while improving the delivery ratio. Simulation results confirm that the proposed method can improve the performance of the epidemic and Spray and Wait routing protocols.
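
    A toy sketch of the ACK idea described above (the data structures and method names are invented for illustration, not the paper's protocol definition):

```python
# Once a destination acknowledges a message ID, every node that later
# learns of the ACK drops its buffered copy and stops forwarding it.
class DtnNode:
    def __init__(self):
        self.buffer = {}    # message_id -> payload awaiting forwarding
        self.acked = set()  # message IDs known to be delivered

    def receive(self, msg_id, payload):
        if msg_id not in self.acked:   # ignore copies already delivered
            self.buffer[msg_id] = payload

    def receive_ack(self, msg_id):
        self.acked.add(msg_id)         # learned from the ACK header field
        self.buffer.pop(msg_id, None)  # purge the redundant copy

n = DtnNode()
n.receive("m1", b"hello")
n.receive_ack("m1")
print(len(n.buffer))  # 0 - redundant copy eliminated
```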

  19. Model Based Analysis and Test Generation for Flight Software

    Science.gov (United States)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  20. Software testability and its application to avionic software

    Science.gov (United States)

    Voas, Jeffrey M.; Miller, Keith W.; Payne, Jeffery E.

    1993-01-01

    Randomly generated black-box testing is an established yet controversial method of estimating software reliability. Unfortunately, as software applications have required higher reliabilities, practical difficulties with black-box testing have become increasingly problematic. These practical problems are particularly acute in life-critical avionics software, where requirements of 10 exp -7 failures per hour of system reliability can translate into a probability of failure (POF) of perhaps 10 exp -9 or less for each individual execution of the software. This paper describes the application of one type of testability analysis called 'sensitivity analysis' to B-737 avionics software; one application of sensitivity analysis is to quantify whether software testing is capable of detecting faults in a particular program and thus whether we can be confident that a tested program is not hiding faults. We do so by finding the testabilities of the individual statements of the program, and then use those statement testabilities to find the testabilities of the functions and modules. For the B-737 system we analyzed, we were able to isolate those functions that are more prone to hide errors during system/reliability testing.
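
    The sensitivity idea (how often a location is executed, and how often a fault there would become visible in the output) can be illustrated with a toy estimator; the program under test and the seeded fault are invented, and this is only a sketch of the flavour of such analysis:

```python
# Toy testability estimate: sample random inputs, measure how often a
# statement of interest is reached, and how often a seeded fault at that
# statement would change the observable output.
import random

def program(x, faulty=False):
    y = x * 2
    if x > 0:                           # "location" of interest
        y = y + (1 if faulty else 0)    # seeded fault: off-by-one
    return y

def testability(trials=10000):
    executed = propagated = 0
    for _ in range(trials):
        x = random.randint(-10, 10)
        if x > 0:
            executed += 1
            if program(x) != program(x, faulty=True):
                propagated += 1
    return executed / trials, propagated / max(1, executed)

print(testability())  # (execution rate, fault-visibility rate)
```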

  1. Stroop test software. The Tastiva proposal (Software para pruebas Stroop. La propuesta de Tastiva)

    Directory of Open Access Journals (Sweden)

    María Claudia Scurtu

    2016-08-01

    There has been a great deal of research on emotional information processing within the field of clinical psychology. Many tests have been developed and the emotional Stroop test is one of the most widely used. However, some versions of the Stroop test have methodological issues when used to study word-colour interferences, especially when the words are emotionally charged. We present a computer-assisted version of the emotional Stroop test called Tastiva, which is highly versatile, useful, and accessible, in addition to being easy to use and widely applicable. The Tastiva software and User Manual is available on the University of Seville website: http://grupo.us.es/recursos/Tastiva/index.htm. We also present a case study using neutral and sexual content words, in which the program calculates the word exposure time by analysing the behaviour of the respondent. One of its novel contributions is the graphic presentation of measures: response time, errors, and non-response to stimuli.

  2. Hardware-Software Complex for Functional and Parametric Tests of ARM Microcontrollers STM32F1XX

    Directory of Open Access Journals (Sweden)

    Egorov Aleksey

    2016-01-01

    The article presents a hardware-software complex for functional and parametric tests of ARM microcontrollers STM32F1XX. The complex is based on PXI devices by National Instruments and the LabVIEW software environment. The data exchange procedure between a microcontroller under test and the complex hardware is described. Some test results are also presented.

  3. Redundancy in electronic health record corpora: analysis, impact on text mining performance and mitigation strategies.

    Science.gov (United States)

    Cohen, Raphael; Elhadad, Michael; Elhadad, Noémie

    2013-01-16

    The increasing availability of Electronic Health Record (EHR) data and specifically free-text patient notes presents opportunities for phenotype extraction. Text-mining methods in particular can help disease modeling by mapping named-entity mentions to terminologies and clustering semantically related terms. EHR corpora, however, exhibit specific statistical and linguistic characteristics when compared with corpora in the biomedical literature domain. We focus on copy-and-paste redundancy: clinicians typically copy and paste information from previous notes when documenting a current patient encounter. Thus, within a longitudinal patient record, one expects to observe heavy redundancy. In this paper, we ask three research questions: (i) How can redundancy be quantified in large-scale text corpora? (ii) Conventional wisdom is that larger corpora yield better results in text mining. But how does the observed EHR redundancy affect text mining? Does such redundancy introduce a bias that distorts learned models? Or does the redundancy introduce benefits by highlighting stable and important subsets of the corpus? (iii) How can one mitigate the impact of redundancy on text mining? We analyze a large-scale EHR corpus and quantify redundancy both in terms of word and semantic concept repetition. We observe redundancy levels of about 30% and non-standard distribution of both words and concepts. We measure the impact of redundancy on two standard text-mining applications: collocation identification and topic modeling. We compare the results of these methods on synthetic data with controlled levels of redundancy and observe significant performance variation. Finally, we compare two mitigation strategies to avoid redundancy-induced bias: (i) a baseline strategy, keeping only the last note for each patient in the corpus; (ii) removing redundant notes with an efficient fingerprinting-based algorithm. For text mining, preprocessing the EHR corpus with fingerprinting yields…
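
    A minimal sketch of the fingerprinting idea (the authors' exact algorithm may differ): hash word n-grams of each note in chronological order and drop notes whose fingerprints mostly overlap what has already been seen in the patient record. The shingle size and threshold below are assumptions.

      def fingerprint(text, n=8):
          """Hash the note's word n-grams ('shingles')."""
          words = text.lower().split()
          if len(words) <= n:
              return {hash(" ".join(words))}
          return {hash(" ".join(words[i:i + n])) for i in range(len(words) - n + 1)}

      def drop_redundant_notes(notes, threshold=0.8):
          kept, seen = [], set()
          for note in notes:                       # chronological order
              fp = fingerprint(note)
              if len(fp & seen) / len(fp) < threshold:
                  kept.append(note)                # sufficiently novel
              seen |= fp
          return kept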

  4. An improved method for calculating self-motion coordinates for redundant manipulators

    International Nuclear Information System (INIS)

    Reister, D.B.

    1997-04-01

    For a redundant manipulator, the objective of redundancy resolution is to follow a specified path in Cartesian space and simultaneously perform another task (for example, maximize an objective function or avoid obstacles) at every point along the path. The conventional methods have several drawbacks: a new function must be defined for each task, the extended Jacobian can be singular, closed cycles in Cartesian space may not yield closed cycles in joint space, and the objective is point-wise redundancy resolution (determining a single point in joint space for each point in Cartesian space). The author divides the redundancy resolution problem into two parts: (1) calculating self-motion coordinates for all possible positions of a manipulator at each point along a Cartesian path, and (2) determining the optimal self-motion coordinates that maximize an objective function along the path. This paper discusses the first part of the problem. The path-wise approach overcomes all of the drawbacks of conventional redundancy resolution methods: there is no need to define a new function for each task, the extended Jacobian cannot be singular, and closed cycles in extended Cartesian space will yield closed cycles in joint space.

  5. A comparison of Heuristic method and Llewellyn’s rules for identification of redundant constraints

    Science.gov (United States)

    Estiningsih, Y.; Farikhin; Tjahjana, R. H.

    2018-03-01

    Modelling and solving practical optimization problems are important techniques in linear programming. Redundant constraints are considered for their effects on general linear programming problems. Identifying and removing redundant constraints avoids all the calculations associated with them when solving an associated linear programming problem. Many methods have been proposed for the identification of redundant constraints. This paper presents a comparison of a Heuristic method and Llewellyn's rules for the identification of redundant constraints.
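
    Besides Llewellyn's rules, one classical optimization-based check (sketched below under my own assumptions, not taken from the paper) declares constraint i of A x <= b redundant if maximizing a_i x subject to the remaining constraints cannot exceed b_i:

      import numpy as np
      from scipy.optimize import linprog

      def is_redundant(A, b, i):
          mask = np.arange(len(b)) != i
          # linprog minimizes, so maximize a_i . x by minimizing -a_i . x
          res = linprog(-A[i], A_ub=A[mask], b_ub=b[mask], bounds=(None, None))
          return res.status == 0 and -res.fun <= b[i] + 1e-9

      A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
      b = np.array([1.0, 1.0, 3.0])
      print(is_redundant(A, b, 2))   # x + y <= 3 is redundant given x <= 1, y <= 1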

  6. Kinematics analysis of a novel planar parallel manipulator with kinematic redundancy

    Energy Technology Data Exchange (ETDEWEB)

    Qu, Haibo; Guo, Sheng [Beijing Jiaotong University, Beijing (China)

    2017-04-15

    In this paper, a novel planar parallel manipulator with kinematic redundancy is proposed. First, the Degrees of freedom (DOF) of the whole parallel manipulator and the Relative DOF (RDOF) between the moving platform and fixed base are studied. The results indicate that the proposed mechanism is kinematically redundant. Then, the kinematics, Jacobian matrices and workspace of the proposed parallel manipulator with kinematic redundancy are analyzed. Finally, a statics simulation of the proposed parallel manipulator is performed. The resulting stress and displacement distributions can be used to locate the most easily damaged places in the mechanism configurations.

  7. Kinematics analysis of a novel planar parallel manipulator with kinematic redundancy

    International Nuclear Information System (INIS)

    Qu, Haibo; Guo, Sheng

    2017-01-01

    In this paper, a novel planar parallel manipulator with kinematic redundancy is proposed. First, the Degrees of freedom (DOF) of the whole parallel manipulator and the Relative DOF (RDOF) between the moving platform and fixed base are studied. The results indicate that the proposed mechanism is kinematically redundant. Then, the kinematics, Jacobian matrices and workspace of the proposed parallel manipulator with kinematic redundancy are analyzed. Finally, a statics simulation of the proposed parallel manipulator is performed. The resulting stress and displacement distributions can be used to locate the most easily damaged places in the mechanism configurations.

  8. Use of Docker for deployment and testing of astronomy software

    Science.gov (United States)

    Morris, D.; Voutsinas, S.; Hambly, N. C.; Mann, R. G.

    2017-07-01

    We describe preliminary investigations of using Docker for the deployment and testing of astronomy software. Docker is a relatively new containerization technology that is developing rapidly and being adopted across a range of domains. It is based upon virtualization at operating system level, which presents many advantages in comparison to the more traditional hardware virtualization that underpins most cloud computing infrastructure today. A particular strength of Docker is its simple format for describing and managing software containers, which has benefits for software developers, system administrators and end users. We report on our experiences from two projects - a simple activity to demonstrate how Docker works, and a more elaborate set of services that demonstrates more of its capabilities and what they can achieve within an astronomical context - and include an account of how we solved problems through interaction with Docker's very active open source development community, which is currently the key to the most effective use of this rapidly-changing technology.
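
    A minimal flavour of the container-based test workflow, driven from Python; the image name and test command are invented placeholders, not the projects described in the record.

      import subprocess

      def run_test_suite_in_container(image="astro-pipeline:test"):
          """Build an image from the local Dockerfile, then run the tests inside it."""
          subprocess.run(["docker", "build", "-t", image, "."], check=True)
          # --rm discards the container afterwards, leaving the host untouched
          result = subprocess.run(["docker", "run", "--rm", image, "pytest", "-q"])
          return result.returncode == 0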

  9. Agile deployment and code coverage testing metrics of the boot software on-board Solar Orbiter's Energetic Particle Detector

    Science.gov (United States)

    Parra, Pablo; da Silva, Antonio; Polo, Óscar R.; Sánchez, Sebastián

    2018-02-01

    In this day and age, successful embedded critical software needs agile and continuous development and testing procedures. This paper presents the overall testing and code coverage metrics obtained during the unit testing procedure carried out to verify the correctness of the boot software that will run in the Instrument Control Unit (ICU) of the Energetic Particle Detector (EPD) on-board Solar Orbiter. The ICU boot software is a critical part of the project, so its verification should be addressed at an early development stage: any test case missed in this process may affect the quality of the overall on-board software. According to the European Cooperation for Space Standardization (ECSS) standards, testing this kind of critical software must cover 100% of the source code statements and decision paths. This leads to the complete testing of the fault tolerance and recovery mechanisms that have to resolve every possible memory corruption or communication error brought about by the space environment. The introduced procedure enables fault injection from the beginning of the development process and makes it possible to fulfill the exacting code coverage demands on the boot software.
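
    As a hedged sketch of unit-level fault injection (the EPD project's real harness runs against flight code, not this toy), a mocked memory read can be made to fail so that the recovery branch is exercised, which is what full decision coverage demands.

      import unittest
      from unittest import mock

      def load_boot_image(read_word):
          data = read_word(0x0000)
          if data is None:              # injected communication/memory fault
              return "FALLBACK_IMAGE"
          return "PRIMARY_IMAGE"

      class BootFaultInjectionTest(unittest.TestCase):
          def test_recovery_path_on_read_fault(self):
              faulty_read = mock.Mock(return_value=None)   # inject the fault
              self.assertEqual(load_boot_image(faulty_read), "FALLBACK_IMAGE")

          def test_nominal_path(self):
              self.assertEqual(load_boot_image(lambda addr: 0xDEAD), "PRIMARY_IMAGE")

      if __name__ == "__main__":
          unittest.main()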

  10. A redundancy-removing feature selection algorithm for nominal data

    Directory of Open Access Journals (Sweden)

    Zhihua Li

    2015-10-01

    No order correlation or similarity metric exists in nominal data, and there will always be more redundancy in a nominal dataset, which means that an efficient mutual information-based nominal-data feature selection method is relatively difficult to find. In this paper, a nominal-data feature selection method based on mutual information without data transformation, called the redundancy-removing more-relevance-less-redundancy algorithm, is proposed. By forming several new information-related definitions and the corresponding computational methods, the proposed method can compute the information-related amount of nominal data directly. Furthermore, by creating a new evaluation function that considers both relevance and redundancy globally, the new feature selection method can evaluate the importance of each nominal-data feature. Although the presented feature selection method takes commonly used MIFS-like forms, it is capable of handling high-dimensional datasets without expensive computations. We perform extensive experimental comparisons of the proposed algorithm and other methods using three benchmark nominal datasets with two different classifiers. The experimental results demonstrate the average advantage of the presented algorithm over the well-known NMIFS algorithm in terms of feature selection and classification accuracy, which indicates that the proposed method has promising performance.
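
    In the spirit of such MIFS-like methods (a simplification of mine, not the paper's exact evaluation function), a greedy selector can score each candidate nominal feature by its mutual information with the class minus its average mutual information with already-selected features:

      import numpy as np
      from sklearn.metrics import mutual_info_score

      def select_features(X, y, k):
          """Greedy 'more relevance, less redundancy' selection on nominal data."""
          remaining = list(range(X.shape[1]))
          selected = []
          while remaining and len(selected) < k:
              def score(j):
                  relevance = mutual_info_score(y, X[:, j])
                  redundancy = (np.mean([mutual_info_score(X[:, s], X[:, j])
                                         for s in selected]) if selected else 0.0)
                  return relevance - redundancy
              best = max(remaining, key=score)
              selected.append(best)
              remaining.remove(best)
          return selected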

  11. Learning contrast-invariant cancellation of redundant signals in neural systems.

    Directory of Open Access Journals (Sweden)

    Jorge F Mejias

    Cancellation of redundant information is a highly desirable feature of sensory systems, since it would potentially lead to a more efficient detection of novel information. However, biologically plausible mechanisms responsible for such selective cancellation, and especially those robust to realistic variations in the intensity of the redundant signals, are mostly unknown. In this work, we study, via in vivo experimental recordings and computational models, the behavior of a cerebellar-like circuit in the weakly electric fish which is known to perform cancellation of redundant stimuli. We experimentally observe contrast invariance in the cancellation of spatially and temporally redundant stimuli in such a system. Our model, which incorporates heterogeneously-delayed feedback, bursting dynamics and burst-induced STDP, is in agreement with our in vivo observations. In addition, the model gives insight into the activity of granule cells and parallel fibers involved in the feedback pathway, and provides a strong prediction on the parallel fiber potentiation time scale. Finally, our model predicts the existence of an optimal learning contrast around 15% contrast levels, which are commonly experienced by interacting fish.

  12. Does functional redundancy stabilize fish communities?

    DEFF Research Database (Denmark)

    Rice, Jake; Daan, Niels; Gislason, Henrik

    2012-01-01

    Functional redundancy is a community property thought to contribute to ecosystem resilience. It is argued that trophic (or other) functional groups with more species have more linkages and opportunities to buffer variation in abundance of individual species. We explored this concept with a 30-year time-series of data on 83 species sampled in the International Bottom Trawl Survey. Our results were consistent with the hypothesis that functional redundancy leads to more stable (and by inference more resilient) communities. Over the time-series, trophic groups (assigned by diet, size (Lmax) group, or both factors) with more species had lower coefficients of variation (CVs) in abundance and biomass than did trophic groups with fewer species. These findings are also consistent with Bernoulli's Law of Large Numbers, a rule that does not require complex ecological and evolutionary processes to produce…

  13. DYNAMIC SOFTWARE TESTING MODELS WITH PROBABILISTIC PARAMETERS FOR FAULT DETECTION AND ERLANG DISTRIBUTION FOR FAULT RESOLUTION DURATION

    Directory of Open Access Journals (Sweden)

    A. D. Khomonenko

    2016-07-01

    Subject of Research. Software reliability and test planning models are studied, taking into account the probabilistic nature of error detection and discovery. Modeling of software testing makes it possible to plan resources and final quality at early stages of project execution. Methods. Two dynamic models of testing processes (strategies) are suggested, using an error detection probability for each software module. The Erlang distribution is used to approximate arbitrary distributions of fault resolution duration. The exponential distribution is used to approximate fault detection. For each strategy, modified labeled graphs are built, along with differential equation systems and their numerical solutions. The latter make it possible to compute probabilistic characteristics of the test processes and states: state probabilities, distribution functions for fault detection and elimination, mathematical expectations of random variables, and the number of detected or fixed errors. Evaluation of Results. Probabilistic characteristics for software development projects were calculated using the suggested models. The strategies were compared by their quality indexes. The debugging time required to achieve specified quality goals was calculated. The calculation results are used for time and resource planning of new projects. Practical Relevance. The proposed models make it possible to use reliability estimates for each individual module. The Erlang approximation removes restrictions on the use of arbitrary time distributions for fault resolution duration. It improves the accuracy of software test process modeling and helps to take into account the viability (power) of the tests. With these models we can search for ways to improve software reliability by generating tests which detect errors with the highest probability.
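
    The distributional assumptions are easy to reproduce numerically: an Erlang distribution is a gamma distribution with integer shape, so fault-resolution durations can be sampled alongside exponential detection times. The Monte-Carlo sketch below uses invented parameters and assumes faults are handled sequentially.

      import numpy as np

      rng = np.random.default_rng(0)
      n_faults, k, lam_detect, lam_fix = 20, 3, 0.8, 0.5   # Erlang shape k = 3

      def total_debug_time():
          detect = rng.exponential(1 / lam_detect, n_faults)
          fix = rng.gamma(shape=k, scale=1 / lam_fix, size=n_faults)
          return float(np.sum(detect + fix))

      samples = np.array([total_debug_time() for _ in range(10_000)])
      print("mean debugging time:", samples.mean())
      print("P(done within 200 time units):", (samples <= 200).mean())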

  14. Automatically generated acceptance test: A software reliability experiment

    Science.gov (United States)

    Protzel, Peter W.

    1988-01-01

    This study presents results of a software reliability experiment investigating the feasibility of a new error detection method. The method can be used as an acceptance test and is solely based on empirical data about the behavior of internal states of a program. The experimental design uses the existing environment of a multi-version experiment previously conducted at the NASA Langley Research Center, in which the launch interceptor problem is used as a model. This allows the controlled experimental investigation of versions with well-known single and multiple faults, and the availability of an oracle permits the determination of the error detection performance of the test. Fault interaction phenomena are observed that have an amplifying effect on the number of error occurrences. Preliminary results indicate that all faults examined so far are detected by the acceptance test. This shows promise for further investigations, and for the employment of this test method on other applications.

  15. LHCb - Automated Testing Infrastructure for the Software Framework Gaudi

    CERN Multimedia

    Clemencic, M

    2009-01-01

    An extensive test suite is the first step towards the delivery of robust software, but it is not always easy to implement, especially in projects with many developers. An easy-to-use and flexible infrastructure for writing and executing tests reduces the work each developer has to do to instrument his packages with tests. At the same time, the infrastructure gives the same look and feel to the tests and allows automated execution of the test suite. For Gaudi, we decided to develop the testing infrastructure on top of the free tool QMTest, already used in the LCG Application Area for the routine tests run in the nightly build system. The high flexibility of QMTest allowed us to integrate it in the Gaudi package structure. A specialized test class and some utility functions have been developed to simplify the definition of a test for a Gaudi-based application. Thanks to the testing infrastructure described here, we managed to quickly extend the standard Gaudi test suite and add tests to the main LHCb appli...

  16. Software development for simplified performance tests and weekly performance check in Younggwang NPP Unit 3 and 4

    International Nuclear Information System (INIS)

    Hur, K. Y.; Jang, S. H.; Lee, J. W.; Kim, J. T.; Park, J. C.

    2002-01-01

    This paper covers the current status of turbine cycle performance tests in nuclear power plants and the development of software that addresses some shortcomings of these tests. The software supports simplified performance tests and weekly performance checks in Yonggwang nuclear power plant units 3 and 4. It incorporates the requirements of the efficiency division to ensure consistency with actual performance analysis work and the usability of the collected performance test data. From a survey of working practice, we identified the differences between the embedded performance analysis modules and the actual performance analysis work. The software helps operation and maintenance personnel to reduce their workload, supports trend analysis of essential parameters in the turbine cycle, and provides correction curves for decision-making in their work.

  17. High precision redundant robotic manipulator

    International Nuclear Information System (INIS)

    Young, K.K.D.

    1998-01-01

    A high precision redundant robotic manipulator for overcoming constraints imposed by obstacles or by a highly congested work space is disclosed. One embodiment of the manipulator has four degrees of freedom and another embodiment has seven degrees of freedom. Each of the embodiments utilizes a first selective compliant assembly robot arm (SCARA) configuration to provide high stiffness in the vertical plane and a second SCARA configuration to provide high stiffness in the horizontal plane. The seven-degree-of-freedom embodiment also utilizes kinematic redundancy to provide the capability of avoiding obstacles that lie between the base of the manipulator and the end effector or link of the manipulator. These additional three degrees of freedom are added at the wrist link of the manipulator to provide pitch, yaw and roll. The seven-degree-of-freedom embodiment uses one revolute joint per degree of freedom. For each of the revolute joints, a harmonic gear coupled to an electric motor is introduced, and together with properly designed servo controllers this provides an end point repeatability of less than 10 microns. 3 figs

  18. Analysis of functional redundancies within the Arabidopsis TCP transcription factor family

    NARCIS (Netherlands)

    Danisman, S.; Dijk, van A.D.J.; Bimbo, A.; Wal, van der F.; Hennig, L.; Folter, de S.; Angenent, G.C.; Immink, R.G.H.

    2013-01-01

    Analyses of the functions of TEOSINTE-LIKE1, CYCLOIDEA, and PROLIFERATING CELL FACTOR1 (TCP) transcription factors have been hampered by functional redundancy between their individual members. In general, putative functionally redundant genes are predicted based on sequence similarity and confirmed by…

  19. Features of the Upgraded Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) Software

    Science.gov (United States)

    Mason, Michelle L.; Rufer, Shann J.

    2016-01-01

    The Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) software is used at the NASA Langley Research Center to analyze global aeroheating data on wind tunnel models tested in the Langley Aerothermodynamics Laboratory. One-dimensional, semi-infinite heating data derived from IHEAT are used in the design of thermal protection systems for hypersonic vehicles that are exposed to severe aeroheating loads, such as reentry vehicles during descent and landing procedures. This software program originally was written in the PV-WAVE® programming language to analyze phosphor thermography data from the two-color, relative-intensity system developed at Langley. To increase the efficiency, functionality, and reliability of IHEAT, the program was migrated to MATLAB® syntax and compiled as a stand-alone executable file labeled version 4.0. New features of IHEAT 4.0 include the options to perform diagnostic checks of the accuracy of the acquired data during a wind tunnel test, to extract data along a specified multi-segment line following a feature such as a leading edge or a streamline, and to batch process all of the temporal frame data from a wind tunnel run. Results from IHEAT 4.0 were compared on a pixel level to the output images from the legacy software to validate the program. The absolute differences between the heat transfer data output from the two programs were on the order of 10^-5 to 10^-7. IHEAT 4.0 replaces the PV-WAVE® version as the production software for aeroheating experiments conducted in the hypersonic facilities at NASA Langley.

  20. Fuzzy Cognitive Map for Software Testing Using Artificial Intelligence Techniques

    OpenAIRE

    Larkman , Deane; Mohammadian , Masoud; Balachandran , Bala; Jentzsch , Ric

    2010-01-01

    This paper discusses a framework to assist test managers to evaluate the use of AI techniques as a potential tool in software testing. Fuzzy Cognitive Maps (FCMs) are employed to evaluate the framework and make decision analysis easier. A what-if analysis is presented that explores the general application of the framework. Simulations are performed to show the effectiveness of the proposed method. The framework proposed is innovative and it assists managers in making e...

  1. Deep in Data. Empirical Data Based Software Accuracy Testing Using the Building America Field Data Repository

    Energy Technology Data Exchange (ETDEWEB)

    Neymark, J. [J.Neymark and Associates, Golden, CO (United States); Roberts, D. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-06-01

    This paper describes progress toward developing a usable, standardized, empirical data-based software accuracy test suite using home energy consumption and building description data. Empirical data collected from around the United States have been translated into a uniform Home Performance Extensible Markup Language format that may enable software developers to create translators to their input schemes for efficient access to the data. This could allow for modeling many homes expediently, and thus implementing software accuracy test cases by applying the translated data.

  2. Equivalence of velocity-level and acceleration-level redundancy-resolution of manipulators

    International Nuclear Information System (INIS)

    Cai Binghuang; Zhang Yunong

    2009-01-01

    The equivalence of velocity-level and acceleration-level redundancy resolution of robot manipulators is investigated in this Letter. Theoretical analysis based on gradient-descent method and computer simulations based on PUMA560 robot manipulator both demonstrate the equivalence of redundancy-resolution schemes at different levels.

  3. Chaste: A test-driven approach to software development for biological modelling

    KAUST Repository

    Pitt-Francis, Joe; Pathmanathan, Pras; Bernabeu, Miguel O.; Bordas, Rafel; Cooper, Jonathan; Fletcher, Alexander G.; Mirams, Gary R.; Murray, Philip; Osborne, James M.; Walter, Alex; Chapman, S. Jon; Garny, Alan; van Leeuwen, Ingeborg M.M.; Maini, Philip K.; Rodrí guez, Blanca; Waters, Sarah L.; Whiteley, Jonathan P.; Byrne, Helen M.; Gavaghan, David J.

    2009-01-01

    Chaste ('Cancer, heart and soft-tissue environment') is a software library and a set of test suites for computational simulations in the domain of biology. Current functionality has arisen from modelling in the fields of cancer, cardiac physiology…

  4. Reliability optimization of series-parallel systems with a choice of redundancy strategies using a genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Tavakkoli-Moghaddam, R. [Department of Industrial Engineering, Faculty of Engineering, University of Tehran, P.O. Box 11365/4563, Tehran (Iran, Islamic Republic of); Department of Mechanical Engineering, The University of British Columbia, Vancouver (Canada)], E-mail: tavakoli@ut.ac.ir; Safari, J. [Department of Industrial Engineering, Science and Research Branch, Islamic Azad University, Tehran (Iran, Islamic Republic of)], E-mail: jalalsafari@pideco.com; Sassani, F. [Department of Mechanical Engineering, The University of British Columbia, Vancouver (Canada)], E-mail: sassani@mech.ubc.ca

    2008-04-15

    This paper proposes a genetic algorithm (GA) for a redundancy allocation problem for series-parallel systems in which the redundancy strategy can be chosen for individual subsystems. The majority of solution methods for general redundancy allocation problems assume that the redundancy strategy for each subsystem is predetermined and fixed. In general, active redundancy has received more attention in the past. In practice, however, both active and cold-standby redundancies may be used within a particular system design, and the choice of redundancy strategy becomes an additional decision variable. Thus, the problem is to select the best redundancy strategy, component, and redundancy level for each subsystem in order to maximize the system reliability under system-level constraints. This problem belongs to the NP-hard class, and its complexity makes it difficult to solve optimally with traditional optimization tools. It is demonstrated in this paper that a GA is an efficient method for solving this type of problem. Finally, computational results for a typical scenario are presented and the robustness of the proposed algorithm is discussed.

  5. Reliability optimization of series-parallel systems with a choice of redundancy strategies using a genetic algorithm

    International Nuclear Information System (INIS)

    Tavakkoli-Moghaddam, R.; Safari, J.; Sassani, F.

    2008-01-01

    This paper proposes a genetic algorithm (GA) for a redundancy allocation problem for series-parallel systems in which the redundancy strategy can be chosen for individual subsystems. The majority of solution methods for general redundancy allocation problems assume that the redundancy strategy for each subsystem is predetermined and fixed. In general, active redundancy has received more attention in the past. In practice, however, both active and cold-standby redundancies may be used within a particular system design, and the choice of redundancy strategy becomes an additional decision variable. Thus, the problem is to select the best redundancy strategy, component, and redundancy level for each subsystem in order to maximize the system reliability under system-level constraints. This problem belongs to the NP-hard class, and its complexity makes it difficult to solve optimally with traditional optimization tools. It is demonstrated in this paper that a GA is an efficient method for solving this type of problem. Finally, computational results for a typical scenario are presented and the robustness of the proposed algorithm is discussed.
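
    A heavily simplified sketch of the chromosome and fitness such a GA might use, with all numbers invented: each subsystem gene carries a redundancy level and a strategy, active redundancy is scored as 1-(1-r)^n, and cold standby with perfect switching as a Poisson sum.

      import math, random

      R_COMP, MAX_LEVEL, N_SUB, COST_LIMIT = 0.9, 5, 4, 14
      lam = -math.log(R_COMP)          # component hazard over the mission time

      def subsystem_reliability(level, strategy):
          if strategy == "active":
              return 1 - (1 - R_COMP) ** level
          # cold standby, perfect switching: survive if < level failures occur
          return sum(math.exp(-lam) * lam ** j / math.factorial(j)
                     for j in range(level))

      def fitness(chrom):
          if sum(level for level, _ in chrom) > COST_LIMIT:   # crude constraint
              return 0.0
          return math.prod(subsystem_reliability(l, s) for l, s in chrom)

      def random_gene():
          return (random.randint(1, MAX_LEVEL), random.choice(["active", "standby"]))

      def mutant(parent):
          child = list(parent)
          child[random.randrange(N_SUB)] = random_gene()
          return child

      pop = [[random_gene() for _ in range(N_SUB)] for _ in range(50)]
      for _ in range(200):             # selection + mutation only, for brevity
          pop.sort(key=fitness, reverse=True)
          elites = pop[:25]
          pop = elites + [mutant(random.choice(elites)) for _ in range(25)]
      best = max(pop, key=fitness)
      print(best, fitness(best))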

  6. The Public Collaboration Lab—Infrastructuring Redundancy with Communities-in-Place

    Directory of Open Access Journals (Sweden)

    Adam Thorpe

    In this article we share an example of challenge-driven learning in design education and consider the contribution of such approaches to the weaving of communities-in-place. We describe the research and practice of the Public Collaboration Lab (PCL), a prototype public social innovation lab developed and tested via a collaborative action research partnership between a London borough council and an art and design university. We make the case that this collaboration is an effective means of bringing capacity in design to public service innovation, granting the redundancy of resources necessary for the experimentation, reflection, and learning that leads to innovation—particularly at a time of financial austerity. We summarize three collaborative design experiments delivered by local government officers working with student designers and residents, supported by design researchers and tutors. We identify particular qualities of participatory and collaborative design that foster the construction of meaningful connections among participants in the design process—connections that have the potential to catalyze or strengthen the relationships, experiences, and understandings that enrich communities-in-place, and infrastructure community resilience in the process. Keywords: Participatory design, Public social innovation, Redundancy, Infrastructuring, Local government

  7. Resolving Actuator Redundancy - Control Allocation vs. Linear Quadratic Control

    OpenAIRE

    Härkegård, Ola

    2004-01-01

    When designing control laws for systems with more inputs than controlled variables, one issue to consider is how to deal with actuator redundancy. Two tools for distributing the control effort among a redundant set of actuators are control allocation and linear quadratic control design. In this paper, we investigate the relationship between these two design tools when a quadratic performance index is used for control allocation. We show that for a particular class of linear systems, they give...

  8. System testing software deployments using Docker and Kubernetes in gitlab CI: EOS + CTA use case

    CERN Document Server

    CERN. Geneva

    2017-01-01

    `CTA` needs to be seamlessly integrated with `EOS`, which has become the de facto disk storage system at CERN. `CTA` and `EOS` integration requires parallel development of features in both software stacks that need to be **synchronized and systematically tested** on a specific distributed development infrastructure for each commit in the code base. This presentation describes the full gitlab continuous integration workflow that builds, tests, deploys and runs system tests of the full software stack in docker containers on our specific kubernetes infrastructure.

  9. Repetitive motion planning and control of redundant robot manipulators

    CERN Document Server

    Zhang, Yunong

    2013-01-01

    Repetitive Motion Planning and Control of Redundant Robot Manipulators presents four typical motion planning schemes based on optimization techniques, including the fundamental RMP scheme and its extensions. These schemes are unified as quadratic programs (QPs), which are solved by neural networks or numerical algorithms. The RMP schemes are demonstrated effectively by the simulation results based on various robotic models; the experiments applying the fundamental RMP scheme to a physical robot manipulator are also presented. As the schemes and the corresponding solvers presented in the book have solved the non-repetitive motion problems existing in redundant robot manipulators, it is of particular use in applying theoretical research based on the quadratic program for redundant robot manipulators in industrial situations. This book will be a valuable reference work for engineers, researchers, advanced undergraduate and graduate students in robotics fields. Yunong Zhang is a professor at The School of Informa...
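
    For orientation, the classical drift-free resolution that such repetitive-motion-planning schemes refine can be written in one line of linear algebra: a pseudoinverse tracking term plus a null-space term that pulls the joints back toward their initial values, so closed end-effector paths yield closed joint paths. A sketch with my own notation, not the book's QP formulation:

      import numpy as np

      def rmp_step(J, xdot, q, q0, k=1.0):
          """Joint velocities: pseudoinverse tracking + null-space homing term."""
          J_pinv = np.linalg.pinv(J)
          null_proj = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector
          return J_pinv @ xdot + null_proj @ (k * (q0 - q))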

  10. A Comparison of Routing Protocol for WSNs: Redundancy Based Approach

    Directory of Open Access Journals (Sweden)

    Anand Prakash

    2014-03-01

    Wireless Sensor Networks (WSNs) with their dynamic applications have gained tremendous attention from researchers. Constant monitoring of critical situations has attracted researchers to utilize WSNs on vast platforms. The main focus in WSNs is to enhance network localization as much as possible, for efficient and optimal utilization of resources. Different approaches based upon redundancy have been proposed for optimum functionality. Localization is always related to the redundancy of sensor nodes deployed in remote areas for constant and fault-tolerant monitoring. In this work, we propose a comparison of classic flooding and the gossip protocol for homogeneous networks, which enhances stability and throughput quite significantly.

  11. Coupling ant colony and the degraded ceiling algorithm for the redundancy allocation problem of series-parallel systems

    International Nuclear Information System (INIS)

    Nahas, Nabil; Nourelfath, Mustapha; Ait-Kadi, Daoud

    2007-01-01

    The redundancy allocation problem (RAP) is a well known NP-hard problem which involves the selection of elements and redundancy levels to maximize system reliability given various system-level constraints. As telecommunications and internet protocol networks, manufacturing and power systems are becoming more and more complex, while requiring short development schedules and very high reliability, it is becoming increasingly important to develop efficient solutions to the RAP. This paper presents an efficient algorithm to solve this reliability optimization problem. The design of the heuristic approach is inspired by the ant colony meta-heuristic optimization method and the degraded ceiling local search technique. Our hybridization of the ant colony meta-heuristic with the degraded ceiling performs well and is competitive with the best-known heuristics for redundancy allocation. Numerical results for the 33 test problems from previous research are reported and compared. The solutions found by our approach are all better than or on par with the well-known best solutions.

  12. Minister wants age balance to play greater role in redundancy selection

    NARCIS (Netherlands)

    Grünell, M.

    2004-01-01

    In May 2004, the Dutch Minister of Social Affairs proposed changes to the statutory rules on selection for redundancy, with less emphasis on the last in, first out seniority-based principle and a greater focus on distributing the redundancies between employees of different ages. The social partners…

  13. Intraguild predation reduces redundancy of predator species in multiple predator assemblage.

    Science.gov (United States)

    Griffen, Blaine D; Byers, James E

    2006-07-01

    1. Interference between predator species frequently decreases predation rates, lowering the risk of predation for shared prey. However, such interference can also occur between conspecific predators. 2. Therefore, to understand the importance of predator biodiversity and the degree that predator species can be considered functionally interchangeable, we determined the degree of additivity and redundancy of predators in multiple- and single-species combinations. 3. We show that interference between two invasive species of predatory crabs, Carcinus maenas and Hemigrapsus sanguineus, reduced the risk of predation for shared amphipod prey, and had redundant per capita effects in most multiple- and single-species predator combinations. 4. However, when predator combinations with the potential for intraguild predation were examined, predator interference increased and predator redundancy decreased. 5. Our study indicates that trophic structure is important in determining how the effects of predator species combine and demonstrates the utility of determining the redundancy, as well as the additivity, of multiple predator species.

  14. Efficient exact optimization of multi-objective redundancy allocation problems in series-parallel systems

    International Nuclear Information System (INIS)

    Cao, Dingzhou; Murat, Alper; Chinnam, Ratna Babu

    2013-01-01

    This paper proposes a decomposition-based approach to exactly solve the multi-objective Redundancy Allocation Problem for series-parallel systems. Redundancy allocation problem is a form of reliability optimization and has been the subject of many prior studies. The majority of these earlier studies treat redundancy allocation problem as a single objective problem maximizing the system reliability or minimizing the cost given certain constraints. The few studies that treated redundancy allocation problem as a multi-objective optimization problem relied on meta-heuristic solution approaches. However, meta-heuristic approaches have significant limitations: they do not guarantee that Pareto points are optimal and, more importantly, they may not identify all the Pareto-optimal points. In this paper, we treat redundancy allocation problem as a multi-objective problem, as is typical in practice. We decompose the original problem into several multi-objective sub-problems, efficiently and exactly solve sub-problems, and then systematically combine the solutions. The decomposition-based approach can efficiently generate all the Pareto-optimal solutions for redundancy allocation problems. Experimental results demonstrate the effectiveness and efficiency of the proposed method over meta-heuristic methods on a numerical example taken from the literature.

  15. Accuracy Test of Software Architecture Compliance Checking Tools: Test Instruction

    NARCIS (Netherlands)

    Prof.dr. S. Brinkkemper; Dr. Leo Pruijt; C. Köppe; J.M.E.M. van der Werf

    2015-01-01

    Software Architecture Compliance Checking (SACC) is an approach to verify conformance of implemented program code to high-level models of architectural design. Static SACC focuses on the modular software architecture and on the existence of rule-violating dependencies…

  16. Reliability Quantification Method for Safety Critical Software Based on a Finite Test Set

    International Nuclear Information System (INIS)

    Shin, Sung Min; Kim, Hee Eun; Kang, Hyun Gook; Lee, Seung Jun

    2014-01-01

    Software inside a digitalized system plays a very important role because it may cause irreversible consequences and affect the whole system as a common cause failure. However, test-based reliability quantification methods for some safety-critical software have limitations caused by difficulties in developing input sets in the form of trajectories, which are series of successive values of variables. To address these limitations, this study proposes another method which conducts the test using combinations of single values of variables. To substitute combinations of variable values for the trajectory form of input, the possible range of each variable should be identified. For this purpose, the assigned range of each variable, the logical relations between variables, the plant dynamics under certain situations, and the characteristics of information acquisition by the digital device are considered. The feasibility of the proposed method was confirmed through an application to the Reactor Protection System (RPS) software trip logic.
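
    The combination-based input construction can be pictured as a cross product over the identified admissible values of each variable, with impossible combinations filtered out by the logical relations; the variables and ranges below are invented for illustration.

      from itertools import product

      pressure = [0.0, 7.5, 15.0]        # MPa, assigned range discretised
      temperature = [200, 300]           # deg C
      pump_on = [False, True]

      test_set = [
          (p, t, pump)
          for p, t, pump in product(pressure, temperature, pump_on)
          if not (pump and p == 0.0)     # drop physically impossible combinations
      ]
      print(len(test_set), "test cases")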

  17. An integrated software testing framework for FGA-based controllers in nuclear power plants

    International Nuclear Information System (INIS)

    Kim, Jae Yeob; Kim, Eun Sub; Yoo, Jun Beom; Lee, Young Jun; Choi, Jong Gyun

    2016-01-01

    Field-programmable gate arrays (FPGAs) have received much attention from the nuclear industry as an alternative platform to programmable logic controllers for digital instrumentation and control. The software aspect of FPGA development consists of several steps of synthesis and refinement, and also requires verification activities, such as simulations that are performed individually at each step. This study proposed an integrated software-testing framework for simulating all artifacts of the FPGA software development simultaneously and evaluating whether all artifacts work correctly using common oracle programs. This method also generates a massive number of meaningful simulation scenarios that reflect reactor shutdown logics. The experiment, which was performed on two FPGA software implementations, showed that it can dramatically save both time and costs.

  18. AirSTAR Hardware and Software Design for Beyond Visual Range Flight Research

    Science.gov (United States)

    Laughter, Sean; Cox, David

    2016-01-01

    The National Aeronautics and Space Administration (NASA) Airborne Subscale Transport Aircraft Research (AirSTAR) Unmanned Aerial System (UAS) is a facility developed to study the flight dynamics of vehicles in emergency conditions, in support of aviation safety research. The system was upgraded to have its operational range significantly expanded, going beyond the line of sight of a ground-based pilot. A redesign of the airborne flight hardware was undertaken, as well as significant changes to the software base, in order to provide appropriate autonomous behavior in response to a number of potential failures and hazards. Ground hardware and system monitors were also upgraded to include redundant communication links, including ADS-B based position displays and an independent flight termination system. The design included both custom and commercially available avionics, combined to allow flexibility in flight experiment design while still benefiting from tested configurations in reversionary flight modes. A similar hierarchy was employed in the software architecture, to allow research codes to be tested, with a fallback to more thoroughly validated flight controls. As a remotely piloted facility, ground systems were also developed to ensure the flight modes and system state were communicated to ground operations personnel in real-time. Presented in this paper is a general overview of the concept of operations for beyond visual range flight, and a detailed review of the airborne hardware and software design. This discussion is held in the context of the safety and procedural requirements that drove many of the design decisions for the AirSTAR UAS Beyond Visual Range capability.

  19. Statistical test data selection for reliability evaluation of process computer software

    International Nuclear Information System (INIS)

    Volkmann, K.P.; Hoermann, H.; Ehrenberger, W.

    1976-01-01

    The paper presents a concept for converting knowledge about the characteristics of process states into practicable procedures for the statistical selection of test cases in testing process computer software. Process states are defined as vectors whose components consist of values of input variables lying in discrete positions or within given limits. Two approaches for test data selection, based on knowledge about cases of demand, are outlined, referring to a purely probabilistic method and to the mathematics of stratified sampling.
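
    A toy version of the stratified-sampling approach (strata, demand probabilities and value ranges are all invented): partition the input space into strata of process states and draw test cases from each stratum in proportion to its probability of demand.

      import random

      strata = {   # stratum name -> (demand probability, input sampler)
          "normal":  (0.90, lambda: random.uniform(0.00, 0.80)),
          "warning": (0.08, lambda: random.uniform(0.80, 0.95)),
          "trip":    (0.02, lambda: random.uniform(0.95, 1.00)),
      }

      def draw_test_cases(n):
          cases = []
          for name, (p, sampler) in strata.items():
              cases += [(name, sampler()) for _ in range(round(p * n))]
          random.shuffle(cases)
          return cases

      print(draw_test_cases(100)[:5])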

  20. A novel redundant INS based on triple rotary inertial measurement units

    Science.gov (United States)

    Chen, Gang; Li, Kui; Wang, Wei; Li, Peng

    2016-10-01

    Accuracy and reliability are two key performance attributes of an inertial navigation system (INS). Rotation modulation (RM) can attenuate the bias of inertial sensors and make it possible for an INS to achieve higher navigation accuracy with lower-class sensors. Therefore, the conflict between the accuracy and cost of INS can be eased. Traditional system redundancy and recently researched sensor redundancy are two primary means to improve the reliability of INS. However, how to make the best use of the redundant information from redundant sensors has not been studied adequately, especially in rotational INS. This paper proposes a novel triple rotary unit strapdown inertial navigation system (TRUSINS), which combines RM and sensor redundancy design to enhance the accuracy and reliability of rotational INS. Each rotary unit independently rotates to modulate the errors of two gyros and two accelerometers. Three units can provide double sets of measurements along all three axes of the body frame to constitute a couple of INSs which make TRUSINS redundant. Experiments and simulations based on a prototype which is made up of six fiber-optic gyros with drift stability of 0.05° h-1 show that TRUSINS can achieve positioning accuracy of about 0.256 n mile h-1, which is ten times better than that of a normal non-rotational INS with the same level inertial sensors. The theoretical analysis and the experimental results show that due to the advantage of the innovative structure, the designed fault detection and isolation (FDI) strategy can tolerate six sensor faults at most, and is proved to be effective and practical. Therefore, TRUSINS is particularly suitable and highly beneficial for the applications where high accuracy and high reliability is required.

  1. Objective past of a quantum universe: Redundant records of consistent histories

    Science.gov (United States)

    Riedel, C. Jess; Zurek, Wojciech H.; Zwolak, Michael

    2016-03-01

    Motivated by the advances of quantum Darwinism and recognizing the role played by redundancy in identifying the small subset of quantum states with resilience characteristic of objective classical reality, we explore the implications of redundant records for consistent histories. The consistent histories formalism is a tool for describing sequences of events taking place in an evolving closed quantum system. A set of histories is consistent when one can reason about them using Boolean logic, i.e., when probabilities of sequences of events that define histories are additive. However, the vast majority of the sets of histories that are merely consistent are flagrantly nonclassical in other respects. This embarras de richesses (known as the set selection problem) suggests that one must go beyond consistency to identify how the classical past arises in our quantum universe. The key intuition we follow is that the records of events that define the familiar objective past are inscribed in many distinct systems, e.g., subsystems of the environment, and are accessible locally in space and time to observers. We identify histories that are not just consistent but redundantly consistent using the partial-trace condition introduced by Finkelstein as a bridge between histories and decoherence. The existence of redundant records is a sufficient condition for redundant consistency. It selects, from the multitude of the alternative sets of consistent histories, a small subset endowed with redundant records characteristic of the objective classical past. The information about an objective history of the past is then simultaneously within reach of many, who can independently reconstruct it and arrive at compatible conclusions in the present.

  2. Development of remote control software for multiformat test signal generator

    Directory of Open Access Journals (Sweden)

    Gao Yang

    2017-01-01

    The multiformat test signal generator discussed in this paper is the video signal generator TG8000, produced by Tektronix. This paper introduces the remote-control capability of the signal generator: how to connect a computer to the instrument and how to control it remotely. The computer connects to the instrument through the 10/100/1000 BASE-T port on the rear panel of the TG8000, and a program then transmits SCPI (Standard Commands for Programmable Instruments) commands to control the TG8000. The application runs on the Windows operating system; the programming language is C#, the development environment is Microsoft Visual Studio 2010, and communication uses TCP/IP sockets. The remote-control method follows the TGSetup application developed by Tektronix. This paper includes a brief summary of the basic principle, describes the development of the remote control software and how to use it, and concludes with the advantages of this software compared with TGSetup.
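
    The core of such SCPI-over-TCP control fits in a few lines; the sketch below is in Python rather than the paper's C#, and the host address and port are assumptions (many instruments expose a raw SCPI socket, often on port 5000). Only "*IDN?" is a generic IEEE 488.2 query; instrument-specific commands must come from the TG8000 manual.

      import socket

      def scpi_query(host, command, port=5000):
          """Send one SCPI command and return the instrument's reply."""
          with socket.create_connection((host, port), timeout=5) as sock:
              sock.sendall((command + "\n").encode("ascii"))
              return sock.recv(4096).decode("ascii").strip()

      print(scpi_query("192.168.1.50", "*IDN?"))   # instrument identification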

  3. Earth Observing System (EOS) Advanced Microwave Sounding Unit-A2 (EOS/AMSU-A): EOS Software Test Report

    Science.gov (United States)

    1998-01-01

    This document describes the results of the formal qualification test (FQT)/ Demonstration conducted on September 10, and 14, 1998 for the EOS AMSU-A2 instrument. The purpose of the report is to relate the results of the functional performance and interface tests of the software. This is the final submittal of the EOS/AMSU-A Software Test report.

  4. Software design for the Tritium System Test Assembly

    International Nuclear Information System (INIS)

    Claborn, G.W.; Heaphy, R.T.; Lewis, P.S.; Mann, L.W.; Nielson, C.W.

    1983-01-01

    The control system for the Tritium Systems Test Assembly (TSTA) must execute complicated algorithms for the control of several sophisticated subsystems. It must implement this control with requirements for easy modifiability, for high availability, and provide stringent protection for personnel and the environment. Software techniques used to deal with these requirements are described, including modularization based on the structure of the physical systems, a two-level hierarchy of concurrency, a dynamically modifiable man-machine interface, and a specification and documentation language based on a computerized form of structured flowcharts

  5. Software design for the Tritium Systems Test Assembly

    International Nuclear Information System (INIS)

    Claborn, G.W.; Heaphy, R.T.

    1983-01-01

    The control system for the Tritium Systems Test Assembly (TSTA) must execute complicated algorithms for the control of several sophisticated subsystems. It must implement this control with requirements for easy modifiability, for high availability, and provide stringent protection for personnel and the environment. Software techniques used to deal with these requirements are described, including modularization based on the structure of the physical systems, a two-level hierarchy of concurrency, a dynamically modifiable man-machine interface, and a specification and documentation language based on a computerized form of structured flowcharts.

  6. Control and test software for IRAM WideX correlator

    International Nuclear Information System (INIS)

    Blanchet, S.; Broguiere, D.; Chavatte, P.; Morel, F.; Perrigouard, A.; Torres, M.

    2012-01-01

    IRAM is an international research institute for radio astronomy. It has designed a new correlator called WideX for the Plateau de Bure interferometer (an array of six 15-meter telescopes) in the French Alps. The device entered official service in February 2010. This correlator must be driven in real time at 32 Hz, both for sending parameters and for data acquisition. With 3.67 million channels distributed over 1792 dedicated chips producing a 1.87 Gbit/s output data rate, the data acquisition and processing, as well as the automatic hardware-failure detection, are major challenges for the software. This article presents the software that has been developed to drive and test the correlator. In particular, it presents an innovative use of a high-speed optical link, initially developed for the CERN ALICE experiment, combined with real-time Linux (RTAI) to achieve our goals. (authors)

  7. An efficient heuristic versus a robust hybrid meta-heuristic for general framework of serial-parallel redundancy problem

    International Nuclear Information System (INIS)

    Sadjadi, Seyed Jafar; Soltani, R.

    2009-01-01

    We present a heuristic approach to solve a general framework of the serial-parallel redundancy problem, where the reliability of the system is maximized subject to some general linear constraints. The redundancy problem is generally considered NP-hard, and the optimal solution is not normally available. Therefore, to evaluate the performance of the proposed method, a hybrid genetic algorithm is also implemented, whose parameters are calibrated via Taguchi's robust design method. Various test problems are then solved, and the computational results indicate that the proposed heuristic approach provides promising reliability values that are fairly close to the optimal solutions, in a reasonable amount of time.

  8. Agile Acceptance Test-Driven Development of Clinical Decision Support Advisories: Feasibility of Using Open Source Software.

    Science.gov (United States)

    Basit, Mujeeb A; Baldwin, Krystal L; Kannan, Vaishnavi; Flahaven, Emily L; Parks, Cassandra J; Ott, Jason M; Willett, Duwayne L

    2018-04-13

    Moving to electronic health records (EHRs) confers substantial benefits but risks unintended consequences. Modern EHRs consist of complex software code with extensive local configurability options, which can introduce defects. Defects in clinical decision support (CDS) tools are surprisingly common. Feasible approaches to prevent and detect defects in EHR configuration, including CDS tools, are needed. In complex software systems, use of test-driven development and automated regression testing promotes reliability. Test-driven development encourages modular, testable design and expanding regression test coverage. Automated regression test suites improve software quality, providing a "safety net" for future software modifications. Each automated acceptance test serves multiple purposes, as requirements (prior to build), acceptance testing (on completion of build), regression testing (once live), and "living" design documentation. Rapid-cycle development or "agile" methods are being successfully applied to CDS development. The agile practice of automated test-driven development is not widely adopted, perhaps because most EHR software code is vendor-developed. However, key CDS advisory configuration design decisions and rules stored in the EHR may prove amenable to automated testing as "executable requirements." We aimed to establish the feasibility of acceptance test-driven development of clinical decision support advisories in a commonly used EHR, using an open source automated acceptance testing framework (FitNesse). Acceptance tests were initially constructed as spreadsheet tables to facilitate clinical review. Each table specified one aspect of the CDS advisory's expected behavior. Table contents were then imported into a test suite in FitNesse, which queried the EHR database to automate testing. Tests and corresponding CDS configuration were migrated together from the development environment to production, with tests becoming part of the production regression test…
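
    An illustrative analogue of such executable requirements (not the authors' FitNesse suite) is a decision table that clinicians can review as a spreadsheet and that also runs as an automated regression test; the advisory rule and thresholds below are invented.

      import unittest

      # age, creatinine (mg/dL), on_nephrotoxic_drug -> advisory expected?
      DECISION_TABLE = [
          (70, 2.1, True,  True),
          (70, 2.1, False, False),
          (30, 0.9, True,  False),
      ]

      def advisory_fires(age, creatinine, on_drug):
          return on_drug and (age >= 65 or creatinine > 1.5)

      class CdsAdvisoryAcceptanceTest(unittest.TestCase):
          def test_decision_table(self):
              for age, cr, drug, expected in DECISION_TABLE:
                  with self.subTest(age=age, creatinine=cr, on_drug=drug):
                      self.assertEqual(advisory_fires(age, cr, drug), expected)

      if __name__ == "__main__":
          unittest.main()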

  9. Software for Preprocessing Data from Rocket-Engine Tests

    Science.gov (United States)

    Cheng, Chiu-Fu

    2004-01-01

    Three computer programs have been written to preprocess digitized outputs of sensors during rocket-engine tests at Stennis Space Center (SSC). The programs apply exclusively to the SSC E test-stand complex and utilize the SSC file format. The programs are the following: Engineering Units Generator (EUGEN) converts sensor-output-measurement data to engineering units. The inputs to EUGEN are raw binary test-data files, which include the voltage data, a list identifying the data channels, and time codes. EUGEN effects conversion by use of a file that contains calibration coefficients for each channel. QUICKLOOK enables immediate viewing of a few selected channels of data, in contradistinction to viewing only after post-test processing (which can take 30 minutes to several hours depending on the number of channels and other test parameters) of data from all channels. QUICKLOOK converts the selected data into a form in which they can be plotted in engineering units by use of Winplot (a free graphing program written by Rick Paris). EUPLOT provides a quick means for looking at data files generated by EUGEN without the necessity of relying on the PV-WAVE based plotting software.

  10. Deep in Data: Empirical Data Based Software Accuracy Testing Using the Building America Field Data Repository: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Neymark, J.; Roberts, D.

    2013-06-01

    An opportunity is available for using home energy consumption and building description data to develop a standardized accuracy test for residential energy analysis tools. That is, to test the ability of uncalibrated simulations to match real utility bills. Empirical data collected from around the United States have been translated into a uniform Home Performance Extensible Markup Language format that may enable software developers to create translators to their input schemes for efficient access to the data. This may facilitate the possibility of modeling many homes expediently, and thus implementing software accuracy test cases by applying the translated data. This paper describes progress toward, and issues related to, developing a usable, standardized, empirical data-based software accuracy test suite.
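
    One plausible shape for such an accuracy test is to score an uncalibrated simulation against metered consumption with normalized error statistics. The sketch below uses NMBE and CV(RMSE), two measures commonly applied to building energy models (e.g., in ASHRAE Guideline 14); the report itself does not fix the metrics, and the monthly values are invented.

        def nmbe(measured, simulated):
            """Normalized mean bias error, in percent of the measured mean."""
            n, mean = len(measured), sum(measured) / len(measured)
            return 100.0 * sum(s - m for m, s in zip(measured, simulated)) / (n * mean)

        def cv_rmse(measured, simulated):
            """Coefficient of variation of the root-mean-square error, in percent."""
            n, mean = len(measured), sum(measured) / len(measured)
            mse = sum((s - m) ** 2 for m, s in zip(measured, simulated)) / n
            return 100.0 * mse ** 0.5 / mean

        metered   = [910, 830, 760, 600, 540, 620]   # kWh/month, invented utility data
        simulated = [980, 800, 720, 640, 500, 660]   # uncalibrated simulation output
        print("NMBE = %+.1f%%  CV(RMSE) = %.1f%%" % (nmbe(metered, simulated),
                                                     cv_rmse(metered, simulated)))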

  11. Software Testing and its Relationship to the Context of the Product

    Directory of Open Access Journals (Sweden)

    Giordano Soares

    2011-12-01

    Full Text Available Test teams today have to deal with the growing complexity of the systems under test, while schedules, budgets and team sizes are not scaled up in the same proportion. In addition, the entire test plan cannot be automated, and due to time and budget constraints only a small fraction of all use cases can be covered in manual tests. For this reason, testers must be familiar with the context of the product under test in order to design an effective test plan. This article describes the advantages of testers knowing the domain of the software product, especially in relation to managing complexity and to the problems of selecting and prioritizing tests.

  12. CONFU: Configuration Fuzzing Testing Framework for Software Vulnerability Detection.

    Science.gov (United States)

    Dai, Huning; Murphy, Christian; Kaiser, Gail

    2010-01-01

    Many software security vulnerabilities only reveal themselves under certain conditions, i.e., particular configurations and inputs together with a certain runtime environment. One approach to detecting these vulnerabilities is fuzz testing. However, typical fuzz testing makes no guarantees regarding the syntactic and semantic validity of the input, or of how much of the input space will be explored. To address these problems, we present a new testing methodology called Configuration Fuzzing. Configuration Fuzzing is a technique whereby the configuration of the running application is mutated at certain execution points, in order to check for vulnerabilities that only arise in certain conditions. As the application runs in the deployment environment, this testing technique continuously fuzzes the configuration and checks "security invariants" that, if violated, indicate a vulnerability. We discuss the approach and introduce a prototype framework called ConFu (CONfiguration FUzzing testing framework) for implementation. We also present the results of case studies that demonstrate the approach's feasibility and evaluate its performance.
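
    A minimal sketch of the configuration-fuzzing loop follows; the application configuration, the mutation rules, and the single "security invariant" are hypothetical illustrations, and ConFu's actual instrumentation of execution points in a deployed application is not shown.

        import os.path
        import random

        DEFAULT_CONFIG = {"allow_guest": False, "upload_dir": "/srv/uploads", "max_upload_mb": 10}

        def mutate(config):
            """Fuzz one configuration option, as if at an instrumented execution point."""
            cfg = dict(config)
            key = random.choice(list(cfg))
            if isinstance(cfg[key], bool):
                cfg[key] = not cfg[key]
            elif isinstance(cfg[key], int):
                cfg[key] = random.choice([0, -1, 2 ** 31 - 1])
            else:
                cfg[key] = cfg[key] + "/../.."          # path-traversal style mutation
            return cfg

        def security_invariant_holds(cfg):
            """Invariant: uploads stay under /srv, and guests may never upload."""
            inside = os.path.normpath(cfg["upload_dir"]).startswith("/srv")
            return inside and not (cfg["allow_guest"] and cfg["max_upload_mb"] > 0)

        random.seed(1)
        for _ in range(100):
            candidate = mutate(DEFAULT_CONFIG)
            if not security_invariant_holds(candidate):
                print("vulnerability candidate:", candidate)
                break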

  13. Developmental changes in children’s processing of redundant modifiers in definite object descriptions

    Directory of Open Access Journals (Sweden)

    Ruud Koolen

    2016-12-01

    Full Text Available This paper investigates developmental changes in children’s processing of redundant information in definite object descriptions. In two experiments, children of two age groups (six or seven, and nine or ten years old) were presented with pictures of sweets. In the first experiment (pairwise comparison), two identical sweets were shown, and one of these was described with a redundant modifier. After the description, the children had to indicate the sweet they preferred most in a forced-choice task. In the second experiment (graded rating), only one sweet was shown, which was described with a redundant color modifier in half of the cases (e.g., the blue sweet) and in the other half of the cases simply as the sweet. This time, the children were asked to indicate on a 5-point rating scale to what extent they liked the sweets. In both experiments, the results showed that the younger children had a preference for the sweets described with redundant information, while redundant information did not have an effect on the preferences for the older children. These results imply that children are learning to distinguish between situations in which redundant information carries an implicature and situations in which this is not the case.

  14. The research of the test-class method based on interface object in the software integration test of the large container inspection system

    International Nuclear Information System (INIS)

    Sun Shaohua; Chen Zhiqiang; Zhang Li; Gao Wenhuan; Kang Kejun

    2000-01-01

    Software testing is an important stage in the software process. Mature theory, methods and models exist in practice for unit testing, but for integration testing there is no established method to adhere to. The author presents a new method, developed during the development of the large container inspection system, named the test-class method based on interface objects. In this method a set of basic test-classes, based on the concept of class in the object-oriented method, is established, and the combination of the interface graph and the class set is used to describe the test process, so that strict control and scientific management of the test process are achieved. The concept of a test database is introduced in this method, thus improving the traceability and repeatability of the test process

  15. The research of the test-class method based on interface object in the software integration test of the large container inspection system

    International Nuclear Information System (INIS)

    Sun Shaohua; Chen Zhiqiang; Zhang Li; Gao Wenhuan; Kang Kejun

    2001-01-01

    Software testing is an important stage in the software process. Mature theory, methods and models exist in practice for unit testing, but for integration testing there is no established method to adhere to. The author presents a new method, developed during the development of the large container inspection system, named the test-class method based on interface objects. A set of basic test-classes, based on the concept of class in the object-oriented method, is established, and the combination of the interface graph and the class set is used to describe the test process, so that strict control and scientific management of the test process are achieved. The concept of a test database is introduced in this method, thus improving the traceability and repeatability of the test process

  16. A hybrid Jaya algorithm for reliability-redundancy allocation problems

    Science.gov (United States)

    Ghavidel, Sahand; Azizivahed, Ali; Li, Li

    2018-04-01

    This article proposes an efficient improved hybrid Jaya algorithm based on time-varying acceleration coefficients (TVACs) and the learning phase introduced in teaching-learning-based optimization (TLBO), named the LJaya-TVAC algorithm, for solving various types of nonlinear mixed-integer reliability-redundancy allocation problems (RRAPs) and standard real-parameter test functions. RRAPs include series, series-parallel, complex (bridge) and overspeed protection systems. The search power of the proposed LJaya-TVAC algorithm for finding the optimal solutions is first tested on the standard real-parameter unimodal and multi-modal functions with dimensions of 30-100, and then tested on various types of nonlinear mixed-integer RRAPs. The results are compared with the original Jaya algorithm and the best results reported in the recent literature. The optimal results obtained with the proposed LJaya-TVAC algorithm provide evidence for its better and acceptable optimization performance compared to the original Jaya algorithm and other reported optimal results.
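
    For orientation, the core Jaya update that the proposed algorithm builds on can be sketched in a few lines. The version below adds an assumed linear schedule for the time-varying acceleration coefficients and runs on the sphere function; the TLBO-style learning phase and the mixed-integer handling of the full LJaya-TVAC algorithm are omitted.

        import random

        def sphere(x):                       # standard unimodal test function
            return sum(v * v for v in x)

        def jaya_tvac(f, dim=10, pop=20, iters=500, lo=-100.0, hi=100.0):
            X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
            for t in range(iters):
                c1 = 2.5 - 2.0 * t / iters   # pull toward best decays (assumed TVAC schedule)
                c2 = 0.5 + 2.0 * t / iters   # push away from worst grows
                best, worst = min(X, key=f), max(X, key=f)
                for i, x in enumerate(X):
                    cand = [min(hi, max(lo, xj
                                 + c1 * random.random() * (best[j] - abs(xj))
                                 - c2 * random.random() * (worst[j] - abs(xj))))
                            for j, xj in enumerate(x)]
                    if f(cand) < f(x):       # greedy acceptance, as in plain Jaya
                        X[i] = cand
            return min(X, key=f)

        random.seed(0)
        print("best f = %.3e" % sphere(jaya_tvac(sphere)))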

  17. On the value of redundancy subject to common-cause failures: Toward the resolution of an on-going debate

    International Nuclear Information System (INIS)

    Hoepfer, V.M.; Saleh, J.H.; Marais, K.B.

    2009-01-01

    Common-cause failures (CCF) are one of the more critical and challenging issues for system reliability and risk analyses. Academic interest in modeling CCF, and more broadly in modeling dependent failures, has steadily grown over the years in the number of publications as well as in the sophistication of the analytical tools used. In the past few years, several influential articles have shed doubts on the relevance of redundancy arguing that 'redundancy backfires' through common-cause failures, and that the latter dominate unreliability, thus defeating the purpose of redundancy. In this work, we take issue with some of the results of these publications. In their stead, we provide a nuanced perspective on the (contingent) value of redundancy subject to common-cause failures. First, we review the incremental reliability and MTTF provided by redundancy subject to common-cause failures. Second, we introduce the concept and develop the analytics of the 'redundancy-relevance boundary': we propose this redundancy-relevance boundary as a design-aid tool that provides an answer to the following question: what level of redundancy is relevant or advantageous given a varying prevalence of common-cause failures? We investigate the conditions under which different levels of redundancy provide an incremental MTTF over that of the single component in the face of common-cause failures. Recognizing that redundancy comes at a cost, we also conduct a cost-benefit analysis of redundancy subject to common-cause failures, and demonstrate how this analysis modifies the redundancy-relevance boundary. We show how the value of redundancy is contingent on the prevalence of common-cause failures, the redundancy level considered, and the monadic cost-benefit ratio. Finally we argue that general unqualified criticism of redundancy is misguided, and efforts are better spent for example on understanding and mitigating the potential sources of common-cause failures rather than deriding the concept
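
    The trade-off the authors analyze can be made concrete with the common beta-factor model of common-cause failures (an assumption here; the article's own analytics are more general). For a 1-out-of-n parallel system whose units fail independently at rate (1 - beta)*lambda while a shared common cause fails all units at rate beta*lambda, the MTTF gain from each extra unit shrinks and is capped near 1/(beta*lambda):

        from math import exp

        def mttf_parallel_with_ccf(n, lam=1.0, beta=0.1, dt=0.01, t_max=200.0):
            """MTTF as the integral of R(t) = [1 - (1 - e^(-(1-beta) lam t))^n] * e^(-beta lam t)."""
            mttf = 0.0
            for i in range(int(t_max / dt)):
                t = (i + 0.5) * dt           # midpoint rule
                r = (1.0 - (1.0 - exp(-(1.0 - beta) * lam * t)) ** n) * exp(-beta * lam * t)
                mttf += r * dt
            return mttf

        # Diminishing returns: each added redundancy level buys less incremental MTTF.
        for n in (1, 2, 3, 4):
            print("n=%d  MTTF ~ %.3f" % (n, mttf_parallel_with_ccf(n)))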

  18. A novel redundant INS based on triple rotary inertial measurement units

    International Nuclear Information System (INIS)

    Chen, Gang; Li, Kui; Wang, Wei; Li, Peng

    2016-01-01

    Accuracy and reliability are two key performances of inertial navigation system (INS). Rotation modulation (RM) can attenuate the bias of inertial sensors and make it possible for INS to achieve higher navigation accuracy with lower-class sensors. Therefore, the conflict between the accuracy and cost of INS can be eased. Traditional system redundancy and recently researched sensor redundancy are two primary means to improve the reliability of INS. However, how to make the best use of the redundant information from redundant sensors hasn’t been studied adequately, especially in rotational INS. This paper proposed a novel triple rotary unit strapdown inertial navigation system (TRUSINS), which combines RM and sensor redundancy design to enhance the accuracy and reliability of rotational INS. Each rotary unit independently rotates to modulate the errors of two gyros and two accelerometers. Three units can provide double sets of measurements along all three axes of body frame to constitute a couple of INSs which make TRUSINS redundant. Experiments and simulations based on a prototype which is made up of six fiber-optic gyros with drift stability of 0.05° h⁻¹ show that TRUSINS can achieve positioning accuracy of about 0.256 n mile h⁻¹, which is ten times better than that of a normal non-rotational INS with the same level inertial sensors. The theoretical analysis and the experimental results show that due to the advantage of the innovative structure, the designed fault detection and isolation (FDI) strategy can tolerate six sensor faults at most, and is proved to be effective and practical. Therefore, TRUSINS is particularly suitable and highly beneficial for the applications where high accuracy and high reliability is required. (paper)

  19. Virtual Modular Redundancy of Processor Module in the PLC

    International Nuclear Information System (INIS)

    Lee, Kwang-Il; Hwang, SungJae; Yoon, DongHwa

    2016-01-01

    Dual Modular Redundancy (DMR) is widely used to implement safety control systems: components of a system are duplicated, providing a second component in case one should fault or fail. This approach offers high availability and strong fault tolerance, and provides the zero downtime required for nuclear power plants, so multiple redundant systems have been commercialized for them. In this paper, we propose Virtual Modular Redundancy (VMR), rather than physical triplication of the Programmable Logic Controller (PLC) processor module, to ensure the reliability of the nuclear power plant control system. A VMR implementation minimizes design changes, so that commercially available redundant systems can continue to be used. The purpose of VMR is also to improve efficiency and reliability in many respects, such as fault tolerance, fail-safe behavior, and cost. VMR guarantees a wide range of reliable fault recovery and fault tolerance, preventing great damage before it is caused by the continued failure of two modules. Its reliable communication is slow and its bandwidth small, which is a significant limitation in a safety control system; however, the aim of VMR is to avoid plant shutdowns caused by fail-safe actuation, not general-purpose use. Applying VMR in practice is expected to require considerable research and trial and error until it is adapted to nuclear regulations and standards

  20. Virtual Modular Redundancy of Processor Module in the PLC

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kwang-Il; Hwang, SungJae; Yoon, DongHwa [SOOSAN ENS Co., Seoul (Korea, Republic of)

    2016-10-15

    Dual Modular Redundancy (DMR) is widely used to implement safety control systems: components of a system are duplicated, providing a second component in case one should fault or fail. This approach offers high availability and strong fault tolerance, and provides the zero downtime required for nuclear power plants, so multiple redundant systems have been commercialized for them. In this paper, we propose Virtual Modular Redundancy (VMR), rather than physical triplication of the Programmable Logic Controller (PLC) processor module, to ensure the reliability of the nuclear power plant control system. A VMR implementation minimizes design changes, so that commercially available redundant systems can continue to be used. The purpose of VMR is also to improve efficiency and reliability in many respects, such as fault tolerance, fail-safe behavior, and cost. VMR guarantees a wide range of reliable fault recovery and fault tolerance, preventing great damage before it is caused by the continued failure of two modules. Its reliable communication is slow and its bandwidth small, which is a significant limitation in a safety control system; however, the aim of VMR is to avoid plant shutdowns caused by fail-safe actuation, not general-purpose use. Applying VMR in practice is expected to require considerable research and trial and error until it is adapted to nuclear regulations and standards.

  1. Commercial Building Energy Baseline Modeling Software: Performance Metrics and Method Testing with Open Source Models and Implications for Proprietary Software Testing

    Energy Technology Data Exchange (ETDEWEB)

    Price, Phillip N.; Granderson, Jessica; Sohn, Michael; Addy, Nathan; Jump, David

    2013-09-01

    The overarching goal of this work is to advance the capabilities of technology evaluators in evaluating the building-level baseline modeling capabilities of Energy Management and Information System (EMIS) software. Through their customer engagement platforms and products, EMIS software products have the potential to produce whole-building energy savings through multiple strategies: building system operation improvements, equipment efficiency upgrades and replacements, and inducement of behavioral change among the occupants and operations personnel. Some offerings may also automate the quantification of whole-building energy savings, relative to a baseline period, using empirical models that relate energy consumption to key influencing parameters, such as ambient weather conditions and building operation schedule. These automated baseline models can be used to streamline the whole-building measurement and verification (M&V) process, and therefore are of critical importance in the context of multi-measure whole-building focused utility efficiency programs. This report documents the findings of a study that was conducted to begin answering critical questions regarding quantification of savings at the whole-building level, and the use of automated and commercial software tools. To evaluate the modeling capabilities of EMIS software particular to the use case of whole-building savings estimation, four research questions were addressed: 1. What is a general methodology that can be used to evaluate baseline model performance, both in terms of a) overall robustness, and b) relative to other models? 2. How can that general methodology be applied to evaluate proprietary models that are embedded in commercial EMIS tools? How might one handle practical issues associated with data security, intellectual property, appropriate testing ‘blinds’, and large data sets? 3. How can buildings be pre-screened to identify those that are the most model-predictable, and therefore those

  2. Wall adjustment strategy software for use with the NASA Langley 0.3-meter transonic cryogenic tunnel adaptive wall test section

    Science.gov (United States)

    Wolf, Stephen W. D.

    1988-01-01

    The Wall Adjustment Strategy (WAS) software provides successful on-line control of the 2-D flexible walled test section of the Langley 0.3-m Transonic Cryogenic Tunnel. This software package allows the level of operator intervention to be regulated as necessary for research and production type 2-D testing using an Adaptive Wall Test Section (AWTS). The software is designed to accept modification for future requirements, such as 3-D testing, with a minimum of complexity. The WAS software described is an attempt to provide a user-friendly package which could be used to control any flexible walled AWTS. Control system constraints influence the details of data transfer, not the data type. Thus this entire software package could be used in different control systems, if suitable interface software is available. A complete overview of the software highlights the data flow paths, the modular architecture of the software and the various operating and analysis modes available. A detailed description of the software modules includes listings of the code. A user's manual is provided to explain task generation, operating environment, user options and what to expect at execution.

  3. [Shang Ring circumcision versus conventional circumcision for redundant prepuce or phimosis: a meta analysis].

    Science.gov (United States)

    Xiao, Er-Long; Ding, Hui; Li, Yong-Qian; Wang, Zhi-Ping

    2013-10-01

    To compare the effectiveness and safety of Shang Ring circumcision with those of conventional circumcision in the treatment of redundant prepuce or phimosis. We retrieved the randomized controlled trials on Shang Ring circumcision and conventional circumcision for the treatment of redundant prepuce or phimosis published at home and abroad. Relevant data were selected according to the Cochrane Handbook for Systematic Reviews by two reviewers after quality evaluation of the included trials, and the statistical software RevMan 5.0 was used for meta-analysis. In total, 8 randomized controlled trials with 2277 cases were included in this study. Compared with conventional circumcision, Shang Ring circumcision showed a shorter operation time (SMD = -5.82, 95% CI [-7.39, -4.24], P < …), … (SMD = -3.28, 95% CI [-3.47, -3.09], P < …), a lower rate of infection (OR = 0.44, 95% CI [0.26, 0.72], P = 0.001), a lower rate of postoperative bleeding (OR = 0.05, 95% CI [0.02, 0.12], P < …), … (SMD = -3.32, 95% CI [-3.50, -3.14], P < …), … (SMD = -3.28, 95% CI [-3.47, -3.00], P < 0.00001), but a longer wound healing time (OR = 1.46, 95% CI [1.03, 1.90], P < 0.00001). In comparison with conventional circumcision, Shang Ring circumcision has the advantages of shorter operation time, fewer complications, mild pain, and higher rate of satisfaction with the postoperative penile appearance. However, more high-quality randomized controlled trials with large samples are required to lend further support to our findings.

  4. Past and Present Biophysical Redundancy of Countries as a Buffer to Changes in Food Supply

    Science.gov (United States)

    Fader, Marianela; Rulli, Maria Cristina; Carr, Joel; Dell'Angelo, Jampel; D'Odorico, Paolo; Gephart, Jessica A.; Kummu, Matti; Magliocca, Nicholas; Porkka, Miina; Prell, Christina; Puma, Michael J.; Ratajczak, Zak; Seekell, David A.; Suweis, Samir; Tavoni, Alessandro

    2016-01-01

    Spatially diverse trends in population growth, climate change, industrialization, urbanization and economic development are expected to change future food supply and demand. These changes may affect the suitability of land for food production, implying elevated risks especially for resource constrained, food-importing countries. We present the evolution of biophysical redundancy for agricultural production at country level, from 1992 to 2012. Biophysical redundancy, defined as unused biotic and abiotic environmental resources, is represented by the potential food production of 'spare land', available water resources (i.e., not already used for human activities), as well as production increases through yield gap closure on cultivated areas and potential agricultural areas. In 2012, the biophysical redundancy of 75 (48) countries, mainly in North Africa, Western Europe, the Middle East and Asia, was insufficient to produce the caloric nutritional needs for at least 50% (25%) of their population during a year. Biophysical redundancy has decreased in the last two decades in 102 out of 155 countries, 11 of these went from high to limited redundancy, and nine of these from limited to very low redundancy. Although the variability of the drivers of change across different countries is high, improvements in yield and population growth have a clear impact on the decreases of redundancy towards the very low redundancy category. We took a more detailed look at countries classified as 'Low Income Economies (LIEs)' since they are particularly vulnerable to domestic or external food supply changes, due to their limited capacity to offset for food supply decreases with higher purchasing power on the international market. Currently, nine LIEs have limited or very low biophysical redundancy. Many of these showed a decrease in redundancy over the last two decades, which is not always linked with improvements in per capita food availability.

  5. Past and present biophysical redundancy of countries as a buffer to changes in food supply

    Science.gov (United States)

    Fader, Marianela; Rulli, Maria Cristina; Carr, Joel; Dell'Angelo, Jampel; D'Odorico, Paolo; Gephart, Jessica A.; Kummu, Matti; Magliocca, Nicholas; Porkka, Miina; Prell, Christina; Puma, Michael J.; Ratajczak, Zak; Seekell, David A.; Suweis, Samir; Tavoni, Alessandro

    2016-05-01

    Spatially diverse trends in population growth, climate change, industrialization, urbanization and economic development are expected to change future food supply and demand. These changes may affect the suitability of land for food production, implying elevated risks especially for resource-constrained, food-importing countries. We present the evolution of biophysical redundancy for agricultural production at country level, from 1992 to 2012. Biophysical redundancy, defined as unused biotic and abiotic environmental resources, is represented by the potential food production of ‘spare land’, available water resources (i.e., not already used for human activities), as well as production increases through yield gap closure on cultivated areas and potential agricultural areas. In 2012, the biophysical redundancy of 75 (48) countries, mainly in North Africa, Western Europe, the Middle East and Asia, was insufficient to produce the caloric nutritional needs for at least 50% (25%) of their population during a year. Biophysical redundancy has decreased in the last two decades in 102 out of 155 countries, 11 of these went from high to limited redundancy, and nine of these from limited to very low redundancy. Although the variability of the drivers of change across different countries is high, improvements in yield and population growth have a clear impact on the decreases of redundancy towards the very low redundancy category. We took a more detailed look at countries classified as ‘Low Income Economies (LIEs)’ since they are particularly vulnerable to domestic or external food supply changes, due to their limited capacity to offset for food supply decreases with higher purchasing power on the international market. Currently, nine LIEs have limited or very low biophysical redundancy. Many of these showed a decrease in redundancy over the last two decades, which is not always linked with improvements in per capita food availability.

  6. Research on Generating Method of Embedded Software Test Document Based on Dynamic Model

    Science.gov (United States)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

    This paper presents a dynamic model-based test document generation method for embedded software that automatically generates two documents: the test requirements specification and the configuration item test document. The method enables dynamic test requirements to be captured in dynamic models, so that dynamic test requirement tracking can easily be generated. It can automatically produce standardized test requirements and test documentation, addressing issues such as inconsistency and incompleteness in document content, and improving efficiency.

  7. Kinematic control of redundant robots and the motion optimizability measure.

    Science.gov (United States)

    Li, L; Gruver, W A; Zhang, Q; Yang, Z

    2001-01-01

    This paper treats the kinematic control of manipulators with redundant degrees of freedom. We derive an analytical solution for the inverse kinematics that provides a means for accommodating joint velocity constraints in real time. We define the motion optimizability measure and use it to develop an efficient method for the optimization of joint trajectories subject to multiple criteria. An implementation of the method for a 7-dof experimental redundant robot is presented.
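
    The standard resolved-rate treatment of redundancy underlying such schemes can be sketched as pseudoinverse control with a null-space term; the paper's own analytical solution with joint-velocity constraints and the motion optimizability measure are not reproduced here, and the 3-link planar arm is an invented example.

        import numpy as np

        def jacobian_planar(q, lengths):
            """Jacobian of a planar arm's end-effector position w.r.t. joint angles."""
            J = np.zeros((2, len(q)))
            for j in range(len(q)):
                for k in range(j, len(q)):
                    s = np.sum(q[: k + 1])
                    J[0, j] += -lengths[k] * np.sin(s)
                    J[1, j] += lengths[k] * np.cos(s)
            return J

        q = np.array([0.3, 0.4, 0.2])        # 3 joints, 2-D task: one redundant dof
        L = [1.0, 0.8, 0.5]
        xdot = np.array([0.1, 0.0])          # desired end-effector velocity

        J = jacobian_planar(q, L)
        J_pinv = np.linalg.pinv(J)
        qdot0 = -0.5 * q                     # secondary objective: stay near zero posture
        qdot = J_pinv @ xdot + (np.eye(3) - J_pinv @ J) @ qdot0   # null-space projection
        print("qdot =", np.round(qdot, 4), " task velocity =", np.round(J @ qdot, 4))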

  8. Software testing and global industry future paradigms

    CERN Document Server

    Casey, Valentine; Richardson, Ita

    2009-01-01

    Today software development has truly become a globally sourced commodity. This trend has been facilitated by the availability of highly skilled software professionals in low cost locations in Eastern Europe, Latin America and the Far East. Organisations

  9. Optimal Testing Effort Control for Modular Software System Incorporating The Concept of Independent and Dependent Faults: A Control Theoretic Approach

    Directory of Open Access Journals (Sweden)

    Kuldeep CHAUDHARY

    2012-07-01

    Full Text Available In this paper, we discuss a modular software system for Software Reliability Growth Models using testing effort and study the optimal testing effort intensity for each module. The main goal is to minimize the cost of software development when a budget constraint on testing expenditure is given. We discuss the evolution of fault removal dynamics, incorporating the idea of leading/independent and dependent faults in a modular software system, under the assumption that testing of each of the modules is done independently. The problem is formulated as an optimal control problem and the solution to the proposed problem has been obtained by using the Pontryagin Maximum Principle.

  10. Fault Localization Method by Partitioning Memory Using Memory Map and the Stack for Automotive ECU Software Testing

    Directory of Open Access Journals (Sweden)

    Kwanhyo Kim

    2016-09-01

    Full Text Available Recently, the usage of the automotive Electronic Control Unit (ECU) and its software in cars is increasing. Therefore, as the functional complexity of such software increases, so does the likelihood of software-related faults. Therefore, it is important to ensure the reliability of ECU software in order to ensure automobile safety. For this reason, systematic testing methods are required that can guarantee software quality. However, it is difficult to locate a fault during testing with the current ECU development system because a tester performs the black-box testing using a Hardware-in-the-Loop (HiL) simulator. Consequently, developers consume a large amount of money and time for debugging because they perform debugging without any information about the location of the fault. In this paper, we propose a method for localizing the fault utilizing memory information during black-box testing. This is likely to be of use to developers who debug automotive software. In order to observe whether symbols stored in the memory have been updated, the memory is partitioned by a memory map and the stack, thus the fault candidate region is reduced. A memory map method has the advantage of being able to finely partition the memory, and the stack method can partition the memory without a memory map. We validated these methods by applying these to HiL testing of the ECU for a body control system. The preliminary results indicate that a memory map and the stack reduce the possible fault locations to 22% and 19% of the updated memory, respectively.
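
    A minimal sketch of the memory-map partitioning idea: diff two memory snapshots taken around a black-box test step and report which mapped regions contain updated bytes, thereby shrinking the fault candidate region. Region names, address ranges and snapshot contents below are all invented.

        # Hypothetical memory map: region name -> (start, end) address range
        MEMORY_MAP = {
            "door_control":   (0x1000, 0x10FF),
            "window_control": (0x1100, 0x11FF),
            "lighting":       (0x1200, 0x12FF),
        }

        def updated_regions(memory_map, snap_before, snap_after):
            """Return regions whose bytes changed between two snapshots (dicts addr -> byte)."""
            changed = {a for a in snap_after if snap_after[a] != snap_before.get(a)}
            return {name for name, (lo, hi) in memory_map.items()
                    if any(lo <= a <= hi for a in changed)}

        before = {0x1004: 0x00, 0x1105: 0x10, 0x1203: 0xFF}
        after  = {0x1004: 0x01, 0x1105: 0x10, 0x1203: 0xFF}   # only door_control updated
        print("fault candidate regions:", updated_regions(MEMORY_MAP, before, after))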

  11. Safety prediction for basic components of safety-critical software based on static testing

    International Nuclear Information System (INIS)

    Son, H.S.; Seong, P.H.

    2000-01-01

    The purpose of this work is to develop a safety prediction method, with which we can predict the risk of software components based on static testing results at the early development stage. The predictive model combines the major factor with the quality factor for the components, which are calculated based on the measures proposed in this work. The application to a safety-critical software system demonstrates the feasibility of the safety prediction method. (authors)

  12. Software Test Description (STD) for the Globally Relocatable Navy Tide/Atmospheric Modeling System (PCTides)

    National Research Council Canada - National Science Library

    Posey, Pamela

    2002-01-01

    The purpose of this Software Test Description (STD) is to establish formal test cases to be used by personnel tasked with the installation and verification of the Globally Relocatable Navy Tide/Atmospheric Modeling System (PCTides...

  13. Distributed Sensor Network Software Development Testing through Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Brennan, Sean M. [Univ. of New Mexico, Albuquerque, NM (United States)

    2003-12-01

    The distributed sensor network (DSN) presents a novel and highly complex computing platform with difficulties and opportunities that are just beginning to be explored. The potential of sensor networks extends from monitoring for threat reduction, to conducting instant and remote inventories, to ecological surveys. Developing and testing for robust and scalable applications is currently practiced almost exclusively in hardware. The Distributed Sensors Simulator (DSS) is an infrastructure that allows the user to debug and test software for DSNs independent of hardware constraints. The flexibility of DSS allows developers and researchers to investigate topological, phenomenological, networking, robustness and scaling issues, to explore arbitrary algorithms for distributed sensors, and to defeat those algorithms through simulated failure. The user specifies the topology, the environment, the application, and any number of arbitrary failures; DSS provides the virtual environmental embedding.

  14. An analysis of unit tests of a flight software product line

    NARCIS (Netherlands)

    Ganesan, D.; Lindvall, M.; McComas, D.; Bartholomew, M.; Slegel, S.; Medina, B.; Krikhaar, R.; Verhoef, C.; Dharmalingam, G.; Montgomery, L.P.

    2013-01-01

    This paper presents an analysis of the unit testing approach developed and used by the Core Flight Software System (CFS) product line team at the NASA Goddard Space Flight Center (GSFC). The goal of the analysis is to understand, review, and recommend strategies for improving the CFS' existing unit

  15. Distributed redundancy and robustness in complex systems

    KAUST Repository

    Randles, Martin

    2011-03-01

    The uptake and increasing prevalence of Web 2.0 applications, promoting new large-scale and complex systems such as Cloud computing and the emerging Internet of Services/Things, requires tools and techniques to analyse and model methods to ensure the robustness of these new systems. This paper reports on assessing and improving complex system resilience using distributed redundancy, termed degeneracy in biological systems, to endow large-scale complicated computer systems with the same robustness that emerges in complex biological and natural systems. However, in order to promote an evolutionary approach, through emergent self-organisation, it is necessary to specify the systems in an 'open-ended' manner where not all states of the system are prescribed at design-time. In particular an observer system is used to select robust topologies, within system components, based on a measurement of the first non-zero eigenvalue in the Laplacian spectrum of the components' network graphs; also known as the algebraic connectivity. It is shown, through experimentation on a simulation, that increasing the average algebraic connectivity across the components, in a network, leads to an increase in the variety of individual components termed distributed redundancy; the capacity for structurally distinct components to perform an identical function in a particular context. The results are applied to a specific application where active clustering of like services is used to aid load balancing in a highly distributed network. Using the described procedure is shown to improve performance and distribute redundancy. © 2010 Elsevier Inc.

  16. Functional redundancy patterns reveal non-random assembly rules in a species-rich marine assemblage.

    Directory of Open Access Journals (Sweden)

    Nicolas Guillemot

    Full Text Available The relationship between species and the functional diversity of assemblages is fundamental in ecology because it contains key information on functional redundancy, and functionally redundant ecosystems are thought to be more resilient, resistant and stable. However, this relationship is poorly understood and undocumented for species-rich coastal marine ecosystems. Here, we used underwater visual censuses to examine the patterns of functional redundancy for one of the most diverse vertebrate assemblages, the coral reef fishes of New Caledonia, South Pacific. First, we found that the relationship between functional and species diversity displayed a non-asymptotic power-shaped curve, implying that rare functions and species mainly occur in highly diverse assemblages. Second, we showed that the distribution of species amongst possible functions was significantly different from a random distribution up to a threshold of ∼90 species/transect. Redundancy patterns for each function further revealed that some functions displayed fast rates of increase in redundancy at low species diversity, whereas others were only becoming redundant past a certain threshold. This suggested non-random assembly rules and the existence of some primordial functions that would need to be fulfilled in priority so that coral reef fish assemblages can gain a basic ecological structure. Last, we found little effect of habitat on the shape of the functional-species diversity relationship and on the redundancy of functions, although habitat is known to largely determine assemblage characteristics such as species composition, biomass, and abundance. Our study shows that low functional redundancy is characteristic of this highly diverse fish assemblage, and, therefore, that even species-rich ecosystems such as coral reefs may be vulnerable to the removal of a few keystone species.

  17. Case studies in configuration control for redundant robots

    Science.gov (United States)

    Seraji, H.; Lee, T.; Colbaugh, R.; Glass, K.

    1989-01-01

    A simple approach to configuration control of redundant robots is presented. The redundancy is utilized to control the robot configuration directly in task space, where the task will be performed. A number of task-related kinematic functions are defined and combined with the end-effector coordinates to form a set of configuration variables. An adaptive control scheme is then utilized to ensure that the configuration variables track the desired reference trajectories as closely as possible. Simulation results are presented to illustrate the control scheme. The scheme has also been implemented for direct online control of a PUMA industrial robot, and experimental results are presented. The simulation and experimental results validate the configuration control scheme for performing various realistic tasks.

  18. An efficient particle swarm approach for mixed-integer programming in reliability-redundancy optimization applications

    International Nuclear Information System (INIS)

    Santos Coelho, Leandro dos

    2009-01-01

    The reliability-redundancy optimization problems can involve the selection of components with multiple choices and redundancy levels that produce maximum benefits, and are subject to the cost, weight, and volume constraints. Many classical mathematical methods have failed in handling nonconvexities and nonsmoothness in reliability-redundancy optimization problems. As an alternative to the classical optimization approaches, the meta-heuristics have been given much attention by many researchers due to their ability to find almost globally optimal solutions. One of these meta-heuristics is the particle swarm optimization (PSO). PSO is a population-based heuristic optimization technique inspired by social behavior of bird flocking and fish schooling. This paper presents an efficient PSO algorithm based on Gaussian distribution and chaotic sequence (PSO-GC) to solve the reliability-redundancy optimization problems. In this context, two examples in reliability-redundancy design problems are evaluated. Simulation results demonstrate that the proposed PSO-GC is a promising optimization technique. PSO-GC performs well for the two examples of mixed-integer programming in reliability-redundancy applications considered in this paper. The solutions obtained by the PSO-GC are better than the previously best-known solutions available in the recent literature.
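
    A PSO variant in the spirit of PSO-GC can be sketched as follows: Gaussian-distributed acceleration terms and a logistic-map chaotic sequence driving the inertia weight. The exact placement of the Gaussian and chaotic ingredients in the published algorithm may differ, and the unconstrained Rastrigin test function stands in for the constrained mixed-integer reliability-redundancy formulation.

        import random
        from math import cos, pi

        def rastrigin(x):                                # standard multimodal test function
            return sum(v * v - 10.0 * cos(2.0 * pi * v) + 10.0 for v in x)

        def pso_gc(f, dim=5, pop=30, iters=300, lo=-5.12, hi=5.12):
            X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
            V = [[0.0] * dim for _ in range(pop)]
            P = [x[:] for x in X]                        # personal bests
            g = min(P, key=f)[:]                         # global best
            z = 0.48                                     # logistic-map state
            for _ in range(iters):
                z = 4.0 * z * (1.0 - z)                  # chaotic sequence in [0, 1]
                w = 0.4 + 0.5 * z                        # chaotic inertia weight (assumed form)
                for i in range(pop):
                    for j in range(dim):
                        V[i][j] = (w * V[i][j]
                                   + abs(random.gauss(0.0, 1.0)) * (P[i][j] - X[i][j])
                                   + abs(random.gauss(0.0, 1.0)) * (g[j] - X[i][j]))
                        X[i][j] = min(hi, max(lo, X[i][j] + V[i][j]))
                    if f(X[i]) < f(P[i]):
                        P[i] = X[i][:]
                        if f(P[i]) < f(g):
                            g = P[i][:]
            return g

        random.seed(2)
        print("best f = %.4f" % rastrigin(pso_gc(rastrigin)))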

  19. An efficient particle swarm approach for mixed-integer programming in reliability-redundancy optimization applications

    Energy Technology Data Exchange (ETDEWEB)

    Santos Coelho, Leandro dos [Industrial and Systems Engineering Graduate Program, LAS/PPGEPS, Pontifical Catholic University of Parana, PUCPR, Imaculada Conceicao, 1155, 80215-901 Curitiba, Parana (Brazil)], E-mail: leandro.coelho@pucpr.br

    2009-04-15

    The reliability-redundancy optimization problems can involve the selection of components with multiple choices and redundancy levels that produce maximum benefits, and are subject to the cost, weight, and volume constraints. Many classical mathematical methods have failed in handling nonconvexities and nonsmoothness in reliability-redundancy optimization problems. As an alternative to the classical optimization approaches, the meta-heuristics have been given much attention by many researchers due to their ability to find almost globally optimal solutions. One of these meta-heuristics is the particle swarm optimization (PSO). PSO is a population-based heuristic optimization technique inspired by social behavior of bird flocking and fish schooling. This paper presents an efficient PSO algorithm based on Gaussian distribution and chaotic sequence (PSO-GC) to solve the reliability-redundancy optimization problems. In this context, two examples in reliability-redundancy design problems are evaluated. Simulation results demonstrate that the proposed PSO-GC is a promising optimization technique. PSO-GC performs well for the two examples of mixed-integer programming in reliability-redundancy applications considered in this paper. The solutions obtained by the PSO-GC are better than the previously best-known solutions available in the recent literature.

  20. TCV software test and validation tools and technique. [Terminal Configured Vehicle program for commercial transport aircraft operation

    Science.gov (United States)

    Straeter, T. A.; Williams, J. R.

    1976-01-01

    The paper describes techniques for testing and validating software for the TCV (Terminal Configured Vehicle) program, which is intended to solve problems associated with operating a commercial transport aircraft in the terminal area. The TCV research test bed is a Boeing 737 specially configured with digital computer systems to carry out automatic navigation, guidance, flight controls, and electronic displays research. The techniques developed for time and cost reduction include automatic documentation aids, an automatic software configuration system, and an all-software generation and validation system.

  1. Superlinearly scalable noise robustness of redundant coupled dynamical systems.

    Science.gov (United States)

    Kohar, Vivek; Kia, Behnam; Lindner, John F; Ditto, William L

    2016-03-01

    We illustrate through theory and numerical simulations that redundant coupled dynamical systems can be extremely robust against local noise in comparison to uncoupled dynamical systems evolving in the same noisy environment. Previous studies have shown that the noise robustness of redundant coupled dynamical systems is linearly scalable and deviations due to noise can be minimized by increasing the number of coupled units. Here, we demonstrate that the noise robustness can actually be scaled superlinearly if some conditions are met and very high noise robustness can be realized with very few coupled units. We discuss these conditions and show that this superlinear scalability depends on the nonlinearity of the individual dynamical units. The phenomenon is demonstrated in discrete as well as continuous dynamical systems. This superlinear scalability not only provides us an opportunity to exploit the nonlinearity of physical systems without being bogged down by noise but may also help us in understanding the functional role of coupled redundancy found in many biological systems. Moreover, engineers can exploit superlinear noise suppression by starting a coupled system near (not necessarily at) the appropriate initial condition.

  2. Software quality assurance: in large scale and complex software-intensive systems

    NARCIS (Netherlands)

    Mistrik, I.; Soley, R.; Ali, N.; Grundy, J.; Tekinerdogan, B.

    2015-01-01

    Software Quality Assurance in Large Scale and Complex Software-intensive Systems presents novel and high-quality research approaches that relate the quality of software architecture to system requirements, system architecture and enterprise-architecture, or software testing. Modern software

  3. Test Driven Development of a Parameterized Ice Sheet Component

    Science.gov (United States)

    Clune, T.

    2011-12-01

    Test driven development (TDD) is a software development methodology that offers many advantages over traditional approaches including reduced development and maintenance costs, improved reliability, and superior design quality. Although TDD is widely accepted in many software communities, its suitability for scientific software is largely undemonstrated and warrants a degree of skepticism. Indeed, numerical algorithms pose several challenges to unit testing in general, and TDD in particular. Among these challenges are the need to have simple, non-redundant closed-form expressions to compare against the results obtained from the implementation as well as realistic error estimates. The necessity for serial and parallel performance raises additional concerns for many scientific applications. In previous work I demonstrated that TDD performed well for the development of a relatively simple numerical model that simulates the growth of snowflakes, but the results were anecdotal and of limited relevance to far more complex software components typical of climate models. This investigation has now been extended by successfully applying TDD to the implementation of a substantial portion of a new parameterized ice sheet component within a full climate model. After a brief introduction to TDD, I will present techniques that address some of the obstacles encountered with numerical algorithms. I will conclude with some quantitative and qualitative comparisons against climate components developed in a more traditional manner.
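
    A tiny example of the kind of unit test such a workflow rests on for numerical code: compare an implementation against a closed-form result with an explicit tolerance derived from the method's error order. The integrator and the tolerance are illustrative, not the ice sheet component's actual tests.

        import math

        def trapezoid(f, a, b, n):
            """Composite trapezoid rule: the 'code under test'."""
            h = (b - a) / n
            return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

        def test_trapezoid_against_closed_form():
            # Closed form: the integral of sin over [0, pi] is exactly 2.
            approx = trapezoid(math.sin, 0.0, math.pi, 1000)
            # Trapezoid error is O(h^2); with n = 1000 we can demand agreement to 1e-5.
            assert abs(approx - 2.0) < 1e-5

        test_trapezoid_against_closed_form()
        print("ok")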

  4. Safety prediction for basic components of safety critical software based on static testing

    International Nuclear Information System (INIS)

    Son, H.S.; Seong, P.H.

    2001-01-01

    The purpose of this work is to develop a safety prediction method, with which we can predict the risk of software components based on static testing results at the early development stage. The predictive model combines the major factor with the quality factor for the components, both of which are calculated based on the measures proposed in this work. The application to a safety-critical software system demonstrates the feasibility of the safety prediction method. (authors)

  5. Software design of an auto-testing system for CAMAC modules

    International Nuclear Information System (INIS)

    Zhang Hao; Xu Jiajun

    1999-01-01

    The author introduces a software method for providing a PC graphical user interface when an MS-DOS driver is the only available driver for the device. Human-machine interactive graphical interfaces are popular nowadays since they offer rich screens and convenient input-output operations. The first step is to set up data exchange between MS-DOS and MS-Windows through an interrupt service. The second step is to program a dynamic link library which VB can invoke. VBX controls make it possible to extend the system functions. The testing system can automatically test the main performance characteristics of the CAMAC modules IDIM, IDOM, PSC, SAM, and 3016, and appears to be the better way to test linearity, AC correction and so on. The testing system has proved usable in maintaining the CAMAC modules of the BEPC control system

  6. STEM - software test and evaluation methods. A study of failure dependency in diverse software

    International Nuclear Information System (INIS)

    Bishop, P.G.; Pullen, F.D.

    1989-02-01

    STEM is a collaborative software reliability project undertaken in partnership with Halden Reactor Project, UKAEA, and the Finnish Technical Research Centre. The objective of STEM is to evaluate a number of fault detection and fault estimation methods which can be applied to high integrity software. This Report presents a study of the observed failure dependencies between faults in diversely produced software. (author)

  7. Software Dependability and Safety Evaluations ESA's Initiative

    Science.gov (United States)

    Hernek, M.

    ESA has allocated funds for an initiative to evaluate Dependability and Safety methods of Software. The objectives of this initiative are: · more extensive validation of Safety and Dependability techniques for Software; · provision of valuable results to improve the quality of the Software, thus promoting the application of Dependability and Safety methods and techniques. ESA space systems are being developed according to defined PA requirement specifications. These requirements may be implemented through various design concepts, e.g. redundancy, diversity, etc., varying from project to project. Analysis methods (FMECA, FTA, HA, etc.) are frequently used during requirements analysis and design activities to assure the correct implementation of system PA requirements. The criticality level of failures, functions and systems is determined, and by doing that the critical sub-systems are identified, on which dependability and safety techniques are to be applied during development. Proper performance of the software development requires the development of a technical specification for the products at the beginning of the life cycle. Such a technical specification comprises both functional and non-functional requirements. These non-functional requirements address characteristics of the product such as quality, dependability, safety and maintainability. Software in space systems is more and more used in critical functions. Also the trend towards more frequent use of COTS and reusable components poses new difficulties in terms of assuring reliable and safe systems. Because of this, software dependability and safety must be carefully analysed. ESA identified and documented techniques, methods and procedures to ensure that software dependability and safety requirements are specified and taken into account during the design and development of a software system and to verify/validate that the implemented software systems comply with these requirements [R1].

  8. Mutual information and redundancy in spontaneous communication between cortical neurons.

    Science.gov (United States)

    Szczepanski, J; Arnold, M; Wajnryb, E; Amigó, J M; Sanchez-Vives, M V

    2011-03-01

    An important question in neural information processing is how neurons cooperate to transmit information. To study this question, we resort to the concept of redundancy in the information transmitted by a group of neurons and, at the same time, we introduce a novel concept for measuring cooperation between pairs of neurons called relative mutual information (RMI). Specifically, we studied these two parameters for spike trains generated by neighboring neurons from the primary visual cortex in the awake, freely moving rat. The spike trains studied here were spontaneously generated in the cortical network, in the absence of visual stimulation. Under these conditions, our analysis revealed that while the value of RMI oscillated slightly around an average value, the redundancy exhibited a behavior characterized by a higher variability. We conjecture that this combination of approximately constant RMI and greater variable redundancy makes information transmission more resistant to noise disturbances. Furthermore, the redundancy values suggest that neurons can cooperate in a flexible way during information transmission. This mostly occurs via a leading neuron with higher transmission rate or, less frequently, through the information rate of the whole group being higher than the sum of the individual information rates-in other words in a synergetic manner. The proposed method applies not only to the stationary, but also to locally stationary neural signals.

  9. The bliss (not the problem) of motor abundance (not redundancy).

    Science.gov (United States)

    Latash, Mark L

    2012-03-01

    Motor control is an area of natural science exploring how the nervous system interacts with other body parts and the environment to produce purposeful, coordinated actions. A central problem of motor control-the problem of motor redundancy-was formulated by Nikolai Bernstein as the problem of elimination of redundant degrees-of-freedom. Traditionally, this problem has been addressed using optimization methods based on a variety of cost functions. This review draws attention to a body of recent findings suggesting that the problem has been formulated incorrectly. An alternative view has been suggested as the principle of abundance, which considers the apparently redundant degrees-of-freedom as useful and even vital for many aspects of motor behavior. Over the past 10 years, dozens of publications have provided support for this view based on the ideas of synergic control, computational apparatus of the uncontrolled manifold hypothesis, and the equilibrium-point (referent configuration) hypothesis. In particular, large amounts of "good variance"-variance in the space of elements that has no effect on the overall performance-have been documented across a variety of natural actions. "Good variance" helps an abundant system to deal with secondary tasks and unexpected perturbations; its amount shows adaptive modulation across a variety of conditions. These data support the view that there is no problem of motor redundancy; there is bliss of motor abundance.

  10. Description and Flight Test Results of the NASA F-8 Digital Fly-by-Wire Control System

    Science.gov (United States)

    1975-01-01

    A NASA program to develop digital fly-by-wire (DFBW) technology for aircraft applications is discussed. Phase I of the program demonstrated the feasibility of using a digital fly-by-wire system for aircraft control through developing and flight testing a single channel system, which used Apollo hardware, in an F-8C airplane. The objective of Phase II of the program is to establish a technology base for designing practical DFBW systems. It will involve developing and flight testing a triplex digital fly-by-wire system using state-of-the-art airborne computers, system hardware, software, and redundancy concepts. The papers included in this report describe the Phase I system and its development and present results from the flight program. Man-rated flight software and the effects of lightning on digital flight control systems are also discussed.

  11. Product-oriented Software Certification Process for Software Synthesis

    Science.gov (United States)

    Nelson, Stacy; Fischer, Bernd; Denney, Ewen; Schumann, Johann; Richardson, Julian; Oh, Phil

    2004-01-01

    The purpose of this document is to propose a product-oriented software certification process to facilitate use of software synthesis and formal methods. Why is such a process needed? Currently, software is tested until deemed bug-free rather than proving that certain software properties exist. This approach has worked well in most cases, but unfortunately, deaths still occur due to software failure. Using formal methods (techniques from logic and discrete mathematics like set theory, automata theory and formal logic as opposed to continuous mathematics like calculus) and software synthesis, it is possible to reduce this risk by proving certain software properties. Additionally, software synthesis makes it possible to automate some phases of the traditional software development life cycle resulting in a more streamlined and accurate development process.

  12. Multisensory processing of redundant information in go/no-go and choice responses

    DEFF Research Database (Denmark)

    Blurton, Steven Paul; Greenlee, Mark W.; Gondan, Matthias

    2014-01-01

    In multisensory research, faster responses are commonly observed when multimodal stimuli are presented as compared to unimodal target presentations. This so-called redundant signals effect can be explained by several frameworks including separate activation and coactivation models. The redundant … processes (Schwarz, 1994) within two absorbing barriers. The diffusion superposition model accurately describes mean and variance of response times as well as the proportion of correct responses observed in the two tasks. Linear superposition seems, thus, to be a general principle in integration of redundant information provided by different sensory channels and is not restricted to simple responses. The results connect existing theories on multisensory integration with theories on choice behavior.

  13. Software quality assurance and software safety in the Biomed Control System

    International Nuclear Information System (INIS)

    Singh, R.P.; Chu, W.T.; Ludewigt, B.A.; Marks, K.M.; Nyman, M.A.; Renner, T.R.; Stradtner, R.

    1989-01-01

    The Biomed Control System is a hardware/software system used for the delivery, measurement and monitoring of heavy-ion beams in the patient treatment and biology experiment rooms in the Bevalac at the Lawrence Berkeley Laboratory (LBL). This paper describes some aspects of this system, including historical background, philosophy, configuration management, hardware features that facilitate software testing, software testing procedures, the release of new software, quality assurance, safety, and operator monitoring. 3 refs

  14. Synergy and redundancy in the Granger causal analysis of dynamical networks

    International Nuclear Information System (INIS)

    Stramaglia, Sebastiano; M Cortes, Jesus; Marinazzo, Daniele

    2014-01-01

    We analyze, by means of Granger causality (GC), the effect of synergy and redundancy in the inference (from time series data) of the information flow between subsystems of a complex network. While we show that fully conditioned GC (CGC) is not affected by synergy, the pairwise analysis fails to reveal synergetic effects. In cases where the number of samples is low, making the fully conditioned approach unfeasible, we show that partially conditioned GC (PCGC) is an effective approach if the set of conditioning variables is properly chosen. Here we consider two different strategies for PCGC (based either on the informational content of the candidate driver or on selecting the variables with the highest pairwise influences) and show that, depending on the data structure, either one or the other might be equally valid. On the other hand, we observe that fully conditioned approaches do not work well in the presence of redundancy, suggesting the strategy of separating the pairwise links into two subsets: those corresponding to indirect connections of the CGC (which should thus be excluded), and links that can be ascribed to redundancy effects, which, together with the results from the fully conditioned approach, provide a better description of the causality pattern in the presence of redundancy. Finally, we apply these methods to two different real datasets. First, analyzing electrophysiological data from an epileptic brain, we show that synergetic effects are dominant just before seizure occurrences. Second, our analysis applied to gene expression time series from a HeLa culture shows that the underlying regulatory networks are characterized by both redundancy and synergy. (paper)
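
    As a concrete illustration of the quantities discussed above, the following Python sketch estimates linear Granger causality from autoregressive residual variances, with optional conditioning variables; the lag order, the toy data, and the implementation details are illustrative assumptions, not the authors' code.

        import numpy as np

        def lagged(data, p):
            """Stack lags 1..p of each column of data (T x k) -> (T-p) x (k*p)."""
            T = len(data)
            return np.hstack([data[p - l:T - l] for l in range(1, p + 1)])

        def granger(x, y, z=None, p=2):
            """GC from x to y, optionally conditioned on z: log ratio of the
            residual variance without x's lags to that of the full model."""
            data = np.column_stack([y, x] if z is None else [y, x, z])
            target = y[p:]
            def rss(X):
                D = np.column_stack([np.ones(len(X)), X])
                beta, *_ = np.linalg.lstsq(D, target, rcond=None)
                r = target - D @ beta
                return r @ r
            return np.log(rss(lagged(np.delete(data, 1, axis=1), p))
                          / rss(lagged(data, p)))

        # Toy system in which x drives y with a one-step delay.
        rng = np.random.default_rng(1)
        T = 2000
        x = rng.normal(size=T)
        y = np.zeros(T)
        for t in range(1, T):
            y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + 0.1 * rng.normal()

        print("GC x -> y:", granger(x, y))   # clearly positive
        print("GC y -> x:", granger(y, x))   # close to zero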

  15. Software quality assurance plans for safety-critical software

    International Nuclear Information System (INIS)

    Liddle, P.

    2006-01-01

    Application software is defined as safety-critical if a fault in the software could prevent the system components from performing their nuclear-safety functions. Therefore, for nuclear-safety systems, the AREVA TELEPERM R XS (TXS) system is classified 1E, as defined in the Inst. of Electrical and Electronics Engineers (IEEE) Std 603-1998. The application software is classified as Software Integrity Level (SIL)-4, as defined in IEEE Std 7-4.3.2-2003. The AREVA NP Inc. Software Program Manual (SPM) describes the measures taken to ensure that the TELEPERM XS application software attains a level of quality commensurate with its importance to safety. The manual also describes how TELEPERM XS correctly performs the required safety functions and conforms to established technical and documentation requirements, conventions, rules, and standards. The program manual covers the requirements definition, detailed design, integration, and test phases for the TELEPERM XS application software, and supporting software created by AREVA NP Inc. The SPM is required for all safety-related TELEPERM XS system applications. The program comprises several basic plans and practices: 1. A Software Quality-Assurance Plan (SQAP) that describes the processes necessary to ensure that the software attains a level of quality commensurate with its importance to safety function. 2. A Software Safety Plan (SSP) that identifies the process to reasonably ensure that safety-critical software performs as intended during all abnormal conditions and events, and does not introduce any new hazards that could jeopardize the health and safety of the public. 3. A Software Verification and Validation (V and V) Plan that describes the method of ensuring the software is in accordance with the requirements. 4. A Software Configuration Management Plan (SCMP) that describes the method of maintaining the software in an identifiable state at all times. 5. A Software Operations and Maintenance Plan (SO and MP) that

  16. Factors That Affect Software Testability

    Science.gov (United States)

    Voas, Jeffrey M.

    1991-01-01

    Software faults that infrequently affect software's output are dangerous. When a software fault causes frequent software failures, testing is likely to reveal the fault before the software is released; when the fault remains undetected during testing, it can cause disaster after the software is installed. A technique for predicting whether a particular piece of software is likely to reveal faults within itself during testing is found in [Voas91b]. A piece of software that is likely to reveal faults within itself during testing is said to have high testability; a piece of software that is not is said to have low testability. It is preferable to design software with higher testability from the outset, i.e., to create software with as high a degree of testability as possible, to avoid the problems of undetected faults that are associated with low testability. Information loss is a phenomenon that occurs during program execution that increases the likelihood that a fault will remain undetected. In this paper, we identify two broad classes of information loss, define them, and suggest ways of predicting the potential for information loss to occur. We do this in order to decrease the likelihood that faults will remain undetected during testing.

  17. Capillary response to skeletal muscle contraction: evidence that redundancy between vasodilators is physiologically relevant during active hyperaemia.

    Science.gov (United States)

    Lamb, Iain R; Novielli, Nicole M; Murrant, Coral L

    2018-04-15

    The current theory behind matching blood flow to the metabolic demand of skeletal muscle suggests redundant interactions between metabolic vasodilators. Capillaries play an important role in blood flow control given their ability to respond to muscle contraction by causing conducted vasodilatation in upstream arterioles that control their perfusion. We sought to determine whether redundancies occur between vasodilators at the level of the capillary by stimulating the capillaries with muscle contraction and vasodilators relevant to muscle contraction. We identified redundancies between potassium and both adenosine and nitric oxide, between nitric oxide and potassium, and between adenosine and both potassium and nitric oxide. During muscle contraction, we demonstrate redundancies between potassium and nitric oxide as well as between potassium and adenosine. Our data show that redundancy is physiologically relevant and involved in the coordination of the vasodilator response during muscle contraction at the level of the capillaries. We sought to determine if redundancy between vasodilators is physiologically relevant during active hyperaemia. As inhibitory interactions between vasodilators are indicative of redundancy, we tested whether vasodilators implicated in mediating active hyperaemia (potassium (K⁺), adenosine (ADO) and nitric oxide (NO)) inhibit one another's vasodilatory effects through direct application of pharmacological agents and during muscle contraction. Using the hamster cremaster muscle and intravital microscopy, we locally stimulated capillaries with one vasodilator in the absence and the presence of a second vasodilator (10⁻⁷ M S-nitroso-N-acetylpenicillamine (SNAP), 10⁻⁷ M ADO, 10 mM KCl) applied sequentially and simultaneously, and observed the response in the associated upstream 4A arteriole controlling the perfusion of the stimulated capillary. We found that KCl significantly attenuated SNAP- and ADO-induced vasodilatations by ∼49.7% and …

  18. Redundancy of einselected information in quantum Darwinism: The irrelevance of irrelevant environment bits

    Science.gov (United States)

    Zwolak, Michael; Zurek, Wojciech H.

    2017-03-01

    The objective, classical world emerges from the underlying quantum substrate via the proliferation of redundant copies of selected information into the environment, which acts as a communication channel, transmitting that information to observers. These copies are independently accessible, allowing many observers to reach consensus about the state of a quantum system via its imprints in the environment. Quantum Darwinism recognizes that the redundancy of information is thus central to the emergence of objective reality in the quantum world. However, in addition to the "quantum system of interest," there are many other systems "of no interest" in the Universe that can imprint information on the common environment. There is therefore a danger that the information of interest will be diluted with irrelevant bits, suppressing the redundancy responsible for objectivity. We show that mixing of the relevant (the "wheat") and irrelevant (the "chaff") bits of information makes little quantitative difference to the redundancy of the information of interest. Thus, we demonstrate that it does not matter whether one separates the wheat (relevant information) from the (irrelevant) chaff: The large redundancy of the relevant information survives dilution, providing evidence of the objective, effectively classical world.

  19. Design method of redundancy of brace-anchor sharing supporting based on cooperative deformation

    Science.gov (United States)

    Liu, Jun-yan; Li, Bing; Liu, Yan; Cai, Shan-bing

    2017-11-01

    Because of complicated environmental requirements, foundation-pit support takes diverse forms, and the brace-anchor sharing support is widely used. However, research on the force-deformation characteristics and the cooperative response of the brace-anchor sharing support is insufficient. The application of redundancy theory in structural engineering is relatively mature, but there is little theoretical research on redundancy in underground engineering. Based on the idea of cooperative deformation, this paper calculates the ratio of the redundancy degree of cooperative deformation by using a local reinforcement design method and the structural-component redundancy parameter formula based on Frangopol, and applies it to the calculation of the cooperative-deformation redundancy ratio at the joint of a brace-anchor sharing support in an engineering case. The paper explores the optimal anchor distribution under the condition of cooperative deformation and, through analysis of the displacement and stress fields, validates the cooperative-deformation results by comparison with field monitoring data. This provides a theoretical basis for the future design of this kind of foundation pit.
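
    The abstract does not reproduce the redundancy formula it builds on. For orientation only, one commonly cited structural redundancy factor associated with Frangopol relates the collapse loads of the intact and damaged structure; this is a hedged reconstruction, not necessarily the exact expression used in the paper:

        R = \frac{L_{\text{intact}}}{L_{\text{intact}} - L_{\text{damaged}}}

    Here L_intact and L_damaged are the ultimate loads of the intact and damaged structure; R = 1 when the damaged structure carries no load, and R grows without bound as local damage ceases to reduce capacity, i.e., as the structure becomes fully redundant.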

  20. Impedance Control of a Redundant Parallel Manipulator

    DEFF Research Database (Denmark)

    Méndez, Juan de Dios Flores; Schiøler, Henrik; Madsen, Ole

    2017-01-01

    This paper presents the design of Impedance Control to a redundantly actuated Parallel Kinematic Manipulator. The proposed control is based on treating each limb as a single system and their connection through the internal interaction forces. The controller introduces a stiffness and damping...

  1. Accuracy Test of Software Architecture Compliance Checking Tools – Test Instruction

    NARCIS (Netherlands)

    Pruijt, Leo; van der Werf, J.M.E.M.|info:eu-repo/dai/nl/36950674X; Brinkkemper., Sjaak|info:eu-repo/dai/nl/07500707X

    2015-01-01

    Software Architecture Compliance Checking (SACC) is an approach to verify conformance of implemented program code to high-level models of architectural design. Static SACC focuses on the modular software architecture and on the existence of rule-violating dependencies between modules. Accurate tool …

  2. Development of computer tablet software for clinical quantification of lateral knee compartment translation during the pivot shift test.

    Science.gov (United States)

    Muller, Bart; Hofbauer, Marcus; Rahnemai-Azar, Amir Ata; Wolf, Megan; Araki, Daisuke; Hoshino, Yuichi; Araujo, Paulo; Debski, Richard E; Irrgang, James J; Fu, Freddie H; Musahl, Volker

    2016-01-01

    The pivot shift test is a clinical examination commonly used by orthopedic surgeons to evaluate knee function following injury. However, the test can only be graded subjectively by the examiner. Therefore, the purpose of this study is to develop software for a computer tablet to quantify anterior translation of the lateral knee compartment during the pivot shift test. Based on a simple image analysis method, software for a computer tablet was developed with the following primary design constraint: the software should be easy to use in a clinical setting and should not slow down an outpatient visit. Translation of the lateral compartment of the intact knee was 2.0 ± 0.2 mm and of the anterior cruciate ligament-deficient knee was 8.9 ± 0.9 mm, a statistically significant difference. The software provides reliable, objective, and quantitative data on translation of the lateral knee compartment during the pivot shift test and meets the design constraints posed by the clinical setting.

  3. Developing software to "track and catch" missed follow-up of abnormal test results in a complex sociotechnical environment.

    Science.gov (United States)

    Smith, M; Murphy, D; Laxmisan, A; Sittig, D; Reis, B; Esquivel, A; Singh, H

    2013-01-01

    Abnormal test results do not always receive timely follow-up, even when providers are notified through electronic health record (EHR)-based alerts. High workload, alert fatigue, and other demands on attention disrupt a provider's prospective memory for tasks required to initiate follow-up. Thus, EHR-based tracking and reminding functionalities are needed to improve follow-up. The purpose of this study was to develop a decision-support software prototype enabling individual and system-wide tracking of abnormal test result alerts lacking follow-up, and to conduct formative evaluations, including usability testing. We developed a working prototype software system, the Alert Watch And Response Engine (AWARE), to detect abnormal test result alerts lacking documented follow-up, and to present context-specific reminders to providers. Development and testing took place within the VA's EHR and focused on four cancer-related abnormal test results. Design concepts emphasized mitigating the effects of high workload and alert fatigue while being minimally intrusive. We conducted a multifaceted formative evaluation of the software, addressing fit within the larger socio-technical system. Evaluations included usability testing with the prototype and interview questions about organizational and workflow factors. Participants included 23 physicians, 9 clinical information technology specialists, and 8 quality/safety managers. Evaluation results indicated that our software prototype fit within the technical environment and clinical workflow, and physicians were able to use it successfully. Quality/safety managers reported that the tool would be useful in future quality assurance activities to detect patients who lack documented follow-up. Additionally, we successfully installed the software on the local facility's "test" EHR system, thus demonstrating technical compatibility. To address the factors involved in missed test results, we developed a software prototype that accounts for the complex socio-technical environment in which test-result follow-up takes place.

  4. A method for generating functional test cases in software development (Método para generar casos de prueba funcional en el desarrollo de software)

    Directory of Open Access Journals (Sweden)

    Liliana González Palacio

    2009-07-01

    Testing is a crucial aspect of quality control in software development, especially functional testing, in which the behavior of a system is verified dynamically by observing a selected set of controlled executions or test cases. Functional testing requires planning: defining the aspects to be checked and the way to verify their correct operation, which is where test cases acquire their meaning. In this research-derived article, we define a method for generating functional test cases from system use cases, produced as an intermediate result of the co-financed project "Herramienta para la documentación de pruebas funcionales" (Tool for Documenting Functional Testing).

  5. Software design of a auto-testing system for CAMAC modules

    International Nuclear Information System (INIS)

    Zhang Hao; Xu Jiajun

    1997-01-01

    The author introduces a software method for providing a PC graphical interface when an MS-DOS driver is the only one available for the device. Interactive graphical human-machine interfaces are now widespread, offering rich displays and convenient input/output operations. The first step is to set up data exchange between MS-DOS and MS-Windows through an interrupt service. The second step is to program a dynamic link library that Visual Basic can invoke; VBX controls make it possible to extend the system functions. The testing system can automatically test the main performance characteristics of CAMAC modules such as IDIM, IDOM, PSC, SAM, and 3016, and is well suited to testing linearity, AC correction, and so on. The testing system has proved usable for maintaining the CAMAC modules of the BEPC control system.

  6. Views on Software Testability

    OpenAIRE

    Shimeall, Timothy; Friedman, Michael; Chilenski, John; Voas, Jeffrey

    1994-01-01

    The field of testability is an active, well-established part of the engineering of modern computer systems. However, only recently have technologies for software testability begun to be developed. These technologies focus on assessing the aspects of software that improve or reduce the ease of testing. As both the size of implemented software and the amount of effort required to test that software increase, so will the importance of software testability technologies in influencing software development.

  7. Quality of radiomic features in glioblastoma multiforme: Impact of semi-automated tumor segmentation software

    International Nuclear Information System (INIS)

    Lee, Myung Eun; Kim, Jong Hyo; Woo, Bo Yeong; Ko, Micheal D.; Jamshidi, Neema

    2017-01-01

    The purpose of this study was to evaluate the reliability and quality of radiomic features in glioblastoma multiforme (GBM) derived from tumor volumes obtained with semi-automated tumor segmentation software. MR images of 45 GBM patients (29 males, 16 females) were downloaded from The Cancer Imaging Archive, in which post-contrast T1-weighted imaging and fluid-attenuated inversion recovery MR sequences were used. Two raters independently segmented the tumors using two semi-automated segmentation tools (TumorPrism3D and 3D Slicer). Regions of interest corresponding to the contrast-enhancing lesion, necrotic portions, and non-enhancing T2 high signal intensity component were segmented for each tumor. A total of 180 imaging features were extracted, and their quality was evaluated in terms of stability, normalized dynamic range (NDR), and redundancy, using intra-class correlation coefficients, cluster consensus, and the Rand statistic. Our study results showed that most of the radiomic features in GBM were highly stable. Over 90% of the 180 features showed good stability (intra-class correlation coefficient [ICC] ≥ 0.8), whereas only 7 features were of poor stability. Most features showed a good normalized dynamic range (NDR ≥ 1), while more than 35% of the texture features showed poor NDR (< 1). Features were shown to cluster into only 5 groups, indicating that they were highly redundant. The use of semi-automated software tools provided sufficiently reliable tumor segmentation and feature stability, thus helping to overcome the inherent inter-rater and intra-rater variability of user intervention. However, certain aspects of feature quality, including NDR and redundancy, need to be assessed for the determination of representative signature features before further development of radiomics.
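
    The stability criterion above can be reproduced with a standard two-way random-effects intra-class correlation. The following Python sketch computes ICC(2,1) for one feature measured on the same subjects by two raters; the data are simulated stand-ins, not the study's measurements.

        import numpy as np

        def icc_2_1(x):
            """ICC(2,1): two-way random effects, absolute agreement, single
            rater. x is an n_subjects x n_raters matrix of one feature."""
            n, k = x.shape
            m = x.mean()
            subj = x.mean(axis=1)
            rater = x.mean(axis=0)
            ms_r = k * ((subj - m) ** 2).sum() / (n - 1)       # between subjects
            ms_c = n * ((rater - m) ** 2).sum() / (k - 1)      # between raters
            resid = x - subj[:, None] - rater[None, :] + m
            ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))    # residual
            return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

        rng = np.random.default_rng(3)
        true_value = rng.normal(10, 2, size=45)                # 45 cases
        feature = np.column_stack([true_value + rng.normal(0, 0.5, 45),
                                   true_value + rng.normal(0, 0.5, 45)])
        print(f"ICC = {icc_2_1(feature):.2f}")                 # >= 0.8: 'stable'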

  8. NASA software documentation standard software engineering program

    Science.gov (United States)

    1991-01-01

    The NASA Software Documentation Standard (hereinafter referred to as Standard) can be applied to the documentation of all NASA software. This Standard is limited to documentation format and content requirements. It does not mandate specific management, engineering, or assurance standards or techniques. This Standard defines the format and content of documentation for software acquisition, development, and sustaining engineering. Format requirements address where information shall be recorded and content requirements address what information shall be recorded. This Standard provides a framework to allow consistency of documentation across NASA and visibility into the completeness of project documentation. This basic framework consists of four major sections (or volumes). The Management Plan contains all planning and business aspects of a software project, including engineering and assurance planning. The Product Specification contains all technical engineering information, including software requirements and design. The Assurance and Test Procedures contains all technical assurance information, including Test, Quality Assurance (QA), and Verification and Validation (V&V). The Management, Engineering, and Assurance Reports is the library and/or listing of all project reports.

  9. Information filtering based on corrected redundancy-eliminating mass diffusion.

    Science.gov (United States)

    Zhu, Xuzhen; Yang, Yujie; Chen, Guilin; Medo, Matus; Tian, Hui; Cai, Shi-Min

    2017-01-01

    Methods used in information filtering and recommendation often rely on quantifying the similarity between objects or users. The similarity metrics used often suffer from similarity redundancies arising from correlations between objects' attributes. Based on an unweighted, undirected object-user bipartite network, we propose a Corrected Redundancy-Eliminating similarity index (CRE), which is based on a spreading process on the network. Extensive experiments on three benchmark data sets (MovieLens, Netflix, and Amazon) show that when used in recommendation, the CRE yields significant improvements in terms of recommendation accuracy and diversity. A detailed analysis is presented to unveil the origins of the observed differences between the CRE and mainstream similarity indices.
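
    For context, the spreading process that such indices refine is plain mass diffusion on the user-object bipartite network. The Python sketch below implements that baseline only (not the CRE index itself), and the tiny adjacency matrix is made up.

        import numpy as np

        def mass_diffusion(A):
            """A: n_users x n_objects 0/1 adjacency matrix. Returns W where
            W[i, j] is the resource object j sends to object i after the
            spread object -> users -> objects; columns sum to 1."""
            k_user = A.sum(axis=1)
            k_obj = A.sum(axis=0)
            return ((A / k_user[:, None]).T @ A) / k_obj[None, :]

        A = np.array([[1, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 1, 1]], dtype=float)
        W = mass_diffusion(A)

        # Recommend for user 0: spread the user's collected objects and rank
        # uncollected objects by the resource they receive.
        scores = W @ A[0]
        print(np.round(scores, 3))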

  10. Low-redundancy linear arrays in mirrored interferometric aperture synthesis.

    Science.gov (United States)

    Zhu, Dong; Hu, Fei; Wu, Liang; Li, Jun; Lang, Liang

    2016-01-15

    Mirrored interferometric aperture synthesis (MIAS) is a novel interferometry that can improve spatial resolution compared with that of conventional IAS. In one-dimensional (1-D) MIAS, antenna arrays with low redundancy have the potential to achieve a high spatial resolution. This Letter presents a technique for the direct construction of low-redundancy linear arrays (LRLAs) in MIAS and derives two regular analytical patterns that can yield various LRLAs in a short computation time. Moreover, for a better estimation of the observed scene, a bi-measurement method is proposed to handle the rank defect associated with the transmatrix of those LRLAs. The results of imaging simulations demonstrate the effectiveness of the proposed method.
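
    As a small illustration of the quantity being minimized, the Python sketch below computes the redundancy of a linear array from its element positions. It captures the general notion only and does not reproduce the paper's mirrored-MIAS construction patterns.

        from itertools import combinations

        def array_redundancy(positions):
            """positions: element positions in units of the smallest spacing.
            Returns (R, missing lags), with R = pair count / largest lag;
            R = 1 means every lag is measured exactly once."""
            diffs = [abs(a - b) for a, b in combinations(positions, 2)]
            max_lag = max(diffs)
            missing = sorted(set(range(1, max_lag + 1)) - set(diffs))
            return len(diffs) / max_lag, missing

        # A classic 4-element perfect array: lags 1..6 each covered once.
        print(array_redundancy([0, 1, 4, 6]))    # -> (1.0, [])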

  11. Structural redundance of NPPs and diagnostics

    International Nuclear Information System (INIS)

    Znyshev, V.V.; Sabaev, E.F.

    1988-01-01

    A new approach to the functional diagnosis of NPP states is suggested, based on structural redundancy: in most facilities there are elements that are identical in structure and operating conditions. A difference exceeding a given value between readings of the same parameter measured on identical elements is an indicator of a failed element and a signal for diagnostic analysis.
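
    A minimal Python sketch of the comparison this approach relies on: the same parameter is read from structurally identical elements, and a reading deviating from the group by more than a given value flags its element for diagnostic analysis. The readings and threshold are hypothetical.

        import numpy as np

        def flag_failed(readings, threshold):
            """Indices of elements whose reading deviates from the group
            median (a robust reference) by more than the threshold."""
            readings = np.asarray(readings, dtype=float)
            deviation = np.abs(readings - np.median(readings))
            return np.flatnonzero(deviation > threshold)

        channels = [101.2, 100.8, 101.0, 95.3, 101.1]   # identical elements
        print(flag_failed(channels, threshold=2.0))      # -> [3]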

  12. Analysis of singularity in redundant manipulators

    International Nuclear Information System (INIS)

    Watanabe, Koichi

    2000-03-01

    In the analysis of arm positions and configurations of redundant manipulators, singularity avoidance is an important theme. This report presents singularity avoidance computations for a 7-DOF manipulator using a computer code based on human-arm models. The behavior of the arm escaping from the singular point can be identified satisfactorily through the use of 3-D plotting tools. (author)
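
    A common way to quantify distance from a singular configuration, which a code of this kind can use, is Yoshikawa's manipulability measure w(q) = sqrt(det(J J^T)). The Python sketch below evaluates it for a planar 3-link arm, redundant for its 2-D positioning task; the arm model and link lengths are illustrative, not the report's 7-DOF human-arm model.

        import numpy as np

        def jacobian(q, lengths=(1.0, 1.0, 1.0)):
            """Position Jacobian (2 x 3) of a planar arm with three
            revolute joints at angles q."""
            J = np.zeros((2, 3))
            for i in range(3):
                s = c = 0.0
                for j in range(i, 3):
                    angle = sum(q[:j + 1])
                    s += lengths[j] * np.sin(angle)
                    c += lengths[j] * np.cos(angle)
                J[0, i] = -s    # dx/dq_i
                J[1, i] = c     # dy/dq_i
            return J

        def manipulability(q):
            J = jacobian(q)
            return np.sqrt(np.linalg.det(J @ J.T))

        print(manipulability([0.3, 0.5, -0.2]))   # well away from singularity
        print(manipulability([0.0, 0.0, 0.0]))    # fully stretched arm: 0.0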

  13. Software engineering and automatic continuous verification of scientific software

    Science.gov (United States)

    Piggott, M. D.; Hill, J.; Farrell, P. E.; Kramer, S. C.; Wilson, C. R.; Ham, D.; Gorman, G. J.; Bond, T.

    2011-12-01

    Software engineering of scientific code is challenging for a number of reasons, including pressure to publish and a lack of awareness of the pitfalls of software engineering by scientists. The Applied Modelling and Computation Group at Imperial College is a diverse group of researchers that employ best-practice software engineering methods whilst developing open source scientific software. Our main code is Fluidity - a multi-purpose computational fluid dynamics (CFD) code that can be used for a wide range of scientific applications from earth-scale mantle convection, through basin-scale ocean dynamics, to laboratory-scale classic CFD problems, and is coupled to a number of other codes including nuclear radiation and solid modelling. Our software development infrastructure consists of a number of free tools that could be employed by any group that develops scientific code and has been developed over a number of years with many lessons learnt. A single code base is developed by over 30 people, for which we use Bazaar for revision control, making good use of its strong branching and merging capabilities. Using features of Canonical's Launchpad platform, such as code review, blueprints for designing features and bug reporting, gives the group, partners and other Fluidity users an easy-to-use platform to collaborate and allows the induction of new members of the group into an environment where software development forms a central part of their work. The code repository is coupled to an automated test and verification system which performs over 20,000 tests, including unit tests, short regression tests, code verification and large parallel tests. Included in these tests are build tests on HPC systems, including local and UK National HPC services. Testing the code in this manner leads to a continuous verification process, not a discrete event performed once development has ceased. Much of the code verification is done via the "gold standard" of comparisons to analytical solutions.
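
    The shape of such a "gold standard" check is easy to illustrate: run the solver, compare against the analytical solution, and fail the build if the error exceeds a tolerance. The toy 1-D Poisson solver in the Python sketch below is a hypothetical stand-in for a real Fluidity test case.

        import numpy as np

        def solve_poisson(n):
            """Finite-difference solve of -u'' = pi^2 sin(pi x) on [0, 1]
            with u(0) = u(1) = 0; the exact solution is sin(pi x)."""
            x = np.linspace(0.0, 1.0, n)
            h = x[1] - x[0]
            A = (np.diag(2.0 * np.ones(n - 2))
                 - np.diag(np.ones(n - 3), 1)
                 - np.diag(np.ones(n - 3), -1)) / h**2
            f = np.pi**2 * np.sin(np.pi * x[1:-1])
            u = np.zeros(n)
            u[1:-1] = np.linalg.solve(A, f)
            return x, u

        def test_against_analytical():
            x, u = solve_poisson(101)
            error = np.max(np.abs(u - np.sin(np.pi * x)))
            assert error < 1e-3, f"verification failed: max error {error:.2e}"

        test_against_analytical()   # run automatically on every commit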

  14. One Size Does Not Fit All: Older Adults Benefit From Redundant Text in Multimedia Instruction

    Directory of Open Access Journals (Sweden)

    Barbara Fenesi

    2015-07-01

    The multimedia design of presentations typically ignores that younger and older adults have varying cognitive strengths and weaknesses. We examined whether differential instructional design may enhance learning in these populations. Younger and older participants viewed one of three computer-based presentations: Audio only (narration), Redundant (audio narration with redundant text), or Complementary (audio narration with non-redundant text and images). Younger participants learned better when audio narration was paired with relevant images compared to when audio narration was paired with redundant text. However, older participants learned best when audio narration was paired with redundant text. Younger adults, who presumably have a higher working memory capacity, appear to benefit more from complementary information that may drive deeper conceptual processing. In contrast, older adults learn better from presentations that support redundant coding across modalities, which may help mitigate the effects of age-related decline in working memory capacity. Additionally, several misconceptions of design quality appeared across age groups: both younger and older participants positively rated less effective designs. Findings suggest that one size does not fit all, with older adults requiring unique multimedia design tailored to their cognitive abilities for effective learning.

  15. One size does not fit all: older adults benefit from redundant text in multimedia instruction.

    Science.gov (United States)

    Fenesi, Barbara; Vandermorris, Susan; Kim, Joseph A; Shore, David I; Heisz, Jennifer J

    2015-01-01

    The multimedia design of presentations typically ignores that younger and older adults have varying cognitive strengths and weaknesses. We examined whether differential instructional design may enhance learning in these populations. Younger and older participants viewed one of three computer-based presentations: Audio only (narration), Redundant (audio narration with redundant text), or Complementary (audio narration with non-redundant text and images). Younger participants learned better when audio narration was paired with relevant images compared to when audio narration was paired with redundant text. However, older participants learned best when audio narration was paired with redundant text. Younger adults, who presumably have a higher working memory capacity (WMC), appear to benefit more from complementary information that may drive deeper conceptual processing. In contrast, older adults learn better from presentations that support redundant coding across modalities, which may help mitigate the effects of age-related decline in WMC. Additionally, several misconceptions of design quality appeared across age groups: both younger and older participants positively rated less effective designs. Findings suggest that one-size does not fit all, with older adults requiring unique multimedia design tailored to their cognitive abilities for effective learning.

  16. Software design practice using two SCADA software packages

    DEFF Research Database (Denmark)

    Basse, K.P.; Christensen, Georg Kronborg; Frederiksen, P. K.

    1996-01-01

    Typical software development for manufacturing control is done either by specialists with considerable real-time programming experience or by the adaptation of standard software packages for manufacturing control. After investigation and testing of two commercial software packages, "InTouch" and "Fix", it is argued that a more efficient software solution can be achieved by utilising an integrated specification for SCADA and PLC programming. Experience gained from process control is planned to be investigated for discrete parts manufacturing.

  17. One size does not fit all: older adults benefit from redundant text in multimedia instruction

    OpenAIRE

    Fenesi, Barbara; Vandermorris, Susan; Kim, Joseph A.; Shore, David I.; Heisz, Jennifer J.

    2015-01-01

    The multimedia design of presentations typically ignores that younger and older adults have varying cognitive strengths and weaknesses. We examined whether differential instructional design may enhance learning in these populations. Younger and older participants viewed one of three computer-based presentations: Audio only (narration), Redundant (audio narration with redundant text), or Complementary (audio narration with non-redundant text and images). Younger participants learned better when audio narration was paired with relevant images compared to when audio narration was paired with redundant text. …

  18. Solving the redundancy allocation problem with multiple strategy choices using a new simplified particle swarm optimization

    International Nuclear Information System (INIS)

    Kong, Xiangyong; Gao, Liqun; Ouyang, Haibin; Li, Steven

    2015-01-01

    In most research on the redundancy allocation problem (RAP), the redundancy strategy for each subsystem is assumed to be predetermined and fixed. This paper focuses on a specific RAP with multiple strategy choices (RAP-MSC), in which the redundancy strategy (active or cold standby) can itself be selected as an additional decision variable for individual subsystems. The component type, redundancy strategy, and redundancy level for each subsystem must then be chosen, subject to the system constraints, such that the system reliability is maximized. Imperfect switching for cold standby redundancy is considered, and a k-Erlang distribution is introduced to model the component time-to-failure. Given the importance and complexity of RAP-MSC, we propose a new, efficient simplified version of particle swarm optimization (SPSO) to solve such NP-hard problems. In this method, a new position-updating scheme without velocity is presented, in which a stochastic disturbance is applied with low probability. Moreover, it is compared with several well-known PSO variants and other state-of-the-art approaches from the literature to evaluate its performance; the experimental results demonstrate the superiority of SPSO as an alternative for solving the RAP-MSC. - Highlights: • A more realistic RAP formulation with multiple strategy choices is considered. • Redundancy strategies are selected rather than fixed, as in the general RAP. • A new simplified particle swarm optimization is proposed. • Higher reliabilities are achieved than with state-of-the-art approaches.
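
    The velocity-free update described above is easy to picture with a sketch. The Python below moves each particle directly toward a blend of its personal best and the global best and adds a rare random disturbance; the coefficients and the continuous toy objective are illustrative assumptions (in the RAP-MSC the position would encode component types, redundancy levels, and strategies), not the authors' exact scheme.

        import numpy as np

        rng = np.random.default_rng(42)

        def objective(x):                 # toy function to minimize
            return float((x ** 2).sum())

        n_particles, dim, iters = 30, 5, 200
        w, p_disturb = 0.7, 0.05          # inertia-like weight, disturbance rate

        X = rng.uniform(-5, 5, size=(n_particles, dim))
        pbest = X.copy()
        pbest_f = np.array([objective(x) for x in X])
        gbest = pbest[pbest_f.argmin()].copy()

        for _ in range(iters):
            r1 = rng.random((n_particles, dim))
            r2 = rng.random((n_particles, dim))
            # Position update without a velocity term.
            X = w * X + r1 * (pbest - X) + r2 * (gbest - X)
            # Low-probability stochastic disturbance keeps diversity.
            mask = rng.random((n_particles, dim)) < p_disturb
            X = np.where(mask, X + rng.normal(0.0, 1.0, X.shape), X)
            f = np.array([objective(x) for x in X])
            better = f < pbest_f
            pbest[better], pbest_f[better] = X[better], f[better]
            gbest = pbest[pbest_f.argmin()].copy()

        print("best value found:", pbest_f.min())   # approaches 0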

  19. A tutorial on testing the race model inequality

    DEFF Research Database (Denmark)

    Gondan, Matthias; Minakata, Katsumi

    2016-01-01

    Responses to redundant signals are typically faster than responses to a single signal. Race models attribute this speedup to statistical facilitation, with the faster of the two parallel processes determining the response, leading to faster responses to redundant signals. In contrast, coactivation models assume integrated processing of the combined stimuli. To distinguish between these two accounts, Miller (1982) derived the well-known race model inequality, which has become a routine test for behavioral data in experiments with redundant signals. In this tutorial, we review the basic properties of redundant signals experiments and the statistical procedures used to test the race model inequality in studies published between 2011 and 2014. We highlight and discuss several issues concerning study design and the test of the race model inequality, such as inappropriate control of Type I error, insufficient statistical power, wrong treatment of omitted responses or anticipations, and the interpretation of violations of the race model inequality. We make detailed recommendations on the design of redundant signals experiments …
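
    The inequality itself can be checked descriptively: estimate the response time distribution functions and verify F_AB(t) <= F_A(t) + F_B(t) on a grid of time points. The Python sketch below does this with simulated reaction times constructed as a pure race, so no violation occurs beyond sampling error; the formal statistical procedures (and their pitfalls) are what the tutorial reviews.

        import numpy as np

        rng = np.random.default_rng(7)
        rt_a = rng.normal(420, 50, 200)    # unimodal condition A, RTs in ms
        rt_b = rng.normal(440, 50, 200)    # unimodal condition B
        # Redundant condition built as a race: faster of two processes wins.
        rt_ab = np.minimum(rng.normal(420, 50, 200), rng.normal(440, 50, 200))

        def ecdf(sample, t):
            """Empirical distribution function evaluated at times t."""
            return np.searchsorted(np.sort(sample), t, side="right") / len(sample)

        ts = np.quantile(np.concatenate([rt_a, rt_b, rt_ab]),
                         np.linspace(0.05, 0.95, 19))
        slack = ecdf(rt_ab, ts) - np.minimum(ecdf(rt_a, ts) + ecdf(rt_b, ts), 1.0)
        print("max violation:", slack.max())   # > 0 suggests coactivation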

  20. Software maintenance and evolution and automated software engineering

    NARCIS (Netherlands)

    Carver, Jeffrey C.; Serebrenik, Alexander

    2018-01-01

    This issue's column reports on the 33rd International Conference on Software Maintenance and Evolution and 32nd International Conference on Automated Software Engineering. Topics include flaky tests, technical debt, QA bots, and regular expressions.