WorldWideScience

Sample records for common software configuration

  1. Dependability Analysis Methods For Configurable Software

    International Nuclear Information System (INIS)

    Dahll, Gustav; Pulkkinen, Urho

    1996-01-01

    Configurable software systems are systems which are built up from standard software components in the same way as a hardware system is built up from standard hardware components. Such systems are often used in the control of NPPs, including in safety-related applications. A reliability analysis of such systems is therefore necessary. This report discusses what configurable software is, and what is particular about the reliability assessment of such software. Two techniques very commonly used in traditional reliability analysis, viz. failure mode, effect and criticality analysis (FMECA) and fault tree analysis, are investigated. A real example is used to illustrate the discussed methods. Various aspects relevant to the assessment of software reliability in such systems are discussed. Finally, some models for quantitative software reliability assessment applicable to configurable software systems are described. (author)

  2. Software configuration management

    CERN Document Server

    Keyes, Jessica

    2004-01-01

    Software Configuration Management discusses the framework from a standards viewpoint, using the original DoD MIL-STD-973 and EIA-649 standards to describe the elements of configuration management within a software engineering perspective. Divided into two parts, the first section is composed of 14 chapters that explain every facet of configuration management related to software engineering. The second section consists of 25 appendices that contain many valuable real world CM templates.

  3. Software configuration management

    International Nuclear Information System (INIS)

    Arribas Peces, E.; Martin Faraldo, P.

    1993-01-01

    Software Configuration Management is directed towards identifying system configuration at specific points of its life cycle, so as to control changes to the configuration and to maintain the integrity and traceability of the configuration throughout its life. SCM functions and tasks are presented in the paper

  4. Example of software configuration management model

    International Nuclear Information System (INIS)

    Roth, P.

    2006-01-01

    Software configuration management is the mechanism used to track and control software changes and may include the following actions: A tracking system should be established for any changes made to the existing software configuration. Requirements of the configuration management system are the following: - Back up the different software configurations; - Record the details (the date, the subject, the filenames, the supporting documents, the tests, ...) of the changes introduced in the new configuration; - Document all the differences between the different versions. Configuration management allows simultaneous exploitation of one specific version and development of the next version. Minor corrections can be performed in the current exploitation version
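
    As an illustration only (not part of the cited report), a minimal Python sketch of the kind of change-tracking records described above; the class and field names are invented for the example.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class ChangeRecord:
    """One entry in the configuration change log (illustrative fields)."""
    when: date
    subject: str
    filenames: List[str]
    supporting_documents: List[str] = field(default_factory=list)
    tests: List[str] = field(default_factory=list)

@dataclass
class SoftwareConfiguration:
    """A named, backed-up software configuration and its change history."""
    version: str
    changes: List[ChangeRecord] = field(default_factory=list)

    def record_change(self, record: ChangeRecord) -> None:
        self.changes.append(record)

    def diff_against(self, other: "SoftwareConfiguration") -> List[str]:
        """Document the differences between two versions (by changed files)."""
        mine = {f for c in self.changes for f in c.filenames}
        theirs = {f for c in other.changes for f in c.filenames}
        return sorted(mine.symmetric_difference(theirs))

# The exploitation version stays frozen while the next version evolves.
v1 = SoftwareConfiguration("1.0")
v2 = SoftwareConfiguration("1.1")
v2.record_change(ChangeRecord(date(2006, 1, 15), "Fix dose report rounding",
                              ["report.c"], ["CR-042"], ["regression suite"]))
print(v2.diff_against(v1))   # -> ['report.c']
```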

  5. A Software Configuration Management Course

    DEFF Research Database (Denmark)

    Asklund, U.; Bendix, Lars Gotfred

    2003-01-01

    Software Configuration Management has been a big success in research and creation of tools. There are also many vendors in the market of selling courses to companies. However, in the education sector Software Configuration Management has still not quite made it - at least not into the university...... curriculum. It is either not taught at all or is just a minor part of a general course in software engineering. In this paper, we report on our experience with giving a full course entirely dedicated to Software Configuration Management topics and start a discussion of what ideally should be the goal...

  6. Computer software configuration management

    International Nuclear Information System (INIS)

    Pelletier, G.

    1987-08-01

    This report reviews the basic elements of software configuration management (SCM) as defined by military and industry standards. Several software configuration management standards are evaluated given the requirements of the nuclear industry. A survey is included of available automated tools for supporting SCM activities. Some information is given on the experience of establishing and using SCM plans of other organizations that manage critical software. The report concludes with recommendations of practices that would be most appropriate for the nuclear power industry in Canada

  7. A Common Software Configuration Management System for CERN SPS and LEP Accelerators and Technical Services

    CERN Document Server

    Hatziangeli, Eugenia; Bragg, A E; Ninin, P; Patino, J; Sobczak, H

    1999-01-01

    Software configuration management activities are crucial to assure the integrity of current operational software and the quality of new software either being developed at CERN or outsourced. The functionality of the present management system became insufficient, with large maintenance overheads. In order to improve our situation, a new software configuration management system has been set up. It is based on Razor, a commercial tool, which supports the management of file versions and operational software releases, along with integrated problem-reporting capabilities. In addition to the basic tool functionality, automated procedures were custom made for the installation and distribution of operational software. Policies were developed and applied over the software development life cycle to provide visibility and control. The system ensures that, at all times, the status and location of all deliverable versions are known, the state of shared objects is carefully controlled and unauthorised changes prevented. It provides ...

  8. A Common Software-Configuration Management System for CERN SPS and LEP Accelerators and Technical Services

    CERN Document Server

    Hatziangeli, Eugenia; Bragg, A E; Ninin, P; Patino, J; Sobczak, H

    2000-01-01

    Software-configuration management activities are crucial to ensure the integrity of current operational software and the quality of new software either being developed at CERN or outsourced. The functionality of the present management system became insufficient with large maintenance overheads. In order to improve our situation, a new software-configuration management system has been set up. It is based on Razor R, a commercial tool, which supports the management of file versions and operational software releases, along with integrated problem-reporting capabilities. In addition to the basic tool functionality, automated procedures were custom-made for the installation and distribution of operational software. The system ensures that, at all times, the status and location of all deliverable versions are known, the state of shared objects is carefully controlled and unauthorized changes prevented. This paper outlines the reasons for selecting the chosen tool, the implementation of the system and the final goal...

  9. The Ragnarok Architectural Software Configuration Management Model

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    1999-01-01

    The architecture is the fundamental framework for designing and implementing large scale software, and the ability to trace and control its evolution is essential. However, many traditional software configuration management tools view 'software' merely as a set of files, not as an architecture....... This introduces an unfortunate impedance mismatch between the design domain (architecture level) and configuration management domain (file level.) This paper presents a software configuration management model that allows tight version control and configuration management of the architecture of a software system...

  10. A Configurable, Object-Oriented, Transportation System Software Framework

    Energy Technology Data Exchange (ETDEWEB)

    KELLY,SUZANNE M.; MYRE,JOHN W.; PRICE,MARK H.; RUSSELL,ERIC D.; SCOTT,DAN W.

    2000-08-01

    The Transportation Surety Center, 6300, has been conducting continuing research into and development of information systems for the Configurable Transportation Security and Information Management System (CTSS) project, an Object-Oriented Framework approach that uses Component-Based Software Development to facilitate rapid deployment of new systems while improving software cost containment, development reliability, compatibility, and extensibility. The direction has been to develop a Fleet Management System (FMS) framework using object-oriented technology. The goal for the current development is to provide a software and hardware environment that will demonstrate and support object-oriented development commonly in the FMS Central Command Center and Vehicle domains.

  11. Observation-Driven Configuration of Complex Software Systems

    Science.gov (United States)

    Sage, Aled

    2010-06-01

    The ever-increasing complexity of software systems makes them hard to comprehend, predict and tune due to emergent properties and non-deterministic behaviour. Complexity arises from the size of software systems and the wide variety of possible operating environments: the increasing choice of platforms and communication policies leads to ever more complex performance characteristics. In addition, software systems exhibit different behaviour under different workloads. Many software systems are designed to be configurable so that policies can be chosen to meet the needs of various stakeholders. For complex software systems it can be difficult to accurately predict the effects of a change and to know which configuration is most appropriate. This thesis demonstrates that it is useful to run automated experiments that measure a selection of system configurations. Experiments can find configurations that meet the stakeholders' needs, find interesting behavioural characteristics, and help produce predictive models of the system's behaviour. The design and use of ACT (Automated Configuration Tool) for running such experiments is described, in combination with a number of search strategies for deciding on the configurations to measure. Design Of Experiments (DOE) is discussed, with emphasis on Taguchi Methods. These statistical methods have been used extensively in manufacturing, but have not previously been used for configuring software systems. The novel contribution here is an industrial case study, applying the combination of ACT and Taguchi Methods to DC-Directory, a product from Data Connection Ltd (DCL). The case study investigated the applicability of Taguchi Methods for configuring complex software systems. Taguchi Methods were found to be useful for modelling and configuring DC-Directory, making them a valuable addition to the techniques available to system administrators and developers.
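
    A simplified sketch, not taken from the thesis, of the fractional-experiment idea behind Taguchi Methods: measure only an orthogonal-array subset of configurations rather than the full factorial space. The factor names and the measurement function are hypothetical stand-ins.

```python
# Three two-level factors (hypothetical tuning knobs for a directory-server-like system).
factors = {"cache_size": ["small", "large"],
           "thread_pool": ["4", "16"],
           "logging": ["off", "on"]}

# L4 orthogonal array for three two-level factors (rows = runs, columns = factor levels).
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def measure(config):
    """Stand-in for running the real system under 'config' and timing a workload."""
    return (50 if config["cache_size"] == "small" else 30) \
         + (20 if config["thread_pool"] == "4" else 10) \
         + (5 if config["logging"] == "on" else 0)

names = list(factors)
runs = []
for row in L4:  # only 4 of the 2**3 = 8 full-factorial runs are measured
    config = {name: factors[name][level] for name, level in zip(names, row)}
    runs.append((config, measure(config)))

for config, latency in runs:
    print(config, "->", latency)

best = min(runs, key=lambda r: r[1])
print("best observed configuration:", best[0])
```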

  12. Experiences with Architectural Software Configuration Management in Ragnarok

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    1998-01-01

    This paper describes a model, denoted architectural software configuration management, that minimises the gap between software design and configuration management by allowing developers to do configuration- and version control of the abstractions and hierarchy in a software architecture. The model...... emphasises traceability and reproducibility by unifying the concepts version and bound configuration. Experiences with such a model, implemented in a prototype “Ragnarok”, from three real-life, small- to medium-sized, software development projects are reported. The conclusion is that the presented model...

  13. Configuration Fuzzing for Software Vulnerability Detection.

    Science.gov (United States)

    Dai, Huning; Murphy, Christian; Kaiser, Gail

    2010-02-15

    Many software security vulnerabilities only reveal themselves under certain conditions, i.e., particular configurations of the software together with its particular runtime environment. One approach to detecting these vulnerabilities is fuzz testing, which feeds a range of randomly modified inputs to a software application while monitoring it for failures. However, typical fuzz testing makes no guarantees regarding the syntactic and semantic validity of the input, or of how much of the input space will be explored. To address these problems, in this paper we present a new testing methodology called configuration fuzzing. Configuration fuzzing is a technique whereby the configuration of the running application is randomly modified at certain execution points, in order to check for vulnerabilities that only arise in certain conditions. As the application runs in the deployment environment, this testing technique continuously fuzzes the configuration and checks "security invariants" that, if violated, indicate a vulnerability; however, the fuzzing is performed in a duplicated copy of the original process, so that it does not affect the state of the running application. In addition to discussing the approach and describing a prototype framework for implementation, we also present the results of a case study to demonstrate the approach's efficiency.
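
    A toy sketch of the core mechanism described in the abstract: mutate a copy of the running configuration and check a security invariant, leaving the live configuration untouched. The configuration keys and the invariant are invented for illustration; the real framework fuzzes at instrumented execution points inside the application.

```python
import copy
import random

def security_invariant(config):
    """Invariant: debug endpoints must never be exposed without authentication."""
    return not (config["debug_endpoint"] and not config["require_auth"])

def fuzz_configuration(config):
    """Randomly flip one boolean option, mimicking a configuration mutation."""
    mutated = copy.deepcopy(config)
    key = random.choice(list(mutated))
    if isinstance(mutated[key], bool):
        mutated[key] = not mutated[key]
    return mutated

running_config = {"debug_endpoint": False, "require_auth": True, "verbose": False}

# At a checkpoint, fuzz a *copy* so the running application's state is untouched.
for trial in range(20):
    candidate = fuzz_configuration(running_config)
    if not security_invariant(candidate):
        print("possible vulnerability under configuration:", candidate)
```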

  14. Computer Software Configuration Item-Specific Flight Software Image Transfer Script Generator

    Science.gov (United States)

    Bolen, Kenny; Greenlaw, Ronald

    2010-01-01

    A K-shell UNIX script enables the International Space Station (ISS) Flight Control Team (FCT) operators in NASA's Mission Control Center (MCC) in Houston to transfer an entire or partial computer software configuration item (CSCI) from a flight software compact disk (CD) to the onboard Portable Computer System (PCS). The tool is designed to read the content stored on a flight software CD and generate individual CSCI transfer scripts that are capable of transferring the flight software content in a given subdirectory on the CD to the scratch directory on the PCS. The flight control team can then transfer the flight software from the PCS scratch directory to the Electronically Erasable Programmable Read Only Memory (EEPROM) of an ISS Multiplexer/Demultiplexer (MDM) via the Indirect File Transfer capability. The individual CSCI scripts and the CSCI Specific Flight Software Image Transfer Script Generator (CFITSG), when executed a second time, will remove all components from their original execution. The tool will identify errors in the transfer process and create logs of the transferred software for the purposes of configuration management.
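
    A rough, hypothetical sketch of the generator pattern described (scan a software image directory and emit one transfer script per CSCI subdirectory); the paths and generated commands are placeholders, not the actual NASA tool's output.

```python
from pathlib import Path

def generate_transfer_scripts(cd_root: str, output_dir: str, scratch_dir: str = "/scratch") -> None:
    """Emit one shell-style transfer script per CSCI subdirectory found in the CD image."""
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    for csci_dir in sorted(p for p in Path(cd_root).iterdir() if p.is_dir()):
        files = sorted(f.name for f in csci_dir.iterdir() if f.is_file())
        lines = ["#!/bin/sh",
                 f"# Transfer CSCI '{csci_dir.name}' to the PCS scratch directory"]
        lines += [f"cp '{csci_dir / name}' '{scratch_dir}/'" for name in files]
        lines.append(f"echo 'transferred {len(files)} files for {csci_dir.name}' >> transfer.log")
        script = out / f"transfer_{csci_dir.name}.sh"
        script.write_text("\n".join(lines) + "\n")
        print("wrote", script)

# Example call (paths are placeholders):
# generate_transfer_scripts("/media/flight_sw_cd", "./transfer_scripts")
```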

  15. Configuration management plan for the GENII software

    International Nuclear Information System (INIS)

    Rittmann, P.D.

    1994-01-01

    The GENII program calculates doses from radionuclides released into the environment for a variety of possible exposure scenarios. The user prepares an input data file with the necessary modelling assumptions and parameters. The program reads the user's input file, computes the necessary doses and stores these results in an output file. The output file also contains a listing of the user's input and gives the title lines from the data libraries which are accessed in the course of the calculations. The purpose of this document is to provide users of the GENII software with the configuration controls which are planned for use by WHC in accordance with WHC-CM-3-10. The controls are solely for WHC employees. Non-WHC individuals are not excluded, but no promise is made or implied that they will be informed of errors or revisions to the software. The configuration controls cover the GENII software, the GENII user's guide, the list of GENII users at WHC, and the backup copies. Revisions to the software must be approved prior to distribution in accordance with this configuration management plan

  16. Saltwell PIC Skid Programmable Logic Controller (PLC) Software Configuration Management Plan

    International Nuclear Information System (INIS)

    KOCH, M.R.

    1999-01-01

    This document provides the procedures and guidelines necessary for computer software configuration management activities during the operation and maintenance phases of the Saltwell PIC Skids as required by LMH-PRO-309/Rev. 0, Computer Software Quality Assurance, Section 2.6, Software Configuration Management. The software configuration management plan (SCMP) integrates technical and administrative controls to establish and maintain technical consistency among requirements, physical configuration, and documentation for the Saltwell PIC Skid Programmable Logic Controller (PLC) software during the Hanford application, operations and maintenance. This SCMP establishes the Saltwell PIC Skid PLC Software Baseline, status changes to that baseline, and ensures that software meets design and operational requirements and is tested in accordance with their design basis

  17. Sandia software guidelines, Volume 4: Configuration management

    Energy Technology Data Exchange (ETDEWEB)

    1992-06-01

    This volume is one in a series of Sandia Software Guidelines for use in producing quality software within Sandia National Laboratories. This volume is based on the IEEE standard and guide for software configuration management. The basic concepts and detailed guidance on implementation of these concepts are discussed for several software project types. Example planning documents for both projects and organizations are included.

  18. Tank monitor and control system (TMACS) software configuration management plan

    International Nuclear Information System (INIS)

    GLASSCOCK, J.A.

    1999-01-01

    This Software Configuration Management Plan (SCMP) describes the methodology for control of computer software developed and supported by the Systems Development and Integration (SD and I) organization of Lockheed Martin Services, Inc. (LMSI) for the Tank Monitor and Control System (TMACS). This plan controls changes to the software and configuration files used by TMACS. The controlled software includes the Gensym software package, Gensym knowledge base files developed for TMACS, C-language programs used by TMACS, the operating system on the production machine, language compilers, and all Windows NT commands and functions which affect the operating environment. The configuration files controlled include the files downloaded to the Acromag and Westronic field instruments

  19. Automated software configuration in the MONSOON system

    Science.gov (United States)

    Daly, Philip N.; Buchholz, Nick C.; Moore, Peter C.

    2004-09-01

    MONSOON is the next generation OUV-IR controller project being developed at NOAO. The design is flexible, emphasizing code re-use, maintainability and scalability as key factors. The software needs to support widely divergent detector systems ranging from multi-chip mosaics (for LSST, QUOTA, ODI and NEWFIRM) down to large single or multi-detector laboratory development systems. In order for this flexibility to be effective and safe, the software must be able to configure itself to the requirements of the attached detector system at startup. The basic building block of all MONSOON systems is the PAN-DHE pair which make up a single data acquisition node. In this paper we discuss the software solutions used in the automatic PAN configuration system.
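
    A schematic sketch of startup self-configuration of the kind described: probe the attached detector system and select a matching acquisition profile, refusing to start on an unknown system. The detector names and parameters are hypothetical, not MONSOON's actual configuration tables.

```python
# Hypothetical table mapping a detected detector system to acquisition parameters.
DETECTOR_PROFILES = {
    "single_chip_lab": {"pan_nodes": 1, "channels": 4, "readout_mode": "slow"},
    "multi_chip_mosaic": {"pan_nodes": 8, "channels": 64, "readout_mode": "fast"},
}

def probe_attached_detector() -> str:
    """Stand-in for querying the attached detector hardware at startup."""
    return "multi_chip_mosaic"

def configure_at_startup() -> dict:
    detector = probe_attached_detector()
    try:
        profile = DETECTOR_PROFILES[detector]
    except KeyError:
        raise SystemExit(f"unknown detector system: {detector!r}; refusing to start")
    print(f"configuring {profile['pan_nodes']} PAN-DHE node(s), "
          f"{profile['channels']} channels, {profile['readout_mode']} readout")
    return profile

if __name__ == "__main__":
    configure_at_startup()
```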

  20. Secure Software Configuration Management Processes for nuclear safety software development environment

    International Nuclear Information System (INIS)

    Chou, I.-Hsin

    2011-01-01

    Highlights: → The proposed method emphasizes platform-independent security processes. → A hybrid process based on the nuclear SCM and security regulations is proposed. → Detailed descriptions and a Process Flow Diagram are useful for software developers. - Abstract: The main difference between nuclear and generic software is that the risk factor is infinitely greater in nuclear software - if there is a malfunction in the safety system, it can result in significant economic loss, physical damage or threat to human life. However, secure software development environments have often been ignored in the nuclear industry. In response to the terrorist attacks on September 11, 2001, the US Nuclear Regulatory Commission (USNRC) revised the Regulatory Guide (RG 1.152-2006) 'Criteria for use of computers in safety systems of nuclear power plants' to provide specific security guidance throughout the software development life cycle. Software Configuration Management (SCM) is an essential discipline in the software development environment. SCM involves identifying configuration items, controlling changes to those items, and maintaining their integrity and traceability. For securing the nuclear safety software, this paper proposes Secure SCM Processes (S2CMP) which infuse regulatory security requirements into the proposed SCM processes. Furthermore, a Process Flow Diagram (PFD) is adopted to describe S2CMP, which is intended to enhance the communication between regulators and developers.

  1. Customer configuration updating in a software supply network

    NARCIS (Netherlands)

    Jansen, S.R.L.

    2007-01-01

    Product software development is the activity of development, modification, reuse, re-engineering, maintenance, or any other activities that result in packaged configurations of software components or software-based services that are released for and traded in a specific market [Xu and Brinkkemper].

  2. Tank monitor and control system (TMACS) software configuration management plan; TOPICAL

    International Nuclear Information System (INIS)

    GLASSCOCK, J.A.

    1999-01-01

    This Software Configuration Management Plan (SCMP) describes the methodology for control of computer software developed and supported by the Systems Development and Integration (SD and I) organization of Lockheed Martin Services, Inc. (LMSI) for the Tank Monitor and Control System (TMACS). This plan controls changes to the software and configuration files used by TMACS. The controlled software includes the Gensym software package, Gensym knowledge base files developed for TMACS, C-language programs used by TMACS, the operating system on the production machine, language compilers, and all Windows NT commands and functions which affect the operating environment. The configuration files controlled include the files downloaded to the Acromag and Westronic field instruments

  3. Light Duty Utility Arm computer software configuration management plan

    International Nuclear Information System (INIS)

    Philipp, B.L.

    1998-01-01

    This plan describes the configuration management for the Light Duty Utility Arm robotic manipulator arm control software. It identifies the requirements, associated documents, and the software control methodology. The Light Duty Utility Arm (LDUA) System is a multi-axis robotic manipulator arm and deployment vehicle, used to perform surveillance and characterization operations in support of remediation of defense nuclear wastes currently stored in the Hanford Underground Storage Tanks (USTs) through the available 30.5 cm (12 in.) risers. This plan describes the configuration management of the LDUA software

  4. Software Configurable Multichannel Transceiver

    Science.gov (United States)

    Freudinger, Lawrence C.; Cornelius, Harold; Hickling, Ron; Brooks, Walter

    2009-01-01

    Emerging test instrumentation and test scenarios increasingly require network communication to manage complexity. Adapting wireless communication infrastructure to accommodate challenging testing needs can benefit from reconfigurable radio technology. A fundamental requirement for a software-definable radio system is independence from carrier frequencies, one of the radio components that to date has seen only limited progress toward programmability. This paper overviews an ongoing project to validate the viability of a promising chipset that performs conversion of radio frequency (RF) signals directly into digital data for the wireless receiver and, for the transmitter, converts digital data into RF signals. The Software Configurable Multichannel Transceiver (SCMT) enables four transmitters and four receivers in a single unit the size of a commodity disk drive, programmable for any frequency band between 1 MHz and 6 GHz.
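
    A small illustrative sketch of validating a channel configuration against the stated 1 MHz to 6 GHz tuning range and the four-transmitter/four-receiver complement; the class is an assumption for the example, not the SCMT's real programming interface.

```python
from dataclasses import dataclass

MIN_HZ = 1_000_000          # 1 MHz lower bound stated for the SCMT
MAX_HZ = 6_000_000_000      # 6 GHz upper bound
MAX_CHANNELS = 4            # four transmitters and four receivers per unit

@dataclass
class ChannelConfig:
    direction: str           # "tx" or "rx"
    index: int               # 0..3
    carrier_hz: float

    def validate(self) -> None:
        if self.direction not in ("tx", "rx"):
            raise ValueError("direction must be 'tx' or 'rx'")
        if not 0 <= self.index < MAX_CHANNELS:
            raise ValueError(f"channel index must be 0..{MAX_CHANNELS - 1}")
        if not MIN_HZ <= self.carrier_hz <= MAX_HZ:
            raise ValueError("carrier frequency outside the 1 MHz - 6 GHz range")

cfg = ChannelConfig(direction="rx", index=2, carrier_hz=2.4e9)
cfg.validate()
print("accepted:", cfg)
```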

  5. The Software Life-Cycle Based Configuration Management Tasks for the KNICS Project

    International Nuclear Information System (INIS)

    Cheon, Se Woo; Kwon, Kee Choon

    2005-01-01

    Software configuration management (SCM) is an activity, which configures the form of a software system (e.g., design documents and programs) and systematically manages and controls the modifications used to compile the plans, development, and operations resulting from software development and maintenance. The SCM tool, NuSCM, has been specifically developed for the software life-cycle configuration management of developing the KNICS plant protection system (PPS). This paper presents the application of NuSCM to the KNICS project

  6. Saltwell Leak Detector Station Programmable Logic Controller (PLC) Software Configuration Management Plan (SCMP)

    International Nuclear Information System (INIS)

    WHITE, K.A.

    2000-01-01

    This document provides the procedures and guidelines necessary for computer software configuration management activities during the operation and maintenance phases of the Saltwell Leak Detector Stations as required by HNF-PRO-309/Rev.1, Computer Software Quality Assurance, Section 2.4, Software Configuration Management. The software configuration management plan (SCMP) integrates technical and administrative controls to establish and maintain technical consistency among requirements, physical configuration, and documentation for the Saltwell Leak Detector Station Programmable Logic Controller (PLC) software during the Hanford application, operations and maintenance. This SCMP establishes the Saltwell Leak Detector Station PLC Software Baseline, status changes to that baseline, and ensures that software meets design and operational requirements and is tested in accordance with their design basis

  7. Windows Calorimeter Control (WinCal) program computer software configuration management plan

    International Nuclear Information System (INIS)

    1997-01-01

    This document describes the system configuration management activities performed in support of the Windows Calorimeter Control (WinCal) system, in accordance with Site procedures based on Institute of Electrical and Electronic Engineers (IEEE) Standard 828-1990, Standard for Software Configuration Management Plans (IEEE 1990) and IEEE Standard 1042-1987, Guide to Software Configuration Management (IEEE 1987)

  8. Working Environment and Software Configuration Management Assimilation using Traceability Enhancement Technique

    International Nuclear Information System (INIS)

    Iqbal, H.; Javed, A.; Majeed, M. N.

    2015-01-01

    Software Configuration Management (SCM) systems are very useful in coordinating group effort in large and complex software systems. As user requirements, market needs, tools, technology or business goals change, changes are continuously introduced while developing the software product. For change management, traceability techniques and SCM are two prominent practices in the software development process. SCM helps in managing configuration items while traceability helps in tracing the knowledge about the configuration items. In this paper we propose a model of the SCM system integrated with the working environment for the case when changes are introduced in multiple artifacts, by which high-quality products are developed. (author)
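
    A minimal sketch of the traceability idea discussed above: link configuration items to the requirements they implement so that a change to one artifact can be traced to the other artifacts affected. The artifact and requirement identifiers are invented.

```python
# Trace links: configuration item -> requirements it implements (hypothetical IDs).
trace_links = {
    "login_module.c": {"REQ-12", "REQ-18"},
    "login_module_test.py": {"REQ-12"},
    "user_guide.md": {"REQ-18"},
}

def impacted_items(changed_item: str) -> set:
    """Other configuration items sharing at least one requirement with the changed item."""
    reqs = trace_links.get(changed_item, set())
    return {item for item, linked in trace_links.items()
            if item != changed_item and linked & reqs}

print(impacted_items("login_module.c"))  # artifacts to revisit when login_module.c changes
```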

  9. Software control and system configuration management - A process that works

    Science.gov (United States)

    Petersen, K. L.; Flores, C., Jr.

    1983-01-01

    A comprehensive software control and system configuration management process for flight-crucial digital control systems of advanced aircraft has been developed and refined to insure efficient flight system development and safe flight operations. Because of the highly complex interactions among the hardware, software, and system elements of state-of-the-art digital flight control system designs, a systems-wide approach to configuration control and management has been used. Specific procedures are implemented to govern discrepancy reporting and reconciliation, software and hardware change control, systems verification and validation testing, and formal documentation requirements. An active and knowledgeable configuration control board reviews and approves all flight system configuration modifications and revalidation tests. This flexible process has proved effective during the development and flight testing of several research aircraft and remotely piloted research vehicles with digital flight control systems that ranged from relatively simple to highly complex, integrated mechanizations.

  10. Software control and system configuration management: A systems-wide approach

    Science.gov (United States)

    Petersen, K. L.; Flores, C., Jr.

    1984-01-01

    A comprehensive software control and system configuration management process for flight-crucial digital control systems of advanced aircraft has been developed and refined to insure efficient flight system development and safe flight operations. Because of the highly complex interactions among the hardware, software, and system elements of state-of-the-art digital flight control system designs, a systems-wide approach to configuration control and management has been used. Specific procedures are implemented to govern discrepancy reporting and reconciliation, software and hardware change control, systems verification and validation testing, and formal documentation requirements. An active and knowledgeable configuration control board reviews and approves all flight system configuration modifications and revalidation tests. This flexible process has proved effective during the development and flight testing of several research aircraft and remotely piloted research vehicles with digital flight control systems that ranged from relatively simple to highly complex, integrated mechanizations.

  11. Configuring the Orion Guidance, Navigation, and Control Flight Software for Automated Sequencing

    Science.gov (United States)

    Odegard, Ryan G.; Siliwinski, Tomasz K.; King, Ellis T.; Hart, Jeremy J.

    2010-01-01

    The Orion Crew Exploration Vehicle is being designed with greater automation capabilities than any other crewed spacecraft in NASA's history. The Guidance, Navigation, and Control (GN&C) flight software architecture is designed to provide a flexible and evolvable framework that accommodates increasing levels of automation over time. Within the GN&C flight software, a data-driven approach is used to configure software. This approach allows data reconfiguration and updates to automated sequences without requiring recompilation of the software. Because of the great dependency of the automation and the flight software on the configuration data, data management is a vital component of the processes for software certification, mission design, and flight operations. To enable the automated sequencing and data configuration of the GN&C subsystem on Orion, a desktop database configuration tool has been developed. The database tool allows the specification of the GN&C activity sequences, the automated transitions in the software, and the corresponding parameter reconfigurations. These aspects of the GN&C automation on Orion are all coordinated via data management, and the database tool provides the ability to test the automation capabilities during the development of the GN&C software. In addition to providing the infrastructure to manage the GN&C automation, the database tool has been designed with capabilities to import and export artifacts for simulation analysis and documentation purposes. Furthermore, the database configuration tool, currently used to manage simulation data, is envisioned to evolve into a mission planning tool for generating and testing GN&C software sequences and configurations. A key enabler of the GN&C automation design, the database tool allows both the creation and maintenance of the data artifacts, as well as serving the critical role of helping to manage, visualize, and understand the data-driven parameters both during software development
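
    A bare-bones sketch of the data-driven approach described: activity sequences, transitions and parameters live in configuration data that can be edited and reloaded without recompiling the software. The sequence names and parameters are invented and are not Orion GN&C data.

```python
import json

# Hypothetical configuration data; in practice this would be exported from a database tool.
SEQUENCE_DATA = json.loads("""
{
  "orbit_insertion": {
    "steps": ["arm_thrusters", "burn", "safe_thrusters"],
    "transitions": {"burn_complete": "coast"},
    "parameters": {"burn_duration_s": 42.0}
  },
  "coast": {"steps": ["attitude_hold"], "transitions": {}, "parameters": {}}
}
""")

def run_sequence(name: str, data: dict) -> None:
    """Execute a sequence and follow its automated transitions, all driven by data."""
    seq = data[name]
    for step in seq["steps"]:
        print(f"[{name}] executing step: {step} (params={seq['parameters']})")
    for event, target in seq["transitions"].items():
        print(f"[{name}] on event '{event}' transition to '{target}'")
        run_sequence(target, data)

run_sequence("orbit_insertion", SEQUENCE_DATA)
```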

  12. Software Configuration Management Plan for the Sodium Removal System

    International Nuclear Information System (INIS)

    HILL, L.F.

    2000-01-01

    This document establishes the Software Configuration Management Plan (SCMP) for the software associated with the control system of the Sodium Removal System (SRS) located in the Interim Examination and Maintenance (IEM Cell) Facility of the FFTF Flux Test

  13. ATLAS software configuration and build tool optimisation

    Science.gov (United States)

    Rybkin, Grigory; Atlas Collaboration

    2014-06-01

    ATLAS software code base is over 6 million lines organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers and used by more than 2500 physicists from over 200 universities and laboratories in 6 continents. To meet the challenge of configuration and building of this software, the Configuration Management Tool (CMT) is used. CMT expects each package to describe its build targets, build and environment setup parameters, and dependencies on other packages in a text file called requirements, and each project (group of packages) to describe its policies and dependencies on other projects in a text project file. Based on the effective set of configuration parameters read from the requirements files of dependent packages and project files, CMT commands build the packages, generate the environment for their use, or query the packages. The main focus was on build time performance, which was optimised within several approaches: reduction of the number of reads of requirements files, which are now read once per package by a CMT build command that generates cached requirements files for subsequent CMT build commands; introduction of more fine-grained build parallelism at package task level, i.e., dependent applications and libraries are compiled in parallel; code optimisation of CMT commands used for build; and introduction of package level build parallelism, i.e., parallelising the build of independent packages. By default, CMT launches NUMBER-OF-PROCESSORS build commands in parallel. The other focus was on CMT commands optimisation in general, which made them approximately 2 times faster. CMT can generate a cached requirements file for the environment setup command, which is especially useful for deployment on distributed file systems like AFS or CERN VMFS. The use of parallelism, caching and code optimisation significantly (by several times) reduced software build time and environment setup time, and increased the efficiency of
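
    A toy sketch of package-level build parallelism of the kind described: at each step, every package whose dependencies are already built is compiled concurrently. The package graph and the build step are placeholders, not CMT itself.

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Hypothetical package dependency graph: package -> packages it depends on.
DEPENDS_ON = {
    "Core": [],
    "Geometry": ["Core"],
    "Tracking": ["Core", "Geometry"],
    "Calorimeter": ["Core"],
    "Reconstruction": ["Tracking", "Calorimeter"],
}

def build(package: str) -> str:
    time.sleep(0.1)            # stand-in for compiling the package
    return package

built = set()
while len(built) < len(DEPENDS_ON):
    # All packages whose dependencies are already built can be compiled in parallel.
    ready = [p for p, deps in DEPENDS_ON.items()
             if p not in built and all(d in built for d in deps)]
    with ThreadPoolExecutor() as pool:
        for finished in pool.map(build, ready):
            print("built", finished)
    built.update(ready)
```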

  14. Software Configuration Management: The Quality Weakness

    International Nuclear Information System (INIS)

    Arrojo, E.; Garcia, P.

    1998-01-01

    At the moment it is very difficult to find any process in the industry where software is not involved. We trust that software minimizes the possibility of process failures. In parallel, the quality and safety requirements of our processes have been improved to satisfactory levels. Let's look around us. Every day, thousands of calculations are carried out by our engineers using computer programs. Hundreds of processes are controlled automatically. Safety margins, limits, operation controls..., are derived from them. The tools begin to control our processes but, who controls the tool? Once they have been installed and once they are running, are they always reliable? NO. If you think that your current systems are satisfactory, we propose a game in this report. It is just a test. What is your score? Then we revise the concept of Configuration Management and we describe an ideal machine: the ''Perpetuum Mobile'' of Configuration. We describe some rules for implementation and improvement and we comment on the operating experience at ENUSA. (Author)

  15. Modularisation of Software Configuration Management

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    2000-01-01

    management, and outline how modularisation is natural and powerful also in this context. The analysis is partly based on experiences from case studies where small- to medium-sized development projects are using a prototype tool that supports modular software configuration management....

  16. EMMA: a new paradigm in configurable software

    International Nuclear Information System (INIS)

    Nogiec, J. M.; Trombly-Freytag, K.

    2017-01-01

    EMMA is a framework designed to create a family of configurable software systems, with emphasis on extensibility and flexibility. It is based on a loosely coupled, event driven architecture. The EMMA framework has been built upon the premise of composing software systems from independent components. It opens up opportunities for reuse of components and their functionality and composing them together in many different ways. As a result, it provides the developer of test and measurement applications with a lightweight alternative to microservices, while sharing their various advantages, including composability, loose coupling, encapsulation, and reuse.

  17. EMMA: a new paradigm in configurable software

    Science.gov (United States)

    Nogiec, J. M.; Trombly-Freytag, K.

    2017-10-01

    EMMA is a framework designed to create a family of configurable software systems, with emphasis on extensibility and flexibility. It is based on a loosely coupled, event driven architecture. The EMMA framework has been built upon the premise of composing software systems from independent components. It opens up opportunities for reuse of components and their functionality and composing them together in many different ways. It provides the developer of test and measurement applications with a lightweight alternative to microservices, while sharing their various advantages, including composability, loose coupling, encapsulation, and reuse.
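
    A tiny sketch of the loosely coupled, event-driven composition style described in the two EMMA records above: independent components communicate only through a shared event bus. The component and topic names are invented for the example.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal publish/subscribe hub; components never call each other directly."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()

# Two independent components, composed only by the topics they agree on.
def measurement_component(payload):
    reading = {"probe": payload["probe"], "field_mT": 512.3}
    bus.publish("measurement.done", reading)

def logging_component(payload):
    print("logging result:", payload)

bus.subscribe("measurement.start", measurement_component)
bus.subscribe("measurement.done", logging_component)
bus.publish("measurement.start", {"probe": "A1"})
```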

  18. Good Practices for Software Configuration Management with MDA Metodology

    OpenAIRE

    Manuel Morejón Espinosa

    2012-01-01

    Software Configuration Management (SCM) forms part of the software development process. Its principal goal is to coordinate this development and minimize all possible errors. In order to meet its goal, various activities are carried out, among which can be identified: item identification, change control, version control, audit and status reporting. Inside enterprise applications, software development can be guided by the system model as a methodology. The name of this methodology is Model Driven ...

  19. Study of evaluation techniques of software configuration management and reliability

    Energy Technology Data Exchange (ETDEWEB)

    Youn, Cheong; Baek, Y. W.; Kim, H. C.; Han, H. C.; Choi, C. R. [Chungnam National Univ., Taejon (Korea, Republic of)

    2001-03-15

    The study of activities to ensure software safety and quality must be carried out on the basis of an established software development process for digitalized nuclear plants. In particular, software testing and Verification and Validation (V and V) must be studied. For this purpose, methodologies and tools which can improve software quality are evaluated, and software Testing, V and V and Configuration Management which can be applied to the software life cycle are investigated. This study establishes a guideline that can be used to assure software safety and reliability requirements in digitalized nuclear plant systems.

  20. In-field inspection support software: A status report on the Common Inspection On-site Software Package (CIOSP) project

    International Nuclear Information System (INIS)

    Novatchev, Dimitre; Titov, Pavel; Siradjov, Bakhtiiar; Vlad, Ioan; Xiao Jing

    2001-01-01

    Full text: IAEA has invested much thought and effort into developing software that can assist inspectors during their inspection work. Experience with such applications has been steadily growing, and IAEA has recently commissioned a next-generation software package. This kind of software must accommodate inspection tasks that can vary substantially in function depending on the type of installation being inspected, while ensuring that the resulting software package has a wide range of usability and can preclude excessive development of plant-specific applications. The Common Inspection On-site Software Package is being developed in the Department of Safeguards to address the limitations of the existing software and to expand its coverage of the inspection process. CIOSP is 'common' in that it is aimed at providing support for as many facilities as possible with the minimum re-configuration. At the same time it has to cater to the varying needs of individual facilities and the different instrumentation and verification methods used. A component-based approach was taken to successfully tackle the challenges that the development of this software presented. CIOSP consists of the following major components: A framework into which individual plug-ins supporting various inspection activities can integrate at run-time; A central data store containing all facility configuration data and all data collected during inspections; A local data store, which resides on the inspector's computer, where the current inspection's data is stored; A set of services used by all plug-ins (i.e. data transformation, authentication, replication services etc.). This architecture allows for incremental development and extension of the software with plug-ins that support individual inspection activities. The core set of components, along with the framework, the Inventory Verification, Book Examination and Records and Reports Comparison plug-ins, have been developed. The development of the Short Notice Random
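
    A compact sketch of the plug-in architecture outlined above: a framework registers independent plug-ins, each supporting one inspection activity, and gives them access to a local data store and shared services. The class and activity names follow the abstract loosely and are otherwise invented.

```python
from abc import ABC, abstractmethod

class InspectionPlugin(ABC):
    """Base class each inspection-activity plug-in implements."""
    name = "unnamed"

    @abstractmethod
    def run(self, local_store: dict, services: dict) -> None: ...

class InventoryVerification(InspectionPlugin):
    name = "Inventory Verification"
    def run(self, local_store, services):
        local_store["inventory_checked"] = True
        services["log"]("inventory verified against facility configuration")

class BookExamination(InspectionPlugin):
    name = "Book Examination"
    def run(self, local_store, services):
        services["log"]("accounting records examined")

class Framework:
    def __init__(self):
        self.plugins = []
        self.local_store = {}                         # inspector's local data store
        self.services = {"log": lambda msg: print("[framework]", msg)}

    def register(self, plugin: InspectionPlugin) -> None:
        self.plugins.append(plugin)

    def run_inspection(self) -> None:
        for plugin in self.plugins:
            print("running plug-in:", plugin.name)
            plugin.run(self.local_store, self.services)

fw = Framework()
fw.register(InventoryVerification())
fw.register(BookExamination())
fw.run_inspection()
```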

  1. Software Configuration Management Plan for the K West Basin Integrated Water Treatment System (IWTS) - Project A.9

    International Nuclear Information System (INIS)

    GREEN, J.W.

    2000-01-01

    This document provides a configuration control plan for the software associated with the operation and control of the Integrated Water Treatment System (IWTS). It establishes requirements for ensuring configuration item identification, configuration control, configuration status accounting, defect reporting and resolution of computer software. It is written to comply with HNF-SD-SNF-CM-001, Spent Nuclear Fuel Configuration Management Plan (Forehand 1998) and HNF-PRO-309 Computer Software Quality Assurance Requirements, and applicable sections of administrative procedure CM-6-037-00, SNF Project Process Automation Software and Equipment

  2. Guidance and Control Software Project Data - Volume 4: Configuration Management and Quality Assurance Documents

    Science.gov (United States)

    Hayhurst, Kelly J. (Editor)

    2008-01-01

    The Guidance and Control Software (GCS) project was the last in a series of software reliability studies conducted at Langley Research Center between 1977 and 1994. The technical results of the GCS project were recorded after the experiment was completed. Some of the support documentation produced as part of the experiment, however, is serving an unexpected role far beyond its original project context. Some of the software used as part of the GCS project was developed to conform to the RTCA/DO-178B software standard, "Software Considerations in Airborne Systems and Equipment Certification," used in the civil aviation industry. That standard requires extensive documentation throughout the software development life cycle, including plans, software requirements, design and source code, verification cases and results, and configuration management and quality control data. The project documentation that includes this information is open for public scrutiny without the legal or safety implications associated with comparable data from an avionics manufacturer. This public availability has afforded an opportunity to use the GCS project documents for DO-178B training. This report provides a brief overview of the GCS project, describes the 4-volume set of documents and the role they are playing in training, and includes configuration management and quality assurance documents from the GCS project. Volume 4 contains six appendices: A. Software Accomplishment Summary for the Guidance and Control Software Project; B. Software Configuration Index for the Guidance and Control Software Project; C. Configuration Management Records for the Guidance and Control Software Project; D. Software Quality Assurance Records for the Guidance and Control Software Project; E. Problem Report for the Pluto Implementation of the Guidance and Control Software Project; and F. Support Documentation Change Reports for the Guidance and Control Software Project.

  3. APIs for QoS configuration in Software Defined Networks

    DEFF Research Database (Denmark)

    Caba, Cosmin Marius; Soler, José

    2015-01-01

    The OpenFlow (OF) protocol is widely used in Software Defined Networking (SDN) to realize the communication between the controller and forwarding devices. OF allows great flexibility in managing traffic flows. However, OF alone is not enough to build more complex SDN services that require complete...... such as configuration of devices, ports, queues, etc. An Application Programming Interface (API) for dynamic configuration of QoS resources in the network devices is implemented herein, by using the capabilities of OVSDB. Further, the paper demonstrates the possibility to create network services with coarse granularity...... on top of the fine granular services exposed by the QoS configuration API at the SDNC. A series of tests emphasize the capabilities and the performance of the implemented QoS configuration API....
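
    A schematic sketch of the layering described: a fine-grained queue-configuration API at the controller with a coarser-grained network service composed on top of it. The classes and parameters are assumptions for illustration and do not reproduce the paper's API or the OVSDB schema.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Queue:
    queue_id: int
    min_rate_kbps: int
    max_rate_kbps: int

class QosConfigApi:
    """Fine-grained, per-port queue configuration (stand-in for an OVSDB-backed API)."""
    def __init__(self):
        self._queues: Dict[str, List[Queue]] = {}

    def create_queue(self, switch: str, port: str, queue: Queue) -> None:
        self._queues.setdefault(f"{switch}:{port}", []).append(queue)

    def list_queues(self, switch: str, port: str) -> List[Queue]:
        return self._queues.get(f"{switch}:{port}", [])

class PremiumTrafficService:
    """Coarse-grained service composed from the fine-grained API."""
    def __init__(self, api: QosConfigApi):
        self.api = api

    def provision(self, switch: str, port: str, guaranteed_kbps: int) -> None:
        self.api.create_queue(switch, port, Queue(0, guaranteed_kbps, 10 * guaranteed_kbps))
        self.api.create_queue(switch, port, Queue(1, 0, guaranteed_kbps))  # best effort

api = QosConfigApi()
PremiumTrafficService(api).provision("s1", "eth1", guaranteed_kbps=5000)
print(api.list_queues("s1", "eth1"))
```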

  4. Implementation of Software Configuration Management Process by Models: Practical Experiments and Learned Lessons

    Directory of Open Access Journals (Sweden)

    Bartusevics Arturs

    2014-12-01

    Full Text Available Nowadays the software configuration management process is not only a dilemma of which system should be used for version control or how to merge changes from one source code branch to another. There are multiple tasks such as version control, build management, deploy management, status accounting, bug tracking and many others that should be solved to support the full configuration management process according to the most popular quality standards. The main scope of the mentioned process is to include only valid and tested software items in the final version of the product and prepare a new version as soon as possible. To implement the different tasks of the software configuration management process, a set of different tools, scripts and utilities should be used. The current paper provides a new model-based approach to the implementation of configuration management. Using different models, the new approach helps to organize existing solutions and develop new ones in a parameterized way, thus increasing the reuse of solutions. The study provides a general description of the new model-based conception and definitions of all models needed to implement the new approach. The second part of the paper contains an overview of criteria, practical experiments and lessons learned from using the new models in software configuration management. Finally, further work is defined based on the results of practical experiments and lessons learned.

  5. CONFU: Configuration Fuzzing Testing Framework for Software Vulnerability Detection.

    Science.gov (United States)

    Dai, Huning; Murphy, Christian; Kaiser, Gail

    2010-01-01

    Many software security vulnerabilities only reveal themselves under certain conditions, i.e., particular configurations and inputs together with a certain runtime environment. One approach to detecting these vulnerabilities is fuzz testing. However, typical fuzz testing makes no guarantees regarding the syntactic and semantic validity of the input, or of how much of the input space will be explored. To address these problems, we present a new testing methodology called Configuration Fuzzing. Configuration Fuzzing is a technique whereby the configuration of the running application is mutated at certain execution points, in order to check for vulnerabilities that only arise in certain conditions. As the application runs in the deployment environment, this testing technique continuously fuzzes the configuration and checks "security invariants" that, if violated, indicate a vulnerability. We discuss the approach and introduce a prototype framework called ConFu (CONfiguration FUzzing testing framework) for implementation. We also present the results of case studies that demonstrate the approach's feasibility and evaluate its performance.

  6. Configurable software for satellite graphics

    Energy Technology Data Exchange (ETDEWEB)

    Hartzman, P D

    1977-12-01

    An important goal in interactive computer graphics is to provide users with both quick system responses for basic graphics functions and enough computing power for complex calculations. One solution is to have a distributed graphics system in which a minicomputer and a powerful large computer share the work. The most versatile type of distributed system is an intelligent satellite system in which the minicomputer is programmable by the application user and can do most of the work while the large remote machine is used for difficult computations. At New York University, the hardware was configured from available equipment. The level of system intelligence resulted almost completely from software development. Unlike previous work with intelligent satellites, the resulting system had system control centered in the satellite. It also had the ability to reconfigure software during realtime operation. The design of the system was done at a very high level using set theoretic language. The specification clearly illustrated processor boundaries and interfaces. The high-level specification also produced a compact, machine-independent virtual graphics data structure for picture representation. The software was written in a systems implementation language; thus, only one set of programs was needed for both machines. A user can program both machines in a single language. Tests of the system with an application program indicate that it has very high potential. A major result of this work is the demonstration that a gigantic investment in new hardware is not necessary for computing facilities interested in graphics.

  7. ACTS: from ATLAS software towards a common track reconstruction software

    Science.gov (United States)

    Gumpert, C.; Salzburger, A.; Kiehn, M.; Hrdinka, J.; Calace, N.; ATLAS Collaboration

    2017-10-01

    Reconstruction of charged particles’ trajectories is a crucial task for most particle physics experiments. The high instantaneous luminosity achieved at the LHC leads to a high number of proton-proton collisions per bunch crossing, which has put the track reconstruction software of the LHC experiments through a thorough test. Preserving track reconstruction performance under increasingly difficult experimental conditions, while keeping the usage of computational resources at a reasonable level, is an inherent problem for many HEP experiments. Exploiting concurrent algorithms and using multivariate techniques for track identification are the primary strategies to achieve that goal. Starting from current ATLAS software, the ACTS project aims to encapsulate track reconstruction software into a generic, framework- and experiment-independent software package. It provides a set of high-level algorithms and data structures for performing track reconstruction tasks as well as fast track simulation. The software is developed with special emphasis on thread-safety to support parallel execution of the code and data structures are optimised for vectorisation to speed up linear algebra operations. The implementation is agnostic to the details of the detection technologies and magnetic field configuration which makes it applicable to many different experiments.

  8. Reusable Rack Interface Controller Common Software for Various Science Research Racks on the International Space Station

    Science.gov (United States)

    Lu, George C.

    2003-01-01

    The purpose of the EXPRESS (Expedite the PRocessing of Experiments to Space Station) rack project is to provide a set of predefined interfaces for scientific payloads which allow rapid integration into a payload rack on the International Space Station (ISS). VxWorks was selected as the operating system for the rack and payload resource controller, primarily based on the proliferation of VME (Versa Module Eurocard) products. These products provide needed flexibility for future hardware upgrades to meet ever-changing science research rack configuration requirements. On the International Space Station, there are multiple science research rack configurations, including: 1) Human Research Facility (HRF); 2) EXPRESS ARIS (Active Rack Isolation System); 3) WORF (Window Observational Research Facility); and 4) HHR (Habitat Holding Rack). The RIC (Rack Interface Controller) connects payloads to the ISS bus architecture for data transfer between the payload and ground control. The RIC is a general purpose embedded computer which supports multiple communication protocols, including fiber optic communication buses, Ethernet buses, EIA-422, Mil-Std-1553 buses, SMPTE (Society of Motion Picture and Television Engineers)-170M video, and audio interfaces to payloads and the ISS. As a cost saving and software reliability strategy, the Boeing Payload Software Organization developed reusable common software where appropriate. These reusable modules included a set of low-level driver software interfaces to 1553B, RS232, RS422, Ethernet buses, HRDL (High Rate Data Link), video switch functionality, telemetry processing, and executive software hosted on the RIC computer. These drivers formed the basis for software development of the HRF, EXPRESS, EXPRESS ARIS, WORF, and HHR RIC executable modules. The reusable RIC common software has provided extensive benefits, including: 1) Significant reduction in development flow time; 2) Minimal rework and maintenance; 3) Improved reliability; and 4) Overall

  9. Tests of Event Filter Configuration Software

    CERN Multimedia

    Wickens, F.J.

    TDAQ - Tests of Event Filter configuration software Within Trigger/DAQ a major consideration is how well the performance of the system components scale in going from the small set-ups used for development work to the final system with many hundreds of processors and links. In the case of the Event Filter, which makes the final stage of on-line event selection, plus on-line calibrations and monitoring, more than a thousand processors are envisaged. These processors will be divided into sub-farms, most will be remote from the detector and some could even be at institutes far from CERN. As part of the on-line system it is important that the software in the sub-farms can be reconfigured rapidly as runs start and stop, and that the system be fault tolerant. The flow of data inside a sub-farm involves many processes, for distribution and collection of results in addition to those for event processing itself. Supervision code written in Java has been developed to manage the processes within a cluster, with XML f...

  10. Software configuration management plan, 241-AY and 241-AZ tank farm MICON automation system

    International Nuclear Information System (INIS)

    Hill, L.F.

    1997-01-01

    This document establishes a Computer Software Configuration Management Plan (CSCM) for controlling software for the MICON Distributed Control System (DCS) located at the 241-AY and 241-AZ Aging Waste Tank Farm facilities in the 200 East Area. The MICON DCS software controls and monitors the instrumentation and equipment associated with plant systems and processes. A CSCM identifies and defines the configuration items in a system (section 3.1), controls the release and change of these items throughout the system life cycle (section 3.2), records and reports the status of configuration items and change requests (section 3.3), and verifies the completeness and correctness of the items (section 3.4). All software development before initial release, or before software is baselined, is considered developmental. This plan does not apply to developmental software. This plan applies to software that has been baselined and released. The MICON software will monitor and control the related instrumentation and equipment of the 241-AY and 241-AZ Tank Farm ventilation systems. Eventually, this software may also assume the monitoring and control of the tank sludge washing equipment and other systems as they are brought on line. This plan applies to the System Cognizant Manager and MICON Cognizant Engineer (who is also referred to herein as the system administrator) responsible for the software/hardware and administration of the MICON system. This document also applies to any other organizations within Tank Farms which are currently active on the system including system cognizant engineers, nuclear operators, technicians, and control room supervisors

  11. Project W-211, initial tank retrieval systems, retrieval control system software configuration management plan

    International Nuclear Information System (INIS)

    RIECK, C.A.

    1999-01-01

    This Software Configuration Management Plan (SCMP) provides the instructions for change control of the W-211 Project, Retrieval Control System (RCS) software after initial approval/release but prior to the transfer of custody to the waste tank operations contractor. This plan applies to the W-211 system software developed by the project, consisting of the computer human-machine interface (HMI) and programmable logic controller (PLC) software source and executable code, for production use by the waste tank operations contractor. The plan encompasses that portion of the W-211 RCS software represented on project-specific AUTOCAD drawings that are released as part of the C1 definitive design package (these drawings are identified on the drawing list associated with each C-1 package), and the associated software code. Implementation of the plan is required for formal acceptance testing and production release. The software configuration management plan does not apply to reports and data generated by the software except where specifically identified. Control of information produced by the software once it has been transferred for operation is the responsibility of the receiving organization

  12. Guidelines for evaluating software configuration management plans for digital instrumentation and control systems

    International Nuclear Information System (INIS)

    Cheon, Se Woo; Park, Jong Kyun; Lee, Ki Young; Lee, Jang Soo; Kim, Jang Yeon

    2001-08-01

    Software configuration management (SCM) is the process for identifying software configuration items (CIs), controlling the implementation and changes to software, recording and reporting the status of changes, and verifying the completeness and correctness of the released software. SCM consists of two major aspects: planning and implementation. Effective SCM involves planning how activities are to be performed and performing these activities in accordance with the Plan. This report first reviews the background of SCM, which includes key standards, SCM disciplines, SCM basic functions, baselines, software entities, the SCM process, the implementation of SCM, and the tools of SCM. In turn, the report provides the guidelines for evaluating the SCM Plan for digital I and C systems of nuclear power plants. Most of the guidelines in the report are based on IEEE Std 828 and ANSI/IEEE Std 1042. According to BTP-14, NUREG-0800, the evaluation topics for the SCM Plan are classified into three categories: management, implementation, and resource characteristics

  13. Towards easing the configuration and new team member accommodation for open source software based portals

    Science.gov (United States)

    Fu, L.; West, P.; Zednik, S.; Fox, P. A.

    2013-12-01

    For simple portals such as vocabulary based services, which contain small amounts of data and require only hyper-textual representation, it is often an overkill to adopt the whole software stack of database, middleware and front end, or to use a general Web development framework as the starting point of development. Directly combining open source software is a much more favorable approach. However, our experience with the Coastal and Marine Spatial Planning Vocabulary (CMSPV) service portal shows that there are still issues such as system configuration and accommodating a new team member that need to be handled carefully. In this contribution, we share our experience in the context of the CMSPV portal, and focus on the tools and mechanisms we've developed to ease the configuration job and the incorporation process of new project members. We discuss the configuration issues that arise when we don't have complete control over how the software in use is configured and need to follow existing configuration styles that may not be well documented, especially when multiple pieces of such software need to work together as a combined system. As for the CMSPV portal, it is built on two pieces of open source software that are still under rapid development: a Fuseki data server and Epimorphics Linked Data API (ELDA) front end. Both lack mature documentation and tutorials. We developed comparison and labeling tools to ease the problem of system configuration. Another problem that slowed down the project is that project members came and went during the development process, so new members needed to start with a partially configured system and incomplete documentation left by old members. We developed documentation/tutorial maintenance mechanisms based on our comparison and labeling tools to make it easier for the new members to be incorporated into the project. These tools and mechanisms also provided benefit to other projects that reused the software components from the CMSPV

  14. Software configuration management plan for the Hanford site technical database

    International Nuclear Information System (INIS)

    GRAVES, N.J.

    1999-01-01

    The Hanford Site Technical Database (HSTD) is used as the repository/source for the technical requirements baseline and programmatic data input via the Hanford Site and major Hanford Project Systems Engineering (SE) activities. The Hanford Site SE effort has created an integrated technical baseline for the Hanford Site that supports SE processes at the Site and project levels which is captured in the HSTD. The HSTD has been implemented in Ascent Logic Corporation (ALC) Commercial Off-The-Shelf (COTS) package referred to as the Requirements Driven Design (RDD) software. This Software Configuration Management Plan (SCMP) provides a process and means to control and manage software upgrades to the HSTD system

  15. Optimal Switch Configuration in Software-Defined Networks

    Directory of Open Access Journals (Sweden)

    Béla GENGE

    2016-06-01

    The emerging Software-Defined Networks (SDN) paradigm facilitates innovative applications and enables the seamless provisioning of resilient communications. Nevertheless, the installation of communication flows in SDN requires careful planning in order to avoid configuration errors and to fulfill communication requirements. In this paper we propose an approach that automatically and optimally installs static flows in SDN switches. The approach aims to select high capacity links and shortest path routing, and enforces communication link and switch capacity limitations. Experimental results demonstrate the effectiveness and scalability of the developed methodology.
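
    As a rough illustration of the flow-installation idea above (not the paper's actual algorithm), the following sketch uses the networkx library to pick the cheapest path that favours high-capacity links and has enough residual capacity, then reserves bandwidth on it; the switch names and capacities are made up.

        import networkx as nx

        G = nx.Graph()
        # capacity in Mbit/s; the cost metric favours high-capacity links
        for u, v, cap in [("s1", "s2", 1000), ("s2", "s4", 100),
                          ("s1", "s3", 1000), ("s3", "s4", 1000)]:
            G.add_edge(u, v, capacity=cap, used=0, cost=1.0 / cap)

        def install_flow(g, src, dst, demand):
            """Pick the cheapest path with enough residual capacity, then reserve it."""
            usable = nx.Graph()
            usable.add_edges_from(
                (u, v, d) for u, v, d in g.edges(data=True)
                if d["capacity"] - d["used"] >= demand
            )
            path = nx.shortest_path(usable, src, dst, weight="cost")
            for u, v in zip(path, path[1:]):
                g[u][v]["used"] += demand
            return path

        print(install_flow(G, "s1", "s4", 200))  # -> ['s1', 's3', 's4']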

  16. TCV software test and validation tools and technique. [Terminal Configured Vehicle program for commercial transport aircraft operation

    Science.gov (United States)

    Straeter, T. A.; Williams, J. R.

    1976-01-01

    The paper describes techniques for testing and validating software for the TCV (Terminal Configured Vehicle) program, which is intended to solve problems associated with operating a commercial transport aircraft in the terminal area. The TCV research test bed is a Boeing 737 specially configured with digital computer systems to carry out automatic navigation, guidance, flight controls, and electronic displays research. The techniques developed for time and cost reduction include automatic documentation aids, an automatic software configuration, and an all-software generation and validation system.

  17. A configurable component-based software system for magnetic field measurements

    Energy Technology Data Exchange (ETDEWEB)

    Nogiec, J.M.; DiMarco, J.; Kotelnikov, S.; Trombly-Freytag, K.; Walbridge, D.; Tartaglia, M.; /Fermilab

    2005-09-01

    A new software system to test accelerator magnets has been developed at Fermilab. The magnetic measurement technique involved employs a single stretched wire to measure alignment parameters and magnetic field strength. The software for the system is built on top of a flexible component-based framework, which allows for easy reconfiguration and runtime modification. Various user interface, data acquisition, analysis, and data persistence components can be configured to form different measurement systems that are tailored to specific requirements (e.g., involving magnet type or test stand). The system can also be configured with various measurement sequences or tests, each of them controlled by a dedicated script. It is capable of working interactively as well as executing a preselected sequence of tests. Each test can be parameterized to fit the specific magnet type or test stand requirements. The system has been designed with portability in mind and is capable of working on various platforms, such as Linux, Solaris, and Windows. It can be configured to use a local data acquisition subsystem or a remote data acquisition computer, such as a VME processor running VxWorks. All hardware-oriented components have been developed with a simulation option that allows for running and testing measurements in the absence of data acquisition hardware.

  18. Software environment and configuration for the DSP controlled NSLS booster power supplies

    International Nuclear Information System (INIS)

    Olsen, R.; Dabrowski, J.; Murray, J.

    1993-01-01

    The booster at the NSLS is being upgraded from 0.75 to 2 pulses per second by means of the installation of new dipole, quadrupole, and sextupole power supplies. The control system of these power supplies employs general purpose digital signal processing modules, and therefore, software support is required. This paper outlines the development system configuration, and the software environment

  19. Enabling System Evolution through Configuration Management on the Hardware/Software Boundary

    NARCIS (Netherlands)

    Krikhaar, R.L.; Mosterman, W.; Veerman, N.P.; Verhoef, C.

    2009-01-01

    As the use of software and electronics in modern products is omnipresent and continuously increasing, companies in the embedded systems industry face increasing complexity in controlling and enabling the evolution of their IT-intensive products. Traditionally, product configurations and their

  20. MDEP Generic Common Position No DICWG-01. Common position on the treatment of common cause failure caused by software within digital safety systems

    International Nuclear Information System (INIS)

    2013-01-01

    Common cause failures (CCF) have been a significant safety concern for nuclear power plant systems. The increasing dependence on software in safety systems for nuclear power plants has increased the safety significance of CCF caused by software, when software in redundant channels or portions of safety systems has some common dependency. For example, the effect of systematic failures can lead to a loss of safety in many ways: unwanted actuations, or a safety function not being provided when needed. Therefore, nuclear power plants should be systematically protected from the effects of common cause failures caused by software in digital I and C safety systems. Software for nuclear power plant safety systems should be of the high quality necessary to help assure against the loss of safety (i.e. developed with high-quality engineering practices, with commensurate quality assurance applied, and with continuous improvement through corrective actions based on lessons learned from operating experience). However, demonstrating adequate software quality only through verification and validation activities and controls on the development process has proved to be problematic. Therefore, this common position provides guidance for the assessment of the potential for CCF for software. It is recognized that programmable logic devices do not execute software in the conventional sense; however, the application development process using these devices has many similarities with software development, and the deficiencies that may be introduced during the application development process may induce errors in the programmable logic devices that can result in common cause failures of these devices of a type similar to software common cause failure. Although deficiencies with the potential to give rise to software common cause failures can be introduced at all phases of the software life cycle, this common position will only consider the potential for software common cause failures within digital safety system

  1. Software Defined Common Processing System (SDCPS), Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Coherent Logix, Incorporated proposes the Software Defined Common Processing System (SDCPS) program to facilitate the development of a Software Defined Radio...

  2. Computer software configuration management plan for 200 East/West Liquid Effluent Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Graf, F.A. Jr.

    1995-02-27

    This computer software configuration management plan covers the control of the software for the monitor and control system that operates the Effluent Treatment Facility and its associated truck load-in station, and some key aspects of the Liquid Effluent Retention Facility that stores condensate to be processed. The plan also covers control of the Treated Effluent Disposal System's pumping stations and the monitoring of waste generator flows in this system, as well as the Phase Two Effluent Collection System.

  3. Computer software configuration management plan for 200 East/West Liquid Effluent Facilities

    International Nuclear Information System (INIS)

    Graf, F.A. Jr.

    1995-01-01

    This computer software configuration management plan covers the control of the software for the monitor and control system that operates the Effluent Treatment Facility and its associated truck load-in station, and some key aspects of the Liquid Effluent Retention Facility that stores condensate to be processed. The plan also covers control of the Treated Effluent Disposal System's pumping stations and the monitoring of waste generator flows in this system, as well as the Phase Two Effluent Collection System

  4. Computer software configuration description, 241-AY and 241-AZ tank farm MICON automation system

    International Nuclear Information System (INIS)

    Winkelman, W.D.

    1998-01-01

    This document describes the configuration process, choices and conventions used during the Micon DCS configuration activities, and issues involved in making changes to the configuration. Includes the master listings of the Tag definitions, which should be revised to authorize any changes. Revision 3 provides additional information on the software used to provide communications with the W-320 project and incorporates minor changes to ensure the document alarm setpoint priorities correctly match operational expectations

  5. Software system development of NPP plant DiD risk monitor. Basic design of software configuration

    International Nuclear Information System (INIS)

    Yoshikawa, Hidekazu; Nakagawa, Takashi

    2015-01-01

    A new risk monitor system is under development which can be applied not only to prevent severe accident in daily operation but also to serve as to mitigate the radiological hazard just after severe accident happens and long term management of post-severe accident consequences. The fundamental method for the new risk monitor system is first given on how to configure the Plant Defense in-Depth (DiD) Risk Monitor by object-oriented software system based on functional modeling approach. In this paper, software system for the plant DiD risk monitor is newly developed by object oriented method utilizing Unified Modeling Language (UML). Usage of the developed DiD risk monitor is also introduced by showing examples for LOCA case of AP1000. (author)

  6. Computer software configuration description, 241-AY and 241-AZ tank farm MICON automation system

    International Nuclear Information System (INIS)

    Winkelman, W.D.

    1998-01-01

    This document describes the configuration process, choices and conventions used during the configuration activities, and issues involved in making changes to the configuration. Includes the master listings of the Tag definitions, which should be revised to authorize any changes. Revision 2 incorporates minor changes to ensure the document setpoints accurately reflect limits (including exhaust stack flow of 800 scfm) established in OSD-T-151-00019. The MICON DCS software controls and monitors the instrumentation and equipment associated with plant systems and processes

  7. Servicing HEP experiments with a complete set of ready-integrated and configured common software components

    International Nuclear Information System (INIS)

    Roiser, Stefan; Gaspar, Ana; Perrin, Yves; Kruzelecki, Karol

    2010-01-01

    The LCG Applications Area at CERN provides basic software components for the LHC experiments such as ROOT, POOL, COOL which are developed in-house, and also a set of 'external' software packages (70) which are needed in addition, such as Python, Boost, Qt, CLHEP, etc. These packages target many different areas of HEP computing such as data persistency, math, simulation, grid computing, databases, graphics, etc. Other packages provide tools for documentation, debugging, scripting languages and compilers. All these packages are provided in a consistent manner on different compilers, architectures and operating systems. The Software Process and Infrastructure project (SPI) [1] is responsible for the continuous testing, coordination, release and deployment of these software packages. The main driving force for the actions carried out by SPI is the needs of the LHC experiments, but other HEP experiments could also profit from the set of consistent libraries provided and receive a stable and well-tested foundation to build their experiment software frameworks. This presentation will first provide a brief description of the tools and services provided for the coordination, testing, release, deployment and presentation of LCG/AA software packages and then focus on a second set of tools provided for outside LHC experiments to deploy a stable set of HEP related software packages both as binary distribution or from source.

  8. Servicing HEP experiments with a complete set of ready-integrated and configured common software components

    Energy Technology Data Exchange (ETDEWEB)

    Roiser, Stefan; Gaspar, Ana; Perrin, Yves [CERN, CH-1211 Geneva 23, PH Department, SFT Group (Switzerland); Kruzelecki, Karol, E-mail: stefan.roiser@cern.c, E-mail: ana.gaspar@cern.c, E-mail: yves.perrin@cern.c, E-mail: karol.kruzelecki@cern.c [CERN, CH-1211 Geneva 23, PH Department, LBC Group (Switzerland)

    2010-04-01

    The LCG Applications Area at CERN provides basic software components for the LHC experiments such as ROOT, POOL, COOL which are developed in-house, and also a set of 'external' software packages (70) which are needed in addition, such as Python, Boost, Qt, CLHEP, etc. These packages target many different areas of HEP computing such as data persistency, math, simulation, grid computing, databases, graphics, etc. Other packages provide tools for documentation, debugging, scripting languages and compilers. All these packages are provided in a consistent manner on different compilers, architectures and operating systems. The Software Process and Infrastructure project (SPI) [1] is responsible for the continuous testing, coordination, release and deployment of these software packages. The main driving force for the actions carried out by SPI is the needs of the LHC experiments, but other HEP experiments could also profit from the set of consistent libraries provided and receive a stable and well-tested foundation to build their experiment software frameworks. This presentation will first provide a brief description of the tools and services provided for the coordination, testing, release, deployment and presentation of LCG/AA software packages and then focus on a second set of tools provided for outside LHC experiments to deploy a stable set of HEP related software packages both as binary distribution or from source.

  9. Software Configuration Management For Multiple Releases: Influence On Development Effort

    Directory of Open Access Journals (Sweden)

    Sławomir P. Maludziński

    2007-01-01

    Software Configuration Management (SCM) evolves together with the discipline of software engineering. Teams working on software products become larger and are geographically distributed at multiple sites. Collaboration between such groups requires well evaluated SCM plans and strategies to ease cooperation and decrease software development cost by reducing time spent on SCM activities – branching and merging, that is, effort utilized on creation of revisions ('serial' versions) and variants ('parallel' versions). This paper suggests that SCM practices should be combined with modular design and code refactoring to reduce cost related to maintenance of the same code line. Teams which produce several variants of the same code line at the same time should use approaches like components, modularization, or plug-ins over code alternations maintained on version branches. Findings described in this paper were taken by teams in charge of development of radio communication systems in Motorola GEMS divisions. Each team collaborating on similar projects used different SCM strategies to develop parts of this system.
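
    A minimal sketch of the "plug-ins over version branches" idea argued for above: variants are selected from a single maintained code line through a plug-in registry instead of being kept on parallel branches. All names are illustrative and unrelated to the Motorola systems mentioned in the abstract.

        from typing import Callable, Dict

        CODECS: Dict[str, Callable[[bytes], bytes]] = {}

        def codec_plugin(name: str):
            """Register a variant-specific implementation against the common core."""
            def register(fn: Callable[[bytes], bytes]):
                CODECS[name] = fn
                return fn
            return register

        @codec_plugin("narrowband")
        def narrowband_codec(frame: bytes) -> bytes:
            return frame[:32]  # placeholder variant behaviour

        @codec_plugin("wideband")
        def wideband_codec(frame: bytes) -> bytes:
            return frame  # placeholder variant behaviour

        def process(frame: bytes, variant: str) -> bytes:
            """One code line; the variant is a configuration choice, not a branch."""
            return CODECS[variant](frame)

        print(len(process(b"\x00" * 64, "narrowband")))  # -> 32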

  10. CMS Configuration Editor: GUI based application for user analysis job definition

    CERN Document Server

    De Cosa, Annapaola

    2010-01-01

    We present the user interface and the software architecture of the Configuration Editor that is used by CMS physicists to configure their physics analysis tasks. Analysis workflows typically involve execution of a sequence of algorithms, and these are implemented as software modules that are integrated within the CMS software framework (CMSSW). In particular, a set of common analysis tools is provided in the so-called CMS Physics Analysis Toolkit (PAT) and these need to be steered and configured during the execution of an analysis job. The Python scripting language is used to define the job configuration that drives the analysis workflow. Configuring analysis jobs can be quite a challenging task, particularly for newcomers, and therefore a graphical tool, called the Configuration Editor, has been developed to facilitate the creation and inspection of these configuration files. Typically, a user-defined analysis workflow can be produced starting from a standard configuration file, applying and configuring PAT ...
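
    To make the idea of a Python-expressed analysis configuration concrete, here is a simplified, hypothetical sketch of modules organised into a path of the kind such an editor would create and inspect; it is not the real CMSSW/PAT API.

        class Module:
            """A configurable analysis step with named parameters."""
            def __init__(self, name, **parameters):
                self.name = name
                self.parameters = parameters

        class Path:
            """An ordered sequence of modules executed per event."""
            def __init__(self, *modules):
                self.modules = list(modules)

            def run(self, event):
                for m in self.modules:
                    print(f"running {m.name} with {m.parameters} on event {event}")

        # user-defined analysis workflow built from configurable modules
        jet_cleaner = Module("jetCleaner", minPt=30.0, maxEta=2.4)
        muon_selector = Module("muonSelector", isolation=0.15)
        analysis_path = Path(jet_cleaner, muon_selector)

        analysis_path.run(event=1)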

  11. Software Engineering Reviews and Audits

    CERN Document Server

    Summers, Boyd L

    2011-01-01

    Accurate software engineering reviews and audits have become essential to the success of software companies and military and aerospace programs. These reviews and audits define the framework and specific requirements for verifying software development efforts. Authored by an industry professional with three decades of experience, Software Engineering Reviews and Audits offers authoritative guidance for conducting and performing software first article inspections, and functional and physical configuration software audits. It prepares readers to answer common questions for conducting and perform

  12. The ALMA Common Software as a Basis for a Distributed Software Development

    Science.gov (United States)

    Raffi, Gianni; Chiozzi, Gianluca; Glendenning, Brian

    The Atacama Large Millimeter Array (ALMA) is a joint project involving astronomical organizations in Europe, North America and Japan. ALMA will consist of 64 12-m antennas operating in the millimetre and sub-millimetre wavelength range, with baselines of more than 10 km. It will be located at an altitude above 5000 m in the Chilean Atacama desert. The ALMA Computing group is a joint group with staff scattered on 3 continents and is responsible for all the control and data flow software related to ALMA, including tools ranging from support of proposal preparation to archive access of automatically created images. Early in the project it was decided that an ALMA Common Software (ACS) would be developed as a way to provide to all partners involved in the development a common software platform. The original assumption was that some key middleware like communication via CORBA and the use of XML and Java would be part of the project. It was intended from the beginning to develop this software in an incremental way based on releases, so that it would then evolve into an essential embedded part of all ALMA software applications. In this way we would build a basic unity and coherence into a system that will have been developed in a distributed fashion. This paper evaluates our progress after 1.5 year of work, following a few tests and preliminary releases. It analyzes the advantages and difficulties of such an ambitious approach, which creates an interface across all the various control and data flow applications.

  13. Creating a strategic plan for configuration management using computer aided software engineering (CASE) tools

    International Nuclear Information System (INIS)

    Smith, P.R.; Sarfaty, R.

    1993-01-01

    This paper provides guidance in the definition, documentation, measurement, enhancement of processes, and validation of a strategic plan for configuration management (CM). The approach and methodology used in establishing a strategic plan are the same for any enterprise, including the Department of Energy (DOE), commercial nuclear plants, the Department of Defense (DOD), or large industrial complexes. The principles and techniques presented are used worldwide by some of the largest corporations. The authors used industry knowledge and the areas of their current employment to illustrate and provide examples. Developing a strategic configuration and information management plan for DOE Idaho Field Office (DOE-ID) facilities is discussed in this paper. A good knowledge of CM principles is the key to successful strategic planning. This paper will describe and define CM elements, and discuss how CM integrates the facility's physical configuration, design basis, and documentation. The strategic plan does not need the support of a computer aided software engineering (CASE) tool. However, the use of the CASE tool provides a methodology for consistency in approach, graphics, and database capability, combined to form an encyclopedia and a method of presentation that is easily understood and aids the process of reengineering. CASE tools have much more capability than those stated above. Some examples are supporting a joint application development group (JAD) to prepare a software functional specification document and, if necessary, providing the capability to automatically generate software application code. This paper briefly discusses characteristics and capabilities of two CASE tools that use different methodologies to generate similar deliverables

  14. Software Defined Common Processing System (SDCPS), Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Coherent Logix, Incorporated (CLX) proposes the development of a Software Defined Common Processing System (SDCPS) that leverages the inherent advantages of an...

  15. Optimization of traffic distribution control in software-configurable infrastructure of virtual data center based on a simulation model

    Directory of Open Access Journals (Sweden)

    I. P. Bolodurina

    2017-01-01

    The share of cloud computing technology in today's business processes of companies is growing steadily. Although it reduces the cost of owning and operating IT infrastructure, a number of problems remain in the control of data centers. One such problem is the efficient use of the compute and network resources available to companies. One direction for optimization is the control of traffic for cloud applications and services in data centers. Given the multi-tier architecture of a modern data center, this problem is far from trivial. The advantage of a modern virtual infrastructure is the ability to use software-configurable networks and software-configurable data storage. However, existing algorithmic optimization solutions do not take into account a number of features of the network traffic generated by multiple classes of applications. This study addresses the problem of optimizing the distribution of traffic of cloud applications and services in a software-controlled virtual data center infrastructure. A simulation model is developed that describes the traffic in the data center and in the software-configurable network segments involved in processing user requests for applications and services, in a network environment that includes a heterogeneous cloud platform and software-configurable data storage. The developed model made it possible to implement a traffic management algorithm for cloud applications and to optimize access to the storage system through effective use of the data transmission channel. Experimental studies found that applying the developed algorithm can reduce the response time of cloud applications and services and, as a result, improve the performance of processing user requests and reduce the number of failures.

  16. Offshore Vendors’ Software Development Team Configurations

    DEFF Research Database (Denmark)

    Chakraborty, Suranjan; Sarker, Saonee; Rai, Sudhanshu

    2012-01-01

    This research uses configuration theory and data collected from a major IT vendor organization to examine primary configurations of distributed teams in a global off-shoring context. The study indicates that off-shoring vendor organizations typically deploy three different types of configurations...

  17. 77 FR 50727 - Configuration Management Plans for Digital Computer Software Used in Safety Systems of Nuclear...

    Science.gov (United States)

    2012-08-22

    ... enhanced consensus practices for planning software configuration management (SCM) as described in the... testing of structures, systems, and components important to safety throughout the life of the unit. This...

  18. ATM Technology Demonstration-1 Phase II Boeing Configurable Graphical Display (CGD) Software Design Description

    Science.gov (United States)

    Wilber, George F.

    2017-01-01

    This Software Description Document (SDD) captures the design for developing the Flight Interval Management (FIM) system Configurable Graphics Display (CGD) software. Specifically this SDD describes aspects of the Boeing CGD software and the surrounding context and interfaces. It does not describe the Honeywell components of the CGD system. The SDD provides the system overview, architectural design, and detailed design with all the necessary information to implement the Boeing components of the CGD software and integrate them into the CGD subsystem within the larger FIM system. Overall system and CGD system-level requirements are derived from the CGD SRS (in turn derived from the Boeing System Requirements Design Document (SRDD)). Display and look-and-feel requirements are derived from Human Machine Interface (HMI) design documents and working group recommendations. This Boeing CGD SDD is required to support the upcoming Critical Design Review (CDR).

  19. Software solutions manage the definition, operation, maintenance and configuration control of the National Ignition Facility

    International Nuclear Information System (INIS)

    Dobson, Darwin; Churby, Al; Krieger, Ed; Maloy, Donna; White, Kevin

    2012-01-01

    Highlights: ► NIF is a complex experimental facility composed of ∼4 million components. ► We describe business tools to define, build, operate, and maintain all components. ► CAD tools generate virtual models and assemblies under configuration control. ► Items requiring preventive, reactive, and/or calibration maintenance are tracked. ► Radiological or hazardous materials undergo additional controls. - Abstract: The National Ignition Facility (NIF) is the world's largest laser, composed of millions of individual parts brought together to form one massive assembly. Maintaining control of the physical definition, status and configuration of this structure is a monumental undertaking yet critical to the validity of experimental data and the safe operation of the facility. A major programmatic challenge is to deploy software solutions to effectively manage the definition, build, operation, maintenance, and configuration control of all components of NIF. The strategy for meeting this challenge involves deploying and integrating an enterprise application suite of solutions consisting of both Commercial-Off-The-Shelf (COTS) products and custom-developed software. This paper describes how this strategy has been implemented, along with a discussion of the successes realized and the ongoing challenges associated with this approach.

  20. Implementation of Successful Practices Using an Iterative Development Methodology for an AEGIS Configuration Management Software Application

    National Research Council Canada - National Science Library

    Colston, Sharon

    1998-01-01

    This paper documents a two-and-a-half year software development project of the Combat Systems Configuration Management Branch of the Combat Systems Department at Naval Surface Warfare Center, Dahlgren Division (NSWCDD...

  1. iSDS: a self-configurable software-defined storage system for enterprise

    Science.gov (United States)

    Chen, Wen-Shyen Eric; Huang, Chun-Fang; Huang, Ming-Jen

    2018-01-01

    Storage is one of the most important aspects of IT infrastructure for various enterprises. But enterprises are interested in more than just data storage; they are interested in such things as more reliable data protection, higher performance and reduced resource consumption. Traditional enterprise-grade storage satisfies these requirements at high cost, because it is usually designed and constructed with customised field-programmable gate arrays to achieve high-end functionality. However, in this ever-changing environment, enterprises request storage with more flexible deployment and at lower cost. Moreover, the rise of new application fields, such as social media, big data, video streaming services etc., makes operational tasks for administrators more complex. In this article, a new storage system called intelligent software-defined storage (iSDS), based on software-defined storage, is described. More specifically, this approach advocates using software to replace features provided by traditional customised chips. To alleviate the management burden, it also advocates applying machine learning to automatically configure storage to meet the dynamic requirements of workloads running on the storage. This article focuses on the analysis feature of the iSDS cluster by detailing its architecture and design.

  2. Informatics in Radiology (infoRAD): personal computer security: part 2. Software Configuration and file protection.

    Science.gov (United States)

    Caruso, Ronald D

    2004-01-01

    Proper configuration of software security settings and proper file management are necessary and important elements of safe computer use. Unfortunately, the configuration of software security options is often not user friendly. Safe file management requires the use of several utilities, most of which are already installed on the computer or available as freeware. Among these file operations are setting passwords, defragmentation, deletion, wiping, removal of personal information, and encryption. For example, Digital Imaging and Communications in Medicine medical images need to be anonymized, or "scrubbed," to remove patient identifying information in the header section prior to their use in a public educational or research environment. The choices made with respect to computer security may affect the convenience of the computing process. Ultimately, the degree of inconvenience accepted will depend on the sensitivity of the files and communications to be protected and the tolerance of the user. Copyright RSNA, 2004

  3. The Control and Configuration Software of the ATLAS Data Acquisition System: Upgrades for LHC Run 2

    CERN Document Server

    Aleksandrov, Igor; The ATLAS collaboration; Avolio, Giuseppe; Caprini, Mihai; Corso-Radu, Alina; D'ascanio, Matteo; De Castro Vargas Fernandes, Julio; Kazarov, Andrei; Kolobara, Bernard; Lankford, Andrew; Laurent, Florian; Lehmann Miotto, Giovanna; Magnoni, Luca; Papaevgeniou, Lykourgos; Ryabov, Yury; Santos, Alejandro; Seixas, Jose; Soloviev, Igor; Unel, Gokhan; Yasu, Yoshiji

    2016-01-01

    The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider (LHC) at CERN is composed of a large number of distributed hardware and software components which in a coordinated manner provide the data-taking functionality of the overall system. The Controls and Configuration (CC) software offers services to configure, control and monitor the TDAQ system. It is a framework which essentially provides the glue that holds the various sub-systems together. While the overall architecture, established at the end of the 90’s, has proven to be solid and flexible, many software components (from core services, like the Run Control and the error management system, to end-user tools) have undergone a complete redesign or re-implementation during the LHC’s Long Shutdown I period. The upgrades were driven by the need to fold in the additional requirements that appeared in the course of the LHC’s Run 1, to profit from new technologies and to refactor and clean up the code. This paper...

  4. Common-path configuration in total internal reflection digital holography microscopy.

    Science.gov (United States)

    Calabuig, Alejandro; Matrecano, Marcella; Paturzo, Melania; Ferraro, Pietro

    2014-04-15

    Total Internal Reflection Digital Holographic Microscopy (TIRDHM) is recognized to be a powerful tool for retrieving quantitative phase images of cell-substrate interfaces, adhesions, and tissue structures close to the prism surface. In this Letter, we develop an improved TIRDHM system, taking advantage of a refractive index mismatch between the prism and the sample substrate, to allow phase-shifting DH with just a single-beam interferometric configuration. Instead of the traditional off-axis method, phase-shift method is used to retrieve amplitude and phase images in coherent light and TIR modality. Essentially, the substrate-prism interface acts like a beam splitter generating a reference beam, where the phase-shift dependence on the incident angle is exploited in this common-path configuration. With the aim to demonstrate the technique's validity, some experiments are performed to establish the advantage of this compact and simple configuration, in which the reference arm in the setup is avoided.

  5. Software solutions manage the definition, operation, maintenance and configuration control of the National Ignition Facility

    Energy Technology Data Exchange (ETDEWEB)

    Dobson, D; Churby, A; Krieger, E; Maloy, D; White, K

    2011-07-25

    The National Ignition Facility (NIF) is the world's largest laser composed of millions of individual parts brought together to form one massive assembly. Maintaining control of the physical definition, status and configuration of this structure is a monumental undertaking yet critical to the validity of the shot experiment data and the safe operation of the facility. The NIF business application suite of software provides the means to effectively manage the definition, build, operation, maintenance and configuration control of all components of the National Ignition Facility. State of the art Computer Aided Design software applications are used to generate a virtual model and assemblies. Engineering bills of material are controlled through the Enterprise Configuration Management System. This data structure is passed to the Enterprise Resource Planning system to create a manufacturing bill of material. Specific parts are serialized then tracked along their entire lifecycle providing visibility to the location and status of optical, target and diagnostic components that are key to assessing pre-shot machine readiness. Nearly forty thousand items requiring preventive, reactive and calibration maintenance are tracked through the System Maintenance & Reliability Tracking application to ensure proper operation. Radiological tracking applications ensure proper stewardship of radiological and hazardous materials and help provide a safe working environment for NIF personnel.

  6. Software solutions manage the definition, operation, maintenance and configuration control of the National Ignition Facility

    International Nuclear Information System (INIS)

    Dobson, D.; Churby, A.; Krieger, E.; Maloy, D.; White, K.

    2011-01-01

    The National Ignition Facility (NIF) is the world's largest laser composed of millions of individual parts brought together to form one massive assembly. Maintaining control of the physical definition, status and configuration of this structure is a monumental undertaking yet critical to the validity of the shot experiment data and the safe operation of the facility. The NIF business application suite of software provides the means to effectively manage the definition, build, operation, maintenance and configuration control of all components of the National Ignition Facility. State of the art Computer Aided Design software applications are used to generate a virtual model and assemblies. Engineering bills of material are controlled through the Enterprise Configuration Management System. This data structure is passed to the Enterprise Resource Planning system to create a manufacturing bill of material. Specific parts are serialized then tracked along their entire lifecycle providing visibility to the location and status of optical, target and diagnostic components that are key to assessing pre-shot machine readiness. Nearly forty thousand items requiring preventive, reactive and calibration maintenance are tracked through the System Maintenance and Reliability Tracking application to ensure proper operation. Radiological tracking applications ensure proper stewardship of radiological and hazardous materials and help provide a safe working environment for NIF personnel.

  7. Bone histomorphometry using free and commonly available software.

    Science.gov (United States)

    Egan, Kevin P; Brennan, Tracy A; Pignolo, Robert J

    2012-12-01

    Histomorphometric analysis is a widely used technique to assess changes in tissue structure and function. Commercially available programs that measure histomorphometric parameters can be cost-prohibitive. In this study, we compared an inexpensive method of histomorphometry to a current proprietary software program. ImageJ and Adobe Photoshop® were used to measure static and kinetic bone histomorphometric parameters. Photomicrographs of Goldner's trichrome-stained femurs were used to generate black-and-white image masks, representing bone and non-bone tissue, respectively, in Adobe Photoshop®. The masks were used to quantify histomorphometric parameters (bone volume, tissue volume, osteoid volume, mineralizing surface and interlabel width) in ImageJ. The resultant values obtained using ImageJ and the proprietary software were compared, and the differences were found to be statistically non-significant. The wide-ranging use of histomorphometric analysis for assessing the basic morphology of tissue components makes it important to have affordable and accurate measurement options available for a diverse range of applications. Here we have developed and validated an approach to histomorphometry using commonly and freely available software that is comparable to a much more costly, commercially available software program. © 2012 Blackwell Publishing Limited.
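
    The mask-based measurement described above boils down to ratios of pixel counts; below is a minimal sketch with numpy (illustrative array values, not study data) computing the bone volume / tissue volume fraction (BV/TV) from two binary masks.

        import numpy as np

        tissue_mask = np.zeros((100, 100), dtype=bool)
        tissue_mask[10:90, 10:90] = True   # region occupied by tissue
        bone_mask = np.zeros_like(tissue_mask)
        bone_mask[30:70, 30:70] = True     # region classified as bone

        bone_area = bone_mask.sum()        # 2D analogue of bone volume
        tissue_area = tissue_mask.sum()    # 2D analogue of tissue volume
        bv_tv = bone_area / tissue_area
        print(f"BV/TV = {bv_tv:.3f}")      # -> 0.250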

  8. Configuration management plan. System definition and project development. Repository Based Software Engineering (RBSE) program

    Science.gov (United States)

    Mckay, Charles

    1991-01-01

    This is the configuration management Plan for the AdaNet Repository Based Software Engineering (RBSE) contract. This document establishes the requirements and activities needed to ensure that the products developed for the AdaNet RBSE contract are accurately identified, that proposed changes to the product are systematically evaluated and controlled, that the status of all change activity is known at all times, and that the product achieves its functional performance requirements and is accurately documented.

  9. Intellectual Model-Based Configuration Management Conception

    Directory of Open Access Journals (Sweden)

    Bartusevics Arturs

    2014-07-01

    Software configuration management is one of the most important disciplines within a software development project: it helps control the software evolution process and allows including only tested and validated changes in the end product. To achieve this, configuration management completes certain tasks. Concrete tools are used for the technical implementation of these tasks, such as version control systems, continuous integration servers, compilers, etc. A correct configuration management process usually requires several tools, which mutually exchange information by generating various kinds of transfers. When the configuration management process is being introduced, there are often situations where tool installation is started while there is as yet no general picture of the total process. The article offers a model-based configuration management concept, which foresees the development of an abstract model of the configuration management process that is later transformed into lower-abstraction-level models, with tools indicated to support the technical process. A solution of this kind allows a more rational introduction and configuration of tools

  10. Conceptualizing Embedded Configuration

    DEFF Research Database (Denmark)

    Oddsson, Gudmundur Valur; Hvam, Lars; Lysgaard, Ole

    2006-01-01

    and services. The general idea can be named embedded configuration. In this article we intend to conceptualize embedded configuration, what it is and is not. The difference between embedded configuration, sales configuration and embedded software is explained. We will look at what is needed to make embedded...... configuration systems. That will include requirements to product modelling techniques. An example with consumer electronics will illuminate the elements of embedded configuration in settings that most can relate to. The question of where embedded configuration would be relevant is discussed, and the current...

  11. HLT configuration management system

    CERN Document Server

    Daponte, Vincenzo

    2015-01-01

    The CMS High Level Trigger (HLT) is implemented running a streamlined version of the CMS offline reconstruction software on thousands of CPUs. The CMS software is written mostly in C++, using Python as its configuration language through an embedded CPython interpreter. The configuration of each process is made up of hundreds of modules, organized in sequences and paths. As an example, the HLT configurations used for 2011 data taking comprised over 2200 different modules, organized in more than 400 independent trigger paths. The complexity of the HLT configurations and the large number of configurations produced require the design of a suitable data management system. The present work describes the designed solution to manage the considerable number of configurations developed and to assist the editing of new configurations. The system is required to be remotely accessible and OS-independent, as well as easily maintainable and easy to use. To meet these requirements a three-layer architecture has been chosen...

  12. AZ-101 Mixer Pump Demonstration Data Acquisition System and Gamma Cart Data Acquisition Control System Software Configuration Management Plan

    International Nuclear Information System (INIS)

    WHITE, D.A.

    1999-01-01

    This Software Configuration Management Plan (SCMP) provides the instructions for change control of the AZ-101 Mixer Pump Demonstration Data Acquisition System (DAS) and the Sludge Mobilization Cart (Gamma Cart) Data Acquisition and Control System (DACS)

  13. Comparative Performance and Model Agreement of Three Common Photovoltaic Array Configurations.

    Science.gov (United States)

    Boyd, Matthew T

    2018-02-01

    Three grid-connected monocrystalline silicon arrays on the National Institute of Standards and Technology (NIST) campus in Gaithersburg, MD have been instrumented and monitored for 1 yr, with only minimal gaps in the data sets. These arrays range from 73 kW to 271 kW, and all use the same module, but have different tilts, orientations, and configurations. One array is installed facing east and west over a parking lot, one in an open field, and one on a flat roof. Various measured relationships and calculated standard metrics have been used to compare the relative performance of these arrays in their different configurations. Comprehensive performance models have also been created in the modeling software pvsyst for each array, and its predictions using measured on-site weather data are compared to the arrays' measured outputs. The comparisons show that all three arrays typically have monthly performance ratios (PRs) above 0.75, but differ significantly in their relative output, strongly correlating to their operating temperature and to a lesser extent their orientation. The model predictions are within 5% of the monthly delivered energy values except during the winter months, when there was intermittent snow on the arrays, and during maintenance and other outages.
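
    The performance ratio mentioned above is the final yield (energy delivered per kW of nameplate capacity) divided by the reference yield (plane-of-array insolation expressed in equivalent sun-hours); below is a small worked example with illustrative numbers, not the NIST measurements.

        rated_power_kw = 271.0        # array nameplate rating at STC
        energy_kwh = 32_000.0         # AC energy delivered in the month
        insolation_kwh_m2 = 150.0     # plane-of-array insolation for the month
        g_stc_kw_m2 = 1.0             # reference irradiance at STC

        final_yield = energy_kwh / rated_power_kw          # kWh per kW installed
        reference_yield = insolation_kwh_m2 / g_stc_kw_m2  # equivalent sun-hours
        pr = final_yield / reference_yield
        print(f"PR = {pr:.2f}")                            # -> 0.79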

  14. Software configuration plan for the 1,000 CFM portable exhauster's small logic control system

    International Nuclear Information System (INIS)

    Kaiser, T.D.

    1998-01-01

    This document describes the formal documentation for maintaining the control system associated with the 1,000 CFM portable exhausters. The objective of the software configuration control plan is to provide assurance that the portable exhausters' control systems will be operable for the duration of 241-C-106 and 241-AY-102 operations (project 320). The design was based upon the criteria documented in the portable exhauster functional design criteria (HNF-SD-WM-DB-035) and the procurement specification (HNF-S-0490) for the exhauster interlock systems

  15. Visualization of the CMS python configuration system

    International Nuclear Information System (INIS)

    Erdmann, M; Fischer, R; Klimkovich, T; Mueller, G; Steggemann, J; Hegner, B; Hinzmann, A

    2010-01-01

    The job configuration system of the CMS experiment is based on the Python programming language. Software modules and their order of execution are both represented by Python objects. In order to investigate and verify configuration parameters and dependencies naturally appearing in modular software, CMS employs a graphical tool. This tool visualizes the configuration objects, their dependencies, and the information flow. Furthermore it can be used for documentation purposes. The underlying software concepts as well as the visualization are presented.

  16. Visualization of the CMS python configuration system

    Energy Technology Data Exchange (ETDEWEB)

    Erdmann, M; Fischer, R; Klimkovich, T; Mueller, G; Steggemann, J [RWTH Aachen University, Physikalisches Institut 3A, 52062 Aachen (Germany); Hegner, B [CERN, CH-1211 Geneva 23 (Switzerland); Hinzmann, A, E-mail: andreas.hinzmann@cern.c

    2010-04-01

    The job configuration system of the CMS experiment is based on the Python programming language. Software modules and their order of execution are both represented by Python objects. In order to investigate and verify configuration parameters and dependencies naturally appearing in modular software, CMS employs a graphical tool. This tool visualizes the configuration objects, their dependencies, and the information flow. Furthermore it can be used for documentation purposes. The underlying software concepts as well as the visualization are presented.

  17. Detailed requirements document for common software of shuttle program information management system

    Science.gov (United States)

    Everette, J. M.; Bradfield, L. D.; Horton, C. L.

    1975-01-01

    Common software was investigated as a method for minimizing development and maintenance cost of the shuttle program information management system (SPIMS) applications while reducing the time-frame of their development. Those requirements satisfying these criteria are presented along with the stand-alone modules which may be used directly by applications. The SPIMS applications, operating on the CYBER 74 computer, are specialized information management systems which use System 2000 as a data base manager. Common software provides the features to support user interactions on a CRT terminal using form input and command response capabilities. These features are available as subroutines to the applications.

  18. SIMPLIFIED CHARGED PARTICLE BEAM TRANSPORT MODELING USING COMMONLY AVAILABLE COMMERCIAL SOFTWARE

    Energy Technology Data Exchange (ETDEWEB)

    D. Douglas; K. Beard; J. Eldred; P. Evtushenko; A. Jenkins; W. Moore; L. Osborne; D. Sexton; C. Tennant

    2007-06-18

    Particle beam modeling in accelerators has been the focus of considerable effort since the 1950s. Many generations of tools have resulted from this process, each leveraging both prior experience and increases in computer power. However, continuing innovation in accelerator technology results in systems that are not well described by existing tools, so the software development process is on-going. We discuss a novel response to this situation, which was encountered when Jefferson Lab began operation of its energy-recovering linacs. These machines were not readily described with legacy software; therefore a model was built using Microsoft Excel. This interactive simulation can query data from the accelerator, use it to compute machine parameters, analyze difference orbit data, and evaluate beam properties. It can also derive new accelerator tunings and rapidly evaluate the impact of changes in machine configuration. As it is spreadsheet-based, it can be easily user-modified in response to changing requirements. Examples for the JLab IR Upgrade FEL are presented.
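
    The kind of arithmetic such a spreadsheet model performs can be illustrated with first-order transfer matrices; the sketch below (element lengths and focal length are made up, not JLab FEL settings) transports a transverse phase-space vector through a drift, a thin-lens quadrupole and another drift.

        import numpy as np

        def drift(length_m):
            return np.array([[1.0, length_m],
                             [0.0, 1.0]])

        def thin_quad(focal_length_m):
            return np.array([[1.0, 0.0],
                             [-1.0 / focal_length_m, 1.0]])

        # beamline: 2 m drift, focusing quad (f = 1.5 m), 1 m drift
        line = drift(1.0) @ thin_quad(1.5) @ drift(2.0)

        x0 = np.array([0.001, 0.0])  # 1 mm offset, zero angle
        x1 = line @ x0
        print(x1)                    # transported position [m] and angle [rad]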

  19. SIMPLIFIED CHARGED PARTICLE BEAM TRANSPORT MODELING USING COMMONLY AVAILABLE COMMERCIAL SOFTWARE

    International Nuclear Information System (INIS)

    D. Douglas; K. Beard; J. Eldred; P. Evtushenko; A. Jenkins; W. Moore; L. Osborne; D. Sexton; C. Tennant

    2007-01-01

    Particle beam modeling in accelerators has been the focus of considerable effort since the 1950s. Many generations of tools have resulted from this process, each leveraging both prior experience and increases in computer power. However, continuing innovation in accelerator technology results in systems that are not well described by existing tools, so the software development process is on-going. We discuss a novel response to this situation, which was encountered when Jefferson Lab began operation of its energy-recovering linacs. These machines were not readily described with legacy software; therefore a model was built using Microsoft Excel. This interactive simulation can query data from the accelerator, use it to compute machine parameters, analyze difference orbit data, and evaluate beam properties. It can also derive new accelerator tunings and rapidly evaluate the impact of changes in machine configuration. As it is spreadsheet-based, it can be easily user-modified in response to changing requirements. Examples for the JLab IR Upgrade FEL are presented

  20. Common Data Acquisition Systems (DAS) Software Development for Rocket Propulsion Test (RPT) Test Facilities

    Science.gov (United States)

    Hebert, Phillip W., Sr.; Davis, Dawn M.; Turowski, Mark P.; Holladay, Wendy T.; Hughes, Mark S.

    2012-01-01

    The advent of the commercial space launch industry and NASA's more recent resumption of operation of Stennis Space Center's large test facilities after thirty years of contractor control resulted in a need for non-proprietary data acquisition system (DAS) software to support government and commercial testing. The software is designed for modularity and adaptability to minimize the software development effort for current and future data systems. An additional benefit of the software's architecture is its ability to easily migrate to other testing facilities, thus providing future commonality across Stennis. Adapting the software to other Rocket Propulsion Test (RPT) Centers such as MSFC, White Sands, and Plumbrook Station would provide additional commonality and help reduce testing costs for NASA. Ultimately, the software provides the government with unlimited rights and guarantees privacy of data to commercial entities. The project engaged all RPT Centers and NASA's Independent Verification & Validation facility to enhance product quality. The design consists of a translation layer, which provides transparency of the software application layers to the underlying hardware regardless of test facility location, and a flexible and easily accessible database. This presentation addresses system technical design, issues encountered, and the status of Stennis development and deployment.

  1. Automatic creation of simulation configuration

    International Nuclear Information System (INIS)

    Oudot, G.; Poizat, F.

    1993-01-01

    SIPA, which stands for 'Simulator for Post Accident', includes: 1) a sophisticated software oriented workshop SWORD (which stands for 'Software Workshop Oriented towards Research and Development') designed in the ADA language including integrated CAD system and software tools for automatic generation of simulation software and man-machine interface in order to operate run-time simulation; 2) a 'simulator structure' based on hardware equipment and software for supervision and communications; 3) simulation configuration generated by SWORD, operated under the control of the 'simulator structure' and run on a target computer. SWORD has already been used to generate two simulation configurations (French 900 MW and 1300 MW nuclear power plants), which are now fully operational on the SIPA training simulator. (Z.S.) 1 ref

  2. Prediction of Protein Configurational Entropy (Popcoen).

    Science.gov (United States)

    Goethe, Martin; Gleixner, Jan; Fita, Ignacio; Rubi, J Miguel

    2018-03-13

    A knowledge-based method for configurational entropy prediction of proteins is presented; this methodology is extremely fast, compared to previous approaches, because it does not involve any type of configurational sampling. Instead, the configurational entropy of a query fold is estimated by evaluating an artificial neural network, which was trained on molecular-dynamics simulations of ∼1000 proteins. The predicted entropy can be incorporated into a large class of protein software based on cost-function minimization/evaluation, in which configurational entropy is currently neglected for performance reasons. Software of this type is used for all major protein tasks such as structure predictions, protein design, NMR and X-ray refinement, docking, and mutation effect predictions. Integrating the predicted entropy can yield a significant accuracy increase as we show exemplarily for native-state identification with the prominent protein software FoldX. The method has been termed Popcoen for Prediction of Protein Configurational Entropy. An implementation is freely available at http://fmc.ub.edu/popcoen/.
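    To make the "incorporated into a cost function" idea concrete, here is a hedged sketch of combining an energy score with a predicted entropy term. The function names, data layout, and weighting factor are invented for illustration and are not the published FoldX or Popcoen interfaces.

```python
# Hypothetical combination of an energy score with a predicted entropy term.
# Neither function below reflects the real FoldX or Popcoen APIs.

def fold_energy(structure) -> float:
    """Stand-in for a physics/knowledge-based energy evaluation."""
    return structure["energy"]

def predicted_entropy(structure) -> float:
    """Stand-in for a Popcoen-style neural-network entropy prediction."""
    return structure["entropy"]

def free_energy_score(structure, temperature_factor: float = 0.6) -> float:
    # Lower is better: G ~ E - T*S, with T*S approximated by a tunable weight.
    return fold_energy(structure) - temperature_factor * predicted_entropy(structure)

candidates = [
    {"name": "fold_A", "energy": -120.0, "entropy": 15.0},
    {"name": "fold_B", "energy": -118.0, "entropy": 22.0},
]
best = min(candidates, key=free_energy_score)
print(best["name"])
```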

  3. Simulator configuration maintenance

    International Nuclear Information System (INIS)

    2006-01-01

    The requirements and recommendations of this section define NPP personnel activities aimed at ensuring that the simulator configuration complies with the current configuration of the prototype power-generating unit and with standard and technical requirements, and they describe a monitoring procedure for the set of simulator software and hardware and for the training, organizational and technical documents.

  4. Flexible event reconstruction software chains with the ALICE High-Level Trigger

    International Nuclear Information System (INIS)

    Ram, D; Breitner, T; Szostak, A

    2012-01-01

    The ALICE High-Level Trigger (HLT) has a large high-performance computing cluster at CERN whose main objective is to perform real-time analysis on the data generated by the ALICE experiment and scale it down to at most 4 GB/s, which is the current maximum mass-storage bandwidth available. Data-flow in this cluster is controlled by a custom designed software framework. It consists of a set of components which can communicate with each other via a common control interface. The software framework also supports the creation of different configurations based on the detectors participating in the HLT. These configurations define a logical data processing “chain” of detector data-analysis components. Data flows through this software chain in a pipelined fashion so that several events can be processed at the same time. An instance of such a chain can run and manage a few thousand physics analysis and data-flow components. The HLT software and the configuration scheme used in the 2011 heavy-ion runs of ALICE are discussed in this contribution.
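    The "chain of components" configuration idea can be sketched generically; the component names and data fields below are invented placeholders and do not correspond to the actual HLT framework.

```python
# Generic sketch of a pipelined analysis "chain": each component consumes the
# previous component's output. Component names are purely illustrative.

def cluster_finder(event):
    event["clusters"] = len(event["raw"]) // 4
    return event

def track_fitter(event):
    event["tracks"] = max(event["clusters"] - 1, 0)
    return event

CHAIN = [cluster_finder, track_fitter]   # the order defines the data flow

def process(event, chain=CHAIN):
    for component in chain:
        event = component(event)
    return event

print(process({"raw": list(range(16))}))
```

    In a real pipelined system several events would be in flight at once, each at a different stage of such a chain; the sketch only shows the single-event data flow.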

  5. Offshore Vendors' Software Development Team Configuration

    DEFF Research Database (Denmark)

    Chakraborty, Suranjan; Sarker, Saonee; Rai, Sudhanshu

    2011-01-01

    were compared on their inherent process-related and resource-related flexibilities. The thick-at-client configuration emerged as the one that offers superior flexibility (in all dimensions).However, additional analysis also revealed contingencies apart from flexibility that may influence...

  6. Combined application of Product Lifecycle and Software Configuration Management systems for ITER remote handling

    International Nuclear Information System (INIS)

    Muhammad, Ali; Esque, Salvador; Aha, Liisa; Mattila, Jouni; Siuko, Mikko; Vilenius, Matti; Jaervenpaeae, Jorma; Irving, Mike; Damiani, Carlo; Semeraro, Luigi

    2009-01-01

    The advantages of Product Lifecycle Management (PLM) systems are widely understood in industry, and hence a PLM system is already in use by the International Thermonuclear Experimental Reactor (ITER) Organization (IO). However, with the increasing involvement of software in the development, the role of Software Configuration Management (SCM) systems has become equally important. SCM systems can be useful to meet the higher demands on Safety Engineering (SE), Quality Assurance (QA), Validation and Verification (V and V) and Requirements Management (RM) of the developed software tools. In an experimental environment such as ITER, new remote handling requirements emerge frequently. This means the development of new tools or the modification of existing tools, and the development of new remote handling procedures or the modification of existing ones. PLM and SCM systems together can be of great advantage in the development and maintenance of such a remote handling system. In this paper, we discuss how PLM and SCM systems can be integrated and play their role during the development and maintenance of the ITER remote handling system. We discuss the possibility of investigating such a setup at DTP2 (Divertor Test Platform 2), which is the full-scale mock-up facility used to verify the ITER divertor remote handling and maintenance concepts.

  7. Common aspects and differences in the behaviour of classical configuration versus canard configuration aircraft in the presence of vertical gusts, assuming the hypothesis of an elastic fuselage

    Directory of Open Access Journals (Sweden)

    Octavian PREOTU

    2011-06-01

    Full Text Available The paper analyzes, in parallel, common aspects and differences in the behavior of classical configuration versus canard configuration aircraft in the presence of vertical gusts, assuming the hypothesis of an elastic fuselage. The effects of the main constructional dimensions of the horizontal empennage on lift cancelling and horizontal empennage control are analyzed.

  8. ACTS: from ATLAS software towards a common track reconstruction software

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00349786; The ATLAS collaboration; Salzburger, Andreas; Kiehn, Moritz; Hrdinka, Julia; Calace, Noemi

    2017-01-01

    Reconstruction of charged particles' trajectories is a crucial task for most particle physics experiments. The high instantaneous luminosity achieved at the LHC leads to a high number of proton-proton collisions per bunch crossing, which has put the track reconstruction software of the LHC experiments through a thorough test. Preserving track reconstruction performance under increasingly difficult experimental conditions, while keeping the usage of computational resources at a reasonable level, is an inherent problem for many HEP experiments. Exploiting concurrent algorithms and using multivariate techniques for track identification are the primary strategies to achieve that goal. Starting from current ATLAS software, the ACTS project aims to encapsulate track reconstruction software into a generic, framework- and experiment-independent software package. It provides a set of high-level algorithms and data structures for performing track reconstruction tasks as well as fast track simulation. The software is de...

  9. Languages for Software-Defined Networks

    Science.gov (United States)

    2013-02-01

    switches, firewalls, and middleboxes) with closed and proprietary configuration interfaces. Software-Defined Networks (SDN) are poised to change this ... however, have seen growing interest in software-defined networks (SDNs), in which a logically-centralized controller manages the packet-processing ...

  10. OntoSoft: A Software Commons for Geosciences

    Science.gov (United States)

    Gil, Y.

    2015-12-01

    The goal of the EarthCube OntoSoft project is to enable the creation of a germinal ecosystem for software stewardship in geosciences that will empower scientists to manage their software as valuable scientific assets in an open transparent mode that enables broader access to that software by other scientists, software professionals, students, and decision makers. Our work to date includes: 1) an ontology for describing scientific software metadata, 2) a scientific software repository that contains more than 600 entries that can be searched and compared across metadata fields, 3) an intelligent user interface that guides scientists to publish software. We have also developed a training program where scientists learn to describe and cite software in their papers in addition to data and provenance. This training program is part of a Geoscience Papers of the Future Initiative, where scientists learn as they are writing a journal paper that can be submitted to a Special Section of the AGU Earth and Space Science Journal.

  11. The nightly build and test system for LCG AA and LHCb software

    International Nuclear Information System (INIS)

    Kruzelecki, Karol; Roiser, Stefan; Degaudenzi, Hubert

    2010-01-01

    The core software stack both from the LCG Application Area and LHCb consists of more than 25 C++/Fortran/Python projects built for about 20 different configurations on Linux, Windows and MacOSX. To these projects, one can also add about 70 external software packages (Boost, Python, Qt, CLHEP, ...) which also have to be built for the same configurations. In order to reduce the time of the development cycle and assure the quality, a framework has been developed for the daily (in fact nightly) build and test of the software. Performing the build and the tests on several configurations and platforms increases the efficiency of the unit and integration tests. Main features: - flexible and fine grained setup (full, partial build) through a web interface; - possibility to build several 'slots' with different configurations; - precise and highly granular reports on a web server; - support for CMT projects (but not only) with their cross-dependencies; - scalable client-server architecture for the control machine and its build machines; - copy of the results in a common place to allow early view of the software stack. The nightly build framework is written in Python for portability and it is easily extensible to accommodate new build procedures.
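    A minimal sketch of what a "slot" configuration for such a nightly system might look like is given below; the slot names, project lists, and platform strings are hypothetical and are not taken from the actual framework.

```python
# Hypothetical nightly-build "slot" definitions; names and platforms invented.
SLOTS = {
    "dev": {
        "projects": ["Gaudi", "LHCb", "DaVinci"],   # built in dependency order
        "platforms": ["x86_64-slc5-gcc43-opt", "x86_64-slc5-gcc43-dbg"],
        "run_tests": True,
    },
    "release": {
        "projects": ["Gaudi", "LHCb"],
        "platforms": ["x86_64-slc5-gcc43-opt"],
        "run_tests": False,
    },
}

def build_plan(slot_name: str):
    """Expand a slot into (project, platform) build jobs."""
    slot = SLOTS[slot_name]
    return [(project, platform)
            for project in slot["projects"]
            for platform in slot["platforms"]]

for job in build_plan("dev"):
    print(job)
```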

  12. Configuration management at NEK

    International Nuclear Information System (INIS)

    Podhraski, M.

    1999-01-01

    Configuration Management (CM) objectives at NEK are to ensure consistency between Design Requirements, Physical Plant Configuration and Configuration Information. Software applications, supporting Design Change, Work Control and Document Control Processes, are integrated in one module-oriented Management Information System (MIS). The Master Equipment Component List (MECL) database is the central MIS module. Through a combination of centralized database and process-migrated activities it is ensured that the CM principles and requirements (accurate, current design data matching the plant's physical configuration while complying with applicable requirements) are followed and fulfilled.(author)

  13. Usability challenges in an Ethiopian software development organization

    DEFF Research Database (Denmark)

    Teka, Degif; Dittrich, Yvonne; Kifle, Mesfin

    2016-01-01

    Usability and user centered design (UCD) are central to software development. In developing countries, the gap between IT development and the local use situation is larger than in western countries. However, usability is neither well addressed in software practice nor at the policy making level...... in Ethiopia. Software practitioners focus on functional requirements, meeting deadlines and budget. The software development industry in Ethiopia is in its early stage. The article aims at understanding usability practices in an Ethiopian software development company. Developers, system analysts, product...... configuration, their experience, cultural knowledge and common sense regarding the users' situation guided the design. Prototypes and fast delivery of working versions helped in getting user feedback even if early user focus proved to be a challenge as communication between developers and users suffered from...

  14. Software Engineering Guidebook

    Science.gov (United States)

    Connell, John; Wenneson, Greg

    1993-01-01

    The Software Engineering Guidebook describes SEPG (Software Engineering Process Group) supported processes and techniques for engineering quality software in NASA environments. Three process models are supported: structured, object-oriented, and evolutionary rapid-prototyping. The guidebook covers software life-cycles, engineering, assurance, and configuration management. The guidebook is written for managers and engineers who manage, develop, enhance, and/or maintain software under the Computer Software Services Contract.

  15. CMS Configuration Editor: GUI based application for user analysis job

    International Nuclear Information System (INIS)

    Cosa, A de

    2011-01-01

    We present the user interface and the software architecture of the Configuration Editor for the CMS experiment. The analysis workflow is organized in a modular way integrated within the CMS framework, which organizes user analysis code in a flexible way. The Python scripting language is adopted to define the job configuration that drives the analysis workflow. It can be a challenging task for users, especially for newcomers, to develop analysis jobs while managing the configuration of the many required modules. For this reason a graphical tool has been conceived in order to edit and inspect configuration files. A set of common analysis tools defined in the CMS Physics Analysis Toolkit (PAT) can be steered and configured using the Config Editor. A user-defined analysis workflow can be produced starting from a standard configuration file, applying and configuring PAT tools according to the specific user requirements. CMS users can adopt this tool, the Config Editor, to create their analysis while visualizing in real time the effects of their actions. They can visualize the structure of their configuration, look at the modules included in the workflow, inspect the dependencies existing among the modules and check the data flow. They can see the values at which parameters are set and change them according to what is required by their analysis task. The integration of common tools in the GUI required adopting an object-oriented structure in the Python definition of the PAT tools and the definition of a layer of abstraction from which all PAT tools inherit.
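    The Python-driven job configuration that such an editor inspects can be sketched abstractly as below; the class names and module parameters are placeholders, not the real CMSSW/PAT interfaces.

```python
# Placeholder sketch of a Python-driven analysis configuration; the class and
# parameter names do not correspond to the real CMSSW/PAT interfaces.

class Module:
    def __init__(self, name, **parameters):
        self.name = name
        self.parameters = parameters

class Process:
    def __init__(self):
        self.modules = []          # ordered workflow, editable by a GUI

    def add(self, module):
        self.modules.append(module)

    def dump(self):
        for m in self.modules:
            print(m.name, m.parameters)

process = Process()
process.add(Module("jetSelector", ptMin=30.0, etaMax=2.4))
process.add(Module("muonCleaner", deltaR=0.4))
process.dump()
```

    A graphical editor then amounts to reading such a `Process` object, displaying its module list and parameters, and writing the modified configuration back out.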

  16. System Configuration Management Implementation Procedure for the Canister Storage Building (CSB)

    International Nuclear Information System (INIS)

    GARRISON, R.C.

    2000-01-01

    This document implements the procedure for providing configuration control for the monitoring and control systems associated with the operation of the Canister Storage Building (CSB). It identifies and defines the configuration items in the monitoring and control systems, provides configuration control of these items throughout the system life cycle, provides configuration status accounting, physical protection and control, and verifies the completeness and correctness of the items. It is written to comply with HNF-SD-SNF-CM-001, Spent Nuclear Fuel Configuration Management Plan (Forehand 1998), HNF-PRO-309, Computer Software Quality Assurance Requirements, HNF-PRO-2778, IRM Application Software System Life Cycle Standards, and applicable sections of administrative procedure AP-CM-6-037-00, SNF Project Process Automation Software and Equipment Configuration Management

  17. Some Aspects of Process Computers Configuration Control in Nuclear Power Plant Krsko - Process Computer Signal Configuration Database (PCSCDB)

    International Nuclear Information System (INIS)

    Mandic, D.; Kocnar, R.; Sucic, B.

    2002-01-01

    During the operation of NEK and other nuclear power plants it has been recognized that certain issues related to the usage of digital equipment and associated software in NPP technological process protection, control and monitoring are not adequately addressed in the existing programs and procedures. The term and the process of Process Computers Configuration Control join three 10CFR50 Appendix B quality requirements of Process Computers application in NPP: Design Control, Document Control and Identification and Control of Materials, Parts and Components. This paper describes the Process Computer Signal Configuration Database (PCSCDB), which was developed and implemented in order to resolve some aspects of Process Computer Configuration Control related to the signals or database points that exist in the life cycle of the different Process Computer Systems (PCS) in Nuclear Power Plant Krsko. PCSCDB is a controlled, master database related to the definition and description of the configurable database points associated with all Process Computer Systems in NEK. PCSCDB holds attributes related to the configuration of addressable and configurable real-time database points and attributes related to the signal life cycle references and history data, such as: Input/Output signals; Manually Input database points; Program constants; Setpoints; Calculated (by application program or SCADA calculation tools) database points; Control Flags (for example, enabling/disabling a certain program feature); Signal acquisition design references to the DCM (Document Control Module, application software for document control within the Management Information System - MIS) and MECL (Master Equipment and Component List, MIS application software for identification and configuration control of plant equipment and components); Usage of a particular database point in particular application software packages and in man-machine interface features (display mimics, printout reports, ...); Signals history (EEAR Engineering
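    The kind of record such a signal-configuration database maintains can be sketched as a simple data structure; the field names below are illustrative only and do not reflect the actual PCSCDB schema.

```python
# Illustrative record for a configurable process-computer database point;
# field names are invented and do not reflect the actual PCSCDB schema.
from dataclasses import dataclass, field

@dataclass
class DatabasePoint:
    tag: str                                           # unique point identifier
    kind: str                                          # e.g. "input", "setpoint", "calculated"
    system: str                                        # owning process computer system
    design_refs: list = field(default_factory=list)    # document references (e.g. DCM numbers)
    equipment_refs: list = field(default_factory=list) # component references (e.g. MECL IDs)
    used_by: list = field(default_factory=list)        # displays, reports, applications
    history: list = field(default_factory=list)        # change-log entries

point = DatabasePoint(tag="T-123", kind="input", system="PIS",
                      design_refs=["DCM-0001"], equipment_refs=["MECL-4567"])
point.history.append("2002-01-15: range rescaled")
print(point)
```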

  18. Assessment of the Unstructured Grid Software TetrUSS for Drag Prediction of the DLR-F4 Configuration

    Science.gov (United States)

    Pirzadeh, Shahyar Z.; Frink, Neal T.

    2002-01-01

    An application of the NASA unstructured grid software system TetrUSS is presented for the prediction of aerodynamic drag on a transport configuration. The paper briefly describes the underlying methodology and summarizes the results obtained on the DLR-F4 transport configuration recently presented in the first AIAA computational fluid dynamics (CFD) Drag Prediction Workshop. TetrUSS is a suite of loosely coupled unstructured grid CFD codes developed at the NASA Langley Research Center. The meshing approach is based on the advancing-front and the advancing-layers procedures. The flow solver employs a cell-centered, finite volume scheme for solving the Reynolds Averaged Navier-Stokes equations on tetrahedral grids. For the present computations, flow in the viscous sublayer has been modeled with an analytical wall function. The emphasis of the paper is placed on the practicality of the methodology for accurately predicting aerodynamic drag data.

  19. Mastering System Center 2012 Configuration Manager

    CERN Document Server

    Rachui, Steve; Martinez, Santos; Daalmans, Peter

    2012-01-01

    Expert coverage of Microsoft's highly anticipated network software deployment tool The latest version of System Center Configuration Manager (SCCM) is a dramatic update of its predecessor Configuration Manager 2007, and this book offers intermediate-to-advanced coverage of how the new SCCM boasts a simplified hierarchy, role-based security, a new console, flexible application deployment, and mobile management. You'll explore planning and installation, migrating from SCCM 2007, deploying software and operating systems, security, monitoring and troubleshooting, and automating and customizing SCC

  20. Common characteristics of open source software development and applicability for drug discovery: a systematic review.

    Science.gov (United States)

    Ardal, Christine; Alstadsæter, Annette; Røttingen, John-Arne

    2011-09-28

    Innovation through an open source model has proven to be successful for software development. This success has led many to speculate if open source can be applied to other industries with similar success. We attempt to provide an understanding of open source software development characteristics for researchers, business leaders and government officials who may be interested in utilizing open source innovation in other contexts and with an emphasis on drug discovery. A systematic review was performed by searching relevant, multidisciplinary databases to extract empirical research regarding the common characteristics and barriers of initiating and maintaining an open source software development project. Common characteristics to open source software development pertinent to open source drug discovery were extracted. The characteristics were then grouped into the areas of participant attraction, management of volunteers, control mechanisms, legal framework and physical constraints. Lastly, their applicability to drug discovery was examined. We believe that the open source model is viable for drug discovery, although it is unlikely that it will exactly follow the form used in software development. Hybrids will likely develop that suit the unique characteristics of drug discovery. We suggest potential motivations for organizations to join an open source drug discovery project. We also examine specific differences between software and medicines, specifically how the need for laboratories and physical goods will impact the model as well as the effect of patents.

  1. Stereo Navi 2.0: software for stereotaxic surgery of the common marmoset (Callithrix jacchus).

    Science.gov (United States)

    Tokuno, Hironobu; Tanaka, Ikuko; Umitsu, Yoshitomo; Nakamura, Yasuhisa

    2009-11-01

    Recently, we reported our web-accessible digital brain atlas of the common marmoset (Callithrix jacchus) at http://marmoset-brain.org:2008. Using digital images obtained during construction of this website, we developed stand-alone software for navigation of electrodes or injection needles for stereotaxic electrophysiological or anatomical experiments in vivo. This software enables us to draw lines on exchangeable section images, measure the length and angle of lines, superimpose a stereotaxic reference grid on the image, and send the image to the system clipboard. The software, Stereo Navi 2.0, is freely available at our brain atlas website.

  2. Virtual Private Lan Services Over IP/MPLS Networks and Router Configurations

    Directory of Open Access Journals (Sweden)

    Pınar KIRCI

    2015-06-01

    Full Text Available The rising number of users and ever-growing traffic rates over networks reveal the need for higher bandwidth and transmission rates. During every packet transmission, routers must route packets by consulting routing tables, which increases both router load and processing time. Today, users need high-level security, faster data transmission and easily managed network structures because of increasing technology usage. MPLS networks can meet these requirements with their QoS features. In our work, a topology is first constructed with the routers used in the Alcatel-Lucent laboratories. The OSPF (Open Shortest Path First) routing protocol and MPLS (Multiprotocol Label Switching) technologies are used over the topology. Afterwards, E-pipe (Ethernet Pipe) and VPLS (Virtual Private LAN Service) configurations are performed on the routers. To illustrate the current network data traffic, three tests are performed in the study. Router configurations are performed with Secure-CRT and the still-developing i-Gen software. With i-Gen, many routers can be configured through a user-friendly interface: instead of performing the configurations one by one with Secure-CRT, the user simply enters the required values and i-Gen applies them, minimizing and streamlining the user's workload. In our work, the Secure-CRT software commonly preferred for router configuration on the Windows operating system and the i-Gen software developed by Alcatel-Lucent are both considered, and the results obtained with each are presented. Consequently, using the newly developed i-Gen software instead of the time-consuming Secure-CRT minimizes the user's workload.

  3. Common Data Acquisition Systems (DAS) Software Development for Rocket Propulsion Test (RPT) Test Facilities - A General Overview

    Science.gov (United States)

    Hebert, Phillip W., Sr.; Hughes, Mark S.; Davis, Dawn M.; Turowski, Mark P.; Holladay, Wendy T.; Marshall, PeggL.; Duncan, Michael E.; Morris, Jon A.; Franzl, Richard W.

    2012-01-01

    The advent of the commercial space launch industry and NASA's more recent resumption of operation of Stennis Space Center's large test facilities after thirty years of contractor control resulted in a need for a non-proprietary data acquisition system (DAS) software to support government and commercial testing. The software is designed for modularity and adaptability to minimize the software development effort for current and future data systems. An additional benefit of the software's architecture is its ability to easily migrate to other testing facilities thus providing future commonality across Stennis. Adapting the software to other Rocket Propulsion Test (RPT) Centers such as MSFC, White Sands, and Plumbrook Station would provide additional commonality and help reduce testing costs for NASA. Ultimately, the software provides the government with unlimited rights and guarantees privacy of data to commercial entities. The project engaged all RPT Centers and NASA's Independent Verification & Validation facility to enhance product quality. The design consists of a translation layer which provides the transparency of the software application layers to underlying hardware regardless of test facility location and a flexible and easily accessible database. This presentation addresses system technical design, issues encountered, and the status of Stennis' development and deployment.

  4. Establishing the Common Community Physics Package by Transitioning the GFS Physics to a Collaborative Software Framework

    Science.gov (United States)

    Xue, L.; Firl, G.; Zhang, M.; Jimenez, P. A.; Gill, D.; Carson, L.; Bernardet, L.; Brown, T.; Dudhia, J.; Nance, L. B.; Stark, D. R.

    2017-12-01

    The Global Model Test Bed (GMTB) has been established to support the evolution of atmospheric physical parameterizations in NCEP global modeling applications. To accelerate the transition to the Next Generation Global Prediction System (NGGPS), a collaborative model development framework known as the Common Community Physics Package (CCPP) has been created within the GMTB to facilitate engagement from the broad community on physics experimentation and development. A key component of this Research-to-Operations (R2O) software framework is the Interoperable Physics Driver (IPD), which connects the physics parameterizations on one end to the dynamical cores on the other with minimum implementation effort. To initiate the CCPP, scientists and engineers from the GMTB separated and refactored the GFS physics. This exercise demonstrated the process of creating IPD-compliant code and can serve as an example for other physics schemes to do the same and be considered for inclusion in the CCPP. Further benefits of this process include run-time physics suite configuration and considerably reduced effort for testing modifications to physics suites through GMTB's physics test harness. The implementation will be described and the preliminary results will be presented at the conference.
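    The "driver hooks interchangeable schemes to the dynamical core" pattern can be sketched as below; the scheme classes, state variables, and suite list are invented for illustration and are not the actual IPD/CCPP interface.

```python
# Invented illustration of an interoperable-physics-driver pattern; this is
# not the real IPD/CCPP interface.

class PhysicsScheme:
    """Every scheme exposes the same run(state) -> state entry point."""
    def run(self, state: dict) -> dict:
        raise NotImplementedError

class SimpleBoundaryLayer(PhysicsScheme):
    def run(self, state):
        state["t_surface"] += 0.1            # toy tendency
        return state

class SimpleMicrophysics(PhysicsScheme):
    def run(self, state):
        state["qv"] = max(state["qv"] - 0.001, 0.0)
        return state

# Run-time suite configuration: the dynamical core only talks to the driver.
SUITE = [SimpleBoundaryLayer(), SimpleMicrophysics()]

def physics_driver(state, suite=SUITE):
    for scheme in suite:
        state = scheme.run(state)
    return state

print(physics_driver({"t_surface": 288.0, "qv": 0.01}))
```

    Swapping or reordering entries in `SUITE` is the sketch's analogue of the run-time suite configuration mentioned in the abstract.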

  5. Hanford Environmental Information System Configuration Management Plan

    International Nuclear Information System (INIS)

    1996-06-01

    The Hanford Environmental Information System (HEIS) Configuration Management Plan establishes the software and data configuration control requirements for the HEIS and project-related databases maintained within the Environmental Restoration Contractor's data management department

  6. Verification and validation process for the safety software in KNICS

    International Nuclear Information System (INIS)

    Kwon, Kee-Choon; Lee, Jang-Soo; Kim, Jang-Yeol

    2004-01-01

    This paper describes the Verification and Validation (V and V) process for safety software of Programmable Logic Controller (PLC), Digital Reactor Protection System (DRPS), and Engineered Safety Feature-Component Control System (ESF-CCS) that are being developed in Korea Nuclear Instrumentation and Control System (KNICS) projects. Specifically, it presents DRPS V and V experience according to the software development life cycle. The main activities of DRPS V and V process are preparation of software planning documentation, verification of Software Requirement Specification (SRS), Software Design Specification (SDS) and codes, and testing of the integrated software and the integrated system. In addition, they include software safety analysis and software configuration management. SRS V and V of DRPS are technical evaluation, licensing suitability evaluation, inspection and traceability analysis, formal verification, preparing integrated system test plan, software safety analysis, and software configuration management. Also, SDS V and V of DRPS are technical evaluation, licensing suitability evaluation, inspection and traceability analysis, formal verification, preparing integrated software test plan, software safety analysis, and software configuration management. The code V and V of DRPS are traceability analysis, source code inspection, test case and test procedure generation, software safety analysis, and software configuration management. Testing is the major V and V activity of software integration and system integration phase. Software safety analysis at SRS phase uses Hazard Operability (HAZOP) method, at SDS phase it uses HAZOP and Fault Tree Analysis (FTA), and at implementation phase it uses FTA. Finally, software configuration management is performed using Nu-SCM (Nuclear Software Configuration Management) tool developed by KNICS project. Through these activities, we believe we can achieve the functionality, performance, reliability and safety that are V

  7. Computational environment and software configuration management of the 1996 performance assessment for the Waste Isolation Pilot Plant

    International Nuclear Information System (INIS)

    Froehlich, G.K.; Williamson, C.M.; Ogden, H.C.

    2000-01-01

    The US Department of Energy (DOE) Waste Isolation Pilot Plant (WIPP), located in southeast New Mexico, is a deep geologic repository for the permanent disposal of transuranic waste generated by DOE defense-related activities. Sandia National Laboratories (SNL), in its role as scientific advisor to the DOE, is responsible for evaluating the long-term performance of the WIPP. This risk-based Performance Assessment (PA) is accomplished in part through the use of numerous scientific modeling codes, which rely for some of their inputs on data gathered during characterization of the site. The PA is subject to formal requirements set forth in federal regulations. In particular, the components of the calculation fall under the configuration management and software quality assurance aegis of the American Society of Mechanical Engineers (ASME) Nuclear Quality Assurance (NQA) requirements. This paper describes SNL's implementation of the NQA requirements regarding configuration management. The complexity of the PA calculation is described, and the rationale for developing a flexible, robust run-control process is discussed. The run-control implementation is described, and its integration with the configuration-management system is then explained, to show how a calculation requiring 37,000 CPU-hours, and involving 225,000 output files totaling 95 GB, was accomplished in 5 months by two individuals, with full traceability and reproducibility

  8. Computational environment and software configuration management of the 1996 performance assessment for the Waste Isolation Pilot Plant

    International Nuclear Information System (INIS)

    Froehlich, Gary K.; Williamson, Charles Michael; Ogden, Harvey C.

    2000-01-01

    The US Department of Energy (DOE) Waste Isolation Pilot Plant (WIPP), located in southeast New Mexico, is a deep geologic repository for the permanent disposal of transuranic waste generated by DOE defense-related activities. Sandia National Laboratories (SNL), in its role as scientific advisor to the DOE, is responsible for evaluating the long-term performance of the WIPP. This risk-based Performance Assessment (PA) is accomplished in part through the use of numerous scientific modeling codes, which rely for some of their inputs on data gathered during characterization of the site. The PA is subject to formal requirements set forth in federal regulations. In particular, the components of the calculation fall under the configuration management and software quality assurance aegis of the American Society of Mechanical Engineers (ASME) Nuclear Quality Assurance (NQA) requirements. This paper describes SNL's implementation of the NQA requirements regarding configuration management. The complexity of the PA calculation is described, and the rationale for developing a flexible, robust run-control process is discussed. The run-control implementation is described, and its integration with the configuration-management system is then explained, to show how a calculation requiring 37,000 CPU-hours, and involving 225,000 output files totaling 95 Gigabytes, was accomplished in 5 months by 2 individuals, with full traceability and reproducibility

  9. Free Software Licenses and Other Free Licenses: Genetic Code of Digital Common Goods

    OpenAIRE

    Marco Ciurcina

    2017-01-01

    This article explores the history and describes the main features of free software licenses and other free licenses in an attempt to shed light on the reasons for their success in promoting individual behaviors converging towards the collective construction of digital commons.

  10. Common software for controlling and monitoring the upgraded CMS Level-1 trigger

    CERN Document Server

    Codispoti, Giuseppe

    2017-01-01

    The Large Hadron Collider restarted in 2015 with a higher centre-of-mass energy of 13 TeV. The instantaneous luminosity is expected to increase significantly in the coming years. An upgraded Level-1 trigger system was deployed in the CMS experiment in order to maintain the same efficiencies for searches and precision measurements as those achieved in 2012. This system must be controlled and monitored coherently through software, with high operational efficiency. The legacy system was composed of a large number of custom data processor boards; correspondingly, only a small fraction of the software was common between the different subsystems. The upgraded system is composed of a set of general purpose boards, that follow the MicroTCA specification, and transmit data over optical links, resulting in a more homogeneous system. The associated software is based on generic components corresponding to the firmware blocks that are shared across different cards, regardless of the role that the card plays in the system. ...

  11. First International Workshop on Variability in Software Architecture (VARSA 2011)

    NARCIS (Netherlands)

    Galster, Matthias; Avgeriou, Paris; Weyns, Danny; Mannisto, Tomi

    2011-01-01

    Variability is the ability of a software artifact to be changed for a specific context. Mechanisms to accommodate variability include software product lines, configuration wizards and tools in commercial software, configuration interfaces of software components, or the dynamic runtime composition of

  12. Free Software Licenses and Other Free Licenses: Genetic Code of Digital Common Goods

    Directory of Open Access Journals (Sweden)

    Marco Ciurcina

    2017-06-01

    Full Text Available This article explores the history and describes the main features of free software licenses and other free licenses in an attempt to shed light on the reasons for their success in promoting individual behaviors converging towards the collective construction of digital commons.

  13. Honeywell Modular Automation System Computer Software Documentation

    International Nuclear Information System (INIS)

    STUBBS, A.M.

    2000-01-01

    The purpose of this Computer Software Document (CSWD) is to provide configuration control of the Honeywell Modular Automation System (MAS) in use at the Plutonium Finishing Plant (PFP). This CSWD describes hardware and PFP-developed software for control of stabilization furnaces. The Honeywell software can generate configuration reports for the developed control software. These reports are described in the following section and are attached as addenda. This plan applies to the PFP Engineering Manager, Thermal Stabilization Cognizant Engineers, and the Shift Technical Advisors responsible for the Honeywell MAS software/hardware and administration of the Honeywell System.

  14. Device configuration-management system

    International Nuclear Information System (INIS)

    Nowell, D.M.

    1981-01-01

    The Fusion Chamber System, a major component of the Magnetic Fusion Test Facility, contains several hundred devices which report status to the Supervisory Control and Diagnostic System for control and monitoring purposes. To manage the large number and diversity of devices represented, a device configuration management system was required and developed. Key components of this software tool include the MFTF Data Base; a configuration editor; and a tree structure defining the relationships between the subsystem devices. This paper will describe how the configuration system easily accommodates recognizing new devices, restructuring existing devices, and modifying device profile information

  15. Automated Transportation Management System (ATMS) Configuration Management Plan. Revision 1

    International Nuclear Information System (INIS)

    Weidert, R.S.

    1994-01-01

    This document describes the Software Configuration Management (SCM) approach and procedures to be utilized in developing and maintaining the Automated Transportation Management System (ATMS). The configuration management procedures are necessary to ensure that any changes made to software and related documentation are consistent with ATMS goals and contained securely in a central library. This plan applies to all software and associated documentation used in producing the ATMS V1.0 and ATMS V2.0 systems

  16. Software quality assurance and software safety in the Biomed Control System

    International Nuclear Information System (INIS)

    Singh, R.P.; Chu, W.T.; Ludewigt, B.A.; Marks, K.M.; Nyman, M.A.; Renner, T.R.; Stradtner, R.

    1989-01-01

    The Biomed Control System is a hardware/software system used for the delivery, measurement and monitoring of heavy-ion beams in the patient treatment and biology experiment rooms in the Bevalac at the Lawrence Berkeley Laboratory (LBL). This paper describes some aspects of this system, including historical background, philosophy, configuration management, hardware features that facilitate software testing, software testing procedures, the release of new software, quality assurance, safety, and operator monitoring. 3 refs

  17. Radiative transfer configuration factor catalog: A listing of relations for common geometries

    International Nuclear Information System (INIS)

    Howell, John R.; Menguec, M. Pinar

    2011-01-01

    An on-line compilation of radiation configuration factors for over 300 common geometries is provided as supplementary material from the JQSRT web site at doi:10.1016/j.jqsrt.2010.10.002. The factors are gathered from references across the radiative transfer and illumination engineering literature, as well as from applications in fields as diverse as combustion systems and human factors engineering. These factors are useful in standard surface-surface radiation exchange calculations, and are based on the assumptions that the surfaces exchanging radiation are diffuse, and that the radiosity from each surface is uniform across that surface. The catalog is updated annually, and can be downloaded from JQSRT in PDF format.
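    As one example of the kind of relation such catalogs collect, the configuration factor between two coaxial parallel disks of radii r_1 and r_2 separated by a distance h is commonly written as follows (a standard textbook result, quoted here from memory rather than from the catalog itself):

\[
R_i = \frac{r_i}{h}, \qquad
S = 1 + \frac{1 + R_2^2}{R_1^2}, \qquad
F_{1 \to 2} = \frac{1}{2}\left[\, S - \sqrt{S^2 - 4\left(\frac{R_2}{R_1}\right)^2}\,\right].
\]

    Relations of this form satisfy the usual reciprocity rule A_1 F_{1→2} = A_2 F_{2→1}, which is one convenient check when applying any entry from such a catalog.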

  18. Ragnarok: An Architecture Based Software Development Environment

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    of the development process. The main contributions presented in the thesis have evolved from work with two of the hypotheses: These address the problems of management of evolution, and overview, comprehension and navigation respectively. The first main contribution is the Architectural Software Configuration...... Management Model: A software configuration management model where the abstractions and hierarchy of the logical aspect of software architecture forms the basis for version control and configuration management. The second main contribution is the Geographic Space Architecture Visualisation Model......: A visualisation model where entities in a software architecture are organised geographically in a two-dimensional plane, their visual appearance determined by processing a subset of the data in the entities, and interaction with the project's underlying data performed by direct manipulation of the landscape...

  19. DYNAMIC CONFIGURATION OF THE COMPUTING NODES OF THE ALICE O2 SYSTEM

    CERN Document Server

    Pugdeethosapol, Krittaphat

    2015-01-01

    The ALICE (A Large Ion Collider Experiment) Collaboration is preparing major upgrades of the detectors in 2020 in order to take advantage of the increase of the collision rate up to 50 kHz in the LHC for Pb-Pb beams. Together with these upgrades, the ALICE Online and Offline computing systems are being redesigned and upgraded to a new common system called O2. The O2 system is made of a software framework and a computing facility. The concept of the framework consists of implementing online reconstruction and archiving of the data of all reconstructed collisions to permanent data storage. The main objective is to achieve a high-throughput system on heterogeneous computing platforms. Our KMUTT team has taken responsibility for the design of the Control, Configuration, and Monitoring (CCM) of the computing infrastructure. This thesis focuses on Configuration. The configuration module should allow dynamic configuration of processes and environment parameters during runtime. ...

  20. Open Source Software and the Intellectual Commons.

    Science.gov (United States)

    Dorman, David

    2002-01-01

    Discusses the Open Source Software method of software development and its relationship to control over information content. Topics include digital library resources; reference services; preservation; the legal and economic status of information; technical standards; access to digital data; control of information use; and copyright and patent laws.…

  1. CONFIGURATION GENERATOR MODEL

    International Nuclear Information System (INIS)

    Alsaed, A.

    2004-01-01

    ''The Disposal Criticality Analysis Methodology Topical Report'' prescribes an approach to the methodology for performing postclosure criticality analyses within the monitored geologic repository at Yucca Mountain, Nevada. An essential component of the methodology is the ''Configuration Generator Model for In-Package Criticality'' that provides a tool to evaluate the probabilities of degraded configurations achieving a critical state. The configuration generator model is a risk-informed, performance-based process for evaluating the criticality potential of degraded configurations in the monitored geologic repository. The method uses event tree methods to define configuration classes derived from criticality scenarios and to identify configuration class characteristics (parameters, ranges, etc.). The probabilities of achieving the various configuration classes are derived in part from probability density functions for degradation parameters. The NRC has issued ''Safety Evaluation Report for Disposal Criticality Analysis Methodology Topical Report, Revision 0''. That report contained 28 open items that required resolution through additional documentation. Of the 28 open items, numbers 5, 6, 9, 10, 18, and 19 were concerned with a previously proposed software approach to the configuration generator methodology and, in particular, the k_eff regression analysis associated with the methodology. However, the use of a k_eff regression analysis is not part of the current configuration generator methodology and, thus, the referenced open items are no longer considered applicable and will not be further addressed
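    The general pattern of deriving configuration-class probabilities from probability density functions can be sketched as a simple Monte Carlo estimate; the distributions, parameter names, and class criterion below are invented placeholders and are not the actual configuration generator methodology.

```python
# Invented placeholder distributions; purely illustrative of the sampling idea,
# not the actual configuration generator methodology.
import random

def sample_configuration():
    corrosion_rate = random.lognormvariate(mu=-2.0, sigma=0.5)  # invented units
    water_volume = random.uniform(0.0, 1.0)                     # invented fraction
    return corrosion_rate, water_volume

def reaches_degraded_class(corrosion_rate, water_volume) -> bool:
    # Invented criterion standing in for a configuration-class definition.
    return corrosion_rate > 0.3 and water_volume > 0.8

N = 100_000
hits = sum(reaches_degraded_class(*sample_configuration()) for _ in range(N))
print(f"estimated class probability ~ {hits / N:.4f}")
```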

  2. The Auto control System Based on InTouch Configuration software for High-gravity Oil Railway Tank Feeding

    Directory of Open Access Journals (Sweden)

    Xu De-Kai

    2015-01-01

    Full Text Available This paper presents an automation design for the high-gravity oil railway tank feeding system of a refinery, using a distributed control system. The system adopts a Modicon TSX Quantum PLC as the monitoring and control level and uses a PC-based platform running Microsoft Windows 2000 as the principal computer. An automatic control system is developed in the environment of the InTouch configuration software. The system implements automatic high-gravity oil tank feeding with pump control functions, and it combines automatic oil feeding control, pump control and tank monitoring to automate metered oil feeding.

  3. NASA Data Acquisition System Software Development for Rocket Propulsion Test Facilities

    Science.gov (United States)

    Herbert, Phillip W., Sr.; Elliot, Alex C.; Graves, Andrew R.

    2015-01-01

    Current NASA propulsion test facilities include Stennis Space Center in Mississippi, Marshall Space Flight Center in Alabama, Plum Brook Station in Ohio, and White Sands Test Facility in New Mexico. Within and across these centers, a diverse set of data acquisition systems exists with different hardware and software platforms. The NASA Data Acquisition System (NDAS) is a software suite designed to operate and control many critical aspects of rocket engine testing. The software suite combines real-time data visualization, data recording to a variety of formats, short-term and long-term acquisition system calibration capabilities, test stand configuration control, and a variety of data post-processing capabilities. Additionally, data stream conversion functions exist to translate test facility data streams to and from downstream systems, including engine customer systems. The primary design goals for NDAS are flexibility, extensibility, and modularity. Providing a common user interface for a variety of hardware platforms helps drive consistency and error reduction during testing. In addition, with an understanding that test facilities have different requirements and setups, the software is designed to be modular. One engine program may require real-time displays and data recording; others may require more complex data stream conversion, measurement filtering, or test stand configuration management. The NDAS suite allows test facilities to choose which components to use based on their specific needs. The NDAS code is primarily written in LabVIEW, a graphical, data-flow driven language. Although LabVIEW is a general-purpose programming language, large-scale software development in the language is relatively rare compared to more commonly used languages. The NDAS software suite also makes extensive use of a new, advanced development framework called the Actor Framework. The Actor Framework provides a level of code reuse and extensibility that has previously been difficult

  4. Software as a service approach to sensor simulation software deployment

    Science.gov (United States)

    Webster, Steven; Miller, Gordon; Mayott, Gregory

    2012-05-01

    Traditionally, military simulation has been problem domain specific. Executing an exercise currently requires multiple simulation software providers to specialize, deploy, and configure their respective implementations, integrate the collection of software to achieve a specific system behavior, and then execute for the purpose at hand. This approach leads to rigid system integrations which require simulation expertise for each deployment due to changes in location, hardware, and software. Our alternative is Software as a Service (SaaS) predicated on the virtualization of Night Vision Electronic Sensors (NVESD) sensor simulations as an exemplary case. Management middleware elements layer self provisioning, configuration, and integration services onto the virtualized sensors to present a system of services at run time. Given an Infrastructure as a Service (IaaS) environment, enabled and managed system of simulations yields a durable SaaS delivery without requiring user simulation expertise. Persistent SaaS simulations would provide on demand availability to connected users, decrease integration costs and timelines, and benefit the domain community from immediate deployment of lessons learned.

  5. Software Testing Requires Variability

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    2003-01-01

    Software variability is the ability of a software system or artefact to be changed, customized or configured for use in a particular context. Variability in software systems is important from a number of perspectives. Some perspectives rightly receive much attention due to their direct economic...... impact in software production. As is also apparent from the call for papers these perspectives focus on qualities such as reuse, adaptability, and maintainability....

  6. Technical Basis for Evaluating Software-Related Common-Cause Failures

    Energy Technology Data Exchange (ETDEWEB)

    Muhlheim, Michael David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wood, Richard [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-04-01

    The instrumentation and control (I&C) system architecture at a nuclear power plant (NPP) incorporates protections against common-cause failures (CCFs) through the use of diversity and defense-in-depth. Even for well-established analog-based I&C system designs, the potential for CCFs of multiple systems (or redundancies within a system) constitutes a credible threat to defeating the defense-in-depth provisions within the I&C system architectures. The integration of digital technologies into the I&C systems provides many advantages compared to the aging analog systems with respect to reliability, maintenance, operability, and cost effectiveness. However, maintaining the diversity and defense-in-depth for both the hardware and software within the digital system is challenging. In fact, the introduction of digital technologies may actually increase the potential for CCF vulnerabilities because of the introduction of undetected systematic faults. These systematic faults are defined as a “design fault located in a software component” and at a high level, are predominately the result of (1) errors in the requirement specification, (2) inadequate provisions to account for design limits (e.g., environmental stress), or (3) technical faults incorporated in the internal system (or architectural) design or implementation. Other technology-neutral CCF concerns include hardware design errors, equipment qualification deficiencies, installation or maintenance errors, instrument loop scaling and setpoint mistakes.

  7. Pybus - A Python Software Bus

    International Nuclear Information System (INIS)

    Lavrijsen, Wim T.L.P.

    2004-01-01

    A software bus, just like its hardware equivalent, allows for the discovery, installation, configuration, loading, unloading, and run-time replacement of software components, as well as channeling of inter-component communication. Python, a popular open-source programming language, encourages a modular design of software written in it, but it offers little or no component functionality. However, the language and its interpreter provide sufficient hooks to implement a thin, integral layer of component support. This functionality can be presented to the developer in the form of a module, making it very easy to use. This paper describes a Python module, PyBus, with which the concept of a ''software bus'' can be realized in Python. It demonstrates, within the context of the ATLAS software framework Athena, how PyBus can be used for the installation and (run-time) configuration of software, not necessarily Python modules, from a Python application in a way that is transparent to the end-user
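    A minimal sketch of the software-bus concept in Python (component registration, lookup, and run-time replacement) is given below; it illustrates the idea only and is not the actual PyBus module.

```python
# Minimal software-bus illustration; not the actual PyBus implementation.
class SoftwareBus:
    def __init__(self):
        self._components = {}

    def install(self, name, component):
        self._components[name] = component           # also allows replacement

    def uninstall(self, name):
        self._components.pop(name, None)

    def send(self, name, message):
        return self._components[name](message)       # channel a call through the bus

bus = SoftwareBus()
bus.install("logger", lambda msg: print("LOG:", msg))
bus.send("logger", "component loaded")
bus.install("logger", lambda msg: print("LOG2:", msg))   # run-time replacement
bus.send("logger", "component replaced")
```

    Callers only ever address the bus by component name, so a component can be swapped at run time without the calling code changing, which is the transparency property described in the abstract.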

  8. OCRWM procedure for reporting software baseline change information

    International Nuclear Information System (INIS)

    1994-07-01

    The purpose of this procedure is to establish a requirement and method for participant organizations to report software baseline change information to the M&O Configuration Management (CM) organization for inclusion in the OCRWM Configuration Information System (CIS). (The requirements for performing software configuration management (SCM) are found in the OCRWM Quality Assurance Requirements and Description (QARD) document and in applicable DOE orders, and not in this procedure.) This procedure provides a linkage between each participant's SCM system and the CIS, which may be accessed for identification, descriptive, and contact information pertaining to software released by a participant. Such information from the CIS will enable retrieval of details and copies of software code and documentation from the participant SCM system

  9. The SOFIA Mission Control System Software

    Science.gov (United States)

    Heiligman, G. M.; Brock, D. R.; Culp, S. D.; Decker, P. H.; Estrada, J. C.; Graybeal, J. B.; Nichols, D. M.; Paluzzi, P. R.; Sharer, P. J.; Pampell, R. J.; Papke, B. L.; Salovich, R. D.; Schlappe, S. B.; Spriestersbach, K. K.; Webb, G. L.

    1999-05-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) will be delivered with a computerized mission control system (MCS). The MCS communicates with the aircraft's flight management system and coordinates the operations of the telescope assembly, mission-specific subsystems, and the science instruments. The software for the MCS must be reliable and flexible. It must be easily usable by many teams of observers with widely differing needs, and it must support non-intrusive access for education and public outreach. The technology must be appropriate for SOFIA's 20-year lifetime. The MCS software development process is an object-oriented, use case driven approach. The process is iterative: delivery will be phased over four "builds"; each build will be the result of many iterations; and each iteration will include analysis, design, implementation, and test activities. The team is geographically distributed, coordinating its work via Web pages, teleconferences, T.120 remote collaboration, and CVS (for Internet-enabled configuration management). The MCS software architectural design is derived in part from other observatories' experience. Some important features of the MCS are: * distributed computing over several UNIX and VxWorks computers * fast throughput of time-critical data * use of third-party components, such as the Adaptive Communications Environment (ACE) and the Common Object Request Broker Architecture (CORBA) * extensive configurability via stored, editable configuration files * use of several computer languages so developers have "the right tool for the job". C++, Java, scripting languages, Interactive Data Language (from Research Systems, Int'l.), XML, and HTML will all be used in the final deliverables. This paper reports on work in progress, with the final product scheduled for delivery in 2001. This work was performed for Universities Space Research Association for NASA under contract NAS2-97001.

  10. Knowledge-Based Software Management

    International Nuclear Information System (INIS)

    Sally Schaffner; Matthew Bickley; Brian Bevins; Leon Clancy; Karen White

    2003-01-01

    Management of software in a dynamic environment such as is found at Jefferson Lab can be a daunting task. Software development tasks are distributed over a wide range of people with varying skill levels. The machine configuration is constantly changing, requiring upgrades to software at both the hardware control level and the operator control level. In order to obtain high quality support from vendor service agreements, which is vital to maintaining 24/7 operations, hardware and software must be kept at industry's current levels. This means that periodic upgrades independent of machine configuration changes must take place. It is often difficult to identify and organize the information needed to guide the process of development, upgrades and enhancements. Dependencies between support software and applications need to be consistently identified to prevent introducing errors during upgrades and to allow adequate testing to be planned and performed. Developers also need access to information regarding compilers, make files and organized distribution directories. This paper describes a system under development at Jefferson Lab which will provide software developers and managers this type of information in a timely, user-friendly fashion. The current status and future plans for the system will be detailed.

  11. Defense-in-depth for common cause failure of nuclear power plant safety system software

    International Nuclear Information System (INIS)

    Tian Lu

    2012-01-01

    This paper briefly describes the development of digital I and C systems in nuclear power plants and analyses the viewpoints of the NRC and other nuclear safety authorities on Software Common Cause Failure (SWCCF). In view of the SWCCF issue introduced by the digitized platforms adopted in nuclear power plant safety systems, the paper illustrates a diversified defence strategy covering both computer software and hardware, and provides a diversified defence-in-depth solution for the digital safety system of a nuclear power plant. Problems that may be faced during nuclear safety licensing are analyzed, and directions for future development of nuclear safety I and C systems are put forward. (author)

  12. Lessons Learned in Designing User-configurable Modular Robotics

    DEFF Research Database (Denmark)

    Lund, Henrik Hautop

    2013-01-01

    User-configurable robotics allows users to easily configure robotic systems to perform task-fulfilling behaviors as desired by the users. With a user configurable robotic system, the user can easily modify the physical and functional aspect in terms of hardware and software components of a robotic...... with the semi-autonomous components of the user-configurable robotic system in interaction with the given environment. Components constituting such a user-configurable robotic system can be characterized as modules in a modular robotic system. Several factors in the definition and implementation...

  13. Agile Acceptance Test-Driven Development of Clinical Decision Support Advisories: Feasibility of Using Open Source Software.

    Science.gov (United States)

    Basit, Mujeeb A; Baldwin, Krystal L; Kannan, Vaishnavi; Flahaven, Emily L; Parks, Cassandra J; Ott, Jason M; Willett, Duwayne L

    2018-04-13

    Moving to electronic health records (EHRs) confers substantial benefits but risks unintended consequences. Modern EHRs consist of complex software code with extensive local configurability options, which can introduce defects. Defects in clinical decision support (CDS) tools are surprisingly common. Feasible approaches to prevent and detect defects in EHR configuration, including CDS tools, are needed. In complex software systems, use of test-driven development and automated regression testing promotes reliability. Test-driven development encourages modular, testable design and expanding regression test coverage. Automated regression test suites improve software quality, providing a "safety net" for future software modifications. Each automated acceptance test serves multiple purposes, as requirements (prior to build), acceptance testing (on completion of build), regression testing (once live), and "living" design documentation. Rapid-cycle development or "agile" methods are being successfully applied to CDS development. The agile practice of automated test-driven development is not widely adopted, perhaps because most EHR software code is vendor-developed. However, key CDS advisory configuration design decisions and rules stored in the EHR may prove amenable to automated testing as "executable requirements." We aimed to establish feasibility of acceptance test-driven development of clinical decision support advisories in a commonly used EHR, using an open source automated acceptance testing framework (FitNesse). Acceptance tests were initially constructed as spreadsheet tables to facilitate clinical review. Each table specified one aspect of the CDS advisory's expected behavior. Table contents were then imported into a test suite in FitNesse, which queried the EHR database to automate testing. Tests and corresponding CDS configuration were migrated together from the development environment to production, with tests becoming part of the production regression test
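
    The table-driven style described above can also be approximated in plain Python. The sketch below expresses one advisory rule as a clinical decision table and checks every row with pytest; the rule, thresholds, and function names are invented for illustration and are not taken from the paper or from FitNesse.

```python
# Hedged sketch: decision-table acceptance tests for a hypothetical CDS advisory.
import pytest

def should_fire(age, on_anticoagulant, inr_value):
    """Hypothetical CDS rule: advise review when an anticoagulated patient
    over 65 has an INR above 3.5."""
    return age > 65 and on_anticoagulant and inr_value > 3.5

# Each row mirrors one line of the clinical review spreadsheet:
# (age, on_anticoagulant, inr_value, expected_advisory)
DECISION_TABLE = [
    (70, True,  4.0, True),
    (70, True,  3.0, False),
    (60, True,  4.0, False),
    (70, False, 4.0, False),
]

@pytest.mark.parametrize("age,on_anticoagulant,inr,expected", DECISION_TABLE)
def test_advisory_matches_table(age, on_anticoagulant, inr, expected):
    assert should_fire(age, on_anticoagulant, inr) == expected
```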

  14. Software as a Service - Common Service Bus (SAAS-CSB)

    OpenAIRE

    Swaminathan, R.; Karnavel, K.

    2013-01-01

    Software-as-a-Service (SaaS) is a form of cloud computing that relieves the user from the concern of hardware, software installation and management. It is an emerging business model that delivers software applications to the users through Web-based technology. Software vendors have varying requirements and SaaS applications most typically support such requirements. The various applications used by unique customers in a single instance are known as Multi-Tenancy. There would be a delay in serv...

  15. A systematic approach for component-based software development

    NARCIS (Netherlands)

    Guareis de farias, Cléver; van Sinderen, Marten J.; Ferreira Pires, Luis

    2000-01-01

    Component-based software development enables the construction of software artefacts by assembling prefabricated, configurable and independently evolving building blocks, called software components. This paper presents an approach for the development of component-based software artefacts. This

  16. Ground control station software design for micro aerial vehicles

    Science.gov (United States)

    Walendziuk, Wojciech; Oldziej, Daniel; Binczyk, Dawid Przemyslaw; Slowik, Maciej

    2017-08-01

    This article describes the process of designing the equipment part and the software of a ground control station used for configuring and operating micro unmanned aerial vehicles (UAV). All the work was conducted on a quadrocopter model, a commonly accessible commercial construction. The article contains a characterization of the research object, covers the basics of operating micro aerial vehicles (MAV), and presents the components of the ground control station model. It also describes the communication standards used for building a model of the station. A further part of the work concerns the software of the product, the GIMSO application (Generally Interactive Station for Mobile Objects), which enables the user to manage the actions, communication, and control processes of the UAV. The process of creating the software and the field tests of a station model are also presented in the article.

  17. Survey on Projects at DLR Simulation and Software Technology with Focus on Software Engineering and HPC

    OpenAIRE

    Schreiber, Andreas; Basermann, Achim

    2013-01-01

    We introduce the DLR institute “Simulation and Software Technology” (SC) and present current activities regarding software engineering and high performance computing (HPC) in German or international projects. Software engineering at SC focusses on data and knowledge management as well as tools for studies and experiments. We discuss how we apply software configuration management, validation and verification in our projects. Concrete research topics are traceability of (software devel...

  18. Dtest Testing Software

    Science.gov (United States)

    Jain, Abhinandan; Cameron, Jonathan M.; Myint, Steven

    2013-01-01

    This software runs a suite of arbitrary software tests spanning various software languages and types of tests (unit level, system level, or file comparison tests). The dtest utility can be set to automate periodic testing of large suites of software, as well as running individual tests. It supports distributing multiple tests over multiple CPU cores, if available. The dtest tool is a utility program (written in Python) that scans through a directory (and its subdirectories) and finds all directories that match a certain pattern and then executes any tests in that directory as described in simple configuration files.
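
    A rough sketch of that scanning behaviour is shown below in Python: walk a directory tree, find directories holding a test configuration file, and run the commands listed there. The DTESTDEFS file name and its one-command-per-line format are assumptions for illustration, not the real dtest configuration syntax.

```python
# Hedged sketch of a directory-scanning test runner in the spirit of dtest.
import os
import subprocess

CONFIG_NAME = "DTESTDEFS"   # assumed marker file; not the actual dtest format

def find_test_dirs(root):
    """Yield every directory under root that contains a test configuration file."""
    for dirpath, _dirnames, filenames in os.walk(root):
        if CONFIG_NAME in filenames:
            yield dirpath

def run_tests(root):
    results = {}
    for test_dir in find_test_dirs(root):
        with open(os.path.join(test_dir, CONFIG_NAME)) as f:
            # one shell command per line; blank lines and comments skipped
            commands = [l.strip() for l in f if l.strip() and not l.startswith("#")]
        for cmd in commands:
            proc = subprocess.run(cmd, shell=True, cwd=test_dir)
            results[(test_dir, cmd)] = (proc.returncode == 0)
    return results

if __name__ == "__main__":
    for (d, cmd), ok in run_tests(".").items():
        print("PASS" if ok else "FAIL", d, cmd)
```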

  19. Long-distance configuration of FPGA based on serial communication

    International Nuclear Information System (INIS)

    Liu Xiang; Song Kezhu; Zhang Sifeng

    2010-01-01

    To address FPGA configuration in nuclear electronics that operate in a radioactive environment, this article introduces a method of long-distance configuration using a PC and a CPLD, based on serial communication. Taking a CYCLONE-series FPGA and an EPCS configuration chip from ALTERA as an example, and using the AS configuration mode, we describe the design in terms of basic theory, hardware connection, software function, and communication protocol. With this design, we can configure several FPGAs at a distance of 100 meters, or one FPGA at a distance of 150 meters. (authors)
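
    As a rough illustration of the PC side of such a design, the sketch below streams a configuration bitstream over a serial link in small framed chunks and waits for an acknowledgement after each one. The framing bytes, chunk size, and acknowledgement value are assumptions; the paper's actual protocol between the PC, CPLD, and EPCS chip is not reproduced here.

```python
# Hedged PC-side sender sketch for serial FPGA configuration (invented framing).
import serial  # pyserial

CHUNK = 128    # small enough for the length to fit in one framing byte

def send_bitstream(port, bitstream_path, baudrate=115200):
    with serial.Serial(port, baudrate, timeout=5) as link, \
         open(bitstream_path, "rb") as f:
        data = f.read()
        for offset in range(0, len(data), CHUNK):
            chunk = data[offset:offset + CHUNK]
            checksum = sum(chunk) & 0xFF
            # assumed frame layout: start byte, length, payload, checksum
            frame = bytes([0xAA, len(chunk)]) + chunk + bytes([checksum])
            link.write(frame)
            ack = link.read(1)          # wait for the remote CPLD to acknowledge
            if ack != b"\x55":
                raise IOError(f"no ACK for chunk at offset {offset}")

# send_bitstream("/dev/ttyUSB0", "design.rbf")
```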

  20. A historical dataset of software engineering conferences

    NARCIS (Netherlands)

    Vasilescu, B.N.; Serebrenik, A.; Mens, T.

    2013-01-01

    The Mining Software Repositories community typically focuses on data from software configuration management tools, mailing lists, and bug tracking repositories to uncover interesting and actionable information about the evolution of software systems. However, the techniques employed and the

  1. Code organization and configuration management

    International Nuclear Information System (INIS)

    Wellisch, J.P.; Ashby, S.; Williams, C.; Osborne, I.

    2001-01-01

    Industry experts are increasingly focusing on team productivity as the key to success. The base of the team effort is the four-fold structure of software in terms of logical organisation, physical organisation, managerial organisation, and dynamical structure. The authors describe the ideas put into action within the CMS software for organising software into sub-systems and packages, and for establishing configuration management in a multi-project environment. They use a structure that allows them to maximise the independence of software development in individual areas, while at the same time emphasising the overwhelming importance of the interdependencies between the packages and components in the system. The authors comment on release procedures and describe the inter-relationship between release, development, integration, and testing.

  2. Software Quality Assurance activities of ITER CODAC

    Energy Technology Data Exchange (ETDEWEB)

    Pande, Sopan, E-mail: sopan.pande@iter.org [ITER Organization, Route de Vinon sur Verdon, 13115 St Paul Lez Durance (France); DiMaio, Franck; Kim, Changseung; Kim, Joohan; Klotz, Wolf-Dieter; Makijarvi, Petri; Stepanov, Denis; Wallander, Anders [ITER Organization, Route de Vinon sur Verdon, 13115 St Paul Lez Durance (France)

    2013-10-15

    Highlights: ► Comprehensive and consistent software engineering and quality assurance of CODAC. ► Applicable to all CODAC software projects executed by ITER DAs and contractors. ► Configurable plans for cost effective application of SQA processes. ► CODAC software plans SQAP, SVVP, SDP, and SCMP. ► CODAC software processes based on IEEE 12207-2008. -- Abstract: Software as an integral part of the plant system I and C is crucial in the manufacturing and integrated operation of ITER plant systems. Software Quality Assurance is necessary to ensure the development and maintenance of consistently high quality I and C software throughout the lifetime of ITER. CODAC decided to follow IEEE 12207-2008 software lifecycle processes for Software Engineering and Software Quality Assurance. Software Development Plan, Software Configuration Management Plan and Software Verification and Validation Plan are the mainstay of Software Quality Assurance which is documented in the Software Quality Assurance Plan. This paper describes the Software Quality Assurance (SQA) activities performed by CODAC. The SQA includes development and maintenance of above plans, processes and resources. With the help of Verification and Validation Teams they gather evidence of process conformance and product conformance, and record process data for quality audits and perform process improvements.

  3. Software Quality Assurance activities of ITER CODAC

    International Nuclear Information System (INIS)

    Pande, Sopan; DiMaio, Franck; Kim, Changseung; Kim, Joohan; Klotz, Wolf-Dieter; Makijarvi, Petri; Stepanov, Denis; Wallander, Anders

    2013-01-01

    Highlights: ► Comprehensive and consistent software engineering and quality assurance of CODAC. ► Applicable to all CODAC software projects executed by ITER DAs and contractors. ► Configurable plans for cost effective application of SQA processes. ► CODAC software plans SQAP, SVVP, SDP, and SCMP. ► CODAC software processes based on IEEE 12207-2008. -- Abstract: Software as an integral part of the plant system I and C is crucial in the manufacturing and integrated operation of ITER plant systems. Software Quality Assurance is necessary to ensure the development and maintenance of consistently high quality I and C software throughout the lifetime of ITER. CODAC decided to follow IEEE 12207-2008 software lifecycle processes for Software Engineering and Software Quality Assurance. Software Development Plan, Software Configuration Management Plan and Software Verification and Validation Plan are the mainstay of Software Quality Assurance which is documented in the Software Quality Assurance Plan. This paper describes the Software Quality Assurance (SQA) activities performed by CODAC. The SQA includes development and maintenance of above plans, processes and resources. With the help of Verification and Validation Teams they gather evidence of process conformance and product conformance, and record process data for quality audits and perform process improvements

  4. Sensitivity analysis on the effect of software-induced common cause failure probability in the computer-based reactor trip system unavailability

    International Nuclear Information System (INIS)

    Kamyab, Shahabeddin; Nematollahi, Mohammadreza; Shafiee, Golnoush

    2013-01-01

    Highlights: ► Importance and sensitivity analyses have been performed for a digitized reactor trip system. ► The results show acceptable trip unavailability for software failure probabilities below 1E-4. ► However, the Fussell–Vesely value indicates that software common cause failure is still risk significant. ► Diversity and effective testing are found beneficial in reducing the software contribution. - Abstract: The reactor trip system has been digitized in advanced nuclear power plants, since the programmable nature of computer-based systems has a number of advantages over non-programmable systems. However, software is still vulnerable to common cause failure (CCF). Residual software faults represent a CCF concern which threatens the achievements of digitization. This study attempts to assess the effectiveness of so-called defensive strategies against software CCF with respect to reliability. Sensitivity analysis has been performed by re-quantifying the models upon changing the software failure probability. Importance measures have then been estimated in order to reveal the specific contribution of software CCF to the trip failure probability. The results reveal the importance and effectiveness of signal and software diversity as applicable strategies to ameliorate inefficiencies due to software CCF in the reactor trip system (RTS). No significant change has been observed in the RTS failure probability for basic software CCF probabilities greater than 1 × 10⁻⁴. However, the related Fussell–Vesely importance has been greater than 0.005 for lower values. The study concludes that the risk associated with software-based systems depends on multiple variables, which must be weighed against each other in more precise and comprehensive studies.
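
    For readers unfamiliar with the importance measure used here, the snippet below computes a Fussell–Vesely value from minimal cut sets under the rare-event approximation. The cut sets and probabilities are invented toy numbers, not the paper's reactor trip system model.

```python
# Worked toy example of the Fussell-Vesely importance measure (rare-event approx.).
def fussell_vesely(cut_sets, probs, event):
    """cut_sets: list of sets of basic-event names; probs: event -> probability."""
    def cut_set_prob(cs):
        p = 1.0
        for e in cs:
            p *= probs[e]
        return p
    # system unavailability approximated by the sum of cut-set probabilities
    q_system = sum(cut_set_prob(cs) for cs in cut_sets)
    q_with_event = sum(cut_set_prob(cs) for cs in cut_sets if event in cs)
    return q_with_event / q_system

probs = {"sw_ccf": 1e-4, "hw_a": 1e-3, "hw_b": 1e-3}   # invented numbers
cut_sets = [{"sw_ccf"}, {"hw_a", "hw_b"}]
print(fussell_vesely(cut_sets, probs, "sw_ccf"))        # ~0.99: software CCF dominates
```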

  5. Configuration management plan for Machine Interface Test System (MITS)

    International Nuclear Information System (INIS)

    O'Neill, C.K.

    1980-01-01

    The discipline required by this plan will apply from the establishment of a configuration baseline until completion of the final test in the MITS. The plan applies to configured items of hardware and software as well as to the specifications and drawings for these items. The plan encompasses establishment of the facility baseline, interface definition, classes of change, change control, change paper, organizational responsibilities and relationships, test configuration (as opposed to facility), and configuration data retention

  6. The Ragnarok Software Development Environment

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    1999-01-01

    Ragnarok is an experimental software development environment that focuses on enhanced support for managerial activities in large scale software development taking the daily work of the software developer as its point of departure. The main emphasis is support in three areas: management, navigation......, and collaboration. The leitmotif is the software architecture, which is extended to handle managerial data in addition to source code; this extended software architecture is put under tight version- and configuration management control and furthermore used as basis for visualisation. Preliminary results of using...

  7. MODEL OF FUNCTIONING OF TELECOMMUNICATION EQUIPMENT FOR SOFTWARE-CONFIGURATED NETWORKS

    Directory of Open Access Journals (Sweden)

    Konstantin E. Samouylov

    2018-03-01

    Full Text Available A mathematical model of the functioning of a software defined network switch is constructed in the form of a queuing network consisting of two queuing systems: the first simulates the input data buffer and the device that reads information from the packet header; the second models the addressing table of the switch. The arrival of data in software defined networks is probabilistic, while its processing in communication channels and switching nodes is deterministic; the model of the switch is therefore built on the basis of queuing systems and networks. The stream of requests flowing into the network is divided into two Poisson streams of different request types: the first corresponds to packets arriving at the control port of the switch (from the controller), and the second to the remaining packets arriving at the switch. The flow of packets arriving from the controller has relative (non-preemptive) priority over the flow of the remaining packets. As a result, formulas are obtained for calculating performance indicators of this telecommunications equipment, such as the average waiting times for priority and non-priority requests and the probability of request loss at each phase of the switch. Based on these quality-of-service indicators, it is possible to assess the stability of switches in software defined networks under various information impacts.
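
    The flavour of such a model can be reproduced with the textbook non-preemptive priority M/M/1 formulas, as in the sketch below, where controller packets form the priority class. The arrival and service rates are illustrative values only, and the formulas shown are the standard ones rather than the paper's exact two-phase model.

```python
# Hedged numerical sketch: two-class non-preemptive priority M/M/1 waiting times.
def priority_mm1_waits(lam1, lam2, mu):
    """Mean queueing delays for class 1 (priority) and class 2 (ordinary) traffic."""
    rho1, rho2 = lam1 / mu, lam2 / mu
    assert rho1 + rho2 < 1, "switch would be overloaded"
    # mean residual service time seen by an arrival (exponential service times)
    residual = (lam1 + lam2) / mu**2
    w1 = residual / (1 - rho1)
    w2 = residual / ((1 - rho1) * (1 - rho1 - rho2))
    return w1, w2

# illustrative rates: 50 controller packets/s, 400 data packets/s, 1000 packets/s service
w_ctrl, w_data = priority_mm1_waits(lam1=50.0, lam2=400.0, mu=1000.0)
print(f"controller packets wait {w_ctrl*1e3:.3f} ms, data packets {w_data*1e3:.3f} ms")
```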

  8. Software evolution with XVCL

    DEFF Research Database (Denmark)

    Zhang, Weishan; Jarzabek, Stan; Zhang, Hongyu

    2004-01-01

    This chapter introduces software evolution with XVCL (XML-based Variant Configuration Language), which is an XML-based metaprogramming technique. As the software evolves, a large number of variants may arise, especially when such evolutions are related to multiple platforms, as shown in our... case study. Handling variants and tracing the impact of variants across the development lifecycle is a challenge. This chapter shows how we can maintain different versions of software in a reuse-based way....

  9. Human-machine interface software package

    International Nuclear Information System (INIS)

    Liu, D.K.; Zhang, C.Z.

    1992-01-01

    The Man-Machine Interface Software Package (MMISP) is designed to configure the console software of the PLS 60 MeV LINAC control system. The control system of the PLS 60 MeV LINAC is a distributed control system which includes the main computer (Intel 310), four local stations, and two sets of industrial-level console computers. The MMISP provides the operator with a display page editor and various I/O configurations such as digital signal in/out, analog signal in/out, and waveform TV graphic display, and interacts with the operator through graphic picture displays, voice explanation, and a touch panel. This paper describes its function and application. (author)

  10. Qualification of safety-critical software for digital reactor safety system in nuclear power plants

    International Nuclear Information System (INIS)

    Kwon, Kee-Choon; Park, Gee-Yong; Kim, Jang-Yeol; Lee, Jang-Soo

    2013-01-01

    This paper describes the software qualification activities for the safety-critical software of the digital reactor safety system in nuclear power plants. The main activities of the software qualification processes are the preparation of software planning documentations, verification and validation (V and V) of the software requirements specifications (SRS), software design specifications (SDS) and codes, and the testing of the integrated software and integrated system. Moreover, the software safety analysis and software configuration management are involved in the software qualification processes. The V and V procedure for SRS and SDS contains a technical evaluation, licensing suitability evaluation, inspection and traceability analysis, formal verification, software safety analysis, and an evaluation of the software configuration management. The V and V processes for the code are a traceability analysis, source code inspection, test case and test procedure generation. Testing is the major V and V activity of the software integration and system integration phases. The software safety analysis employs a hazard operability method and software fault tree analysis. The software configuration management in each software life cycle is performed by the use of a nuclear software configuration management tool. Through these activities, we can achieve the functionality, performance, reliability, and safety that are the major V and V objectives of the safety-critical software in nuclear power plants. (author)

  11. Representation of Industrial Knowledge - as a Basis for Developing and Maintaning Product Configurators

    DEFF Research Database (Denmark)

    Haug, Anders

    2008-01-01

    Abstract A product configurator is a software-based expert system that supports the user in the creation of product specifications by restricting how different components and properties may be combined. The use of product configurators has for several years provided many engineering-oriented comp......, not all configuration projects are successful, but in fact many fail or experience great problems during the course of the project. An important factor for the success of a configuration project is the quality of the methods, techniques and tools applied when extracting, representing and documenting...

  12. Creating and Testing Simulation Software

    Science.gov (United States)

    Heinich, Christina M.

    2013-01-01

    The goal of this project is to learn about the software development process, specifically the process to test and fix components of the software. The paper will cover the techniques of testing code, and the benefits of using one style of testing over another. It will also discuss the overall software design and development lifecycle, and how code testing plays an integral role in it. Coding is notorious for always needing to be debugged due to coding errors or faulty program design. Writing tests either before or during program creation that cover all aspects of the code provide a relatively easy way to locate and fix errors, which will in turn decrease the necessity to fix a program after it is released for common use. The backdrop for this paper is the Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI), a project whose goal is to simulate a launch using simulated models of the ground systems and the connections between them and the control room. The simulations will be used for training and to ensure that all possible outcomes and complications are prepared for before the actual launch day. The code being tested is the Programmable Logic Controller Interface (PLCIF) code, the component responsible for transferring the information from the models to the model Programmable Logic Controllers (PLCs), basic computers that are used for very simple tasks.

  13. Adapting Configuration Management for Agile Teams Balancing Sustainability and Speed

    CERN Document Server

    Moreira, Mario E

    2009-01-01

    Adapting Configuration Management for Agile Teams provides very tangible approaches on how Configuration Management with its practices and infrastructure can be adapted and managed in order to directly benefit agile teams. Written by Mario E. Moreira, author of Software Configuration Management Implementation Roadmap, columnist for the CM Crossroads online community and writer for the Agile Journal, this unique book provides concrete guidance on tailoring CM for Agile projects without sacrificing the principles of Configuration Management.

  14. Data-efficient performance learning for configurable systems

    DEFF Research Database (Denmark)

    Guo, Jianmei; Yang, Dingyu; Siegmund, Norbert

    2017-01-01

    Many software systems today are configurable, offering customization of functionality by feature selection. Understanding how performance varies in terms of feature selection is key for selecting appropriate configurations that meet a set of given requirements. Due to a huge configuration space... and the possibly high cost of performance measurement, it is usually not feasible to explore the entire configuration space of a configurable system exhaustively. It is thus a major challenge to accurately predict performance based on a small sample of measured system variants. To address this challenge, we... Results on 10 real-world configurable systems demonstrate the effectiveness and practicality of DECART. In particular, DECART achieves a prediction accuracy of 90% or higher based on a small sample, whose size is linear in the number of features. In addition, we propose a sample quality metric......
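
    The underlying idea, learning a regression tree from a small sample of measured configurations, can be sketched with generic scikit-learn as below. This is not the authors' DECART tool; the feature matrix and measured response times are made-up numbers.

```python
# Hedged sketch: predicting configuration performance with a CART regressor.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# rows: measured configurations (1 = feature selected, 0 = not selected)
X = np.array([
    [1, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 1, 1, 0],
])
y = np.array([12.1, 14.8, 20.3, 17.9, 22.5])   # measured response times (ms), invented

model = DecisionTreeRegressor(min_samples_leaf=1, random_state=0).fit(X, y)
print(model.predict([[1, 0, 1, 1]]))            # predict an unmeasured configuration
```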

  15. Software Defined Networking Demands on Software Technologies

    DEFF Research Database (Denmark)

    Galinac Grbac, T.; Caba, Cosmin Marius; Soler, José

    2015-01-01

    Software Defined Networking (SDN) is a networking approach based on a centralized control plane architecture with standardised interfaces between control and data planes. SDN enables fast configuration and reconfiguration of the network to enhance resource utilization and service performances....... This new approach enables a more dynamic and flexible network, which may adapt to user needs and application requirements. To this end, systemized solutions must be implemented in network software, aiming to provide secure network services that meet the required service performance levels. In this paper......, we review this new approach to networking from an architectural point of view, and identify and discuss some critical quality issues that require new developments in software technologies. These issues we discuss along with use case scenarios. Here in this paper we aim to identify challenges...

  16. The irace package: Iterated racing for automatic algorithm configuration

    Directory of Open Access Journals (Sweden)

    Manuel López-Ibáñez

    2016-01-01

    Full Text Available Modern optimization algorithms typically require the setting of a large number of parameters to optimize their performance. The immediate goal of automatic algorithm configuration is to find, automatically, the best parameter settings of an optimizer. Ultimately, automatic algorithm configuration has the potential to lead to new design paradigms for optimization software. The irace package is a software package that implements a number of automatic configuration procedures. In particular, it offers iterated racing procedures, which have been used successfully to automatically configure various state-of-the-art algorithms. The iterated racing procedures implemented in irace include the iterated F-race algorithm and several extensions and improvements over it. In this paper, we describe the rationale underlying the iterated racing procedures and introduce a number of recent extensions. Among these, we introduce a restart mechanism to avoid premature convergence, the use of truncated sampling distributions to handle correctly parameter bounds, and an elitist racing procedure for ensuring that the best configurations returned are also those evaluated in the highest number of training instances. We experimentally evaluate the most recent version of irace and demonstrate with a number of example applications the use and potential of irace, in particular, and automatic algorithm configuration, in general.
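
    To illustrate the racing idea in miniature, the toy sketch below evaluates candidate parameter settings instance by instance and eliminates candidates that fall clearly behind the incumbent. It is a drastically simplified Python stand-in; the real irace package is implemented in R, and its statistical elimination tests, sampling models, and elitist racing are not reproduced here.

```python
# Toy racing loop, loosely inspired by the idea behind iterated racing.
import random
import statistics

def target_algorithm(params, instance_seed):
    """Stand-in for the optimizer being tuned: lower cost is better."""
    random.seed(instance_seed)
    return (params["step"] - 0.3) ** 2 + random.gauss(0, 0.05)

def race(candidates, instances, drop_margin=0.05):
    scores = {i: [] for i in range(len(candidates))}
    alive = set(scores)
    for inst in instances:
        for i in list(alive):
            scores[i].append(target_algorithm(candidates[i], inst))
        best = min(statistics.mean(scores[i]) for i in alive)
        # eliminate candidates clearly worse than the current incumbent
        alive = {i for i in alive if statistics.mean(scores[i]) <= best + drop_margin}
    return [candidates[i] for i in alive]

candidates = [{"step": random.uniform(0, 1)} for _ in range(10)]
print(race(candidates, instances=range(20)))
```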

  17. Apple Configurator 2 (version 2.3)

    OpenAIRE

    Lara Lasner-Frater

    2018-01-01

    Apple Configurator 2 (AC2) is a free mass-deployment utility that allows you to update multiple iPads, iPhones, iPod Touch devices, and Apple TVs at the same time, including apps, website links, iBooks, and software updates.

  18. Software Validation in ATLAS

    International Nuclear Information System (INIS)

    Hodgkinson, Mark; Seuster, Rolf; Simmons, Brinick; Sherwood, Peter; Rousseau, David

    2012-01-01

    The ATLAS collaboration operates an extensive set of protocols to validate the quality of the offline software in a timely manner. This is essential in order to process the large amounts of data being collected by the ATLAS detector in 2011 without complications on the offline software side. We will discuss a number of different strategies used to validate the ATLAS offline software; running the ATLAS framework software, Athena, in a variety of configurations daily on each nightly build via the ATLAS Nightly System (ATN) and Run Time Tester (RTT) systems; the monitoring of these tests and checking the compilation of the software via distributed teams of rotating shifters; monitoring of and follow up on bug reports by the shifter teams and periodic software cleaning weeks to improve the quality of the offline software further.

  19. SWATCH Common software for controlling and monitoring the upgraded CMS Level-1 trigger

    CERN Document Server

    Lazaridis, Christos; Bunkowski, Karol; Codispoti, Giuseppe; Dirkx, Glenn; Ghabrous Larrea, Carlos; Lingemann, Joschka; Kreczko, Lukasz; Thea, Alessandro; Williams, Tom

    2017-01-01

    The Large Hadron Collider at CERN restarted in 2015 with a higher centre-of-mass energy of 13 TeV. The instantaneous luminosity is expected to increase significantly in the coming years. An upgraded Level-1 trigger system is being deployed in the CMS experiment in order to maintain the same efficiencies for searches and precision measurements as those achieved in the previous run. This system must be controlled and monitored coherently through software, with high operational efficiency.The legacy system is composed of approximately 4000 data processor boards, of several custom application-specific designs. These boards are organised into several subsystems; each subsystem receives data from different detector systems (calorimeters, barrel/endcap muon detectors), or with differing granularity. These boards have been controlled and monitored by a medium-sized distributed system of over 40 computers and 200 processes. Only a small fraction of the control and monitoring software was common between the different s...

  20. Visual data mining and analysis of software repositories

    NARCIS (Netherlands)

    Voinea, S.L.; Telea, A.C.

    2007-01-01

    In this article we describe an ongoing effort to integrate information visualization techniques into the process of configuration management for software systems. Our focus is to help software engineers manage the evolution of large and complex software systems by offering them effective and

  1. CVSgrab : Mining the History of Large Software Projects

    NARCIS (Netherlands)

    Voinea, S.L.; Telea, A.

    2006-01-01

    Many software projects use Software Configuration Management systems to support their development process. Such systems accumulate in time large amounts of information useful for process accounting and auditing. We study how software developers can get insight in this information in order to

  2. Improving the Customer Configuration Update Process by Explicitly Managing Software Knowledge

    NARCIS (Netherlands)

    Slinger, S.R.L.

    2006-01-01

    The implementation and continuous support of a software product at a customer with evolving requirements is a complex task for a product software vendor. There are many customers for the vendor to serve, all of whom might require their own version or variant of the application. Furthermore, the

  3. An XML-based configuration system for MAST PCS

    International Nuclear Information System (INIS)

    Storrs, J.; McArdle, G.

    2008-01-01

    MAST PCS, a port of General Atomics' generic Plasma Control System, is a large software system comprising many source files in C and IDL. Application parameters can affect multiple source files in complex ways, making code development and maintenance difficult. The MAST PCS configuration system aims to make the task of the application developer easier, through the use of XML-based configuration files and a configuration tool which processes them. It is presented here as an example of a useful technique with wide application
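
    As a generic illustration of the technique, the sketch below loads typed application parameters from an XML configuration file. The element and attribute names are invented; the actual MAST PCS schema and configuration tool are not shown.

```python
# Hedged sketch: driving application parameters from an XML configuration file.
import xml.etree.ElementTree as ET

CONFIG = """
<pcs>
  <category name="density">
    <parameter name="sample_rate" value="5000" type="int"/>
    <parameter name="gain" value="1.5" type="float"/>
  </category>
</pcs>
"""

def load_parameters(xml_text):
    casts = {"int": int, "float": float, "str": str}
    params = {}
    root = ET.fromstring(xml_text)
    for cat in root.findall("category"):
        for p in cat.findall("parameter"):
            key = f"{cat.get('name')}.{p.get('name')}"
            params[key] = casts[p.get("type", "str")](p.get("value"))
    return params

print(load_parameters(CONFIG))   # {'density.sample_rate': 5000, 'density.gain': 1.5}
```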

  4. DevLore - A Firmware Library and Web-Based Configuration Control Tool for Accelerator Systems Under Constant Development

    International Nuclear Information System (INIS)

    Richard Evans; Kevin Jordan; Deborah Gruber; Daniel Sexton

    2005-01-01

    The Free Electron Laser Project at Jefferson Lab is based on a comparatively small accelerator driver. As its systems continue to grow and evolve, strict configuration control has not been a programmatic goal. Conversely, as the IR-Demo FEL and the 10kW IR FEL have been built and operated, hardware and software changes have been a regular part of the machine development process. With relatively small component counts for sub-systems, changes occur without requiring much formal documentation, and in-situ alterations are commonplace in the name of supporting operations. This paper presents an overview of the web-based software tool called DevLore, which was first developed as a library for embedded programming and then became a tremendously effective tool for tracking all changes made to the machine hardware and software

  5. An intelligent sales assistant for configurable products

    OpenAIRE

    Molina, Martin

    2001-01-01

    Some of the recent proposals of web-based applications are oriented to provide advanced search services through virtual shops. Within this context, this paper proposes an advanced type of software application that simulates how a sales assistant dialogues with a consumer to dynamically configure a product according to particular needs. The paper presents the general knowledge model that uses artificial intelligence and knowledge-based techniques to simulate the configuration process. Finall...

  6. Studying and simulating transformer configuration to improve power quality

    Directory of Open Access Journals (Sweden)

    Oscar J. Peña Huaringa

    2011-06-01

    Full Text Available This paper presents a study and simulation of transformer configurations to improve power quality; it provides theoretical support based on the expansion of the Fourier series and analysis of symmetrical components. A test system was set up in the laboratory, taking measurements and checking configuration effectiveness in reducing the system’s harmonic content. The configurations were modelled with PSCAD / EMTDC software, using two 6 pulse rectifiers as test loads and two variable speed drives.

  7. XVCL: XML-based Variant Configuration Language

    DEFF Research Database (Denmark)

    Jarzabek, Stan; Basset, Paul; Zhang, Hongyu

    2003-01-01

    XVCL (XML-based Variant Configuration Language) is a meta-programming technique and tool that provides effective reuse mechanisms. XVCL is an open source software developed at the National University of Singapore. Being a modern and versatile version of Bassett's frames, a technology that has...

  8. A Managerial Perspective on Common Identity-based and Common Bond-based Groups in Non-governmental Organizations. Patterns of Interaction, Attachment and Social Network Configuration

    Directory of Open Access Journals (Sweden)

    Elena - Mădălina VĂTĂMĂNESCU

    2014-10-01

    Full Text Available The paper approaches the common identity and common bond theories in analyzing group patterns of interaction, their causes, processes and outcomes from a managerial perspective. The distinction between identity and bond refers to people's different reasons for being in a group, stressing whether they like the group as a whole (identity-based attachment) or they like individuals in the group (bond-based attachment). While members of common identity groups report feeling more attached to their group as a whole than to their fellow group members and tend to perceive others in the group as interchangeable, in bond-based attachment people feel connected to each other rather than to the group as a whole, with loyalty or attraction to the group stemming primarily from their attraction to certain members of the group. At this level, the main question concerns the particularities of common identity-based and common bond-based groups regarding social interaction, the participatory architecture of the group, and the levels of personal and work engagement in acting as a cohesive group. In order to address this issue pertinently, the current work focused on a qualitative research design comprising in-depth (semi-structured) interviews with several project coordinators from non-governmental organizations (NGOs). Also, to make the investigation more complex and clear, the research relied on social network analysis, which was indicative of the group dynamics and configuration, highlighting the differences between common identity-based and common bond-based groups.

  9. Software platform virtualization in chemistry research and university teaching.

    Science.gov (United States)

    Kind, Tobias; Leamy, Tim; Leary, Julie A; Fiehn, Oliver

    2009-11-16

    Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers, it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single host computer to execute multiple guest operating systems at the same time. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages, we have confirmed that the computational speed penalty for using virtual machines is low, around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance, the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs, especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics, as well as the lack of cheminformatics education at universities worldwide.

  10. Software qualification for digital safety system in KNICS project

    International Nuclear Information System (INIS)

    Kwon, Kee-Choon; Lee, Dong-Young; Choi, Jong-Gyun

    2012-01-01

    In order to achieve technical self-reliance in the area of nuclear instrumentation and control, the Korea Nuclear Instrumentation and Control System (KNICS) project ran for seven years from 2001. The safety-grade Programmable Logic Controller (PLC) and the digital safety system were developed in the KNICS project. All the software of the PLC and the digital safety system was developed and verified following the software development life cycle Verification and Validation (V and V) procedure. The main activities of the V and V process are the preparation of software planning documentation, verification of the Software Requirement Specification (SRS), Software Design Specification (SDS) and code, and the testing of the software components, the integrated software, and the integrated system. In addition, software safety analysis and software configuration management are included in these activities. For the software safety analysis at the SRS and SDS phases, a software Hazard Operability (HAZOP) study was performed and then software fault tree analysis was applied. The software fault tree analysis was applied to a part of the software module with some critical defects identified by the software HAZOP in the SDS phase. The software configuration management was performed using the in-house tool developed in the KNICS project. (author)

  11. Some aspects of configuration management at Nuclear Power Plant Krsko

    International Nuclear Information System (INIS)

    Heruc, Z.; Podhraski, M.

    2000-01-01

    Configuration Management (CM) objectives at NEK are to ensure consistency between design requirements, the physical plant configuration, and configuration information. Software applications supporting the Design Change, Work Control and Document Control processes are integrated in one module-oriented Management Information System (MIS). From a configuration management perspective, the Master Equipment Component List (MECL) database is the central MIS module. Through a combination of a centralized database and process-migrated activities (modifications, plant operation, maintenance, document control, etc.), it is ensured that the CM principles and requirements (accurate, current design data matching the plant's physical configuration while complying with applicable requirements) are followed and fulfilled. (author)

  12. Examining software complexity and quality for scientific software

    International Nuclear Information System (INIS)

    Kelly, D.; Shepard, T.

    2005-01-01

    Research has not found a simple relationship between software complexity and software quality, and in particular no relationship between commonly used software complexity metrics and the occurrence of software faults. A study with an example of scientific software from the nuclear power industry illustrates the importance of addressing cognitive complexity, the complexity related to understanding the intellectual content of the software. Simple practices such as aptly-named variables contribute more to high quality software than limiting code sizes. This paper examines the research into complexity and quality and reports on a longitudinal study using the example of nuclear software. (author)

  13. A LabVIEW® based generic CT scanner control software platform.

    Science.gov (United States)

    Dierick, M; Van Loo, D; Masschaele, B; Boone, M; Van Hoorebeke, L

    2010-01-01

    UGCT, the Centre for X-ray tomography at Ghent University (Belgium) does research on X-ray tomography and its applications. This includes the development and construction of state-of-the-art CT scanners for scientific research. Because these scanners are built for very different purposes they differ considerably in their physical implementations. However, they all share common principle functionality. In this context a generic software platform was developed using LabVIEW® in order to provide the same interface and functionality on all scanners. This article describes the concept and features of this software, and its potential for tomography in a research setting. The core concept is to rigorously separate the abstract operation of a CT scanner from its actual physical configuration. This separation is achieved by implementing a sender-listener architecture. The advantages are that the resulting software platform is generic, scalable, highly efficient, easy to develop and to extend, and that it can be deployed on future scanners with minimal effort.
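
    The core separation described above can be illustrated with a tiny publish/subscribe sketch: generic scan logic sends abstract messages on a bus, and whatever hardware drivers are present on a particular scanner subscribe to them. The sketch is plain Python rather than LabVIEW, and all names are illustrative.

```python
# Minimal sender-listener (publish/subscribe) sketch of the abstract-scanner idea.
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self._listeners = defaultdict(list)

    def subscribe(self, topic, callback):
        self._listeners[topic].append(callback)

    def send(self, topic, **payload):
        for callback in self._listeners[topic]:
            callback(**payload)

bus = MessageBus()
# a driver for one physical configuration registers itself as a listener
bus.subscribe("stage.move", lambda axis, mm: print(f"driver: move {axis} by {mm} mm"))
# the generic scan logic only ever talks to the bus, never to the hardware directly
bus.send("stage.move", axis="rotation", mm=0.25)
```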

  14. PyBus -- A Python Software Bus

    OpenAIRE

    Lavrijsen, W

    2005-01-01

    A software bus, just like its hardware equivalent, allows for the discovery, installation, configuration, loading, unloading, and run-time replacement of software components, as well as channeling of inter-component communication. Python, a popular open-source programming language, encourages a modular design on software written in it, but it offers little or no component functionality. However, the language and its interpreter provide sufficient hooks to implement a thin, integral layer...

  15. Strategic Business-IT alignment of application software packages: Bridging the Information Technology gap

    Directory of Open Access Journals (Sweden)

    Wandi Kruger

    2012-09-01

    Full Text Available An application software package implementation is a complex endeavour, and as such it requires the proper understanding, evaluation and redefining of the current business processes to ensure that the implementation delivers on the objectives set at the start of the project. Numerous factors exist that may contribute to the unsuccessful implementation of application software packages. However, the most significant contributor to the failure of an application software package implementation lies in the misalignment of the organisation’s business processes with the functionality of the application software package. Misalignment is attributed to a gap that exists between the business processes of an organisation and what functionality the application software package has to offer to translate the business processes of an organisation into digital form when implementing and configuring an application software package. This gap is commonly referred to as the information technology (IT gap. This study proposes to define and discuss the IT gap. Furthermore this study will make recommendations for aligning the business processes with the functionality of the application software package (addressing the IT gap. The end result of adopting these recommendations will be more successful application software package implementations.

  16. QCI Common

    Energy Technology Data Exchange (ETDEWEB)

    2016-11-18

    There are many common software patterns and utilities for the ORNL Quantum Computing Institute that can and should be shared across projects. Otherwise we find duplication of code, which adds unwanted complexity. This software product seeks to alleviate this by providing common utilities such as object factories, graph data structures, parameter input mechanisms, etc., for other software products within the ORNL Quantum Computing Institute. This work enables pure basic research, has no export-controlled utilities, and has no real commercial value.
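
    As an example of the kind of utility listed above, the sketch below shows a simple object factory in Python: classes register themselves under a name and are later instantiated from configuration strings. It is a generic pattern sketch, not code from the QCI Common product.

```python
# Generic object-factory pattern sketch (names are illustrative).
class Factory:
    _registry = {}

    @classmethod
    def register(cls, name):
        def decorator(klass):
            cls._registry[name] = klass
            return klass
        return decorator

    @classmethod
    def create(cls, name, *args, **kwargs):
        return cls._registry[name](*args, **kwargs)

@Factory.register("dense_graph")
class DenseGraph:
    def __init__(self, n):
        self.adj = [[0] * n for _ in range(n)]

graph = Factory.create("dense_graph", 4)   # instantiate by configuration string
print(len(graph.adj))
```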

  17. A File Based Visualization of Software Evolution

    NARCIS (Netherlands)

    Voinea, S.L.; Telea, A.

    2006-01-01

    Software Configuration Management systems are important instruments for supporting development of large software projects. They accumulate large amounts of evolution data that can be used for process accounting and auditing. We study how visualization can help developers and managers to get insight

  18. Software verification and validation methodology for advanced digital reactor protection system using diverse dual processors to prevent common mode failure

    International Nuclear Information System (INIS)

    Son, Ki Chang; Shin, Hyun Kook; Lee, Nam Hoon; Baek, Seung Min; Kim, Hang Bae

    2001-01-01

    The Advanced Digital Reactor Protection System (ADRPS) with diverse dual processors is being developed by the National Research Lab of KOPEC for ADRPS development. One of the ADRPS goals is to develop digital Plant Protection System (PPS) free of Common Mode Failure (CMF). To prevent CMF, the principle of diversity is applied to both hardware design and software design. For the hardware diversity, two different types of CPUs are used for Bistable Processor and Local Coincidence Logic Processor. The VME based Single Board Computers (SBC) are used for the CPU hardware platforms. The QNX Operating System (OS) and the VxWorks OS are used for software diversity. Rigorous Software Verification and Validation (V and V) is also required to prevent CMF. In this paper, software V and V methodology for the ADRPS is described to enhance the ADRPS software reliability and to assure high quality of the ADRPS software

  19. Software system safety

    Science.gov (United States)

    Uber, James G.

    1988-01-01

    Software itself is not hazardous, but since software and hardware share common interfaces there is an opportunity for software to create hazards. Further, these software systems are complex, and proven methods for the design, analysis, and measurement of software safety are not yet available. Some past software failures, future NASA software trends, software engineering methods, and tools and techniques for various software safety analyses are reviewed. Recommendations to NASA are made based on this review.

  20. Reliability of software

    International Nuclear Information System (INIS)

    Kopetz, H.

    1980-01-01

    Common factors and differences in the reliability of hardware and software; reliability increase by means of methods of software redundancy. Maintenance of software for long term operating behavior. (HP) [de

  1. Software quality assurance - seven years experience

    International Nuclear Information System (INIS)

    Malsbury, J.A.

    1987-01-01

    This paper describes seven years experience with software quality assurance at PPPL. It covers the early attempts of 1980 and 1981 to establish software quality assurance; the first attempt of 1982 to develop a complete software quality assurance plan; the significant modifications of this plan in 1985; and the future. In addition, the paper describes the role of the Quality Assurance organization within each plan. The scope of this paper is limited to discussions of the software development procedures used in the seven year period. Other software quality topics, such as configuration control or problem identification and resolution, are not discussed

  2. A General Water Resources Regulation Software System in China

    Science.gov (United States)

    LEI, X.

    2017-12-01

    To avoid repeated development of core modules for normal and emergency water resources regulation, and to improve the maintainability and upgradability of regulation models and business logic, a general water resources regulation software framework was developed based on the collection and analysis of common demands for water resources regulation and emergency management. It provides a customizable, extensible software framework open to secondary development for the three-level platform "MWR-Basin-Province". Meanwhile, this general software system can realize business collaboration and information sharing of water resources regulation schemes among the three-level platforms, so as to improve the decision-making ability of national water resources regulation. There are four main modules involved in the general software system: 1) a complete set of general water resources regulation modules that allows secondary developers to custom-develop water resources regulation decision-making systems; 2) a complete set of model bases and model computing software released in the form of cloud services; 3) a complete set of tools to build the concept map and model system of basin water resources regulation, as well as a model management system to calibrate and configure model parameters; 4) a database which satisfies the business functions and functional requirements of general water resources regulation software and can finally provide technical support for building basin or regional water resources regulation models.

  3. The NOvA software testing framework

    International Nuclear Information System (INIS)

    Tamsett, M; Group, C

    2015-01-01

    The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study νe appearance in a νμ beam. NOvA has already produced more than one million Monte Carlo and detector generated files amounting to more than 1 PB in size. This data is divided between a number of parallel streams such as far and near detector beam spills, cosmic ray backgrounds, a number of data-driven triggers and over 20 different Monte Carlo configurations. Each of these data streams must be processed through the appropriate steps of the rapidly evolving, multi-tiered, interdependent NOvA software framework. In total there are greater than 12 individual software tiers, each of which performs a different function and can be configured differently depending on the input stream. In order to regularly test and validate that all of these software stages are working correctly NOvA has designed a powerful, modular testing framework that enables detailed validation and benchmarking to be performed in a fast, efficient and accessible way with minimal expert knowledge. The core of this system is a novel series of python modules which wrap, monitor and handle the underlying C++ software framework and then report the results to a slick front-end web-based interface. This interface utilises modern, cross-platform, visualisation libraries to render the test results in a meaningful way. They are fast and flexible, allowing for the easy addition of new tests and datasets. In total upwards of 14 individual streams are regularly tested amounting to over 70 individual software processes, producing over 25 GB of output files. The rigour enforced through this flexible testing framework enables NOvA to rapidly verify configurations, results and software and thus ensure that data is available for physics analysis in a timely and robust manner. (paper)
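
    A stripped-down version of that wrapping idea is sketched below: a Python function runs one tier of an underlying framework as a subprocess, records its exit status and timing, and appends a result record that a web front-end could later render. The executable, options, and JSON-lines report format are assumptions for illustration, not the NOvA framework's actual interfaces.

```python
# Hedged sketch: wrap one processing tier, capture its outcome, report a record.
import json
import subprocess
import time

def run_tier(name, command, output_file):
    start = time.time()
    proc = subprocess.run(command, capture_output=True, text=True)
    result = {
        "tier": name,
        "command": command,
        "returncode": proc.returncode,
        "wall_seconds": round(time.time() - start, 1),
        "passed": proc.returncode == 0,
    }
    with open(output_file, "a") as f:
        f.write(json.dumps(result) + "\n")   # one JSON line per validated tier
    return result

# Illustrative call only; the executable and options are assumed, not NOvA's real ones:
# run_tier("reco_near_beam", ["art", "-c", "reco_job.fcl"], "validation_report.jsonl")
```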

  4. LDUA software custodian's notebook

    International Nuclear Information System (INIS)

    Aftanas, B.L.

    1998-01-01

    This plan describes the activities to be performed and controls to be applied to the process of specifying, obtaining, and qualifying the control and data acquisition software for the Light Duty Utility Arm (LDUA) System. It serves the purpose of a software quality assurance plan, a verification and validation plan, and a configuration management plan. This plan applies to all software that is an integral part of the LDUA control and data acquisition system, that is, software that is installed in the computers that are part of the LDUA system as it is deployed in the field. This plan applies to the entire development process, including: requirements; design; implementation; and operations and maintenance. This plan does not apply to any software that is not integral with the LDUA system. This plan has been prepared in accordance with WHC-CM-6-1 Engineering Practices, EP-2.1; WHC-CM-3-10 Software Practices; and WHC-CM-4-2, QR 19.0, Software Quality Assurance Requirements

  5. System Configuration Management Implementation Procedure for the Cold Vacuum Drying Facility Monitoring and Control System

    International Nuclear Information System (INIS)

    ANGLESEY, M.O.

    2000-01-01

    The purpose of this document is to establish the System Configuration Management Implementation Procedure (SCMIP) for the Cold Vacuum Drying Facility (CVDF) Monitoring and Control System (MCS). This procedure provides configuration management for the process control system. The process control system consists of equipment hardware and software that controls and monitors the instrumentation and equipment associated with the CVDF processes. Refer to SNF-3090, Cold Vacuum Drying Facility Monitoring and Control System Design Description; HNF-3553, Annex B, Safety Analysis Report for the Cold Vacuum Drying Facility; and AP-CM-6-037-00, SNF Project Process Automation Software and Equipment Configuration. This SCMIP identifies and defines the system configuration items in the control system, provides configuration control throughout the system life cycle, provides configuration status accounting as well as physical protection and control, and verifies the completeness and correctness of these items

  6. Framework for Small-Scale Experiments in Software Engineering: Guidance and Control Software Project: Software Engineering Case Study

    Science.gov (United States)

    Hayhurst, Kelly J.

    1998-01-01

    Software is becoming increasingly significant in today's critical avionics systems. To achieve safe, reliable software, government regulatory agencies such as the Federal Aviation Administration (FAA) and the Department of Defense mandate the use of certain software development methods. However, little scientific evidence exists to show a correlation between software development methods and product quality. Given this lack of evidence, a series of experiments has been conducted to understand why and how software fails. The Guidance and Control Software (GCS) project is the latest in this series. The GCS project is a case study of the Requirements and Technical Concepts for Aviation RTCA/DO-178B guidelines, Software Considerations in Airborne Systems and Equipment Certification. All civil transport airframe and equipment vendors are expected to comply with these guidelines in building systems to be certified by the FAA for use in commercial aircraft. For the case study, two implementations of a guidance and control application were developed to comply with the DO-178B guidelines for Level A (critical) software. The development included the requirements, design, coding, verification, configuration management, and quality assurance processes. This paper discusses the details of the GCS project and presents the results of the case study.

  7. Modelling Configuration Knowledge in Heterogeneous Product Families

    DEFF Research Database (Denmark)

    Queva, Matthieu Stéphane Benoit; Männistö, Tomi; Ricci, Laurent

    2011-01-01

    Product configuration systems play an important role in the development of Mass Customisation. The configuration of complex product families may nowadays involve multiple design disciplines, e.g. hardware, software and services. In this paper, we present a conceptual approach for modelling the variability in such heterogeneous product families. Our approach is based on a framework that aims to cater for the different stakeholders involved in the modelling and management of the product family. The modelling approach is centred around the concepts of views, types and constraints and is illustrated by a motivation example. Furthermore, as a proof of concept, a prototype has been implemented for configuring a non-trivial heterogeneous product family.
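
    The modelling approach is built around views, types and constraints over a heterogeneous product family. A toy sketch of checking a configuration against view membership and one cross-discipline constraint (the views, options and the constraint are invented for illustration, not taken from the paper):

      # Hypothetical product family: each configuration picks one option per view.
      views = {
          "hardware": {"cpu_basic", "cpu_fast"},
          "software": {"firmware_v1", "firmware_v2"},
          "service":  {"standard_support", "premium_support"},
      }

      # Constraints expressed as predicates over a chosen configuration.
      constraints = [
          # The fast CPU is only supported by the newer firmware.
          lambda cfg: not (cfg["hardware"] == "cpu_fast" and cfg["software"] == "firmware_v1"),
      ]

      def is_valid(cfg):
          """Valid if every chosen option belongs to its view and all constraints hold."""
          return (all(cfg[view] in options for view, options in views.items())
                  and all(check(cfg) for check in constraints))

      print(is_valid({"hardware": "cpu_fast", "software": "firmware_v2", "service": "standard_support"}))  # True
      print(is_valid({"hardware": "cpu_fast", "software": "firmware_v1", "service": "standard_support"}))  # False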

  8. Siroco, a configurable robot control system

    International Nuclear Information System (INIS)

    Tejedor, B.G.; Maraggi, G.J.B.

    1988-01-01

    The SIROCO (Configurable Robot Control System) is an electronic system designed for applications where mechanized remote control equipment and robots are necessary, especially in Nuclear Power Plants. The structure of the system (hardware and software) gives the user the following characteristics: a) reduction in the time spent in NDT and in the radiation doses absorbed, due to remote control operation; b) the possibility of full automation of NDT; c) simultaneous control of up to six axes and the ability to generate movements in remote areas; and d) the possibility of equipment unification, since SIROCO is a configurable system. (author)

  9. EOS MLS Level 2 Data Processing Software Version 3

    Science.gov (United States)

    Livesey, Nathaniel J.; VanSnyder, Livesey W.; Read, William G.; Schwartz, Michael J.; Lambert, Alyn; Santee, Michelle L.; Nguyen, Honghanh T.; Froidevaux, Lucien; Wang, Shuhui; Manney, Gloria L.

    2011-01-01

    This software accepts the EOS MLS calibrated microwave radiance products and operational meteorological data, and produces a set of estimates of atmospheric temperature and composition. This version has been designed to be as flexible as possible. The software is controlled by a Level 2 Configuration File that controls all aspects of the software: defining the contents of state and measurement vectors, defining the configurations of the various forward models available, reading appropriate a priori, spectroscopic and calibration data, performing retrievals, post-processing results, computing diagnostics, and outputting results in appropriate files. In production mode, the software operates in a parallel form, with one instance of the program acting as a master, coordinating the work of multiple slave instances on a cluster of computers, each computing the results for individual chunks of data. In addition to performing conventional retrieval calculations and producing geophysical products, the Level 2 Configuration File can instruct the software to produce files of simulated radiances based on a state vector formed from a set of geophysical product files taken as input. Combining both the retrieval and simulation tasks in a single piece of software makes it far easier to ensure that identical forward model algorithms and parameters are used in both tasks. This also dramatically reduces the complexity of the code maintenance effort.
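
    The key design point is a single configuration file that selects between retrieval and radiance-simulation tasks while both tasks share the same forward model code. A schematic sketch of that idea (the keys and function names are illustrative only and are not the actual Level 2 Configuration File syntax):

      def forward_model(state):
          """Shared forward model: maps a state vector to simulated radiances (toy stand-in)."""
          return [2.0 * x + 1.0 for x in state]

      def run_retrieval(config):
          # Toy 'retrieval': start from the a priori state and compare against measurements.
          state = list(config["a_priori_state"])
          residual = [m - f for m, f in zip(config["measured_radiances"], forward_model(state))]
          return {"retrieved_state": state, "residual": residual}

      def run_simulation(config):
          # Simulation mode reuses exactly the same forward model code.
          return {"simulated_radiances": forward_model(config["input_state"])}

      config = {
          "mode": "simulation",               # or "retrieval"
          "input_state": [1.0, 2.0, 3.0],
          "a_priori_state": [0.0, 0.0, 0.0],
          "measured_radiances": [3.1, 4.9, 7.2],
      }

      task = run_simulation if config["mode"] == "simulation" else run_retrieval
      print(task(config))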

  10. KTM Tokamak operation scenarios software infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Pavlov, V.; Baystrukov, K.; Golobkov, YU.; Ovchinnikov, A.; Meaentsev, A.; Merkulov, S.; Lee, A. [National Research Tomsk Polytechnic University, Tomsk (Russian Federation); Tazhibayeva, I.; Shapovalov, G. [National Nuclear Center (NNC), Kurchatov (Kazakhstan)

    2014-10-15

    One of the largest problems for tokamak devices such as the Kazakhstan Tokamak for Material Testing (KTM) is the development and execution of operation scenarios. Operation scenarios may change often, so a convenient hardware and software solution is required for scenario management and execution. Dozens of diagnostic and control subsystems with numerous configuration settings may be used in an experiment, so the subsystem configuration process must be automated in order to coordinate changes of related settings and to prevent errors. Most of the diagnostic and control subsystem software at KTM was unified using an extra software layer describing the hardware abstraction interface. The experiment sequence was described using a command language. The whole infrastructure was brought together by a universal communication protocol supporting various media, including Ethernet and serial links. The operation sequence execution infrastructure was used at KTM to carry out plasma experiments.

  11. The study of a space configuration using space syntax analysis Case study: an elderly housing

    Science.gov (United States)

    Mariana, Yosica; Triwardhani, Arindra J.; Isnaeni Djimantoro, Michael

    2017-12-01

    Improvements in various aspects of life have prolonged the human life span, which in turn increases the number of elderly people in urban areas. This growing population is not yet supported by adequate housing facilities. Most elderly housing in Jakarta is designed as if for the general population, without recognizing the physical and mental decline that comes with age. Elderly housing therefore needs to be designed with special attention to the mobility demanded by residents' daily activities, expressed through an effective room configuration. The connectivity between activities is the most important element in ordering the room configuration. This research searches for a room configuration for elderly housing that can improve residents' productivity and quality of life by using space syntax theory. The method applies a syntactic plug-in in the Grasshopper software and analyses the integration, choice, control value and entropy of the activity configuration. The results show that a clustered, centralized pattern is the most effective and efficient for elderly housing. The lobby and reception play an important role in integration and in spatial awareness of elderly activity.

  12. A new approach for ATLAS Athena job configuration

    CERN Document Server

    Lampl, Walter; The ATLAS collaboration

    2018-01-01

    The offline software framework of the ATLAS experiment (Athena) consists of many small components of various types like Algorithm, Tool or Service. To assemble these components into an executable application for event processing, a dedicated configuration step is necessary. The configuration of a particular job depends on the workflow (simulation, reconstruction, high-level trigger, overlay, calibration, analysis ...) and the input data (real or simulated data, beam-energy, ...) leading to a large number of possible configurations. The configuration step is done by executing python code. The resulting configuration depends on optionally pre-set flags as well as meta-data about the data to be processed that is found by peeking into the input file and even into databases. For the python configuration code, there is almost no structure enforced, leaving the full power of python to the user. While this approach did work, it also proved to be error prone and complicated to use. It also leads to jobs containing mor...
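
    Athena configuration is driven by python code that assembles Algorithm, Tool and Service components according to pre-set flags and input metadata. Purely as an illustration of that pattern (the class names, flags and component names below are invented and are not the actual Athena configuration API):

      class Component:
          """Generic stand-in for an Algorithm, Tool or Service with named properties."""
          def __init__(self, type_name, name, **properties):
              self.type_name, self.name, self.properties = type_name, name, properties

      def configure_job(flags):
          """Build the list of components for a job from flags and input-file metadata."""
          components = [Component("Service", "EventStore")]
          if flags["is_simulation"]:
              components.append(Component("Algorithm", "DigitizationAlg",
                                          BeamEnergyGeV=flags["beam_energy_gev"]))
          components.append(Component("Algorithm", "ReconstructionAlg",
                                      DoHighLevelTrigger=flags["workflow"] == "trigger"))
          return components

      flags = {"workflow": "reconstruction", "is_simulation": True, "beam_energy_gev": 6500}
      for c in configure_job(flags):
          print(c.type_name, c.name, c.properties)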

  13. Licensing of safety critical software for nuclear reactors. Common position of seven European nuclear regulators and authorised technical support organisations

    International Nuclear Information System (INIS)

    2007-01-01

    premise that a safety plan exists and has been agreed upon by all parties involved. The intent herein is to give guidance on how to produce the evidence and the documentation for the safety demonstration and for the contents of the safety plan. It is therefore implied that all the evidence and documentation recommended by this report, among others that the regulator may request, should be made available to the regulator. The safety plan should include a safety demonstration strategy. It is a stepwise verification which includes: - an analysis of each individual software and hardware component with its specified features, and - integrated tests of the software on a hardware system using a 'typical' configuration. Only properties at the component level can be demonstrated by this plant independent type approval. It must be remembered that a program can be correct for one set of data, and be erroneous for another. Hence assessment and testing of the plant specific software remains essential. As described earlier, in a first stage, the task force selected a set of specific technical issue areas, which were felt to be of utmost importance to the licensing process. In a second stage, each of these issue areas was studied and discussed in detail until a common position was reached. These issue areas were partitioned into two sets: 'Generic Licensing Issues' and 'Life Cycle Phase Licensing Issues'. Issues in the second set are related to a specific stage of the computer based system design and development process, while those of the first set have more general implications and apply to several stages or to the whole system life cycle

  14. TMACS test procedure TP005: Sensor configuration, logging, and data conversion. Revision 4

    International Nuclear Information System (INIS)

    Washburn, S.J.

    1994-01-01

    The TMACS Software Project Test Procedures translate the project's acceptance criteria into test steps. Software releases are certified when the affected Test Procedures are successfully performed and the customers authorize installation of these changes. This Test Procedure addresses the sensor configuration, conversion and logging requirements of the TMACS. The features to be tested are as follows: sensor configuration data; conversion of continuous sensor data to engineering units; conversion of digital data to discrete states; discrete sensor data logging; and continuous sensor data logging
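
    Two of the features under test are the conversion of continuous sensor readings to engineering units and the conversion of digital inputs to discrete states. A generic sketch of those conversions (the scaling factors and state names are illustrative and are not taken from the TMACS configuration):

      def to_engineering_units(raw_counts, raw_min=0, raw_max=4095, eu_min=0.0, eu_max=300.0):
          """Linearly scale a raw ADC reading to engineering units (e.g. degrees C)."""
          fraction = (raw_counts - raw_min) / (raw_max - raw_min)
          return eu_min + fraction * (eu_max - eu_min)

      def to_discrete_state(bit_value, states=("NORMAL", "ALARM")):
          """Map a digital input bit to a named discrete state."""
          return states[1] if bit_value else states[0]

      # Example log records for one continuous and one discrete sensor
      print("TI-101 =", round(to_engineering_units(2048), 1), "degC")
      print("LSH-202 =", to_discrete_state(1))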

  15. Software Considerations for Subscale Flight Testing of Experimental Control Laws

    Science.gov (United States)

    Murch, Austin M.; Cox, David E.; Cunningham, Kevin

    2009-01-01

    The NASA AirSTAR system has been designed to address the challenges associated with safe and efficient subscale flight testing of research control laws in adverse flight conditions. In this paper, software elements of this system are described, with an emphasis on components which allow for rapid prototyping and deployment of aircraft control laws. Through model-based design and automatic coding a common code-base is used for desktop analysis, piloted simulation and real-time flight control. The flight control system provides the ability to rapidly integrate and test multiple research control laws and to emulate component or sensor failures. Integrated integrity monitoring systems provide aircraft structural load protection, isolate the system from control algorithm failures, and monitor the health of telemetry streams. Finally, issues associated with software configuration management and code modularity are briefly discussed.

  16. Air Force Space Command. Space and Missile Systems Center Standard. Configuration Management

    Science.gov (United States)

    2008-06-13

    Engineering Drawing Practices; IEEE STD 610.12, Glossary of Software Engineering Terminology, September 28, 1990; ISO/IEC 12207, Software Life...item, regardless of media, formally designated and fixed at a specific time during the configuration item’s life cycle. (Source: ISO/IEC 12207

  17. Design and configuration of VME EPICS driver for He RFQ LLRF control system

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Tae-Sung; Jeong, Hae-Seong; Kim, Seong-Gu; Song, Young-Gi; Kim, Han-Sung; Seol, Kyung-Tae; Kwon, Hyeok-Jung; Cho, Yong-Sub [KOMAC, Gyeongju (Korea, Republic of)

    2015-05-15

    In the development of the Helium Radio-Frequency Quadrupole (He RFQ), the high-power Radio-Frequency (RF) plays a very important role because it is responsible for stable delivery and efficient acceleration of the beam. Since the amplitude control requirement of the LLRF system is ±1% in amplitude, a precise remote control system is needed; this system is referred to as the Low-Level RF (LLRF) control system. This paper describes the basic configuration tasks performed on the hardware side and the software side to build the LLRF control system, and outlines future work on the He RFQ LLRF control system based on this configuration. LLRF control system development at the He RFQ development stage is important and requires an exact configuration of hardware and software. On the software side, the layer configuration for each hardware module is completed through the following steps: VxWorks operating system installation, EPICS Base compilation, compilation of the module source code, object file loading and execution on VxWorks, EPICS IOC operation checks, etc.

  18. Configurable 3D rotational X-ray reconstruction

    NARCIS (Netherlands)

    Nguyen, Xuan Huy

    2012-01-01

    This report is one of the deliverables of the project "Configurable 3D Rotational X-ray Reconstruction", carried out by the author as the final part of the Professional Doctorate in Engineering (PDEng) degree program in Software Technology provided by Eindhoven University of Technology and Stan

  19. Fuel control device for various gas turbine configurations

    Energy Technology Data Exchange (ETDEWEB)

    Stearns, C F; Tutherly, H W

    1980-09-29

    The hydromechanical fuel control device can be adapted to various engine configurations, for example turbofan, turboprop and turboshaft engines, by providing the elements common to all engine configurations in the main housing, together with a detachable block for each individual configuration containing all the control elements and flow channels needed for that configuration.

  20. SAGA: A project to automate the management of software production systems

    Science.gov (United States)

    Campbell, Roy H.; Laliberte, D.; Render, H.; Sum, R.; Smith, W.; Terwilliger, R.

    1987-01-01

    The Software Automation, Generation and Administration (SAGA) project is investigating the design and construction of practical software engineering environments for developing and maintaining aerospace systems and applications software. The research includes the practical organization of the software lifecycle, configuration management, software requirements specifications, executable specifications, design methodologies, programming, verification, validation and testing, version control, maintenance, the reuse of software, software libraries, documentation, and automated management.

  1. Automatic creation of simulation configuration. The SIPA workshop: SWORD

    International Nuclear Information System (INIS)

    Oudot, G.; Valembois, A.

    1994-01-01

    SWORD (Software Workshop Oriented towards Research and Development) is not only a software management system but also, and mainly, a software development system. The SWORD workshop is organised in hierarchical levels: (1) automatic or manual creation of elementary models based on standard ANSI FORTRAN; these models have interface variables structured in so-called connection points. Automatic model generators are used for the simulation of standard, repeated equipment: HYTHERNET covers the simulation of hydraulics, thermal behaviour, chemistry and activity; CONTRONET covers the simulation of I and C systems, i.e. logic, protection and control systems. The capture of system topology for both generators is carried out on a graphic workstation under a CAD system. (2) The model assembly generator, in charge of linking models (via connection points) and organizing their calling sequence in order to create a simulation application. (3) The configurations, in charge of creating the external environment and the links between the model assembly and that environment (connection with the control desk, plant computer system, safety parameter display, etc.). (4) The configuration generator, which exports the simulation configuration to the target machine and generates the appropriate commands for compilation and link editing. The workshop administration ensures overall management; consistency checks are carried out at each step, with warnings generated when applicable, and the appropriate commands are chained automatically on the engineer's request. (orig.) (4 refs., 4 figs.)
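
    The central idea is that elementary models expose interface variables grouped into connection points, and an assembly generator links matching connection points and orders the models' calling sequence. A toy illustration of that linking step (the model and variable names are invented and are not SWORD's actual data structures):

      # Each model declares connection points: named sets of interface variables.
      models = {
          "pump":  {"outlet": {"flow", "pressure"}},
          "valve": {"inlet":  {"flow", "pressure"}, "outlet": {"flow", "pressure"}},
      }

      # The assembly links one model's connection point to another's.
      links = [("pump", "outlet", "valve", "inlet")]

      def check_links(models, links):
          """A link is consistent if both connection points expose the same interface variables.

          Returns a simple calling sequence: models in the order they appear in the links.
          """
          order = []
          for src, src_cp, dst, dst_cp in links:
              if models[src][src_cp] != models[dst][dst_cp]:
                  raise ValueError(f"incompatible connection {src}.{src_cp} -> {dst}.{dst_cp}")
              for model in (src, dst):
                  if model not in order:
                      order.append(model)
          return order

      print(check_links(models, links))   # ['pump', 'valve']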

  2. Mars Science Laboratory Flight Software Boot Robustness Testing Project Report

    Science.gov (United States)

    Roth, Brian

    2011-01-01

    On the surface of Mars, the Mars Science Laboratory will boot up its flight computers every morning, having charged the batteries through the night. This boot process is complicated, critical, and affected by numerous hardware states that can be difficult to test. The hardware test beds do not facilitate long runs of back-to-back unattended automated tests, and although the software simulation has provided the necessary functionality and fidelity for this boot testing, it has not supported the full flexibility necessary for this task. Therefore, to perform this testing, a framework has been built around the software simulation that supports running automated tests which load a variety of starting configurations for software and hardware states. This implementation has been tested against the nominal cases to validate the methodology, and support for configuring off-nominal cases is ongoing. The implication of this testing is that introducing input configurations that have so far proved difficult to test may reveal boot scenarios worth higher-fidelity investigation, while in other cases it increases confidence in the robustness of the flight software boot process.
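
    The framework's core job is to sweep automated boot tests over many combinations of starting hardware and software states on the simulation. A schematic sketch of such a sweep (the state names and the simulator interface are hypothetical, not the actual MSL test framework):

      import itertools

      # Hypothetical starting-state dimensions for the boot test sweep
      battery_levels = ["low", "nominal", "full"]
      prime_computer = ["A", "B"]
      nvram_states   = ["clean", "stale_entries"]

      def simulate_boot(battery, prime, nvram):
          """Stand-in for driving the software simulation through one boot; returns pass/fail."""
          # In a real framework this would load the configuration into the simulator and
          # monitor the boot sequence; here we simply flag one combination as suspect.
          return not (battery == "low" and nvram == "stale_entries")

      results = []
      for battery, prime, nvram in itertools.product(battery_levels, prime_computer, nvram_states):
          results.append(((battery, prime, nvram), simulate_boot(battery, prime, nvram)))

      failures = [cfg for cfg, ok in results if not ok]
      print(f"{len(results)} boots simulated, {len(failures)} worth higher-fidelity investigation:")
      for cfg in failures:
          print("  ", cfg)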

  3. Device Configuration Handler for Accelerator Control Applications at Jefferson Lab

    International Nuclear Information System (INIS)

    Bickley, Matt; Chevtsov, P.; Larrieu, T.

    2003-01-01

    The accelerator control system at Jefferson Lab uses hundreds of physical devices with such popular instrument bus interfaces as Industry Pack (IPAC), GPIB, RS-232, etc. To properly handle all these components, control computers (IOCs) must be provided with the correct information about the unique memory addresses of the interface cards used, interrupt numbers (if any), data communication channels and protocols. Under these conditions, registering a new control device in the control system is not an easy task for software developers: because the device configuration is distributed, it requires detailed knowledge not only of the new device but also of the configuration of all other devices in the existing system. A configuration handler implemented at Jefferson Lab centralizes the information about all control devices, making their registration user-friendly and easy. It consists of a device driver framework and device registration software developed on the basis of an ORACLE database and freely available scripting tools (perl, php)
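
    The handler centralizes, for each device, items such as the bus interface type, card memory address, interrupt number and communication channel. A sketch of such a centralized registration record (the field names and example values are illustrative, not the Jefferson Lab database schema):

      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class DeviceRecord:
          name: str
          ioc: str                      # control computer hosting the device
          bus: str                      # e.g. "IPAC", "GPIB", "RS-232"
          base_address: Optional[int]   # card memory address, if applicable
          interrupt: Optional[int]      # interrupt number, if any
          channel: str                  # data communication channel / port
          protocol: str

      registry = {}

      def register(device):
          """Add a device to the central registry, refusing duplicate names."""
          if device.name in registry:
              raise ValueError(f"{device.name} is already registered")
          registry[device.name] = device

      register(DeviceRecord("bpm_injector_01", "iocinj1", "IPAC",
                            base_address=0x6000, interrupt=5,
                            channel="carrier0/slotB", protocol="register-mapped"))
      print(registry["bpm_injector_01"])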

  4. Application of REVEAL-W to risk-based configuration control

    International Nuclear Information System (INIS)

    Dezfuli, H.; Meyer, J.; Modarres, M.

    1994-01-01

    Over the past two years, the concept of risk-based configuration control has been introduced to the US Nuclear Regulatory Commission and the nuclear industry. Converting much of the current, deterministically based regulation of nuclear power plants to risk-based regulation can result in lower levels of risk while relieving unnecessary burdens on power plant operators and regulatory staff. To achieve the potential benefits of risk-based configuration control, the risk models developed for nuclear power plants should be (1) flexible enough to effectively support necessary risk calculations, and (2) transparent enough to encourage their use by all parties. To address these needs, SCIENTECH, Inc., has developed the PC-based REVEAL W (formerly known as SMART). This graphic-oriented and user-friendly application software allows the user to develop transparent complex logic models based on the concept of the master plant logic diagram. The logic model is success-oriented and compact. The analytical capability built into REVEAL W is generic, so the software can support different types of risk-based evaluations, such as probabilistic safety assessment, accident sequence precursor analysis, design evaluation and configuration management. In this paper, we focus on the application of REVEAL W to support risk-based configuration control of nuclear power plants. (author)

  5. Numerical Study of Traffic Pollutant Dispersion within Different Street Canyon Configurations

    Directory of Open Access Journals (Sweden)

    Yucong Miao

    2014-01-01

    The objective of this study is to numerically investigate flow and traffic exhaust dispersion in urban street canyons with different configurations, in order to identify urban-planning strategies that ease air pollution. The Computational Fluid Dynamics (CFD) model used in this study, the Open Source Field Operation and Manipulation (OpenFOAM) software package, was first validated against wind-tunnel experiment data using three different k-ε turbulence models. The patterns of flow and dispersion within three different kinds of street canyon configuration under perpendicular approaching flow were then studied numerically. The results show that the width and height of the buildings can dramatically affect the pollution level inside the street canyon: as the width or height of the buildings increases, the pollution at the pedestrian level increases. Asymmetric configurations (step-up or step-down street canyons) provide better ventilation, so it is recommended to design street canyons with non-uniform configurations. The OpenFOAM software package can be used as a reliable tool to study flows and dispersion around buildings.

  6. Configuration management

    International Nuclear Information System (INIS)

    Beavers, R.R.; Sumiec, K.F.

    1989-01-01

    Increasing regulatory and industry attention has been focused on properly controlling electrical design changes. These changes can be controlled by using configuration management techniques. Typically, there are ongoing modifications to various process systems or additions due to new requirements at every power plant. Proper control of these changes requires that an organized method be used to ensure that all important parameters of the electrical auxiliary systems are analyzed and that these parameters are evaluated accurately. This process, commonly referred to as configuration management, is becoming more important on both fossil and nuclear plants. Recent NRC- and utility-initiated inspections have identified problems due to incomplete analysis of changes to electrical auxiliary systems at nuclear stations

  7. Developing Reusable and Reconfigurable Real-Time Software using Aspects and Components

    OpenAIRE

    Tešanović, Aleksandra

    2006-01-01

    Our main focus in this thesis is on providing guidelines, methods, and tools for design, configuration, and analysis of configurable and reusable real-time software, developed using a combination of aspect-oriented and component-based software development. Specifically, we define a reconfigurable real-time component model (RTCOM) that describes how a real-time component, supporting aspects and enforcing information hiding, could efficiently be designed and implemented. In this context, we out...

  8. Design and implement of BESIII online histogramming software

    International Nuclear Information System (INIS)

    Li Fei; Wang Liang; Liu Yingjie; Chinese Academy of Sciences, Beijing; Zhu Kejun; Zhao Jingwei

    2007-01-01

    The online histogramming software is an important part of the BESIII DAQ (Data Acquisition) system. This article introduces the main requirements and design of the online histogramming software and presents how histograms are produced, transmitted and gathered in the distributed environment in the current implementation. The article also illustrates a simple, flexible and easily extensible setup based on an XML configuration database. (authors)
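
    The setup is described as driven by an XML configuration database. Purely as an illustrative sketch of that approach (the XML tags, attributes and detector names are invented, not the BESIII schema), histogram definitions could be read like this:

      import xml.etree.ElementTree as ET

      # Hypothetical XML describing which histograms a gatherer should book
      config_xml = """
      <histogramming>
        <histogram name="adc_sum"   source="calorimeter" bins="200" low="0" high="4000"/>
        <histogram name="hit_count" source="mdc"         bins="100" low="0" high="500"/>
      </histogramming>
      """

      def load_histogram_config(text):
          """Parse the XML and return a list of histogram booking parameters."""
          root = ET.fromstring(text)
          return [
              {
                  "name": h.get("name"),
                  "source": h.get("source"),
                  "bins": int(h.get("bins")),
                  "range": (float(h.get("low")), float(h.get("high"))),
              }
              for h in root.findall("histogram")
          ]

      for definition in load_histogram_config(config_xml):
          print("booking", definition)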

  9. Aircraft Design Software

    Science.gov (United States)

    1997-01-01

    Successful commercialization of the AirCraft SYNThesis (ACSYNT) tool has resulted in the creation of Phoenix Integration, Inc. ACSYNT has been exclusively licensed to the company, an outcome of a seven year, $3 million effort to provide unique software technology to a focused design engineering market. Ames Research Center formulated ACSYNT and in working with the Virginia Polytechnic Institute CAD Laboratory, began to design and code a computer-aided design for ACSYNT. Using a Joint Sponsored Research Agreement, Ames formed an industry-government-university alliance to improve and foster research and development for the software. As a result of the ACSYNT Institute, the software is becoming a predominant tool for aircraft conceptual design. ACSYNT has been successfully applied to high-speed civil transport configuration, subsonic transports, and supersonic fighters.

  10. An advanced configuration management system for full scope power plant simulators

    International Nuclear Information System (INIS)

    Storm, J.; Goemann, A.

    1996-01-01

    In August 1993 KSG Kraftwerks-Simulator-Gesellschaft, Germany, awarded a contract to STN ATLAS Elektronik for the delivery of two full scope replica training simulators for the German BWR plants Isar 1 and Philipsburg 1, known as the double simulator project S30 (S31/S32). For both projects a computer based Configuration Management System (CMS) was required to overcome deficiencies of older simulator systems in terms of limited upgrade and maintenance capabilities and incomplete documentation. The CMS allows complete control over the entire simulator system, covering all software and hardware items, and thereby exceeds the quality assurance requirements defined in ISO 9000-3, which gives recommendations for software configuration management only. The system is realized within the project using the UNIX based relational database system EMPRESS and is in use as a development and maintenance tool to improve simulator quality and ensure simulator configuration integrity

  11. Viewport: An object-oriented approach to integrate workstation software for tile and stack mode display

    OpenAIRE

    Ghosh, Srinka; Andriole, Katherine P.; Avrin, David E.

    1997-01-01

    Diagnostic workstation design has migrated towards display presentation in one of two modes: tiled images or stacked images. It is our impression that the workstation setup or configuration in each of these two modes is rather distinct. We sought to establish a commonality to simplify software design, and to enable a single descriptor method to facilitate folder manager development of “hanging” protocols. All current workstation designs use a combination of “off-screen” and “on-screen” memory...

  12. The nightly build and test system for LCG AA and LHCb software

    CERN Document Server

    Kruzelecki, K; Degaudenzi, H

    2010-01-01

    The core software stack, both from the LCG Application Area and LHCb, consists of more than 25 C++/Fortran/Python projects built for about 20 different configurations on Linux, Windows and MacOSX. To these projects one can also add about 70 external software packages (Boost, Python, Qt, CLHEP, ...) which also have to be built for the same configurations. In order to reduce the time of the development cycle and increase quality assurance, a framework has been developed for the daily (nightly actually) build and test of the software. Performing the build and the tests on several configurations and platforms allows the efficiency of the unit and integration tests to be increased. Main features: - flexible and fine grained setup (full, partial build) through a web interface; - possibility to build several “slots” with different configurations; - precise and highly granular reports on a web server; - support for CMT projects (but not only) with their cross-dependencies; - scalable client-server architecture for ...
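
    The framework builds several "slots", each combining a set of projects with a set of platform configurations. A minimal sketch of what such a slot description and its expanded build matrix might look like (the slot names and platform strings are used purely for illustration, not copied from the actual nightly system):

      import itertools

      # Hypothetical nightly slots: each pairs a list of projects with target platforms.
      slots = {
          "dev-head": {
              "projects": ["Gaudi", "LHCb", "Rec"],
              "platforms": ["x86_64-slc5-gcc43-opt", "x86_64-slc5-gcc43-dbg", "win32-vc9-dbg"],
          },
          "release-candidate": {
              "projects": ["Gaudi", "LHCb"],
              "platforms": ["x86_64-slc5-gcc43-opt"],
          },
      }

      def build_matrix(slot_name):
          """Expand one slot into the list of (project, platform) build jobs."""
          slot = slots[slot_name]
          return list(itertools.product(slot["projects"], slot["platforms"]))

      for job in build_matrix("dev-head"):
          print("schedule build:", *job)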

  13. Space shuttle configuration accounting functional design specification

    Science.gov (United States)

    1974-01-01

    An analysis is presented of the requirements for an on-line automated system which must be capable of tracking the status of requirements and engineering changes and of providing accurate and timely records. The functional design specification provides the definition, description, and character length of the required data elements and the interrelationship of data elements to adequately track, display, and report the status of active configuration changes. As changes to the space shuttle program levels II and III configuration are proposed, evaluated, and dispositioned, it is the function of the configuration management office to maintain records regarding changes to the baseline and to track and report the status of those changes. The configuration accounting system will consist of a combination of computers, computer terminals, software, and procedures, all of which are designed to store, retrieve, display, and process information required to track proposed and approved engineering changes to maintain baseline documentation of the space shuttle program levels II and III.

  14. A CMake-based build and configuration framework

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The LHCb experiment has been using the CMT build and configuration tool for its software since the first versions, mainly because of its multi-platform build support and its powerful configuration management functionality. Still, CMT has some limitations in terms of build performance and in the complexity that has been added to the tool over time to cope with new use cases. Therefore, we have been looking for a viable alternative and have investigated the possibility of adopting the CMake tool, which handles building very well and is becoming very popular in the HEP community. The result of this study is a CMake-based framework which provides most of the special configuration features available natively only in CMT, with the advantages of better performance, flexibility and portability.

  15. Agile software assessment

    OpenAIRE

    Nierstrasz Oscar; Lungu Mircea

    2012-01-01

    Informed decision making is a critical activity in software development but it is poorly supported by common development environments which focus mainly on low level programming tasks. We posit the need for agile software assessment which aims to support decision making by enabling rapid and effective construction of software models and custom analyses. Agile software assessment entails gathering and exploiting the broader context of software information related to the system at hand as well ...

  16. Advances in software science and technology

    CERN Document Server

    Hikita, Teruo; Kakuda, Hiroyasu

    1993-01-01

    Advances in Software Science and Technology, Volume 4 provides information pertinent to the advancement of the science and technology of computer software. This book discusses the various applications for computer systems.Organized into two parts encompassing 10 chapters, this volume begins with an overview of the historical survey of programming languages for vector/parallel computers in Japan and describes compiling methods for supercomputers in Japan. This text then explains the model of a Japanese software factory, which is presented by the logical configuration that has been satisfied by

  17. An ontology-based semantic configuration approach to constructing Data as a Service for enterprises

    Science.gov (United States)

    Cai, Hongming; Xie, Cheng; Jiang, Lihong; Fang, Lu; Huang, Chenxi

    2016-03-01

    To align business strategies with IT systems, enterprises should rapidly implement new applications based on existing information with complex associations to adapt to the continually changing external business environment. Thus, Data as a Service (DaaS) has become an enabling technology for enterprises through information integration and the configuration of existing distributed enterprise systems and heterogeneous data sources. However, business modelling, system configuration and model alignment face challenges at the design and execution stages. To provide a comprehensive solution to facilitate data-centric application design in a highly complex and large-scale situation, a configurable ontology-based service integrated platform (COSIP) is proposed to support business modelling, system configuration and execution management. First, a meta-resource model is constructed and used to describe and encapsulate information resources by way of multi-view business modelling. Then, based on ontologies, three semantic configuration patterns, namely composite resource configuration, business scene configuration and runtime environment configuration, are designed to systematically connect business goals with executable applications. Finally, a software architecture based on model-view-controller (MVC) is provided and used to assemble components for software implementation. The result of the case study demonstrates that the proposed approach provides a flexible method of implementing data-centric applications.

  18. Common Practices from Two Decades of Water Resources Modelling Published in Environmental Modelling & Software: 1997 to 2016

    Science.gov (United States)

    Ames, D. P.; Peterson, M.; Larsen, J.

    2016-12-01

    A steady flow of manuscripts describing integrated water resources management (IWRM) modelling has been published in Environmental Modelling & Software since the journal's inaugural issue in 1997. These papers represent two decades of peer-reviewed scientific knowledge regarding methods, practices, and protocols for conducting IWRM. We have undertaken to explore this specific assemblage of literature with the intention of identifying commonly reported procedures in terms of data integration methods, modelling techniques, approaches to stakeholder participation, means of communication of model results, and other elements of the model development and application life cycle. Initial results from this effort will be presented including a summary of commonly used practices, and their evolution over the past two decades. We anticipate that results will show a pattern of movement toward greater use of both stakeholder/participatory modelling methods as well as increased use of automated methods for data integration and model preparation. Interestingly, such results could be interpreted to show that the availability of better, faster, and more integrated software tools and technologies free the modeler to take a less technocratic and more human approach to water resources modelling.

  19. Overview of Java application configuration frameworks

    OpenAIRE

    Denisov, Victor

    2013-01-01

    This paper reviews three major application configuration frameworks for Java-based applications: java.util.Properties, Apache Commons Configuration and Preferences API. Basic functionality of each framework is illustrated with code examples. Pros and cons of each framework are described in moderate detail. Suggestions are made about typical use cases for each framework.

  20. Configuration space analysis of common cost functions in radiotherapy beam-weight optimization algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Rowbottom, Carl Graham [Joint Department of Physics, Institute of Cancer Research and the Royal Marsden NHS Trust, Sutton, Surrey (United Kingdom); Webb, Steve [Joint Department of Physics, Institute of Cancer Research and the Royal Marsden NHS Trust, Sutton, Surrey (United Kingdom)

    2002-01-07

    The successful implementation of downhill search engines in radiotherapy optimization algorithms depends on the absence of local minima in the search space. Such techniques are much faster than stochastic optimization methods but may become trapped in local minima if they exist. A technique known as 'configuration space analysis' was applied to examine the search space of cost functions used in radiotherapy beam-weight optimization algorithms. A downhill-simplex beam-weight optimization algorithm was run repeatedly to produce a frequency distribution of final cost values. By plotting the frequency distribution as a function of final cost, the existence of local minima can be determined. Common cost functions such as the quadratic deviation of dose to the planning target volume (PTV), integral dose to organs-at-risk (OARs), dose-threshold and dose-volume constraints for OARs were studied. Combinations of the cost functions were also considered. The simple cost function terms such as the quadratic PTV dose and integral dose to OAR cost function terms are not susceptible to local minima. In contrast, dose-threshold and dose-volume OAR constraint cost function terms are able to produce local minima in the example case studied. (author)
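
    Configuration space analysis here means running the downhill optimizer many times from different starting beam weights and inspecting the frequency distribution of final cost values: a single narrow cluster indicates a single (global) minimum, while multiple clusters indicate local minima. A small sketch of that procedure using a generic quadratic-PTV-dose-style cost (the cost function, dimensions and dose coefficients are illustrative, not the clinical ones from the paper):

      import random
      from scipy.optimize import minimize

      def cost(weights):
          """Toy beam-weight cost: quadratic deviation of a 'PTV dose' from its prescription."""
          prescription = 60.0
          dose = 10.0 * weights[0] + 20.0 * weights[1] + 30.0 * weights[2]
          return (dose - prescription) ** 2

      random.seed(0)
      final_costs = []
      for _ in range(200):
          start = [random.uniform(0.0, 3.0) for _ in range(3)]
          result = minimize(cost, start, method="Nelder-Mead")   # downhill simplex search
          final_costs.append(result.fun)

      # Frequency distribution of final cost values: a single narrow cluster near zero
      # suggests this cost term is free of troublesome local minima.
      buckets = {}
      for c in final_costs:
          buckets[round(c, 2)] = buckets.get(round(c, 2), 0) + 1
      print(sorted(buckets.items())[:5])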

  1. Configuration Management File Manager Developed for Numerical Propulsion System Simulation

    Science.gov (United States)

    Follen, Gregory J.

    1997-01-01

    One of the objectives of the High Performance Computing and Communication Project's (HPCCP) Numerical Propulsion System Simulation (NPSS) is to provide a common and consistent way to manage applications, data, and engine simulations. The NPSS Configuration Management (CM) File Manager integrated with the Common Desktop Environment (CDE) window management system provides a common look and feel for the configuration management of data, applications, and engine simulations for U.S. engine companies. In addition, CM File Manager provides tools to manage a simulation. Features include managing input files, output files, textual notes, and any other material normally associated with simulation. The CM File Manager includes a generic configuration management Application Program Interface (API) that can be adapted for the configuration management repositories of any U.S. engine company.

  2. ICAROUS: Integrated Configurable Architecture for Unmanned Systems

    Science.gov (United States)

    Consiglio, Maria C.

    2016-01-01

    NASA's Unmanned Aerial System (UAS) Traffic Management (UTM) project aims at enabling near-term, safe operations of small UAS vehicles in uncontrolled airspace, i.e., Class G airspace. A far-term goal of UTM research and development is to accommodate the expected rise in small UAS traffic density throughout the National Airspace System (NAS) at low altitudes for beyond visual line-of-sight operations. This video describes a new capability referred to as ICAROUS (Integrated Configurable Algorithms for Reliable Operations of Unmanned Systems), which is being developed under the auspices of the UTM project. ICAROUS is a software architecture comprised of highly assured algorithms for building safety-centric, autonomous, unmanned aircraft applications. Central to the development of the ICAROUS algorithms is the use of well-established formal methods to guarantee higher levels of safety assurance by monitoring and bounding the behavior of autonomous systems. The core autonomy-enabling capabilities in ICAROUS include constraint conformance monitoring and autonomous detect and avoid functions. ICAROUS also provides a highly configurable user interface that enables the modular integration of mission-specific software components.

  3. Integrated Development and Maintenance of Software Products to Support Efficient Updating of Customer Configurations: A Case Study in Mass Market ERP Software

    NARCIS (Netherlands)

    Jansen, S.R.L.; Brinkkemper, S.; Ballintijn, G.; Nieuwland, Arco van

    2006-01-01

    The maintenance of enterprise application software at a customer site is a potentially complex task for software vendors. This complexity can unfortunately result in a significant amount of work and risk. This paper presents a case study of a product software vendor that tries to reduce this

  4. Project B610 process control configuration acceptance test report

    International Nuclear Information System (INIS)

    Silvan, G.R.

    1995-01-01

    The purpose of this test is to verify the Westinghouse configuration of the MICON A/S Distributed Control System for project B610. The following will be verified: (1) proper assignment and operation of all field inputs to and outputs from the MICON Termination panels; (2) proper operation of all display data on the operators' console; (3) proper operation of all required alarms; and (4) proper operation of all required interlocks. This test only verifies the proper operation of the Westinghouse control configuration (or program). It is not responsible for verifying proper operation of the MICON hardware or operating software, nor does it test any of the B610 instruments. The MICON hardware and software have been tested as part of the equipment procurement. Instrumentation and wiring installed under project B620 will be tested under a separate functional test. In some cases, precise transmitter ranges, alarm setpoints, and controller tuning parameters are not available at this time. Therefore, approximate values are used during the test. This should not affect the proper operation of the configuration or the validity of this test. Final values will be assigned during operability testing

  5. Performance Evaluation of Software Routers with VPN Features

    Directory of Open Access Journals (Sweden)

    H. Redžović

    2017-11-01

    This paper presents the implementation and analysis of a VPN software router based on the Quagga and strongSwan open-source software tools. We validated the functionality of strongSwan and Quagga in a realistic environment, including scenarios with link failures. We also measured and analyzed the performance of the encryption and hash algorithms supported by the strongSwan software, in order to recommend an optimal VPN configuration that provides the best performance.

  6. Etiological classification of presbycusis in Turkish population according to audiogram configuration.

    Science.gov (United States)

    Kaya, Kamil Hakan; Karaman Koç, Arzu; Sayın, İbrahim; Güneş, Selçuk; Canpolat, Sinan; Şimşek, Baver; Kayhan, Fatma Tülin

    2015-01-01

    This study aims to classify age-related hearing loss in the Turkish population according to Schuknecht audiometric configurations for presbycusis and to investigate the most common etiologies. A total of 1,134 patients (568 males, 566 females; mean age 70.5±7.7 years; range 55 to 80 years) with age-related hearing loss were included in the study. Audiograms were classified into three categories: high frequency steeply sloping (HFSS), flat, and high frequency gently sloping (HFGS). Speech discrimination scores were evaluated and compared. In the study population, the HFSS audiogram configuration was observed most frequently (48.5%), followed by the HFGS configuration (26.9%) and the flat configuration (24.5%). While the HFSS audiogram configuration was statistically significantly more common in males, the flat audiogram configuration was statistically significantly more common in females (p=0.0001). The mean air conduction threshold of the HFSS group was statistically significantly higher than those of the flat and HFGS groups (p=0.0001). No statistically significant difference was detected in speech discrimination scores between the three groups (p=0.796). The results of this study suggest that, in the Turkish population, sensory presbycusis is more common in males while strial presbycusis is more common in females. No difference was detected in the prevalence of cochlear presbycusis between males and females (p=0.0001).

  7. LHCb: The nightly build and test system for LCG AA and LHCb software

    CERN Multimedia

    Kruzelecki, K; Degaudenzi, H

    2009-01-01

    The core software stack, both from the LCG Application Area and LHCb, consists of more than 25 C++/Fortran/Python projects built for about 20 different configurations on Linux, Windows and MacOSX. To these projects one can also add about 20 external software packages (Boost, Python, Qt, CLHEP, ...) which also have to be built for the same configurations. In order to reduce the time of the development cycle and increase quality assurance, a framework has been developed for the daily (nightly actually) build and test of the software. Performing the build and the tests on several configurations and platforms allows the efficiency of the unit and integration tests to be increased. Main features: - flexible and fine grained setup (full, partial build) through a web interface - possibility to build several "slots" with different configurations - precise and highly granular reports on a web server - support for CMT projects (but not only) with their cross-dependencies. - scalable client-server architecture for the co...

  8. Simulator configuration management system

    International Nuclear Information System (INIS)

    Faulent, J.; Brooks, J.G.

    1990-01-01

    The proposed revisions to ANS 3.5-1985 (Section 5) require utilities to establish a simulator Configuration Management System (CMS). The proposed CMS must be capable of: establishing and maintaining a simulator design database; identifying and documenting differences between the simulator and its reference plant; tracking the resolution of identified differences; and recording data to support simulator certification, testing and maintenance. This paper discusses a CMS capable of meeting the proposed requirements contained in ANS 3.5. The system will utilize a personal computer and relational database management software to construct a simulator design database. The database will contain records of all reference nuclear plant data used in designing the simulator, as well as records identifying all the software, hardware and documentation making up the simulator. Using the relational powers of the database management software, reports will be generated identifying the impact of reference plant changes on the operation of the simulator. These reports can then be evaluated in terms of training needs to determine if changes are required for the simulator. If a change is authorized, the CMS will track the change through to its resolution and then incorporate the change into the simulator design database

  9. National Ignition Facility Configuration Management Plan

    International Nuclear Information System (INIS)

    Cabral, S G; Moore, T L

    2002-01-01

    This Configuration Management Plan (CMP) describes the technical and administrative management process for controlling the National Ignition Facility (NIF) Project configuration. The complexity of the NIF Project (i.e., participation by multiple national laboratories and subcontractors involved in the development, fabrication, installation, and testing of NIF hardware and software, as well as construction and testing of Project facilities) requires implementation of the comprehensive configuration management program defined in this plan. A logical schematic illustrating how the plan functions is provided in Figure 1. A summary of the process is provided in Section 4.0, Configuration Change Control. Detailed procedures that make up the overall process are referenced. This CMP is consistent with guidance for managing a project's configuration provided in Department of Energy (DOE) Order 430.1, Guide PMG 10, ''Project Execution and Engineering Management Planning''. Configuration management is a formal discipline comprised of the following four elements: (1) Identification--defines the functional and physical characteristics of a Project and uniquely identifies the defining requirements. This includes selection of components of the end product(s) subject to control and selection of the documents that define the project and components. (2) Change management--provides a systematic method for managing changes to the project and its physical and functional configuration to ensure that all changes are properly identified, assessed, reviewed, approved, implemented, tested, and documented. (3) Data management--ensures that necessary information on the project and its end product(s) is systematically recorded and disseminated for decision-making and other uses. Identifies, stores and controls, tracks status, retrieves, and distributes documents. (4) Assessments and validation--ensures that the planned configuration requirements match actual physical configurations and

  10. A new approach to configurable primary data collection.

    Science.gov (United States)

    Stanek, J; Babkin, E; Zubov, M

    2016-09-01

    The formats, semantics and operational rules of data processing tasks in genomics (and health in general) are highly divergent and can rapidly change. In such an environment, the problem of consistent transformation and loading of heterogeneous input data to various target repositories becomes a critical success factor. The objective of the project was to design a new conceptual approach to configurable data transformation, de-identification, and submission of health and genomic data sets. The main motivation was to facilitate automated or human-driven data uploading, as well as consolidation of heterogeneous sources in large genomic or health projects. Modern methods of on-demand specialization of generic software components were applied. For specification of input-output data and required data collection activities, we propose a simple data model of flat tables as well as a domain-oriented graphical interface and a portable representation of transformations in XML. Using such methods, the prototype of the Configurable Data Collection System (CDCS) was implemented in the Java programming language with Swing graphical interfaces. The core logic of transformations was implemented as a library of reusable plugins. The solution is implemented as a software prototype for a configurable service-oriented system for semi-automatic data collection, transformation, sanitization and safe uploading to heterogeneous data repositories (CDCS). To address the dynamic nature of data schemas and data collection processes, the CDCS prototype facilitates interactive, user-driven configuration of the data collection process and extends basic functionality with a wide range of third-party plugins. Notably, our solution also allows for the reduction of manual data entry for data originally missing in the output data sets. First experiments and feedback from domain experts confirm the prototype is flexible, configurable and extensible; runs well on data owner's systems; and is not dependent on
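
    The core transformation logic is described as a library of reusable plugins operating on flat-table records, with the pipeline itself assembled through configuration. A minimal sketch of that plugin idea (written here in Python purely for illustration; the actual CDCS prototype is in Java, and the plugin names and fields below are invented):

      # Flat-table records as dictionaries; each plugin transforms a record and returns it.
      def deidentify(record):
          """Drop direct identifiers before submission."""
          return {k: v for k, v in record.items() if k not in ("name", "national_id")}

      def fill_missing_site(record, default_site="SITE-01"):
          """Reduce manual entry by filling a field missing from the source data."""
          record.setdefault("collection_site", default_site)
          return record

      # The pipeline itself is configuration: an ordered list of plugin callables.
      pipeline = [deidentify, fill_missing_site]

      def run_pipeline(records, plugins):
          for record in records:
              for plugin in plugins:
                  record = plugin(record)
              yield record

      source = [{"name": "A. Person", "national_id": "123", "variant": "rs429358", "age": 64}]
      print(list(run_pipeline(source, pipeline)))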

  11. ETICS meta-data software editing - from check out to commit operations

    International Nuclear Information System (INIS)

    Begin, M-E; Sancho, G D-A; Ronco, S D; Gentilini, M; Ronchieri, E; Selmi, M

    2008-01-01

    People involved in modular projects need to improve the software build process, planning the correct execution order and detecting circular dependencies. The lack of suitable tools may cause delays in the development, deployment and maintenance of the software. Experience in such projects has shown that the use of version control and build systems alone is not able to support the development of the software efficiently, due to a large number of errors each of which causes the breaking of the build process. Common causes of errors are, for example, the adoption of new libraries, library incompatibilities, or the extension of the current project in order to support new software modules. In this paper, we describe a possible solution implemented in ETICS, an integrated infrastructure for the automated configuration, build and test of Grid and distributed software. ETICS has defined meta-data software abstractions, from which it is possible to download, build and test software projects, setting for instance dependencies, environment variables and properties. Furthermore, the meta-data information is managed by ETICS following the version control system philosophy, through a meta-data repository and a list of operations such as check out and commit. All the information related to a specific software component is stored in the repository only when it is considered to be correct. By means of this solution, we introduce flexibility into the ETICS system, allowing users to work according to their needs. Moreover, by introducing this functionality, ETICS behaves like a version control system for the management of the meta-data

  12. The Qualification Experiences for Safety-critical Software of POSAFE-Q

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jang Yeol; Son, Kwang Seop; Cheon, Se Woo; Lee, Jang Soo; Kwon, Kee Choon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2009-05-15

    Programmable Logic Controllers (PLC) have been applied to the Reactor Protection System (RPS) and the Engineered Safety Feature (ESF)-Component Control System (CCS) as the major safety system components of nuclear power plants. This paper describes experiences with the qualification of the safety-critical software, including the pCOS kernel and system tasks, related to a safety-grade PLC, i.e., the work done for Software Verification and Validation, Software Safety Analysis, Software Quality Assurance, Software Configuration Management, etc.

  13. Space Geodesy Project Information and Configuration Management Procedure

    Science.gov (United States)

    Merkowitz, Stephen M.

    2016-01-01

    This plan defines the Space Geodesy Project (SGP) policies, procedures, and requirements for Information and Configuration Management (CM). This procedure describes a process that is intended to ensure that all proposed and approved technical and programmatic baselines and changes to the SGP hardware, software, support systems, and equipment are documented.

  14. Software quality assurance plan for PORFLOW-3D

    International Nuclear Information System (INIS)

    Maheras, S.J.

    1993-03-01

    This plan describes the steps taken by the Idaho National Engineering Laboratory Subsurface and Environmental Modeling Unit personnel to implement software quality assurance procedures for the PORFLOW-3D computer code. PORFLOW-3D was used to conduct radiological performance assessments at the Savannah River Site. Software quality assurance procedures for PORFLOW-3D include software acquisition, installation, testing, operation, maintenance, and retirement. Configuration control and quality assurance procedures are also included or referenced in this plan.

  15. Centralized and Automated Installation and Configuration of a Server Farm under SLC6 [Installation et Configuration Centralisées et Automatisées d’une Ferme de Serveur sous SLC6]

    CERN Document Server

    Tourneyre, Stéphane; Mesnard, Emmanuel

    This report presents a study of changing the system installation and configuration of Linux servers running the Scientific Linux CERN (SLC) distribution within the LHCb experiment at CERN. These servers are primarily used to sort the data produced by the various detector sensors during proton collisions. The plan was to explore a solution based on the Cobbler/Puppet software to replace the existing software, Quattor, for automatic installation and configuration. These tests were first to be carried out on virtual machines, before putting the tools into real conditions with diskless machines, such as those in production. Currently, the use of Puppet for automated configuration works and meets the expectations of the project manager, Niko Neufeld. Cobbler, on the other hand, after various tests, fails to fully meet our expectations. A more thorough study should therefore be pursued, either by finding another software package or by adapting Cobbler. For the diskless machines, this should be done before the end of...

  16. Improving network management with Software Defined Networking

    International Nuclear Information System (INIS)

    Dzhunev, Pavel

    2013-01-01

    Software-defined networking (SDN) has been developed as an alternative to closed networks in data processing centers by providing a means to separate the control layer from the data layer of switches and routers. SDN introduces new possibilities for network management and configuration methods. In this article, we identify problems with the current state-of-the-art network configuration and management mechanisms and introduce mechanisms to improve various aspects of network management
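
    As a rough illustration of the control/data-plane split mentioned above, the sketch below has a central controller compute forwarding rules and push them to switches that only match and forward; all class, rule and port names are invented for the example and do not correspond to any particular SDN controller API.

```python
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match_dst: str        # destination prefix to match
    out_port: int         # port the packet is forwarded on

@dataclass
class Switch:
    name: str
    table: list = field(default_factory=list)    # data plane: flow table only

    def install(self, rule: FlowRule):
        self.table.append(rule)

    def forward(self, dst: str):
        for rule in self.table:                  # pure match-action, no local logic
            if dst.startswith(rule.match_dst):
                return rule.out_port
        return None                              # table miss: would go to controller

class Controller:
    """Control plane: owns the topology view and programs every switch."""
    def __init__(self, switches):
        self.switches = switches

    def program_route(self, prefix: str, ports: dict):
        for sw in self.switches:
            sw.install(FlowRule(prefix, ports[sw.name]))

s1, s2 = Switch("s1"), Switch("s2")
Controller([s1, s2]).program_route("10.0.1.", {"s1": 3, "s2": 1})
print(s1.forward("10.0.1.7"), s2.forward("192.168.0.9"))   # -> 3 None
```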

  17. Software management of the LHC Detector Control Systems

    CERN Document Server

    Varela, F

    2007-01-01

    The control systems of each of the four Large Hadron Collider (LHC) experiments will contain of the order of 150 computers running the back-end applications. These applications will have to be maintained and eventually upgraded during the lifetime of the experiments, ~20 years. This paper presents the centralized software management strategy adopted by the Joint COntrols Project (JCOP) [1], which is based on a central database that holds the overall system configuration. The approach facilitates the integration of different parts of a control system and provides versioning of its various software components. The information stored in the configuration database can eventually be used to restore a computer in the event of failure.

  18. Software management of the LHC detector control systems

    CERN Document Server

    Varela, F

    2007-01-01

    The control systems of each of the four Large Hadron Collider (LHC) experiments will contain of the order of 150 computers running the back-end applications. These applications will have to be maintained and eventually upgraded during the lifetime of the experiments, ~20 years. This paper presents the centralized software management strategy adopted by the Joint COntrols Project (JCOP) [1], which is based on a central database that holds the overall system configuration. The approach facilitates the integration of different parts of a control system and provides versioning of its various software components. The information stored in the configuration database can eventually be used to restore a computer in the event of failure.

  19. Software quality assurance plans for safety-critical software

    International Nuclear Information System (INIS)

    Liddle, P.

    2006-01-01

    Application software is defined as safety-critical if a fault in the software could prevent the system components from performing their nuclear-safety functions. Therefore, for nuclear-safety systems, the AREVA TELEPERM XS (TXS) system is classified 1E, as defined in the Inst. of Electrical and Electronics Engineers (IEEE) Std 603-1998. The application software is classified as Software Integrity Level (SIL)-4, as defined in IEEE Std 7-4.3.2-2003. The AREVA NP Inc. Software Program Manual (SPM) describes the measures taken to ensure that the TELEPERM XS application software attains a level of quality commensurate with its importance to safety. The manual also describes how TELEPERM XS correctly performs the required safety functions and conforms to established technical and documentation requirements, conventions, rules, and standards. The program manual covers the requirements definition, detailed design, integration, and test phases for the TELEPERM XS application software, and supporting software created by AREVA NP Inc. The SPM is required for all safety-related TELEPERM XS system applications. The program comprises several basic plans and practices: 1. A Software Quality-Assurance Plan (SQAP) that describes the processes necessary to ensure that the software attains a level of quality commensurate with its importance to the safety function. 2. A Software Safety Plan (SSP) that identifies the process to reasonably ensure that safety-critical software performs as intended during all abnormal conditions and events, and does not introduce any new hazards that could jeopardize the health and safety of the public. 3. A Software Verification and Validation (V and V) Plan that describes the method of ensuring the software is in accordance with the requirements. 4. A Software Configuration Management Plan (SCMP) that describes the method of maintaining the software in an identifiable state at all times. 5. A Software Operations and Maintenance Plan (SO and MP) that

  20. LHCb: A CMake-based build and configuration framework

    CERN Multimedia

    Clemencic, M; Mato, P

    2011-01-01

    The LHCb experiment has been using the CMT build and configuration tool for its software since the first versions, mainly because of its multi-platform build support and its powerful configuration management functionality. Still, CMT has some limitations in terms of build performance and the increased complexity added to the tool to cope with use cases added later. Therefore, we have been looking for a viable alternative and have investigated the possibility of adopting the CMake tool, which does a very good job at building and is getting very popular in the HEP community. The result of this study is a CMake-based framework which provides most of the special configuration features available natively only in CMT, with the advantages of better performance, flexibility and portability.

  1. Software use cases to elicit the software requirements analysis within the ASTRI project

    Science.gov (United States)

    Conforti, Vito; Antolini, Elisa; Bonnoli, Giacomo; Bruno, Pietro; Bulgarelli, Andrea; Capalbi, Milvia; Fioretti, Valentina; Fugazza, Dino; Gardiol, Daniele; Grillo, Alessandro; Leto, Giuseppe; Lombardi, Saverio; Lucarelli, Fabrizio; Maccarone, Maria Concetta; Malaguti, Giuseppe; Pareschi, Giovanni; Russo, Federico; Sangiorgi, Pierluca; Schwarz, Joseph; Scuderi, Salvatore; Tanci, Claudio; Tosti, Gino; Trifoglio, Massimo; Vercellone, Stefano; Zanmar Sanchez, Ricardo

    2016-07-01

    The Italian National Institute for Astrophysics (INAF) is leading the Astrofisica con Specchi a Tecnologia Replicante Italiana (ASTRI) project whose main purpose is the realization of small size telescopes (SST) for the Cherenkov Telescope Array (CTA). The first goal of the ASTRI project has been the development and operation of an innovative end-to-end telescope prototype using a dual-mirror optical configuration (SST-2M) equipped with a camera based on silicon photo-multipliers and very fast read-out electronics. The ASTRI SST-2M prototype has been installed in Italy at the INAF "M.G. Fracastoro" Astronomical Station located at Serra La Nave, on Mount Etna, Sicily. This prototype will be used to test several mechanical, optical, control hardware and software solutions which will be used in the ASTRI mini-array, comprising nine telescopes proposed to be placed at the CTA southern site. The ASTRI mini-array is a collaborative and international effort led by INAF and carried out by Italy, Brazil and South Africa. We present here the use cases, through UML (Unified Modeling Language) diagrams and text details, that describe the functional requirements of the software that will manage the ASTRI SST-2M prototype, and the lessons learned thanks to these activities. We intend to adopt the same approach for the Mini Array Software System that will manage the ASTRI mini-array operations. Use cases are of importance for the whole software life cycle; in particular they provide valuable support to the validation and verification activities. Following the iterative development approach, which breaks down the software development into smaller chunks, we have analysed the requirements, developed, and then tested the code in repeated cycles. The use case technique allowed us to formalize the problem through user stories that describe how the user procedurally interacts with the software system. Through the use cases we improved the communication among team members, fostered

  2. Guidance and Control Software Project Data - Volume 1: Planning Documents

    Science.gov (United States)

    Hayhurst, Kelly J. (Editor)

    2008-01-01

    The Guidance and Control Software (GCS) project was the last in a series of software reliability studies conducted at Langley Research Center between 1977 and 1994. The technical results of the GCS project were recorded after the experiment was completed. Some of the support documentation produced as part of the experiment, however, is serving an unexpected role far beyond its original project context. Some of the software used as part of the GCS project was developed to conform to the RTCA/DO-178B software standard, "Software Considerations in Airborne Systems and Equipment Certification," used in the civil aviation industry. That standard requires extensive documentation throughout the software development life cycle, including plans, software requirements, design and source code, verification cases and results, and configuration management and quality control data. The project documentation that includes this information is open for public scrutiny without the legal or safety implications associated with comparable data from an avionics manufacturer. This public availability has afforded an opportunity to use the GCS project documents for DO-178B training. This report provides a brief overview of the GCS project, describes the 4-volume set of documents and the role they are playing in training, and includes the planning documents from the GCS project. Volume 1 contains five appendices: A. Plan for Software Aspects of Certification for the Guidance and Control Software Project; B. Software Development Standards for the Guidance and Control Software Project; C. Software Verification Plan for the Guidance and Control Software Project; D. Software Configuration Management Plan for the Guidance and Control Software Project; and E. Software Quality Assurance Activities.

  3. Reconfigurable network systems and software-defined networking

    OpenAIRE

    Zilberman, N.; Watts, P. M.; Rotsos, C.; Moore, A. W.

    2015-01-01

    Modern high-speed networks have evolved from relatively static networks to highly adaptive networks facilitating dynamic reconfiguration. This evolution has influenced all levels of network design and management, introducing increased programmability and configuration flexibility. This influence has extended from the lowest level of physical hardware interfaces to the highest level of network management by software. A key representative of this evolution is the emergence of software-defined n...

  4. Rapid assessment of assignments using plagiarism detection software.

    Science.gov (United States)

    Bischoff, Whitney R; Abrego, Patricia C

    2011-01-01

    Faculty members most often use plagiarism detection software to detect portions of students' written work that have been copied and/or not attributed to their authors. The rise in plagiarism has led to a parallel rise in software products designed to detect plagiarism. Some of these products are configurable for rapid assessment and teaching, as well as for plagiarism detection.

  5. Performing Verification and Validation in Reuse-Based Software Engineering

    Science.gov (United States)

    Addy, Edward A.

    1999-01-01

    The implementation of reuse-based software engineering not only introduces new activities to the software development process, such as domain analysis and domain modeling, it also impacts other aspects of software engineering. Other areas of software engineering that are affected include Configuration Management, Testing, Quality Control, and Verification and Validation (V&V). Activities in each of these areas must be adapted to address the entire domain or product line rather than a specific application system. This paper discusses changes and enhancements to the V&V process, in order to adapt V&V to reuse-based software engineering.

  6. Application of configurable logic in nuclear fuel handling

    Energy Technology Data Exchange (ETDEWEB)

    Ernst, W H; Rayment, D J [Canadian General Electric Co. Ltd., Peterborough, ON (Canada)

    1997-12-31

    Control and protective systems operating in the older nuclear power stations are nearing the end of their reliable operating life. These systems are still subject to frequent logic changes. Testing the software logic changes is becoming a significant task with ever greater expense. The software based systems can be replaced with systems using configurable logic. These systems provide new, more reliable technology, offer the capability for change, and provide capability for complete logic simulation and test before installation. There is a base of operating experience with these devices and many potential applications where they can be used to advantage. (author). 5 refs.

  7. Application of configurable logic in nuclear fuel handling

    International Nuclear Information System (INIS)

    Ernst, W.H.; Rayment, D.J.

    1996-01-01

    Control and protective systems operating in the older nuclear power stations are nearing the end of their reliable operating life. These systems are still subject to frequent logic changes. Testing the software logic changes is becoming a significant task with ever greater expense. The software based systems can be replaced with systems using configurable logic. These systems provide new, more reliable technology, offer the capability for change, and provide capability for complete logic simulation and test before installation. There is a base of operating experience with these devices and many potential applications where they can be used to advantage. (author). 5 refs

  8. Computer software configuration management plan for the Honeywell modular automation system

    International Nuclear Information System (INIS)

    Cunningham, L.T.

    1997-01-01

    This document provides a Computer Software management plan for a new Honeywell Modular Automation System (MAS) being installed in the Plutonium Finishing Plant (PFP). This type of system will be used to control new thermal stabilization furnaces, a vertical denitrator calciner, and a pyrolysis furnace

  9. Validation and configuration management plan for the KE basins KE-PU spreadsheet code

    International Nuclear Information System (INIS)

    Harris, R.A.

    1996-01-01

    This report provides documentation of the spreadsheet KE-PU software that is used to verify compliance with the Operational Safety Requirement and Process Standard limit on the amount of plutonium in the KE-Basin sandfilter backwash pit. Included are: A summary of the verification of the method and technique used in KE-PU that were documented elsewhere, the requirements, plans, and results of validation tests that confirm the proper functioning of the software, the procedures and approvals required to make changes to the software, and the method used to maintain configuration control over the software

  10. Tracking code patterns over multiple software versions with Herodotos

    DEFF Research Database (Denmark)

    Palix, Nicolas Jean-Michel; Lawall, Julia; Muller, Gilles

    2010-01-01

    An important element of understanding a software code base is to identify the repetitive patterns of code it contains and how these evolve over time. Some patterns are useful to the software, and may be modularized. Others are detrimental to the software, such as patterns that represent defects...... pattern occurrences over multiple versions of a software project, independent of other changes in the source files. Guided by a user-provided configuration file, Herodotos builds various graphs showing the evolution of the pattern occurrences and computes some statistics. We have evaluated this approach...
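
    A toy version of the idea above, counting occurrences of user-configured code patterns across several versions of a source tree, might look like the sketch below. The configuration format and the pattern set are invented for illustration and are not Herodotos' actual input format.

```python
import re
from pathlib import Path

# User-provided configuration: pattern name -> regular expression (illustrative only).
CONFIG = {
    "unchecked_malloc": r"=\s*malloc\([^)]*\)\s*;(?!\s*if)",
    "goto_fail": r"goto\s+fail\s*;",
}

def count_patterns(version_dir: Path) -> dict:
    """Count occurrences of each configured pattern in one source-tree version."""
    counts = {name: 0 for name in CONFIG}
    for src in version_dir.rglob("*.c"):
        text = src.read_text(errors="ignore")
        for name, regex in CONFIG.items():
            counts[name] += len(re.findall(regex, text))
    return counts

def evolution(versions):
    """Build a per-pattern history across versions, ready for plotting/statistics."""
    history = {name: [] for name in CONFIG}
    for version_dir in versions:
        counts = count_patterns(version_dir)
        for name in CONFIG:
            history[name].append(counts[name])
    return history

# Example (hypothetical local checkouts of two versions):
# print(evolution([Path("project-1.0"), Path("project-1.1")]))
```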

  11. 48 CFR 352.239-70 - Standard for security configurations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Standard for security... operating system patch level and anti-virus software level. Note: FDCC is applicable to all computing... applications operated on behalf of HHS are fully functional and operate correctly on systems configured in...

  12. 7 Processes that Enable NASA Software Engineering Technologies: Value-Added Process Engineering

    Science.gov (United States)

    Housch, Helen; Godfrey, Sally

    2011-01-01

    The presentation reviews Agency process requirements and the purpose, benefits, and experiences of seven software engineering processes. The processes include: product integration, configuration management, verification, software assurance, measurement and analysis, requirements management, and planning and monitoring.

  13. Seven Processes that Enable NASA Software Engineering Technologies

    Science.gov (United States)

    Housch, Helen; Godfrey, Sally

    2011-01-01

    This slide presentation reviews seven processes that NASA uses to ensure that software is developed, acquired and maintained as specified in the NPR 7150.2A requirement. The requirement is to ensure that all software be appraised for the Capability Maturity Model Integration (CMMI). The enumerated processes are: (7) Product Integration, (6) Configuration Management, (5) Verification, (4) Software Assurance, (3) Measurement and Analysis, (2) Requirements Management and (1) Planning & Monitoring. Each of these is described and the group(s) that are responsible is described.

  14. Expected and Realized Costs and Benefits from Implementing Product Configuration Systems

    DEFF Research Database (Denmark)

    Edwards, Kasper

    2010-01-01

    Product configuration systems (PCS) are a technology well suited for mass customization and support the task of configuring the product to the individual customer’s needs. PCS are at the same time complex software systems that may be tailored to solve a variety of problems for a firm, e.g. supporting the quotation process or validating the structure of a product. This paper reports findings from a study of 12 Danish firms, which at the time of the study have implemented or are in the process of implementing product configuration systems. 12 costs and 12 benefits are identified in literature...... organization. It is observed that product configuration projects are treated as simple technical projects although they should be regarded as organizational change projects....

  15. The application of formal software engineering methods to the unattended and remote monitoring software suite at Los Alamos National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Determan, John Clifford [Los Alamos National Laboratory; Longo, Joseph F [Los Alamos National Laboratory; Michel, Kelly D [Los Alamos National Laboratory

    2009-01-01

    The Unattended and Remote Monitoring (UNARM) system is a collection of specialized hardware and software used by the International Atomic Energy Agency (IAEA) to institute nuclear safeguards at many nuclear facilities around the world. The hardware consists of detectors, instruments, and networked computers for acquiring various forms of data, including but not limited to radiation data, global position coordinates, camera images, isotopic data, and operator declarations. The software provides two primary functions: the secure and reliable collection of this data from the instruments and the ability to perform an integrated review and analysis of the disparate data sources. Several years ago the team responsible for maintaining the software portion of the UNARM system began the process of formalizing its operations. These formal operations include a configuration management system, a change control board, an issue tracking system, and extensive formal testing, for both functionality and reliability. Functionality is tested with formal test cases chosen to fully represent the data types and methods of analysis that will be commonly encountered. Reliability is tested with iterative, concurrent testing where up to five analyses are executed simultaneously for thousands of cycles. Iterative concurrent testing helps ensure that there are no resource conflicts or leaks when multiple system components are in use simultaneously. The goal of this work is to provide a high quality, reliable product, commensurate with the criticality of the application. Testing results will be presented that demonstrate that this goal has been achieved and the impact of the introduction of a formal software engineering framework to the UNARM product will be presented.
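
    The iterative, concurrent reliability testing described above (up to five analyses run simultaneously for thousands of cycles) can be sketched as below. The analysis function and the analysis names are placeholders, not the UNARM review code.

```python
import concurrent.futures
import random
import time

def run_analysis(kind: str) -> str:
    """Placeholder for one integrated-review analysis (radiation, camera, GPS, ...)."""
    time.sleep(random.uniform(0.001, 0.005))   # simulate I/O and computation
    return f"{kind}: ok"

ANALYSES = ["radiation", "camera", "gps", "isotopic", "declarations"]

def iterative_concurrent_test(cycles: int = 1000, width: int = 5) -> int:
    """Run `width` analyses at once, repeated for `cycles` iterations,
    to flush out resource conflicts and leaks under concurrent use."""
    failures = 0
    with concurrent.futures.ThreadPoolExecutor(max_workers=width) as pool:
        for _ in range(cycles):
            futures = [pool.submit(run_analysis, a) for a in ANALYSES[:width]]
            for future in concurrent.futures.as_completed(futures):
                try:
                    future.result()
                except Exception:
                    failures += 1
    return failures

if __name__ == "__main__":
    print("failed analyses:", iterative_concurrent_test(cycles=50))
```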

  16. The MUSOS (MUsic SOftware System) Toolkit: A computer-based, open source application for testing memory for melodies.

    Science.gov (United States)

    Rainsford, M; Palmer, M A; Paine, G

    2018-04-01

    Despite numerous innovative studies, rates of replication in the field of music psychology are extremely low (Frieler et al., 2013). Two key methodological challenges affecting researchers wishing to administer and reproduce studies in music cognition are the difficulty of measuring musical responses, particularly when conducting free-recall studies, and access to a reliable set of novel stimuli unrestricted by copyright or licensing issues. In this article, we propose a solution for these challenges in computer-based administration. We present a computer-based application for testing memory for melodies. Created using the software Max/MSP (Cycling '74, 2014a), the MUSOS (Music Software System) Toolkit uses a simple modular framework configurable for testing common paradigms such as recall, old-new recognition, and stem completion. The program is accompanied by a stimulus set of 156 novel, copyright-free melodies, in audio and Max/MSP file formats. Two pilot tests were conducted to establish the properties of the accompanying stimulus set that are relevant to music cognition and general memory research. By using this software, a researcher without specialist musical training may administer and accurately measure responses from common paradigms used in the study of memory for music.

  17. Rapid Software Development for Experiment Control at OPAL

    International Nuclear Information System (INIS)

    Hathaway, P.V.; Lam, Tony; Franceschini, Ferdi; Hauser, Nick; Rayner, Hugh

    2005-01-01

    Full text: ANSTO is undertaking the parallel development of instrument control and graphical experiment interface software for seven neutron beam instruments at OPAL. Each instrument poses several challenges for a common system solution, including custom detector interfaces, a range of motion and beamline optics schema, and a spectrum of online data reduction requirements. To provide a superior system with the least development effort, the computing team have adopted proven, configurable, server-based control software (SICS) [1], a highly Integrated Scientific Experimental Environment (GumTree) [2], and industry-standard database management systems. The resulting graphical interfaces allow operation in a familiar experiment domain, with monitoring of data and parameters independent of control system specifics. GumTree presents the experimenter with a consistent interface for experiment management, instrument control and data reduction tasks. The facility instrument scientists can easily reconfigure instruments and add ancillaries. The user community can expect a reduced learning curve for performing each experiment. GumTree can be installed anywhere for pre-experiment familiarisation, postprocessing of acquired data sets, and integration with third party analysis tools. Instrument scientists are seeing faster software development iterations and have a solid basis to prepare for the next suite of instruments. [1] SICS from PSI (lns00.psi.ch). [2] GumTree (gumtree.sourceforge.net), new site: http://gumtree.sourceforge.net/wiki/index.php/Main_Page

  18. Heterodyne grating interferometer based on a quasi-common-optical-path configuration for a two-degrees-of-freedom straightness measurement

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Ju-Yi; Hsieh, Hung-Lin; Lerondel, Gilles; Deturche, Regis; Lu, Mini-Pei; Chen, Jyh-Chen

    2011-03-20

    We present a heterodyne grating interferometer based on a quasi-common-optical-path (QCOP) design for a two-degrees-of-freedom (DOF) straightness measurement. Two half-wave plates are utilized to rotate the polarizations of two orthogonally polarized beams. The grating movement can be calculated by measuring the phase difference variation in each axis. The experimental results demonstrate that our method has the ability to measure two-DOF straightness and still maintain high system stability. The proposed and demonstrated method, which relies on heterodyne interferometric phase measurement combined with the QCOP configuration, has the advantages of high measurement resolution, relatively straightforward operation, and high system stability.

  19. Large scale and performance tests of the ATLAS online software

    International Nuclear Information System (INIS)

    Alexandrov; Kotov, V.; Mineev, M.; Roumiantsev, V.; Wolters, H.; Amorim, A.; Pedro, L.; Ribeiro, A.; Badescu, E.; Caprini, M.; Burckhart-Chromek, D.; Dobson, M.; Jones, R.; Kazarov, A.; Kolos, S.; Liko, D.; Lucio, L.; Mapelli, L.; Nassiakou, M.; Schweiger, D.; Soloviev, I.; Hart, R.; Ryabov, Y.; Moneta, L.

    2001-01-01

    One of the sub-systems of the Trigger/DAQ system of the future ATLAS experiment is the Online Software system. It encompasses the functionality needed to configure, control and monitor the DAQ. Its architecture is based on a component structure described in the ATLAS Trigger/DAQ technical proposal. Regular integration tests ensure its smooth operation in test beam setups during its evolutionary development towards the final ATLAS online system. Feedback is received and returned into the development process. Studies of the system behavior have been performed on a set of up to 111 PCs on a configuration which is getting closer to the final size. Large scale and performance tests of the integrated system were performed on this setup with emphasis on investigating the aspects of the inter-dependence of the components and the performance of the communication software. Of particular interest were the run control state transitions in various configurations of the run control hierarchy. For the purpose of the tests, the software from other Trigger/DAQ sub-systems has been emulated. The authors present a brief overview of the online system structure, its components and the large scale integration tests and their results

  20. Configurable data and CAMAC hardware representations for implementation of the SPHERE DAQ and offline systems

    International Nuclear Information System (INIS)

    Isupov, A.Yu.

    2001-01-01

    An implementation of a configurable representation of experimental data for use in the DAQ and offline systems of the SPHERE setup at the LHE, JINR is described. A software scheme for the configurable description of the SPHERE CAMAC hardware, intended for an online data acquisition (DAQ) implementation based on the qdpb system, is also presented

  1. Supporting Tablet Configuration, Tracking, and Infection Control Practices in Digital Health Interventions: Study Protocol.

    Science.gov (United States)

    Furberg, Robert D; Ortiz, Alexa M; Zulkiewicz, Brittany A; Hudson, Jordan P; Taylor, Olivia M; Lewis, Megan A

    2016-06-27

    Tablet-based health care interventions have the potential to encourage patient care in a timelier manner, allow physicians convenient access to patient records, and provide an improved method for patient education. However, along with the continued adoption of tablet technologies, there is a concomitant need to develop protocols focusing on the configuration, management, and maintenance of these devices within the health care setting to support the conduct of clinical research. The objective was to develop three protocols to support tablet configuration, tablet management, and tablet maintenance. The Configurator software, Tile technology, and current infection control recommendations were employed to develop three distinct protocols for tablet-based digital health interventions. Configurator is a mobile device management software specifically for iPhone operating system (iOS) devices. The capabilities and current applications of Configurator were reviewed and used to develop the protocol to support device configuration. Tile is a tracking tag associated with a free mobile app available for iOS and Android devices. The features associated with Tile were evaluated and used to develop the Tile protocol to support tablet management. Furthermore, current recommendations on preventing health care-related infections were reviewed to develop the infection control protocol to support tablet maintenance. This article provides three protocols: the Configurator protocol, the Tile protocol, and the infection control protocol. These protocols can help to ensure consistent implementation of tablet-based interventions, enhance fidelity when employing tablets for research purposes, and serve as a guide for tablet deployments within clinical settings.

  2. GUIdock: Using Docker Containers with a Common Graphics User Interface to Address the Reproducibility of Research.

    Directory of Open Access Journals (Sweden)

    Ling-Hong Hung

    Full Text Available Reproducibility is vital in science. For complex computational methods, it is often necessary, not just to recreate the code, but also the software and hardware environment to reproduce results. Virtual machines, and container software such as Docker, make it possible to reproduce the exact environment regardless of the underlying hardware and operating system. However, workflows that use Graphical User Interfaces (GUIs) remain difficult to replicate on different host systems as there is no high level graphical software layer common to all platforms. GUIdock allows for the facile distribution of a systems biology application along with its graphics environment. Complex graphics based workflows, ubiquitous in systems biology, can now be easily exported and reproduced on many different platforms. GUIdock uses Docker, an open source project that provides a container with only the absolutely necessary software dependencies and configures a common X Windows (X11) graphic interface on Linux, Macintosh and Windows platforms. As proof of concept, we present a Docker package that contains a Bioconductor application written in R and C++ called networkBMA for gene network inference. Our package also includes Cytoscape, a java-based platform with a graphical user interface for visualizing and analyzing gene networks, and the CyNetworkBMA app, a Cytoscape app that allows the use of networkBMA via the user-friendly Cytoscape interface.

  3. GUIdock: Using Docker Containers with a Common Graphics User Interface to Address the Reproducibility of Research.

    Science.gov (United States)

    Hung, Ling-Hong; Kristiyanto, Daniel; Lee, Sung Bong; Yeung, Ka Yee

    2016-01-01

    Reproducibility is vital in science. For complex computational methods, it is often necessary, not just to recreate the code, but also the software and hardware environment to reproduce results. Virtual machines, and container software such as Docker, make it possible to reproduce the exact environment regardless of the underlying hardware and operating system. However, workflows that use Graphical User Interfaces (GUIs) remain difficult to replicate on different host systems as there is no high level graphical software layer common to all platforms. GUIdock allows for the facile distribution of a systems biology application along with its graphics environment. Complex graphics based workflows, ubiquitous in systems biology, can now be easily exported and reproduced on many different platforms. GUIdock uses Docker, an open source project that provides a container with only the absolutely necessary software dependencies and configures a common X Windows (X11) graphic interface on Linux, Macintosh and Windows platforms. As proof of concept, we present a Docker package that contains a Bioconductor application written in R and C++ called networkBMA for gene network inference. Our package also includes Cytoscape, a java-based platform with a graphical user interface for visualizing and analyzing gene networks, and the CyNetworkBMA app, a Cytoscape app that allows the use of networkBMA via the user-friendly Cytoscape interface.

  4. Strategies for successful software development risk management

    Directory of Open Access Journals (Sweden)

    Marija Boban

    2003-01-01

    Full Text Available Nowadays, software is becoming a major part of enterprise business. Software development is an activity connected with advanced technology and a high level of knowledge. Risks on software development projects must be successfully mitigated to produce successful software systems. Lack of a defined approach to risk management is one of the common causes of project failures. To improve a project's chances of success, this work investigates common risk impact areas to provide a foundation that can be used to define a common approach to software risk management. Based on typical risk impact areas on software development projects, we propose three risk management strategies suitable for a broad range of enterprises and software development projects with different amounts of connected risk. The proposed strategies define activities that should be performed for successful risk management, ones that will enable software development projects to perceive risks as soon as possible and to solve problems connected with risk materialization. We also propose a risk-based approach to software development planning and risk management as an attempt to address and retire the highest-impact risks as early as possible in the development process. The proposed strategies should improve risk management on software development projects and help create a successful software solution.

  5. Balancing technical and regulatory concerns related to testing and control of performance assessment software

    International Nuclear Information System (INIS)

    Seitz, R.R.; Matthews, S.D.; Kostelnik, K.M.

    1990-01-01

    What activities are required to assure that a performance assessment (PA) computer code operates as it is intended? Answers to this question will vary depending on the individual's area of expertise. Different perspectives on testing and control of PA software are discussed based on interpretations of the testing and control process associated with the different parties involved. This discussion leads into the presentation of a general approach to software testing and control that addresses regulatory requirements. Finally, the need for balance between regulatory and scientific concerns is illustrated through lessons learned in previous implementations of software testing and control programs. Configuration control and software testing are required to provide assurance that a computer code performs as intended. Configuration control provides traceability and reproducibility of results produced with PA software and provides a system to assure that users have access to the current version of the software. Software testing is conducted to assure that the computer code has been written properly, solution techniques have been properly implemented, and the software is capable of representing the behavior of the specific system to be modeled. Comprehensive software testing includes: software analysis, verification testing, benchmark testing, and site-specific calibration/validation testing

  6. Identification of subsurface layer with Wenner-Schlumberger arrays configuration geoelectrical method

    Science.gov (United States)

    Jamaluddin; Prasetyawati Umar, Emi

    2018-02-01

    One of the measurement methods used to investigate the condition of the subsurface is the geoelectric method. This research uses the Wenner-Schlumberger array configuration of the geoelectrical method, a resistivity mapping technique commonly known as 2D profiling, in order to identify lateral and vertical anomalies in material resistivity. The 2D resistivity cross section is obtained from data processing in the Res2Dinv software. The data were acquired along a 70 m line using the Wenner-Schlumberger configuration with electrodes spaced 5 m apart. The approximate resistivity values obtained from the data processing ranged from 1000 to 1548 Ωm, with an iteration error of 87.9%. Based on the geological map of the Ujung Pandang sheet, the research location is an alluvium and coastal deposit area with grains in the form of gravel, sand, clay, mud, and coral limestone. Thus, by observing and analyzing the variation of the resistivity cross-section from the inversion data, there is a region (a) showing resistivity values ranging from 0.1 to 0.2 Ωm, which is estimated to be salt-water intrusion based on the resistivity table of Earth materials, and a region (b) which is a mixture of sand and clay material with resistivity values between 1 and 1000 Ωm.
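
    For reference, the apparent resistivity behind such a 2D profile is obtained from the measured potential difference and injected current through the array's geometric factor. A minimal sketch, assuming the standard Wenner-Schlumberger factor k = π·n·(n+1)·a, is given below; the measurement values in the example are made up and are not from this survey.

```python
import math

def wenner_schlumberger_k(a: float, n: int) -> float:
    """Geometric factor for a Wenner-Schlumberger array with inner electrode
    spacing a (metres) and separation level n (standard textbook expression)."""
    return math.pi * n * (n + 1) * a

def apparent_resistivity(delta_v: float, current: float, a: float, n: int) -> float:
    """Apparent resistivity (ohm-m) from measured potential difference (V)
    and injected current (A)."""
    return wenner_schlumberger_k(a, n) * delta_v / current

# Hypothetical reading: 5 m electrode spacing, level n=2, 120 mV at 0.5 A.
print(round(apparent_resistivity(0.120, 0.5, a=5.0, n=2), 1), "ohm-m")
```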

  7. Safety critical software development qualification

    International Nuclear Information System (INIS)

    Marron, J. E.

    2006-01-01

    With the increasing use of digital systems in control applications, customers must acquire appropriate expectations for software development and quality assurance procedures. Purchasers and users of digital systems need to understand the benefits to the supplier of effective quality systems. These systems consist not only of procedures but also of tools that enable automation. Without the use of automation, quality cannot be assured. A software and systems quality program starts with the documents you are very familiar with. But these documents must define more than the final system. They must address specific development environment characteristics and testing capabilities. Starting with the RFP, some of the items that should be introduced are Software Configuration Management, regression testing and defect tracking. The digital system customer is in the best position to enforce the use of software and systems quality programs by including them in project requirements as early as the Purchase Order. The customer's understanding of the full scope and implementation of a software quality program is essential to achieving the quality necessary in nuclear projects, and, incidentally, completing those projects on schedule. (authors)

  8. Software change control in the Sizewell B ISCO

    International Nuclear Information System (INIS)

    Johnson, A.

    1997-01-01

    Central to the control and instrumentation system of the Sizewell B nuclear power plant is a data acquisition and control system based on a distributed network of several hundred microprocessors. The system has been integrated into a single functional entity, and software modifications affecting one part of it have an effect on the other parts. The software modification and configuration processes are therefore kept as similar as possible. (A.K.)

  9. Software design space exploration for exascale combustion co-design

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Cy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Unat, Didem [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lijewski, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Zhang, Weiqun [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bell, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Shalf, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2013-09-26

    The design of hardware for next-generation exascale computing systems will require a deep understanding of how software optimizations impact hardware design trade-offs. In order to characterize how co-tuning hardware and software parameters affects the performance of combustion simulation codes, we created ExaSAT, a compiler-driven static analysis and performance modeling framework. Our framework can evaluate hundreds of hardware/software configurations in seconds, providing an essential speed advantage over simulators and dynamic analysis techniques during the co-design process. Our analytic performance model shows that advanced code transformations, such as cache blocking and loop fusion, can have a significant impact on choices for cache and memory architecture. Our modeling helped us identify tuned configurations that achieve a 90% reduction in memory traffic, which could significantly improve performance and reduce energy consumption. These techniques will also be useful for the development of advanced programming models and runtimes, which must reason about these optimizations to deliver better performance and energy efficiency.
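
    A toy analogue of such co-design space exploration, sweeping hardware and software parameters through a simple analytic memory-traffic model, is sketched below. The model, coefficients and parameter ranges are invented for illustration and are far simpler than ExaSAT's compiler-driven analysis.

```python
from itertools import product

def memory_traffic_gb(grid_points, bytes_per_point, cache_mb, blocking, fusion):
    """Very crude analytic model: software optimizations cut redundant sweeps,
    and larger caches capture more reuse. All coefficients are illustrative."""
    sweeps = 4.0
    if fusion:
        sweeps *= 0.6                            # fused loops read shared arrays once
    if blocking:
        sweeps *= max(0.3, 8.0 / cache_mb)       # blocking helps more with big caches
    return grid_points * bytes_per_point * sweeps / 1e9

configs = product([16, 32, 64],          # cache size (MB)   - hardware knob
                  [False, True],         # cache blocking    - software knob
                  [False, True])         # loop fusion       - software knob

best = min(configs, key=lambda c: memory_traffic_gb(256**3, 200, *c))
print("lowest-traffic configuration (cache MB, blocking, fusion):", best)
```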

  10. Future Scenarios for Software-Defined Metro and Access Networks and Software-Defined Photonics

    Directory of Open Access Journals (Sweden)

    Tommaso Muciaccia

    2017-01-01

    Full Text Available In recent years, architectures, devices, and components in telecommunication networks have been challenged by evolutionary and revolutionary factors which are drastically changing the traffic features. Most of these changes imply the need for major re-configurability and programmability not only in data-centers and core networks, but also in the metro-access segment. In a wide variety of contexts, this necessity has been addressed by the proposed introduction of the innovative paradigm of software-defined networks (SDNs). Several solutions inspired by the SDN model have been recently proposed also for metro and access networks, where the adoption of a new generation of software-defined reconfigurable integrated photonic devices is highly desirable. In this paper, we review the possible future application scenarios for software-defined metro and access networks and software-defined photonics (SDP), on the basis of analytics, statistics, and surveys. This work describes the reasons underpinning the presented radical change of paradigm and summarizes the most significant solutions proposed in literature, with a specific emphasis on physical-layer reconfigurable networks and a focus on both architectures and devices.

  11. A Software Data Transport Framework for Trigger Applications on Clusters

    CERN Document Server

    Steinbeck, T M; Tilsner, H; Steinbeck, Timm M.; Lindenstruth, Volker; Tilsner, Heinz

    2003-01-01

    In the future ALICE heavy ion experiment at CERN's Large Hadron Collider, input data rates of up to 25 GB/s have to be handled by the High Level Trigger (HLT) system, which has to scale them down to at most 1.25 GB/s before being written to permanent storage. The HLT system that is being designed to cope with these data rates consists of a large PC cluster, up to the order of 1000 nodes, connected by a fast network. For the software that will run on these nodes, a flexible data transport and distribution software framework has been developed. This framework consists of a set of separate components that can be connected via a common interface, allowing different configurations of the HLT to be constructed, which are even changeable at runtime. To ensure fault-tolerant operation of the HLT, the framework includes a basic fail-over mechanism that will be further expanded in the future, utilizing the runtime reconnection feature of the framework's component interface. First performance tests show very promising res...
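
    The component/common-interface idea described above can be pictured with the short sketch below, where processing stages share one connect/process interface and can be rewired at runtime. The class names and methods are illustrative assumptions, not the framework's actual API.

```python
class Component:
    """Common interface: every stage accepts a data block, processes it, and
    pushes the result to whatever component it is currently connected to."""
    def __init__(self, name):
        self.name, self.downstream = name, None

    def connect(self, other):          # connections can be changed at runtime
        self.downstream = other
        return other

    def process(self, block):          # overridden in concrete components
        return block

    def push(self, block):
        out = self.process(block)
        if self.downstream:
            self.downstream.push(out)

class Filter(Component):
    def process(self, block):
        return [x for x in block if x > 0]       # e.g. zero-suppression

class Merger(Component):
    def process(self, block):
        out = sorted(block)
        print(self.name, "received", out)
        return out

source, filt, merge = Component("source"), Filter("filter"), Merger("merger")
source.connect(filt).connect(merge)
source.push([3, -1, 2])          # flows source -> filter -> merger
source.connect(merge)            # runtime re-configuration: bypass the filter
source.push([3, -1, 2])
```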

  12. Using containers with ATLAS offline software

    CERN Document Server

    Vogel, Marcelo; The ATLAS collaboration

    2017-01-01

    This paper describes the deployment of ATLAS offline software in containers for software development. For this we are using Docker, which is a lightweight virtualization technology that encapsulates a piece of software inside a complete file system. The deployment of offline releases via containers removes the strict requirement of compatibility between the runtime environment needed for job execution and the configuration of worker nodes at computing sites. If these two are decoupled from each other, sites can upgrade their nodes whenever and however they see fit. In this work, ATLAS software is distributed in containers either via the CernVM File System (CVMFS) or by means of a full ATLAS offline release installation. In software development, separating the build and runtime environment from the development environment allows users to take advantage of many modern code development tools that may not be available in production runtime setups like SLC6. It also frees developers from depending on resources lik...

  13. REVEAL: Software Documentation and Platform Migration

    Science.gov (United States)

    Wilson, Michael A.; Veibell, Victoir T.

    2011-01-01

    The Research Environment for Vehicle Embedded Analysis on Linux (REVEAL) is reconfigurable data acquisition software designed for network-distributed test and measurement applications. In development since 2001, it has been successfully demonstrated in support of a number of actual missions within NASA's Suborbital Science Program. Improvements to software configuration control were needed to properly support both an ongoing transition to operational status and continued evolution of REVEAL capabilities. For this reason the project described in this report targets REVEAL software source documentation and deployment of the software on a small set of hardware platforms different from what is currently used in the baseline system implementation. This presentation specifically describes the actions taken over a ten week period by two undergraduate student interns and serves as an overview of the content of the final report for that internship.

  14. Prepare for X-Win32 - the new X11 server software for Windows computers

    CERN Multimedia

    IT Department

    2011-01-01

    Starnet X-Win32 will replace Exceed as the X11 Server software on Windows computers by February 2012. X11 Server software allows a Windows user to have a graphical user interface on a remote Linux server. This change, initially motivated by a significant change of license conditions for Exceed, brings an easier integration of Windows and Linux logon mechanisms. At the same time, X-Win32 addresses the common use cases while providing a more intuitive configuration interface. CERN Predefined Connections will be available as before. They offer an easy way of starting applications on LXPLUS using PuTTY or starting the KDE, GNOME or ICE window managers. Since X-Win32 is better integrated with SSH and CERN Kerberos compared to Exceed, it is much simpler to set up secure access to Linux services. The decision to choose X-Win32 as the new X11 software resulted from an evaluation that involved various user communities and support teams. More information, including the documented use cases, is available at https://...

  15. Hardware support for software controlled fast reconfiguration of performance counters

    Science.gov (United States)

    Salapura, Valentina; Wisniewski, Robert W.

    2013-06-18

    Hardware support for software controlled reconfiguration of performance counters may include a plurality of performance counters collecting one or more counts of one or more selected activities. A storage element stores data value representing a time interval, and a timer element reads the data value and detects expiration of the time interval based on the data value and generates a signal. A plurality of configuration registers stores a set of performance counter configurations. A state machine receives the signal and selects a configuration register from the plurality of configuration registers for reconfiguring the one or more performance counters.
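
    The mechanism described above (a stored interval, a timer whose expiry drives a state machine, and a set of pre-loaded configuration registers selecting what the counters track) can be mimicked in a few lines of Python. The event names and register contents are invented for the example; real hardware would do this without software involvement.

```python
import itertools

class PerfCounterUnit:
    """Software model of timer-driven performance-counter reconfiguration."""

    # Pre-loaded configuration registers: which events the counters track.
    CONFIG_REGISTERS = [
        ("l1_misses", "branch_mispredicts"),
        ("flops", "stalls"),
        ("network_bytes", "dram_reads"),
    ]

    def __init__(self, interval_ticks):
        self.interval = interval_ticks            # storage element holding the interval
        self.ticks_left = interval_ticks          # timer element
        self._next_cfg = itertools.cycle(self.CONFIG_REGISTERS)
        self.active = next(self._next_cfg)        # events currently being counted
        self.counts = {}

    def tick(self, observed_events):
        """One timer tick: count selected events, rotate the config on expiry."""
        for event in observed_events:
            if event in self.active:
                self.counts[event] = self.counts.get(event, 0) + 1
        self.ticks_left -= 1
        if self.ticks_left == 0:                  # timer expired -> state machine
            self.active = next(self._next_cfg)    # select next configuration register
            self.ticks_left = self.interval

unit = PerfCounterUnit(interval_ticks=2)
for events in (["l1_misses"], ["flops"], ["flops"], ["dram_reads"]):
    unit.tick(events)
print(unit.counts)   # only events selected during their interval are recorded
```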

  16. Computer games and software engineering

    CERN Document Server

    Cooper, Kendra M L

    2015-01-01

    Computer games represent a significant software application domain for innovative research in software engineering techniques and technologies. Game developers, whether focusing on entertainment-market opportunities or game-based applications in non-entertainment domains, thus share a common interest with software engineers and developers on how to best engineer game software.Featuring contributions from leading experts in software engineering, the book provides a comprehensive introduction to computer game software development that includes its history as well as emerging research on the inte

  17. Restrictions on Software for Personal and Professional Use

    CERN Multimedia

    2004-01-01

    A growing number of computer security incidents detected at CERN are due to additional software installed for personal and professional use. As a consequence, the smooth operation of CERN is put at risk and often many hours are lost solving the problems. To reduce this security risk, installation and/or use of software on CERN's computing and network infrastructure needs to be restricted. Therefore: Do NOT install software for personal use. Do NOT install 'free' or other software unless you have the expertise to configure and maintain it securely. Please comply with these rules to keep our computer systems safe. Further explanation of these restrictions is at http://cern.ch/security/software-restrictions. Restricted software, known to cause security and/or network problems (e.g. KaZaA and other P2P/Peer-to-Peer file sharing applications, Skype P2P telephony software, ICQ, VNC, ...), is listed at: http://cern.ch/security/software-restrictions/list

  18. Hardware And Software Architectures For Reconfigurable Time-Critical Control Tasks

    Directory of Open Access Journals (Sweden)

    Adam Piłat

    2007-01-01

    Full Text Available The most popular configuration of controlled laboratory test-rigs is the personal computer (PC) equipped with an I/O board. The dedicated software components allow a wide range of user-defined tasks to be conducted. The typical configuration's functionality can be customized by PC hardware components and their programmable reconfiguration. The next step in automatic control system design is the embedded solution. Usually, the design process of the embedded control system is supported by high-level software. The dedicated programming tools support the multitasking property of the microcontroller by selection of different sampling frequencies for the algorithm blocks. In this case a multi-layer and multitasking control strategy can be realized on the chip. The proposed solutions implement the rapid prototyping approach. The available toolkits and device drivers integrate the system-level design environment and the real-time application software, transferring the functionality of MATLAB/Simulink programs to the PC or microcontroller application environment.
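
    The multitasking-by-sampling-frequency idea mentioned above, with different algorithm blocks executed at different rates, is sketched below in Python. The block functions and rates are placeholders, and a real embedded implementation would use timer interrupts rather than a software loop.

```python
# Minimal multi-rate scheduler: each control block runs at its own sampling period.
def current_controller():   print("current loop   (fast rate)")
def position_controller():  print("position loop  (medium rate)")
def supervisor():           print("supervision    (slow rate)")

# (block, period expressed in base ticks) - placeholder rates for illustration.
TASKS = [(current_controller, 1), (position_controller, 10), (supervisor, 100)]

def run(ticks):
    """Emulate `ticks` base periods; a block fires when its period divides the tick."""
    for tick in range(ticks):
        for block, period in TASKS:
            if tick % period == 0:
                block()

run(20)   # current loop runs every tick, position loop twice, supervisor once
```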

  19. Gammasphere software development

    International Nuclear Information System (INIS)

    Piercey, R.B.

    1993-01-01

    Activities of the nuclear physics group are described. Progress was made in organizing the Gammasphere Software Working Group, establishing a nuclear computing facility, participating in software development at Lawrence Berkeley, developing a common data file format, and adapting the ORNL UPAK software to run at Gammasphere. A universal histogram object was developed that defines a file format and provides for an object-oriented programming model. An automated liquid nitrogen fill system was developed for Gammasphere (110 Ge detectors comprise the sphere)

  20. 78 FR 47014 - Configuration Management Plans for Digital Computer Software Used in Safety Systems of Nuclear...

    Science.gov (United States)

    2013-08-02

    .... ML12354A524. 3. Revision 1 of RG 1.170, ``Test Documentation for Digital Computer Software used in Safety... is in ADAMS at Accession No. ML12354A531. 4. Revision 1 of RG 1.171, ``Software Unit Testing for... Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory Commission. ACTION...

  1. A Single Tower Configuration of the Modular Gamma Box Counter System - 13392

    Energy Technology Data Exchange (ETDEWEB)

    Morris, K.; Nakazawa, D.; Francalangia, J.; Gonzalez, H. [Canberra Industries Inc., 800 Research Parkway, Meriden, CT, 06450 (United States)

    2013-07-01

    Canberra's Standard Gamma Box Counter System is designed to perform accurate quantitative assays of gamma emitting nuclides for a wide range of large containers including B-25 crates and ISO shipping containers. Using a modular building-block approach, the system offers tremendous flexibility for a variety of measurement situations with wide ranges of sample activities and throughput requirements, as well as the opportunity to modify the configuration for other applications at a later date. The typical configuration consists of two opposing towers each equipped with two high purity germanium detectors, and an automated container trolley. This paper presents a modified configuration, consisting of a single tower placed inside a measurement trailer with three detector assemblies, allowing for additional vertical segmentation as well as viewing a container outside the trailer through the trailer wall. An automatic liquid nitrogen fill system is supplied for each of the detectors. The use of a forklift to move the container for horizontal segmentation is accommodated by creating an additional operational and calibration set-up in the NDA 2000 software to allow the operator to rotate the container and assay the opposite side, achieving the same sensitivity as a comparable two-tower system. This Segmented Gamma Box Counter System retains the core technologies and design features of the standard configuration. The detector assemblies are shielded to minimize interference from environmental and plant background, and are collimated to provide segmentation of the container. The assembly positions can also be modified in height and distance from the container. The ISOCS calibration software provides for a flexible approach to providing the calibrations for a variety of measurement geometries. The NDA 2000 software provides seamless operation with the current configuration, handling the data acquisition and analysis. In this paper, an overview of this system is

  2. The effect of earthquake on architecture geometry with non-parallel system irregularity configuration

    Science.gov (United States)

    Teddy, Livian; Hardiman, Gagoek; Nuroji; Tudjono, Sri

    2017-12-01

    Indonesia is an area prone to earthquakes that may cause casualties and damage to buildings. Fatalities and injuries are largely caused not by the earthquake itself, but by building collapse. The collapse of a building results from the building's behaviour under the earthquake, and it depends on many factors, such as architectural design, geometry configuration of structural elements in horizontal and vertical plans, earthquake zone, geographical location (distance to the earthquake center), soil type, material quality, and construction quality. One of the geometry configurations that may lead to the collapse of a building is the irregular configuration of a non-parallel system. In accordance with FEMA-451B, an irregular configuration in a non-parallel system is defined to exist if the vertical lateral force-retaining elements are neither parallel nor symmetric with the main orthogonal axes of the earthquake-retaining axis system. Such a configuration may lead to torque, diagonal translation and local damage to buildings. It does not mean that a non-parallel irregular configuration should never be formed in architectural design; however, the designer must know the consequences of earthquake behaviour for buildings with an irregular configuration of a non-parallel system. The present research has the objective to identify earthquake behaviour in architectural geometry with an irregular configuration of a non-parallel system. The research was quantitative, using a simulation experimental method. It consisted of 5 models, where architectural data and model structure data were inputted and analyzed using the software SAP2000 in order to find out their performance, and ETAB2015 to determine the eccentricity that occurred. The output of the software analysis was tabulated, graphed, compared and analyzed with relevant theories. For strong earthquake zones, avoid designing buildings which wholly form an irregular configuration of a non-parallel system. If it is inevitable to design a

  3. Database Foundation For The Configuration Management Of The CERN Accelerator Controls Systems

    CERN Document Server

    Zaharieva, Z; Peryt, M

    2011-01-01

    The Controls Configuration Database (CCDB) and its interfaces have been developed over the last 25 years in order to become nowadays the basis for the Configuration Management of the Controls System for all accelerators at CERN. The CCDB contains data for all configuration items and their relationships, required for the correct functioning of the Controls System. The configuration items are quite heterogeneous, depicting different areas of the Controls System – ranging from 3000 Front-End Computers, 75 000 software devices allowing remote control of the accelerators, to valid states of the Accelerators Timing System. The article will describe the different areas of the CCDB, their interdependencies and the challenges to establish the data model for such a diverse configuration management database, serving a multitude of clients. The CCDB tracks the life of the configuration items by allowing their clear identification, triggering of change management processes as well as providing status accounting and aud...
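
    A heavily simplified sketch of the kind of relational model such a configuration database rests on is shown below, using SQLite for brevity: configuration items plus typed relationships between them. The table and column names are hypothetical and do not reproduce the actual CCDB schema.

```python
import sqlite3

# Illustrative-only schema: configuration items plus typed relationships between them.
DDL = """
CREATE TABLE config_item (
    id        INTEGER PRIMARY KEY,
    name      TEXT NOT NULL UNIQUE,
    item_type TEXT NOT NULL          -- e.g. 'front-end computer', 'software device'
);
CREATE TABLE item_relation (
    parent_id INTEGER REFERENCES config_item(id),
    child_id  INTEGER REFERENCES config_item(id),
    relation  TEXT NOT NULL,         -- e.g. 'hosts', 'controls'
    PRIMARY KEY (parent_id, child_id, relation)
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
conn.execute("INSERT INTO config_item VALUES (1, 'cfv-123-abc', 'front-end computer')")
conn.execute("INSERT INTO config_item VALUES (2, 'PowerConverter/42', 'software device')")
conn.execute("INSERT INTO item_relation VALUES (1, 2, 'hosts')")
for row in conn.execute(
    "SELECT p.name, r.relation, c.name FROM item_relation r "
    "JOIN config_item p ON p.id = r.parent_id "
    "JOIN config_item c ON c.id = r.child_id"
):
    print(row)
```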

  4. Three-phase multilevel inverter configuration for open-winding high power application

    DEFF Research Database (Denmark)

    Sanjeevikumar, Padmanaban; Blaabjerg, Frede; Wheeler, Patrick William

    2015-01-01

    This work exploits a new dual open-winding three-phase multilevel inverter configuration suitable for high power medium-voltage applications. The modular structure comprises a standard three-phase voltage source inverter (VSI) along with one additional bi-directional semiconductor device (MOSFET...... for implementation purpose. The proposed dual-inverter configuration generates multilevel outputs with benefits that include reduced THD and dv/dt in comparison to other dual-inverter topologies. A complete model of the multilevel ac drive is developed with simple MSCFM modulation in Matlab/PLECs numerical software...

  5. Software design specification and analysis (NuFDS) approach for the safety critical software based on programmable logic controller (PLC)

    International Nuclear Information System (INIS)

    Koo, Seo Ryong; Seong, Poong Hyun; Jung, Jin Yong; Choi, Seong Soo

    2004-01-01

    This paper introduces a software design specification and analysis technique for safety-critical systems based on the Programmable Logic Controller (PLC). During the software development phases, the design phase should perform an important role in connecting the requirements phase and the implementation phase, as a process of translating problem requirements into software structures. In this work, the Nuclear FBD-style Design Specification and analysis (NuFDS) approach was proposed. The NuFDS approach for nuclear Instrumentation and Control (I and C) software is suggested in a straightforward manner. It consists of four major specifications as follows: Database, Software Architecture, System Behavior, and PLC Hardware Configuration. Additionally, correctness, completeness, consistency, and traceability check techniques are also suggested for the formal design analysis in the NuFDS approach. In addition, for tool support, we are developing the NuSDS tool based on the NuFDS approach, which is a tool especially for software design specification in nuclear fields.

  6. CASE-BASED PRODUCT CONFIGURATION AND REUSE IN MASS CUSTOMIZATION

    Institute of Scientific and Technical Information of China (English)

    Wang Shiwei; Tan Jianrong; Zhang Shuyou; Wang Xin; He Chenqi

    2004-01-01

    The increasing complexity and size of configuration knowledge bases require the provision of advanced methods supporting the development of the actual configuration process and design reuse. A new framework to find a feasible and practical product configuration method in mass customization is presented. The basic idea of the approach is to integrate case-based reasoning (CBR) with a constraint satisfaction problem (CSP). The similarity measure between a crisp value and a range, which is common in case retrieval, is also given. Based on the configuration model, a product platform and customer needs, case adaptation is carried out with the repair-based algorithm. Lastly, the methodology is tested in the elevator configuration design domain.
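
    The record only states that a similarity measure between a crisp value and a range is defined; the exact formula is not reproduced here. The sketch below shows one plausible form of such a measure (full similarity inside the range, decaying linearly with distance outside it); the formula and the example figures are assumptions for illustration, not the measure from the paper.

```python
def crisp_range_similarity(value, lo, hi, tolerance):
    """Similarity in [0, 1] between a crisp value and a range [lo, hi].

    Assumed form: 1.0 inside the range, falling off linearly to 0.0
    once the value is more than `tolerance` away from the range.
    """
    if lo <= value <= hi:
        return 1.0
    distance = (lo - value) if value < lo else (value - hi)
    return max(0.0, 1.0 - distance / tolerance)

# Example: requested lifting capacity 1350 kg against a stored case covering 1000-1250 kg.
print(crisp_range_similarity(1350, 1000, 1250, tolerance=500))  # 0.8
```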

  7. Configuration Management for Wendelstein 7-X

    International Nuclear Information System (INIS)

    Brakel, R.; Eeten, P.v.; Hartmann, D.A.; Henkelmann, K.; Knauer, J.; Mueller, K.; Okkenga-Wolf, A.; Wenzel, U.

    2009-01-01

    A complex system like the large superconducting Wendelstein 7-X stellarator necessitates a dedicated organizational structure which assures permanent consistency between the requirements of its system specification and the performance attributes of all its components throughout its lifetime. This includes well-defined processes and centrally coordinated information structures. For this purpose the department Configuration Management (CM) has recently been established at W7-X. The detailed tasks of CM for W7-X are oriented along common CM standards and comprise configuration identification, change management, configuration status accounting and configuration verification. While the assembly of W7-X is proceeding, some components are still under procurement or even under design. Thus design changes and non-conformances may have a direct impact on the assembly process. Highest priority has therefore been assigned to efficient control of change and non-conformance processes which might delay the assembly schedule.

  8. Co-simulation of dynamic systems in parallel and serial model configurations

    International Nuclear Information System (INIS)

    Sweafford, Trevor; Yoon, Hwan Sik

    2013-01-01

    Recent advancements in simulation software and computation hardware make it realizable to simulate complex dynamic systems comprised of multiple submodels developed in different modeling languages. The so-called co-simulation enables one to study various aspects of a complex dynamic system with heterogeneous submodels in a cost-effective manner. Among several different model configurations for co-simulation, the synchronized parallel configuration is regarded to expedite the simulation process by simulating multiple submodels concurrently on a multi-core processor. In this paper, computational accuracy as well as computation time are studied for three different co-simulation frameworks: integrated, serial, and parallel. For this purpose, analytical evaluations of the three different methods are made using the explicit Euler method and then they are applied to two-DOF mass-spring systems. The results show that while the parallel simulation configuration produces the same accurate results as the integrated configuration, the results of the serial configuration show a slight deviation. It is also shown that the computation time can be reduced by running the simulation in the parallel configuration. Therefore, it can be concluded that the synchronized parallel simulation methodology is the best for both simulation accuracy and time efficiency.
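
    To make the comparison concrete, the sketch below integrates a two-DOF mass-spring system with the explicit Euler method in the three coupling styles named above: integrated and parallel (both blocks use the values exchanged at the start of the step), and serial (the second block already sees the first block's updated state). The spring constants, step size and coupling layout are illustrative assumptions, not the systems from the paper.

```python
# Two unit masses with grounded springs k1, k2 and a coupling spring kc (illustrative values).
k1 = k2 = kc = 1.0
h, steps = 0.001, 5000

def step_block(x, v, x_other, k):
    """Explicit Euler step of one mass, given the other mass's (exchanged) position."""
    a = -k * x - kc * (x - x_other)
    return x + h * v, v + h * a

def simulate(mode):
    x1, v1, x2, v2 = 1.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        if mode == "serial":        # Gauss-Seidel style: block 2 already sees the new x1
            x1, v1 = step_block(x1, v1, x2, k1)
            x2, v2 = step_block(x2, v2, x1, k2)
        else:                       # "integrated"/"parallel": both blocks use the old values
            x1_old = x1
            x1, v1 = step_block(x1, v1, x2, k1)
            x2, v2 = step_block(x2, v2, x1_old, k2)
    return x1, x2

for mode in ("integrated", "parallel", "serial"):
    print(mode, simulate(mode))     # serial shows a slight deviation from the other two
```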

  9. Using containers with ATLAS offline software

    CERN Document Server

    Vogel, Marcelo; The ATLAS collaboration; Heinrich, Lukas; Stewart, Graeme

    2017-01-01

    Title: Using containers with ATLAS offline software Marcelo Vogel, Bergische Universitaet Wuppertal Graeme Stewart, University of Glasgow Johannes Elmsheuser, Brookhaven National Laboratory Lukas Heinrich, New York University Abstract: This paper describes the deployment of ATLAS offline software in containers for software development and the use in production jobs on the grid - such as event generation, simulation, reconstruction and physics derivations - and in physics analysis. For this we are using Docker and Singularity, which are both lightweight virtualization technologies that encapsulate a piece of software inside a complete file system. The deployment of offline releases via containers removes the interdependence between the runtime environment needed for job execution and the configuration of a computing site’s worker nodes. Once the two are decoupled from each other, sites can upgrade their nodes whenever and however they see fit. Docker or Singularity will provide a uniform runtime environment fo...

  10. The software-cycle model for re-engineering and reuse

    Science.gov (United States)

    Bailey, John W.; Basili, Victor R.

    1992-01-01

    This paper reports on the progress of a study which will contribute to our ability to perform high-level, component-based programming by describing means to obtain useful components, methods for the configuration and integration of those components, and an underlying economic model of the costs and benefits associated with this approach to reuse. One goal of the study is to develop and demonstrate methods to recover reusable components from domain-specific software through a combination of tools, to perform the identification, extraction, and re-engineering of components, and domain experts, to direct the applications of those tools. A second goal of the study is to enable the reuse of those components by identifying techniques for configuring and recombining the re-engineered software. This component-recovery or software-cycle model addresses not only the selection and re-engineering of components, but also their recombination into new programs. Once a model of reuse activities has been developed, the quantification of the costs and benefits of various reuse options will enable the development of an adaptable economic model of reuse, which is the principal goal of the overall study. This paper reports on the conception of the software-cycle model and on several supporting techniques of software recovery, measurement, and reuse which will lead to the development of the desired economic model.

  11. Packaging of control system software

    International Nuclear Information System (INIS)

    Zagar, K.; Kobal, M.; Saje, N.; Zagar, A.; Sabjan, R.; Di Maio, F.; Stepanov, D.

    2012-01-01

    Control system software consists of several parts - the core of the control system, drivers for integration of devices, configuration for user interfaces, alarm system, etc. Once the software is developed and configured, it must be installed to computers where it runs. Usually, it is installed on an operating system whose services it needs, and also in some cases dynamically links with the libraries it provides. Operating system can be quite complex itself - for example, a typical Linux distribution consists of several thousand packages. To manage this complexity, we have decided to rely on Red Hat Package Management system (RPM) to package control system software, and also ensure it is properly installed (i.e., that dependencies are also installed, and that scripts are run after installation if any additional actions need to be performed). As dozens of RPM packages need to be prepared, we are reducing the amount of effort and improving consistency between packages through a Maven-based infrastructure that assists in packaging (e.g., automated generation of RPM SPEC files, including automated identification of dependencies). So far, we have used it to package EPICS, Control System Studio (CSS) and several device drivers. We perform extensive testing on Red Hat Enterprise Linux 5.5, but we have also verified that packaging works on CentOS and Scientific Linux. In this article, we describe in greater detail the systematic system of packaging we are using, and its particular application for the ITER CODAC Core System. (authors)
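
    The SPEC-file generation step mentioned above can be sketched very simply: given package metadata, emit a minimal RPM SPEC file. The template, field names and example package below are illustrative assumptions; the infrastructure described in the record is Maven-based and considerably more complete.

```python
# Minimal illustration of generating an RPM SPEC file from package metadata.
SPEC_TEMPLATE = """\
Name:           {name}
Version:        {version}
Release:        1%{{?dist}}
Summary:        {summary}
License:        {license}
Requires:       {requires}

%description
{summary}

%files
{files}
"""

def make_spec(meta):
    return SPEC_TEMPLATE.format(
        name=meta["name"],
        version=meta["version"],
        summary=meta["summary"],
        license=meta["license"],
        requires=", ".join(meta["requires"]),   # dependencies would be identified automatically upstream
        files="\n".join(meta["files"]),
    )

if __name__ == "__main__":
    print(make_spec({
        "name": "example-ioc-driver",                      # hypothetical package
        "version": "1.0.0",
        "summary": "Hypothetical device driver packaged for a control system",
        "license": "GPLv3",
        "requires": ["epics-base"],
        "files": ["/opt/control-system/drivers/example-ioc-driver"],
    }))
```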

  12. Self-Configuration and Self-Optimization Process in Heterogeneous Wireless Networks

    Directory of Open Access Journals (Sweden)

    Eduardo Camponogara

    2010-12-01

    Full Text Available Self-organization in Wireless Mesh Networks (WMN) is an emergent research area, which is becoming important due to the increasing number of nodes in a network. Consequently, the manual configuration of nodes is either impossible or highly costly. So it is desirable for the nodes to be able to configure themselves. In this paper, we propose an alternative architecture for self-organization of WMN based on the Optimized Link State Routing Protocol (OLSR) and the ad hoc on demand distance vector (AODV) routing protocols, as well as using the technology of software agents. We argue that the proposed self-optimization and self-configuration modules increase the throughput of the network, reduce transmission delay and network load, and decrease the traffic of HELLO messages as the network scales. By simulation analysis, we conclude that the self-optimization and self-configuration mechanisms can significantly improve the performance of the OLSR and AODV protocols in comparison to the baseline protocols analyzed.

  13. Self-Configuration and Self-Optimization Process in Heterogeneous Wireless Networks

    Science.gov (United States)

    Guardalben, Lucas; Villalba, Luis Javier García; Buiati, Fábio; Sobral, João Bosco Mangueira; Camponogara, Eduardo

    2011-01-01

    Self-organization in Wireless Mesh Networks (WMN) is an emergent research area, which is becoming important due to the increasing number of nodes in a network. Consequently, the manual configuration of nodes is either impossible or highly costly. So it is desirable for the nodes to be able to configure themselves. In this paper, we propose an alternative architecture for self-organization of WMN based on the Optimized Link State Routing Protocol (OLSR) and the ad hoc on demand distance vector (AODV) routing protocols, as well as using the technology of software agents. We argue that the proposed self-optimization and self-configuration modules increase the throughput of the network, reduce transmission delay and network load, and decrease the traffic of HELLO messages as the network scales. By simulation analysis, we conclude that the self-optimization and self-configuration mechanisms can significantly improve the performance of the OLSR and AODV protocols in comparison to the baseline protocols analyzed. PMID:22346584

  14. KAERI software verification and validation guideline for developing safety-critical software in digital I and C system of NPP

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jang Yeol; Lee, Jang Soo; Eom, Heung Seop

    1997-07-01

    This technical report presents a V and V guideline development methodology for safety-critical software in NPP safety systems. It presents the V and V guideline for the planning phase of the NPP safety system, in addition to critical safety items, for example, independence philosophy, software safety analysis concept, commercial off the shelf (COTS) software evaluation criteria, and inter-relationships with other safety assurance organizations, including the concepts of the existing industrial standards IEEE Std-1012 and IEEE Std-1059. This technical report includes the scope of the V and V guideline, the guideline framework as part of the acceptance criteria, V and V activities and task entrance as part of V and V activity and exit criteria, review and audit, testing and QA records of V and V material and configuration management, software verification and validation plan production etc., and safety-critical software V and V methodology. (author). 11 refs.

  15. KAERI software verification and validation guideline for developing safety-critical software in digital I and C system of NPP

    International Nuclear Information System (INIS)

    Kim, Jang Yeol; Lee, Jang Soo; Eom, Heung Seop.

    1997-07-01

    This technical report presents a V and V guideline development methodology for safety-critical software in NPP safety systems. It presents the V and V guideline for the planning phase of the NPP safety system, in addition to critical safety items, for example, independence philosophy, software safety analysis concept, commercial off the shelf (COTS) software evaluation criteria, and inter-relationships with other safety assurance organizations, including the concepts of the existing industrial standards IEEE Std-1012 and IEEE Std-1059. This technical report includes the scope of the V and V guideline, the guideline framework as part of the acceptance criteria, V and V activities and task entrance as part of V and V activity and exit criteria, review and audit, testing and QA records of V and V material and configuration management, software verification and validation plan production etc., and safety-critical software V and V methodology. (author). 11 refs

  16. gemcWeb: A Cloud Based Nuclear Physics Simulation Software

    Science.gov (United States)

    Markelon, Sam

    2017-09-01

    gemcWeb allows users to run nuclear physics simulations from the web. Being completely device agnostic, it lets scientists run simulations from anywhere with an Internet connection. Having a full user system, gemcWeb allows users to revisit and revise their projects, and to share configurations and results with collaborators. gemcWeb is based on the simulation software gemc, which is based on standard GEant4. gemcWeb requires no C++, gemc, or GEant4 knowledge. A simple but powerful GUI allows users to configure their project from geometries and configurations stored on the deployment server. Simulations are then run on the server, with results being posted to the user, and then securely stored. Python-based and open-source, the main version of gemcWeb is hosted internally at Jefferson National Laboratory and used by the CLAS12 and Electron-Ion Collider Project groups. However, as the software is open-source and hosted as a GitHub repository, an instance can be deployed on the open web, or on any institution's intranet. An instance can be configured to host experiments specific to an institution, and the code base can be modified by any individual or group. Special thanks to: Maurizio Ungaro, PhD., creator of gemc; Markus Diefenthaler, PhD., advisor; and Kyungseon Joo, PhD., advisor.

  17. Techniques and tools for software qualification in KNICS

    International Nuclear Information System (INIS)

    Cha, Kyung H.; Lee, Yeong J.; Cheon, Se W.; Kim, Jang Y.; Lee, Jang S.; Kwon, Kee C.

    2004-01-01

    This paper describes techniques and tools for qualifying safety software in the Korea Nuclear Instrumentation and Control System (KNICS). Safety software is developed and applied for a Reactor Protection System (RPS), an Engineered Safety Features and Component Control System (ESF-CCS), and a safety Programmable Logic Controller (PLC) in the KNICS. Requirements and design specifications of the safety software are written in both natural language and formal specification languages. Statechart is used for formal specification of the software of the ESF-CCS and the safety PLC, while NuSCR is used for formal specification of that of the RPS. pSET (POSCON Software Engineering Tool) has been developed and utilized as a software development tool for IEC61131-3 based PLC programming. The qualification of the safety software consists of software verification and validation (V and V) through the software life cycle, software safety analysis, software configuration management, software quality assurance, and COTS (Commercial-Off-The-Shelf) dedication. The criteria and requirements for qualifying the safety software have been established in accordance with the Software Review Plan (SRP)/Branch Technical Positions (BTP)-14, IEEE Std. 7-4.3.2-1998, NUREG/CR-6463, IEEE Std. 1012-1998, and so on. Figure 1 summarizes the qualification techniques and tools for the safety software

  18. Performance Test of Openflow Agent on Openflow Software-Based Mikrotik RB750 Switch

    Directory of Open Access Journals (Sweden)

    Rikie Kartadie

    2016-11-01

    Full Text Available A network is usually built from several devices such as routers, switches etc. Every device forwards and manipulates data packets according to complicated protocols implemented in its hardware. An operator is responsible for the configuration, either to manage the rules or the applications applied in the network. Human error may occur when device configuration is carried out manually by the operator. Some well-known vendors, one of them being MikroTik, have also been implementing OpenFlow in their products. It provides an implementation of the SDN/OpenFlow architecture at affordable cost. The second-phase research results showed that the OpenFlow software-based MikroTik switch yielded a higher latency value than both mininet and the OpenFlow software-based OpenWRT switch. The average gap value of the OpenFlow software-based MikroTik switch is 2012 kbps lower than the value of the OpenFlow software-based OpenWRT switch. The average gap value of UDP throughput bandwidth for the OpenFlow software-based MikroTik switch is 3.6176 kBps lower than the OpenFlow software-based OpenWRT switch and 8.68 kBps lower than mininet. The average gap of UDP throughput jitter for the OpenFlow software-based MikroTik switch is 0.0103 ms lower than the OpenFlow software-based OpenWRT switch and 0.0093 ms lower than mininet.

  19. Complexity of Configurators Relative to Integrations and Field of Application

    DEFF Research Database (Denmark)

    Kristjansdottir, Katrin; Shafiee, Sara; Battistello, Loris

    Configurators are applied widely to automate the specification processes at companies. The literature describes the industrial application of configurators supporting both sales and engineering processes, where configurators supporting the engineering processes are described as more challenging... Moreover, configurators are commonly integrated with various IT systems within companies. The complexity of configurators is an important factor when it comes to performance, development and maintenance of the systems. A direct comparison of the complexity based on the different application...... integrations to other IT systems. The research method adopted in the paper is based on a survey followed by interviews, where the unit of analysis is operating configurators within a company.

  20. MCTS Microsoft SharePoint 2010 Configuration Study Guide Exam 70-667

    CERN Document Server

    Pyles, James

    2010-01-01

    A Sybex study guide for the new SharePoint Server 2010 Configuration exam. SharePoint holds 55 percent of the collaboration and content management market, with many more companies indicating they plan to join the fold. IT professionals interested in enhancing their marketability with the new Microsoft Certified Technology Specialist: Microsoft SharePoint Server 2010 Configuring exam will find this guide may be their only alternative to costly classroom training. Microsoft SharePoint claims over half the market for collaboration and content management software; IT professionals will boost their ma

  1. UN_PAT: a software for calculating transient grounding potential

    Directory of Open Access Journals (Sweden)

    Johny Hernán Montaña

    2006-09-01

    Full Text Available This paper presents results from work done at the National University of Colombia and from a PhD thesis written there. This work was aimed at implementing software for analysing the transient behaviour of any configuration of grounding system buried in linear, homogeneous and isotropic soil. The hybrid electromagnetic model (HEM) was used because it presents high versatility and low computation time. The UN_PAT software was written in C++; it used free libraries with the aim of being free software so that it could be modified and improved in future work. The software results were validated against other software, against results from another analysis model, and against experimental results; some of these comparisons are given in this paper.

  2. Attacks on Mobile Phones that Use the Automatic Configuration Mechanism

    Directory of Open Access Journals (Sweden)

    A. G. Beltov

    2012-09-01

    Full Text Available The authors analyze the attacks on mobile devices that use the mechanism of an automatic configuration OMA/OTA, whose aim is listening to the Internet traffic of subscribers and the intrusion of malicious software on the user’s device, and suggest ways to protect mobile phones against such attacks.

  3. A software defined RTU multi-protocol automatic adaptation data transmission method

    Science.gov (United States)

    Jin, Huiying; Xu, Xingwu; Wang, Zhanfeng; Ma, Weijun; Li, Sheng; Su, Yong; Pan, Yunpeng

    2018-02-01

    The remote terminal unit (RTU) is the core device of monitoring systems in hydrology and water resources. Different devices often have different communication protocols in the application layer, which makes information analysis and communication networking difficult. Therefore, we introduce the idea of software-defined hardware, abstract the common features of the mainstream RTU application-layer communication protocols, and propose a unified common protocol model. Then, the various application-layer communication protocol algorithms are modularized according to the model. The executable codes of these algorithms are labeled by virtual functions and stored in the flash chips of the embedded CPU to form the protocol stack. Driven by the configuration commands that initialize the RTU communication system, dynamic assembly and loading of the various RTU application-layer communication protocols is achieved, and sensor data are transported efficiently from the RTU to the central station while the data acquisition protocol of the sensors and the various external communication terminals remain unchanged.
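
    One simple way to picture the dynamic assembly of application-layer protocols from a configuration command is a registry of protocol encoders selected at start-up, as in the sketch below. The protocol names, frame formats and configuration keys are invented for illustration and are not taken from the RTU described in the record.

```python
# Registry of application-layer protocol encoders, selected by a configuration command.
PROTOCOLS = {}

def protocol(name):
    def register(cls):
        PROTOCOLS[name] = cls
        return cls
    return register

@protocol("hydro_ascii")            # hypothetical protocol identifiers and frame formats
class HydroAsciiEncoder:
    def encode(self, station_id, value):
        return f"HYDRO|{station_id}|{value:.2f}".encode()

@protocol("legacy_fixed")
class LegacyFixedEncoder:
    def encode(self, station_id, value):
        return f":{station_id:04d}{int(value * 100):08d}\r\n".encode()

def configure_rtu(config):
    """Dynamically load the encoder named in the configuration command."""
    return PROTOCOLS[config["app_protocol"]]()

if __name__ == "__main__":
    encoder = configure_rtu({"app_protocol": "hydro_ascii"})
    print(encoder.encode(station_id=17, value=3.25))
```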

  4. Configuration Management for eXtreme Programming

    DEFF Research Database (Denmark)

    Asklund, U.; Bendix, Lars Gotfred; Ekman, T.

    2003-01-01

    Extreme programming (XP) is a software development method that prescribes the use of 12 different practices. Four of these practices (collective code ownership, continuous integration, small releases and refactoring) can indeed be given good support by the use of simple configuration management (CM......) techniques. We report on our experience in providing many groups of novice developers with CM education, processes and tools to support the four CM-related XP practices in their projects. True to the spirit of XP both education and processes are very lightweight and we found that it was sufficient to focus...

  5. Software life after in-service

    International Nuclear Information System (INIS)

    Tseng, M.; Eng, P.

    1993-01-01

    Software engineers and designers tend to conclude a software project at the in-service milestone of the software life cycle. But the reality is that the 'life after in-service' is significantly longer than the other phases of the life cycle, typically 20 years or more depending on the maintainability of the hardware platform and the designed life of the plant. During this period, the software asset (as with other physical assets in the plant) continues to be upgraded to correct deficiencies, meet new requirements, cope with obsolescence of equipment and so on. The software life cycle ends with a migration of the software to a different platform. It is typical in a software development project to put a great deal of emphasis on design methodologies, techniques, tools, development environment, standard procedures, and project management to ensure a quality product is delivered on schedule and within budget. More often than not, a disproportionately small emphasis is placed on the issues and needs of the in-service phase. Once the software is in service, the designers move on to other projects, while the maintenance and support staff must manage the software. This paper examines the issues in three steps. First it presents a view of software from the maintenance and support staff perspectives, including complexity of software, suitability of documentation, configuration management, training, difficulties and risks associated with making changes, and required skills and knowledge. Second, it identifies the concerns raised from these viewpoints, including costs of maintaining the software, ability to meet additional requirements, availability of support tools, length of time required to engineer and install changes, and a strategy for the migration of the software asset. Finally it discusses some approaches to deal with these concerns. (Author) 5 refs., fig

  6. Co-sourcing in software development offshoring

    DEFF Research Database (Denmark)

    Schlichter, Bjarne Rerup; Persson, John Stouby

    2013-01-01

    Software development projects are increasingly geographically distributed with offshoring, which introduces complex risks that can lead to project failure. Co-sourcing is a highly integrative and cohesive approach, seen as successful, to software development offshoring. However, research on how co......-sourcing shapes the perception and alleviation of common offshoring risks is limited. We present a case study of how a certified CMMI-level 5 Danish software supplier approaches these risks in offshore co-sourcing. The paper explains how common offshoring risks are perceived and alleviated when adopting the co...

  7. A learning apprentice for software parts composition

    Science.gov (United States)

    Allen, Bradley P.; Holtzman, Peter L.

    1987-01-01

    An overview of the knowledge acquisition component of the Bauhaus, a prototype computer aided software engineering (CASE) workstation for the development of domain-specific automatic programming systems (D-SAPS) is given. D-SAPS use domain knowledge in the refinement of a description of an application program into a compilable implementation. The approach to the construction of D-SAPS was to automate the process of refining a description of a program, expressed in an object-oriented domain language, into a configuration of software parts that implement the behavior of the domain objects.

  8. Why are common quality and development policies needed?

    International Nuclear Information System (INIS)

    Alandes, M; Abad, A; Dini, L; Guerrero, P

    2012-01-01

    The EMI project is based on the collaboration of four major middleware projects in Europe, all already developing middleware products and having their pre-existing strategies for developing, releasing and controlling their software artefacts. In total, the EMI project is made up of about thirty individual development teams, called “Product Teams” in EMI. A Product Team is responsible for the entire lifecycle of specific products or small groups of tightly coupled products, including the development of test-suites to be peer reviewed within the overall certification process. The Quality Assurance in EMI (European Middleware Initiative), as requested by the grid infrastructures and the EU funding agency, must support the teams in providing uniform releases and interoperable middleware distributions, with a common degree of verification and validation of the software and with metrics and objective criteria to compare product quality and evolution over time. In order to achieve these goals, the QA team in EMI has defined, and now monitors, a set of comprehensive policies covering all aspects of a software project, such as packaging, configuration, documentation, certification, release management and testing, which govern the development and release work. This contribution presents, with practical and useful examples, the achievements, problems encountered and lessons learned in the definition, implementation and review of Quality Assurance and Development policies. It also describes how these policies have been implemented in the EMI project, including the benefits and difficulties encountered by the developers in the project. The main value of this contribution is that none of the policies explained depend on EMI or grid environments, and they can be used by any software project.

  9. Software quality assurance plan for viscometer

    International Nuclear Information System (INIS)

    Gimera, M.

    1994-01-01

    The in situ viscometer is a portable instrument designed to raise and lower a sphere (rheometer ball) through layers of tank waste material while recording ball position, velocity, and cable tension. In the field, the viscometer attaches to a decontamination spool piece which in turn is designed to attach to any 4-inch, 150-pound flange (typical of many available tank risers). The motion of the ball and collection of data is controlled by instrumentation and control equipment housed in a separate remote control console. This document covers the product, Viscometer Data Acquisition Software. This document provides the software quality assurance plan, verification and validation plan, and configuration management plan for developing the software for the instrumentation that will be used to obtain rheology data from Tank SY-101

  10. PIV/HPIV Film Analysis Software Package

    Science.gov (United States)

    Blackshire, James L.

    1997-01-01

    A PIV/HPIV film analysis software system was developed that calculates the 2-dimensional spatial autocorrelations of subregions of Particle Image Velocimetry (PIV) or Holographic Particle Image Velocimetry (HPIV) film recordings. The software controls three hardware subsystems including (1) a Kodak Megaplus 1.4 camera and EPIX 4MEG framegrabber subsystem, (2) an IEEE/Unidex 11 precision motion control subsystem, and (3) an Alacron I860 array processor subsystem. The software runs on an IBM PC/AT host computer running either the Microsoft Windows 3.1 or Windows 95 operating system. It is capable of processing five PIV or HPIV displacement vectors per second, and is completely automated with the exception of user input to a configuration file prior to analysis execution for update of various system parameters.
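
    The core operation mentioned above, the 2-dimensional spatial autocorrelation of an image subregion, can be computed efficiently with FFTs, as the NumPy sketch below illustrates. The tile size, synthetic double-exposure image and use of NumPy are assumptions for illustration; the original system ran on dedicated array-processor hardware.

```python
import numpy as np

def autocorrelation_2d(subregion):
    """2-D spatial autocorrelation of an image subregion via the Wiener-Khinchin theorem."""
    tile = subregion - subregion.mean()          # remove the DC component
    spectrum = np.fft.fft2(tile)
    corr = np.fft.ifft2(spectrum * np.conj(spectrum)).real
    return np.fft.fftshift(corr)                 # put the zero-lag peak in the centre

# Synthetic example: a particle-image-like tile shifted by (3, 5) pixels between exposures.
rng = np.random.default_rng(0)
first = rng.random((64, 64))
double_exposure = first + np.roll(first, (3, 5), axis=(0, 1))
corr = autocorrelation_2d(double_exposure)
masked = np.where(corr < corr.max(), corr, -np.inf)          # hide the zero-lag peak
peak = np.unravel_index(np.argmax(masked), corr.shape)
print("displacement peak offset:", np.array(peak) - np.array(corr.shape) // 2)
```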

  11. ICAROUS - Integrated Configurable Algorithms for Reliable Operations Of Unmanned Systems

    Science.gov (United States)

    Consiglio, María; Muñoz, César; Hagen, George; Narkawicz, Anthony; Balachandran, Swee

    2016-01-01

    NASA's Unmanned Aerial System (UAS) Traffic Management (UTM) project aims at enabling near-term, safe operations of small UAS vehicles in uncontrolled airspace, i.e., Class G airspace. A far-term goal of UTM research and development is to accommodate the expected rise in small UAS traffic density throughout the National Airspace System (NAS) at low altitudes for beyond visual line-of-sight operations. This paper describes a new capability referred to as ICAROUS (Integrated Configurable Algorithms for Reliable Operations of Unmanned Systems), which is being developed under the UTM project. ICAROUS is a software architecture comprised of highly assured algorithms for building safety-centric, autonomous, unmanned aircraft applications. Central to the development of the ICAROUS algorithms is the use of well-established formal methods to guarantee higher levels of safety assurance by monitoring and bounding the behavior of autonomous systems. The core autonomy-enabling capabilities in ICAROUS include constraint conformance monitoring and contingency control functions. ICAROUS also provides a highly configurable user interface that enables the modular integration of mission-specific software components.

  12. REVEAL: Software Documentation and Platform Migration

    Science.gov (United States)

    Wilson, Michael A.; Veibell, Victoir T.; Freudinger, Lawrence C.

    2008-01-01

    The Research Environment for Vehicle Embedded Analysis on Linux (REVEAL) is reconfigurable data acquisition software designed for network-distributed test and measurement applications. In development since 2001, it has been successfully demonstrated in support of a number of actual missions within NASA's Suborbital Science Program. Improvements to software configuration control were needed to properly support both an ongoing transition to operational status and continued evolution of REVEAL capabilities. For this reason the project described in this report targets REVEAL software source documentation and deployment of the software on a small set of hardware platforms different from what is currently used in the baseline system implementation. This report specifically describes the actions taken over a ten week period by two undergraduate student interns and serves as a final report for that internship. The topics discussed include: the documentation of REVEAL source code; the migration of REVEAL to other platforms; and an end-to-end field test that successfully validates the efforts.

  13. Sex and Electrode Configuration in Transcranial Electrical Stimulation

    Directory of Open Access Journals (Sweden)

    Michael J. Russell

    2017-08-01

    Full Text Available Transcranial electrical stimulation (tES can be an effective non-invasive neuromodulation procedure. Unfortunately, the considerable variation in reported treatment outcomes, both within and between studies, has made the procedure unreliable for many applications. To determine if individual differences in cranium morphology and tissue conductivity can account for some of this variation, the electrical density at two cortical locations (temporal and frontal directly under scalp electrodes was modeled using a validated MRI modeling procedure in 23 subjects (12 males and 11 females. Three different electrode configurations (non-cephalic, bi-cranial, and ring commonly used in tES were modeled at three current intensities (0.5, 1.0, and 2.0 mA. The aims were to assess the effects of configuration and current intensity on relative current received at a cortical brain target directly under the stimulating electrode and to characterize individual variation. The different electrode configurations resulted in up to a ninefold difference in mean current densities delivered to the brains. The ring configuration delivered the least current and the non-cephalic the most. Female subjects showed much less current to the brain than male subjects. Individual differences in the current received and differences in electrode configurations may account for significant variability in current delivered and, thus, potentially a significant portion of reported variation in clinical outcomes at two commonly targeted regions of the brain.

  14. Modular Software for Spacecraft Navigation Using the Global Positioning System (GPS)

    Science.gov (United States)

    Truong, S. H.; Hartman, K. R.; Weidow, D. A.; Berry, D. L.; Oza, D. H.; Long, A. C.; Joyce, E.; Steger, W. L.

    1996-01-01

    The Goddard Space Flight Center Flight Dynamics and Mission Operations Divisions have jointly investigated the feasibility of engineering modular Global Positioning System (GPS) navigation software to support both real-time flight and ground postprocessing configurations. The goals of this effort are to define standard GPS data interfaces and to engineer standard, reusable navigation software components that can be used to build a broad range of GPS navigation support applications. The paper discusses the GPS modular software (GMOD) system and operations concepts, major requirements, candidate software architecture, feasibility assessment and recommended software interface standards. In addition, ongoing efforts to broaden the scope of the initial study and to develop modular software to support autonomous navigation using GPS are addressed.

  15. Radioisotope thermoelectric generator transportation system subsystem 143 software development plan

    International Nuclear Information System (INIS)

    King, D.A.

    1994-01-01

    This plan describes the activities to be performed and the controls to be applied to the process of specifying, developing, and qualifying the data acquisition software for the Radioisotope Thermoelectric Generator (RTG) Transportation System Subsystem 143 Instrumentation and Data Acquisition System (IDAS). This plan will serve as a software quality assurance plan, a verification and validation (V and V) plan, and a configuration management plan

  16. GNU polyxmass: a software framework for mass spectrometric simulations of linear (bio-)polymeric analytes

    Directory of Open Access Journals (Sweden)

    Rusconi Filippo

    2006-04-01

    Full Text Available Abstract Background Nowadays, a variety of (bio-)polymers can be analyzed by mass spectrometry. The detailed interpretation of the spectra requires a huge number of "hypothesis cycles", comprising the following three actions: (1) put forth a structural hypothesis, (2) test it, (3) (in)validate it. This time-consuming and painstaking data scrutiny is alleviated by using specialized software tools. However, all the software tools available to date are polymer chemistry-specific. This imposes a heavy overhead on researchers who do mass spectrometry on a variety of (bio-)polymers, as each polymer type will require a different software tool to perform data simulations and analyses. We developed a software to address the lack of an integrated software framework able to deal with different polymer chemistries. Results The GNU polyxmass software framework performs common (bio-)chemical simulations – along with simultaneous mass spectrometric calculations – for any kind of linear (bio-)polymeric analyte (DNA, RNA, saccharides or proteins). The framework is organized into three modules, all accessible from one single binary program. The modules let the user (1) define brand new polymer chemistries, (2) perform quick mass calculations using a desktop calculator paradigm, (3) graphically edit polymer sequences and perform (bio-)chemical/mass spectrometric simulations. Any aspect of the mass calculations, polymer chemistry reactions or graphical polymer sequence editing is configurable. Conclusion The scientist who uses mass spectrometry to characterize (bio-)polymeric analytes of different chemistries is provided with a single software framework for his data prediction/analysis needs, whatever the polymer chemistry being involved.
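
    As a toy illustration of the kind of calculation such a framework performs, the sketch below computes the monoisotopic mass of a short peptide from a user-defined polymer chemistry (monomer masses plus end caps). The dictionary layout and the tiny monomer set are assumptions for illustration only; this is not the GNU polyxmass chemistry-definition format.

```python
# Toy polymer-chemistry definition: monoisotopic residue masses (Da) for a few
# amino acids, plus the H / OH end caps of a peptide chain (approximate values).
PROTEIN_CHEMISTRY = {
    "monomers": {"G": 57.02146, "A": 71.03711, "S": 87.03203, "K": 128.09496},
    "left_cap": 1.00783,     # H
    "right_cap": 17.00274,   # OH
}

def polymer_mass(sequence, chemistry):
    """Mass of a linear polymer = sum of residue masses + end caps."""
    residues = sum(chemistry["monomers"][m] for m in sequence)
    return residues + chemistry["left_cap"] + chemistry["right_cap"]

print(round(polymer_mass("GASK", PROTEIN_CHEMISTRY), 4))
```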

  17. Large Scale Software Building with CMake in ATLAS

    CERN Document Server

    Elmsheuser, Johannes; The ATLAS collaboration; Obreshkov, Emil; Undrus, Alexander

    2016-01-01

    The offline software of the ATLAS experiment at the LHC (Large Hadron Collider) serves as the platform for detector data reconstruction, simulation and analysis. It is also used in the detector trigger system to select LHC collision events during data taking. ATLAS offline software consists of several million lines of C++ and Python code organized in a modular design of more than 2000 specialized packages. Because of different workflows many stable numbered releases are in parallel production use. To accommodate specific workflow requests, software patches with modified libraries are distributed on top of existing software releases on a daily basis. The different ATLAS software applications require a flexible build system that strongly supports unit and integration tests. Within the last year this build system was migrated to CMake. A CMake configuration has been developed that allows one to easily set up and build the mentioned software packages. This also makes it possible to develop and test new and modifi...

  18. Large scale software building with CMake in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00218447; The ATLAS collaboration; Elmsheuser, Johannes; Obreshkov, Emil; Undrus, Alexander

    2017-01-01

    The offline software of the ATLAS experiment at the LHC (Large Hadron Collider) serves as the platform for detector data reconstruction, simulation and analysis. It is also used in the detector trigger system to select LHC collision events during data taking. ATLAS offline software consists of several million lines of C++ and Python code organized in a modular design of more than 2000 specialized packages. Because of different workflows many stable numbered releases are in parallel production use. To accommodate specific workflow requests, software patches with modified libraries are distributed on top of existing software releases on a daily basis. The different ATLAS software applications require a flexible build system that strongly supports unit and integration tests. Within the last year this build system was migrated to CMake. A CMake configuration has been developed that allows one to easily set up and build the mentioned software packages. This also makes it possible to develop and test new and modifi...

  19. The self-description data configuration model

    Energy Technology Data Exchange (ETDEWEB)

    Abadie, Lana, E-mail: lana.abadie@iter.org [ITER Organization, Route de vinon sur Verdon, 13115 St Paul Lez Durance (France); Di Maio, Franck; Klotz, Wolf-Dieter; Mahajan, Kirti; Stepanov, Denis; Utzel, Nadine; Wallander, Anders [ITER Organization, Route de vinon sur Verdon, 13115 St Paul Lez Durance (France)

    2012-12-15

    Highlights: • We use the relational model to represent the configuration data for ITER. • We explain the different modeled views, namely physical, functional and control. • We explain how this information is used to generate configuration files. • We explain that this information is validated. - Abstract: ITER will consist of roughly 160 plant systems I and C delivered in kind which need to be integrated into the ITER control infrastructure. To make the integration of all these plant systems I and C a smooth operation, the CODAC (Controls, Data Access and Communications) group releases every year the core software environment, which consists of many applications. In this paper we describe what the configuration data are and how they are modeled in version 2. The model is based on three views: the physical one, which lists the components with their signals; the functional view, which describes the control functions and the variables required to implement them; and the control view, which links the two previous views. We use Hibernate as an ORM (Object Relational Mapping) framework with a PostgreSQL database and Spring as a framework to handle transactions.

  20. A multi-perspective approach for the design of Product Configuration Systems - an evaluation of industry applications

    DEFF Research Database (Denmark)

    Hvam, Lars

    2004-01-01

    , a demarcation and definition of the configuration system to be designed. • Analysis and modelling of the part of the company’s product assortment which is to be included in the configuration system. • Selection of configuration software and programming of the configuration system. • Implementation....... The procedure, or certain parts of the procedure, have currently been tested and further developed in cooperation with a number of industrial companies including F.L.Smidth, American Power Conversion (APC), Aalborg Industries, NEG-Micon, GEA-Niro and IBM-SMS. This paper presents the experiences gained from 4...

  1. Firm Strategies and Business Models in the Software Industry: A Configurational Approach

    OpenAIRE

    Pussep, Anton

    2017-01-01

    Researchers have long focused on the determinants of firm success, which is of crucial interest to practitioners as well, since being successful is at the very heart of economic activity. Extant research emphasizes three levels of analysis at which determinants occur: firm, industry, and group level. Each level has been found to affect firm success. At group level, firms choose between a limited set of competitive approaches. The resulting groups are referred to as configurations. The a...

  2. Integrating configuration workflows with project management system

    International Nuclear Information System (INIS)

    Nilsen, Dimitri; Weber, Pavel

    2014-01-01

    The complexity of the heterogeneous computing resources, services and recurring infrastructure changes at the GridKa WLCG Tier-1 computing center require a structured approach to configuration management and optimization of interplay between functional components of the whole system. A set of tools deployed at GridKa, including Puppet, Redmine, Foreman, SVN and Icinga, provides the administrative environment giving the possibility to define and develop configuration workflows, reduce the administrative effort and improve sustainable operation of the whole computing center. In this presentation we discuss the developed configuration scenarios implemented at GridKa, which we use for host installation, service deployment, change management procedures, service retirement etc. The integration of Puppet with a project management tool like Redmine provides us with the opportunity to track problem issues, organize tasks and automate these workflows. The interaction between Puppet and Redmine results in automatic updates of the issues related to the executed workflow performed by different system components. The extensive configuration workflows require collaboration and interaction between different departments like network, security, production etc. at GridKa. Redmine plugins developed at GridKa and integrated in its administrative environment provide an effective way of collaboration within the GridKa team. We present the structural overview of the software components, their connections, communication protocols and show a few working examples of the workflows and their automation.
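
    A minimal sketch of the Puppet-to-Redmine interaction described above might look like the snippet below: after a configuration run, a note is posted to the tracking issue through Redmine's REST API. The server URL, API key, issue number and note text are placeholders, and the snippet is not the actual GridKa plugin code.

```python
import json
import urllib.request

REDMINE_URL = "https://redmine.example.org"   # placeholder server
API_KEY = "0123456789abcdef"                  # placeholder API key

def add_workflow_note(issue_id, note):
    """Append a journal note to a Redmine issue (REST API: PUT /issues/<id>.json)."""
    payload = json.dumps({"issue": {"notes": note}}).encode()
    req = urllib.request.Request(
        f"{REDMINE_URL}/issues/{issue_id}.json",
        data=payload,
        method="PUT",
        headers={"Content-Type": "application/json", "X-Redmine-API-Key": API_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example call, e.g. from a report handler after a Puppet run on a worker node:
# add_workflow_note(4711, "puppet run finished on wn042: 0 resources failed")
```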

  3. A bottom-up approach to automatically configured Tango control systems

    International Nuclear Information System (INIS)

    Rubio-Manrique, S.; Beltran, D.; Costa, I.; Fernandez-Carreiras, D.; Gigante, J.V.; Klora, J.; Matilla, O.; Ranz, R.; Ribas, J.; Sanchez, O.

    2012-01-01

    Alba is the first synchrotron light source built in Spain. Most of the Alba control system has been developed on top of the Tango control system. A total of 5531 devices are controlled in the Alba accelerators (linac, booster and storage ring) using 150 Linux PCs. Alba maintains a central repository, the so-called 'Cabling and Controls database' (CCDB), which keeps the inventory of equipment, cables, connections and their configuration and technical specifications. The valuable information kept in this MySQL database enables some tools to automatically create and configure Tango devices and other software components of the control systems of the accelerators, beamlines and laboratories. This paper describes the process involved in this automatic setup
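
    The automatic setup step can be pictured as a small script that reads device definitions from a cabling database and registers them in the Tango database, roughly as sketched below with PyTango. The table name, column names and device/server naming are assumptions for illustration and do not reproduce the Alba tools.

```python
# Sketch only: register Tango devices from rows of a cabling/controls database.
# Assumes PyTango (the `tango` module), pymysql, and a table `device_inventory`
# with columns (name, device_class, server_instance) -- all illustrative assumptions.
import tango
import pymysql

def register_devices():
    db = tango.Database()                      # Tango database of the control system
    conn = pymysql.connect(host="ccdb.example", user="reader", database="ccdb")
    with conn.cursor() as cur:
        cur.execute("SELECT name, device_class, server_instance FROM device_inventory")
        for name, device_class, server_instance in cur.fetchall():
            dev = tango.DbDevInfo()
            dev.name = name                    # e.g. 'sr/pc/dipole-01'
            dev._class = device_class          # e.g. 'PowerSupply'
            dev.server = server_instance       # e.g. 'PowerSupplyServer/sr'
            db.add_device(dev)                 # creates or updates the device entry

if __name__ == "__main__":
    register_devices()
```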

  4. AgdbNet – antigen sequence database software for bacterial typing

    Directory of Open Access Journals (Sweden)

    Maiden Martin CJ

    2006-06-01

    Full Text Available Abstract Background Bacterial typing schemes based on the sequences of genes encoding surface antigens require databases that provide a uniform, curated, and widely accepted nomenclature of the variants identified. Due to the differences in typing schemes, imposed by the diversity of genes targeted, creating these databases has typically required the writing of one-off code to link the database to a web interface. Here we describe agdbNet, widely applicable web database software that facilitates simultaneous BLAST querying of multiple loci using either nucleotide or peptide sequences. Results Databases are described by XML files that are parsed by a Perl CGI script. Each database can have any number of loci, which may be defined by nucleotide and/or peptide sequences. The software is currently in use on at least five public databases for the typing of Neisseria meningitidis, Campylobacter jejuni and Streptococcus equi and can be set up to query internal isolate tables or suitably-configured external isolate databases, such as those used for multilocus sequence typing. The style of the resulting website can be fully configured by modifying stylesheets and through the use of customised header and footer files that surround the output of the script. Conclusion The software provides a rapid means of setting up customised Internet antigen sequence databases. The flexible configuration options enable typing schemes with differing requirements to be accommodated.
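
    To give a feel for the XML-described, multi-locus BLAST querying mentioned above, the sketch below reads a hypothetical XML description of typing loci and builds BLAST+ command lines for a query file. The XML layout and file paths are assumptions for illustration; this is not the agdbNet configuration format, and the original software is implemented as a Perl CGI script.

```python
# Illustrative only: read a (hypothetical) XML description of sequence-typing loci
# and build BLAST+ command lines for a query sequence file.
import shlex
import xml.etree.ElementTree as ET

EXAMPLE_XML = """
<database name="example_typing">
  <locus id="porA_VR1" type="peptide" blastdb="/data/blast/porA_VR1"/>
  <locus id="fetA_VR"  type="nucleotide" blastdb="/data/blast/fetA_VR"/>
</database>
"""

def blast_commands(xml_text, query_fasta):
    root = ET.fromstring(xml_text)
    for locus in root.findall("locus"):
        program = "blastp" if locus.get("type") == "peptide" else "blastn"
        yield (
            f"{program} -query {shlex.quote(query_fasta)} "
            f"-db {shlex.quote(locus.get('blastdb'))} -outfmt 6"
        )

for cmd in blast_commands(EXAMPLE_XML, "isolate_0042.fasta"):
    print(cmd)
```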

  5. MPS Vax monitor and control software architecture

    International Nuclear Information System (INIS)

    Allison, S.; Spencer, N.; Underwood, K.; VanOlst, D.; Zelanzy, M.

    1993-04-01

    The new Machine Protection System (MPS) now being tested at the SLAC Linear Collider (SLC) includes monitoring and controlling facilities integrated into the existing VAX control system. The actual machine protection is performed by VME micros which control the beam repetition rate on a pulse-by-pulse basis based on measurements from fault detectors. The VAX is used to control and configure the VME micros, configure custom CAMAC modules providing the fault detector inputs, monitor and report faults and system errors, update the SLC database, and interface with the user. The design goals of the VAX software include a database-driven system to allow configuration changes without code changes, use of a standard TCP/IP-based message service for communication, use of existing SLCNET micros for CAMAC configuration, security and verification features to prevent unauthorized access, error and alarm logging and display updates as quickly as possible, and use of touch panels and X-windows displays for the user interface

  6. UPVapor: Cofrentes nuclear power plant production results analysis software

    International Nuclear Information System (INIS)

    Curiel, M.; Palomo, M. J.; Baraza, A.; Vaquer, J.

    2010-10-01

    UPVapor software version 02 has been developed for the Cofrentes nuclear power plant Data Analysis Department (Spain). It is a graphical analysis environment in which users have available all the plant variables registered in the process computer system (SIEC). To this end, UPVapor has many advanced graphic tools that simplify the work, as well as a friendly environment that is easy to use and offers many configuration possibilities. Plant variables are classified in the same way as they are in the SIEC computer, and their values are taken from it through the Iberdrola network. UPVapor can generate two different types of graphics: evolution graphs and X Y graphs. The first type analyses the evolution of up to twenty plant variables over a user-defined time period and according to historic plant files. Many tools are available: cursors, graphic configuration, moving averages, invalid-data visualization ... Moreover, a particular analysis configuration can be saved as a preselection, giving the possibility of loading the preselection directly and performing quick monitoring of a group of preselected plant variables. In X Y graphs, it is possible to analyse the value of one variable against another variable over a defined time. As an option, users can filter previous data depending on a certain range of a variable, with the possibility of programming up to five filters. As with the other graph type, the X Y graph has many configuration, saving and printing options. With UPVapor, data analysts can save valuable time during daily work and, as it is easy to use, it permits other users to perform their own analyses without asking the analysts to develop them. Besides, it can be used from any work centre with access to the network framework. (Author)

  7. UPVapor: Cofrentes nuclear power plant production results analysis software

    Energy Technology Data Exchange (ETDEWEB)

    Curiel, M. [Logistica y Acondicionamientos Industriales SAU, Sorolla Center, local 10, Av. de las Cortes Valencianas No. 58, 46015 Valencia (Spain); Palomo, M. J. [ISIRYM, Universidad Politecnica de Valencia, Camino de Vera s/n, Valencia (Spain); Baraza, A. [Iberdrola Generacion S. A., Central Nuclear Cofrentes, Carretera Almansa Requena s/n, 04662 Cofrentes, Valencia (Spain); Vaquer, J., E-mail: m.curiel@lainsa.co [TITANIA Servicios Tecnologicos SL, Sorolla Center, local 10, Av. de las Cortes Valencianas No. 58, 46015 Valencia (Spain)

    2010-10-15

    UPVapor software version 02 has been developed for the Data Analysis Department of the Cofrentes nuclear power plant (Spain). It is a graphical analysis environment in which users have access to all the plant variables registered in the process computer system (SIEC). To this end, UPVapor provides many advanced graphic tools that simplify the work, as well as a friendly, easy-to-use environment with many configuration possibilities. Plant variables are classified in the same way as in the SIEC computer, and their values are retrieved from it through the Iberdrola network. UPVapor can generate two different types of graphs: evolution graphs and X-Y graphs. The former analyse the evolution of up to twenty plant variables over a user-defined time period, based on the historic plant files. Many tools are available: cursors, graph configuration, moving averages, invalid-data visualization, and so on. Moreover, a particular analysis configuration can be saved as a preselection, making it possible to load the preselection directly and to quickly monitor a group of preselected plant variables. In X-Y graphs, one variable can be analysed against another over a defined time interval. As an option, users can filter the data according to a given range of a variable, with up to five programmable filters. Like the evolution graphs, X-Y graphs offer many configuration, saving and printing options. With UPVapor, data analysts can save valuable time in their daily work and, since the software is easy to use, other users can perform their own analyses without asking the analysts to develop them. Moreover, it can be used from any workplace with access to the corporate network. (Author)

  8. Advanced multilateration theory, software development, and data processing: The MICRODOT system

    Science.gov (United States)

    Escobal, P. R.; Gallagher, J. F.; Vonroos, O. H.

    1976-01-01

    The process of geometric parameter estimation to accuracies of one centimeter, i.e., multilateration, is defined and applications are listed. A brief functional explanation of the theory is presented. Next, various multilateration systems are described in order of increasing system complexity. Expected systems accuracy is discussed from a general point of view and a summary of the errors is listed. An outline of the design of a software processing system for multilateration, called MICRODOT, is presented next. The links of this software, which can be used for multilateration data simulations or operational data reduction, are examined on an individual basis. Functional flow diagrams are presented to aid in understanding the software capability. MICRODOT capability is described with respect to vehicle configurations, interstation coordinate reduction, geophysical parameter estimation, and orbit determination. Numerical results obtained from MICRODOT via data simulations are displayed both for hypothetical and real world vehicle/station configurations such as used in the GEOS-3 Project. These simulations show the inherent power of the multilateration procedure.
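
    As a rough sketch of the least-squares idea underlying multilateration (not the MICRODOT algorithms themselves), the snippet below recovers a position from simulated ranges to known stations; the station coordinates, noise level and units are invented for illustration.

```python
# Toy multilateration: recover a 3-D position from ranges to known stations by
# nonlinear least squares. This is not the MICRODOT software, only the basic idea.
import numpy as np
from scipy.optimize import least_squares

stations = np.array([[0.0, 0.0, 0.0],        # hypothetical station coordinates (km)
                     [100.0, 0.0, 0.0],
                     [0.0, 100.0, 0.0],
                     [50.0, 50.0, 30.0]])
truth = np.array([20.0, 60.0, 500.0])        # "vehicle" position to recover

rng = np.random.default_rng(0)
ranges = np.linalg.norm(stations - truth, axis=1) + rng.normal(0.0, 1e-5, 4)  # ~1 cm noise

def residuals(pos):
    return np.linalg.norm(stations - pos, axis=1) - ranges

fit = least_squares(residuals, x0=np.array([0.0, 0.0, 400.0]))
print(fit.x)   # close to `truth`; the misfit reflects the injected range noise
```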

  9. A PICKSC Science Gateway for enabling the common plasma physicist to run kinetic software

    Science.gov (United States)

    Hu, Q.; Winjum, B. J.; Zonca, A.; Youn, C.; Tsung, F. S.; Mori, W. B.

    2017-10-01

    Computer simulations offer tremendous opportunities for studying plasmas, ranging from simulations for students that illuminate fundamental educational concepts to research-level simulations that advance scientific knowledge. Nevertheless, there is a significant hurdle to using simulation tools. Users must navigate codes and software libraries, determine how to wrangle output into meaningful plots, and oftentimes confront a significant cyberinfrastructure with powerful computational resources. Science gateways offer a Web-based environment to run simulations without needing to learn or manage the underlying software and computing cyberinfrastructure. We discuss our progress on creating a Science Gateway for the Particle-in-Cell and Kinetic Simulation Software Center that enables users to easily run and analyze kinetic simulations with our software. We envision that this technology could benefit a wide range of plasma physicists, both in the use of our simulation tools as well as in its adaptation for running other plasma simulation software. Supported by NSF under Grant ACI-1339893 and by the UCLA Institute for Digital Research and Education.

  10. Configuration analysis and optimization on multipolar Galatea trap

    Energy Technology Data Exchange (ETDEWEB)

    Tong, W. M., E-mail: dianqi@hit.edu.cn; Tao, B. Q.; Jin, X. J.; Li, Z. W. [Harbin Institute of Technology, School of Electrical Engineering and Automation (China)

    2016-10-15

    A simulation model of the multipolar Galatea magnetic trap was established with the finite element simulation software COMSOL Multiphysics. Analysis of the magnetic section configuration shows that a better magnetic configuration should keep more plasma in the weak magnetic field rather than in the annular magnetic shell field. An optimization model was then established with the axial electromagnetic force, the weak magnetic field area and the average magnetic mirror ratio as the optimization goals and with the currents of the myxines as design variables. Appropriate weight coefficients were selected and the optimization was carried out with a genetic algorithm. The results show improved target values for the typical application parameters: the average magnetic mirror ratio can be reduced by more than 5%, the weak magnetic field area can be increased by at least 65%, and at the same time the axial electromagnetic force acting on the outer myxines can be reduced to less than 50 N. Finally, the results were verified with COMSOL Multiphysics, confirming that the optimized magnetic trap configuration, with more plasma in the weak magnetic field, can reduce the plasma diffusion velocity and is more conducive to plasma confinement.
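
    The weighted-sum structure of the optimization (three goals combined with weight coefficients and searched over the myxine currents by a genetic algorithm) can be sketched as follows; the three evaluator functions are placeholders standing in for the COMSOL field computations, and the weights, bounds and GA settings are invented.

```python
# Weighted-sum objective over coil ("myxine") currents, searched with a minimal
# genetic algorithm. The three evaluators are placeholders for the COMSOL field
# computations; weights, bounds and GA parameters are illustrative only.
import random

W_FORCE, W_AREA, W_MIRROR = 0.4, 0.4, 0.2          # assumed weight coefficients

def axial_force(currents):     return 1e-3 * sum(c * c for c in currents)       # placeholder
def weak_field_area(currents): return sum(currents)                             # placeholder
def mirror_ratio(currents):    return max(currents) / (min(currents) + 1e-9)    # placeholder

def objective(currents):
    # Minimize axial force and mirror ratio, maximize weak-field area.
    return (W_FORCE * axial_force(currents)
            - W_AREA * weak_field_area(currents)
            + W_MIRROR * mirror_ratio(currents))

def evolve(n_coils=6, pop_size=40, generations=100):
    pop = [[random.uniform(0.0, 100.0) for _ in range(n_coils)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)                    # lower objective = fitter
        parents = pop[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_coils)
            child = a[:cut] + b[cut:]              # one-point crossover
            if random.random() < 0.2:              # mutation
                child[random.randrange(n_coils)] = random.uniform(0.0, 100.0)
            children.append(child)
        pop = parents + children
    return min(pop, key=objective)

best_currents = evolve()
```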

  11. EDS operator and control software

    International Nuclear Information System (INIS)

    Ott, L.L.

    1985-04-01

    The Enrichment Diagnostic System (EDS) was developed at Lawrence Livermore National Laboratory (LLNL) to acquire, display and analyze large quantities of transient data for a real-time Atomic Vapor Laser Isotope Separation (AVLIS) experiment. Major topics discussed in this paper are the EDS operator interface (SHELL) program, the data acquisition and analysis scheduling software, and the graphics software. The workstation concept used in EDS, the software used to configure a user's workstation, and the ownership and management of a diagnostic are described. An EDS diagnostic is a combination of hardware and software designed to study specific aspects of the process. Overall system performance is discussed from the standpoint of scheduling techniques, evaluation tools, optimization techniques, and program-to-program communication methods. EDS is based on a data-driven design which keeps the need to modify software to a minimum. This design requires a fast and reliable data base management system. A third party data base management product, Berkeley Software System Database, written explicitly for HP1000's, is used for all EDS data bases. All graphics are done with an in-house graphics product, Device Independent Graphics Library (DIGLIB). Examples of devices supported by DIGLIB are: Versatec printer/plotters, Raster Technologies Graphic Display Controllers, and HP terminals (HP264x and HP262x). The benefits derived by using HP hardware and software as well as obstacles imposed by the HP environment are presented in relation to EDS development and implementation

  12. Automating an EXAFS facility: hardware and software considerations

    International Nuclear Information System (INIS)

    Georgopoulos, P.; Sayers, D.E.; Bunker, B.; Elam, T.; Grote, W.A.

    1981-01-01

    The basic design considerations for computer hardware and software, applicable not only to laboratory EXAFS facilities, but also to synchrotron installations, are reviewed. Uniformity and standardization of both hardware configurations and program packages for data collection and analysis are heavily emphasized. Specific recommendations are made with respect to choice of computers, peripherals, and interfaces, and guidelines for the development of software packages are set forth. A description of two working computer-interfaced EXAFS facilities is presented which can serve as prototypes for future developments. 3 figures

  13. On the Update Problems for Software Defined Networks

    Directory of Open Access Journals (Sweden)

    V. A. Zakharov

    2014-01-01

    Full Text Available The design of network update algorithms is urgent for the development of SDN control software. A particular case of the Network Update Problem is that of seamlessly restoring a given network configuration after some packet forwarding rules have been disabled (say, at the expiry of their time-outs). We study this problem in the framework of a formal model of SDN, develop correct and safe network recovery algorithms, and show that in the general case there is no way to restore a network configuration seamlessly without referring to the priorities of the packet forwarding rules.

  14. Western aeronautical test range real-time graphics software package MAGIC

    Science.gov (United States)

    Malone, Jacqueline C.; Moore, Archie L.

    1988-01-01

    The master graphics interactive console (MAGIC) software package used on the Western Aeronautical Test Range (WATR) of the NASA Ames Research Center is described. MAGIC is a resident real-time research tool available to flight researchers-scientists in the NASA mission control centers of the WATR at the Dryden Flight Research Facility at Edwards, California. The hardware configuration and capabilities of the real-time software package are also discussed.

  15. UFMulti: A new parallel processing software system for HEP

    Science.gov (United States)

    Avery, Paul; White, Andrew

    1989-12-01

    UFMulti is a multiprocessing software package designed for general purpose high energy physics applications, including physics and detector simulation, data reduction and DST physics analysis. The system is particularly well suited for installations where several workstations or computers are connected through a local area network (LAN). The initial configuration of the software is currently running on VAX/VMS machines with a planned extension to ULTRIX, using the new RISC CPUs from Digital, in the near future.

  16. UFMULTI: A new parallel processing software system for HEP

    International Nuclear Information System (INIS)

    Avery, P.; White, A.

    1989-01-01

    UFMulti is a multiprocessing software package designed for general purpose high energy physics applications, including physics and detector simulation, data reduction and DST physics analysis. The system is particularly well suited for installations where several workstations or computers are connected through a local area network (LAN). The initial configuration of the software is currently running on VAX/VMS machines with a planned extension to ULTRIX, using the new RISC CPUs from Digital, in the near future. (orig.)

  17. Patterns of Software Development Process

    Directory of Open Access Journals (Sweden)

    Sandro Javier Bolaños Castro

    2011-12-01

    Full Text Available "Times New Roman","serif";mso-fareast-font-family:"Times New Roman";mso-ansi-language:EN-US;mso-fareast-language:EN-US;mso-bidi-language:AR-SA">This article presents a set of patterns that can be found to perform best practices in software processes that are directly related to the problem of implementing the activities of the process, the roles involved, the knowledge generated and the inputs and outputs belonging to the process. In this work, a definition of the architecture is encouraged by using different recurrent configurations that strengthen the process and yield efficient results for the development of a software project. The patterns presented constitute a catalog, which serves as a vocabulary for communication among project participants [1], [2], and also can be implemented through software tools, thus facilitating patterns implementation [3]. Additionally, a tool that can be obtained under GPL (General Public license is provided for this purpose

  18. Testing digital safety system software with a testability measure based on a software fault tree

    International Nuclear Information System (INIS)

    Sohn, Se Do; Hyun Seong, Poong

    2006-01-01

    Using predeveloped software, a digital safety system is designed that meets the quality standards of a safety system. To demonstrate the quality, the design process and operating history of the product are reviewed along with configuration management practices. The application software of the safety system is developed in accordance with the planned life cycle. Testing, a major phase that takes significant time in the overall life cycle, can be optimized if the testability of the software can be evaluated. The proposed testability measure of the software is based on the entropy of the importance of basic statements and the failure probability from a software fault tree. To calculate testability, a fault tree is used in the analysis of the source code. With a quantitative measure of testability, testing can be optimized. The proposed testability measure can also be used to check whether test cases based on uniform partitions, such as branch coverage criteria, result in homogeneous partitions, which are known to be more effective than random testing. In this paper, the testability measure is calculated for the modules of a nuclear power plant's safety software. Module testing with branch coverage criteria required fewer test cases when the module had higher testability. The result shows that the testability measure can be used to evaluate whether partitions have homogeneous characteristics
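
    As a minimal sketch of the entropy ingredient of such a measure (the paper's exact definition, which also folds in fault-tree failure probabilities, is not reproduced here), the snippet computes the Shannon entropy of normalized statement importances; higher entropy means the importance is spread more evenly across the statements.

```python
# Shannon entropy of normalized statement-importance values; a stand-in for the
# entropy component of the testability measure described in the record above.
import math

def importance_entropy(importances):
    total = sum(importances)
    probs = [x / total for x in importances if x > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical importance values for the basic statements of one module:
print(importance_entropy([0.5, 0.2, 0.2, 0.1]))   # about 1.76 bits
```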

  19. Adding Support to the ALMA Common Software for Real-Time Operations through the Usage of a POSIX-Compliant RTOS

    Science.gov (United States)

    Tobar, R. J.; von Brand, H.; Araya, M. A.; Juerges, T.

    2010-12-01

    The ALMA Common Software (ACS) framework lacks the real-time capabilities needed to control the antennas' instrumentation, as has been shown by previous work, which has led to non-portable workarounds to the problem. Indeed, measurements of the time service used in ACS, which is based on the Container/Component model, provide plenty of results that confirm this statement. This work addresses the design and integration of a real-time service for ACS, providing the framework with an implementation that allows control operations on the different instruments to be carried out within real-time constraints. This implementation is compared with the current time service, showing the differences between the two systems when they are subjected to common scenarios. The new implementation also follows the POSIX specification, ensuring interoperability and portability across different operating systems.

  20. Application of software quality assurance to a specific scientific code development task

    International Nuclear Information System (INIS)

    Dronkers, J.J.

    1986-03-01

    This paper describes an application of software quality assurance to a specific scientific code development program. The software quality assurance program consists of three major components: administrative control, configuration management, and user documentation. The program attempts to be consistent with existing local traditions of scientific code development while at the same time providing a controlled process of development

  1. Sending Learning Pills to Mobile Devices in Class to Enhance Student Performance and Motivation in Network Services Configuration Courses

    Science.gov (United States)

    Munoz-Organero, M.; Munoz-Merino, P. J.; Kloos, C. D.

    2012-01-01

    Teaching electrical and computer software engineers how to configure network services normally requires the detailed presentation of many configuration commands and their numerous parameters. Students tend to find it difficult to maintain acceptable levels of motivation. In many cases, this results in their not attending classes and not dedicating…

  2. NASA HERMeS Hall Thruster Electrical Configuration Characterization

    Science.gov (United States)

    Peterson, Peter; Kamhawi, Hani; Huang, Wensheng; Yim, John; Herman, Daniel; Williams, George; Gilland, James; Hofer, Richard

    2016-01-01

    NASA's Hall Effect Rocket with Magnetic Shielding (HERMeS) 12.5 kW Technology Demonstration Unit-1 (TDU-1) Hall thruster has been the subject of extensive technology maturation in preparation for development into a flight-ready propulsion system. Part of the technology maturation was to test the TDU-1 thruster in several ground-based electrical configurations to assess the thruster's robustness and suitability for successful in-space operation. Ground-based electrical configuration testing has recently been shown to be an important step in understanding and assessing how a Hall thruster may operate differently in space compared to ground-based testing, and in determining the best configuration for development and qualification testing. This presentation covers the electrical configuration testing of the TDU-1 HERMeS Hall thruster in NASA Glenn Research Center's Vacuum Facility 5. The three electrical configurations examined are the thruster body tied to facility ground, the thruster floating, and the thruster body electrically tied to cathode common. The TDU-1 HERMeS was configured with two different exit-plane boundary conditions, dielectric and conducting, to examine their influence on the electrical configuration characterization.

  3. The impact of the operating environment on the design of redundant configurations

    International Nuclear Information System (INIS)

    Marseguerra, M.; Padovani, E.; Zio, E.

    1999-01-01

    Safety systems are often characterized by substantial redundancy and diversification in safety critical components. In principle, such redundancy and diversification can bring benefits when compared to single-component systems. However, it has also been recognized that the evaluation of these benefits should take into account that redundancies cannot be founded, in practice, on the assumption of complete independence, so that the resulting risk profile is strongly dominated by dependent failures. It is therefore mandatory that the effects of common cause failures be estimated in any probabilistic safety assessment (PSA). Recently, in the Hughes model for hardware failures and in the Eckhardt and Lee models for software failures, it was proposed that the stressfulness of the operating environment affects the probability that a particular type of component will fail. Thus, dependence of component failure behaviors can arise indirectly through the variability of the environment which can directly affect the success of a redundant configuration. In this paper we investigate the impact of indirect component dependence by means of the introduction of a probability distribution which describes the variability of the environment. We show that the variance of the distribution of the number, or times, of system failures can give an indication of the presence of the environment. Further, the impact of the environment is shown to affect the reliability and the design of redundant configurations
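
    The key observation, that a variable environment inflates the variance of the failure count beyond the value expected under independence, can be illustrated with a small Monte Carlo; the binomial/Beta model and all parameters below are invented purely for illustration and are not taken from the cited models.

```python
# Monte Carlo illustration: if the per-demand failure probability q varies with the
# environment, Var(N) exceeds the fixed-environment binomial value n*q*(1-q).
import numpy as np

rng = np.random.default_rng(1)
n_demands, n_trials = 1000, 200_000

# Fixed environment: constant failure probability per demand.
q_fixed = 0.01
failures_fixed = rng.binomial(n_demands, q_fixed, n_trials)

# Variable environment: q drawn per trial from a Beta distribution with the same mean.
q_variable = rng.beta(2.0, 198.0, n_trials)        # mean 0.01, spread across trials
failures_variable = rng.binomial(n_demands, q_variable)

print(failures_fixed.var())      # close to n*q*(1-q) = 9.9
print(failures_variable.var())   # markedly larger: the environment adds variance
```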

  4. A Software Framework for Multimodal Human-Computer Interaction Systems

    NARCIS (Netherlands)

    Shen, Jie; Pantic, Maja

    2009-01-01

    This paper describes a software framework we designed and implemented for the development and research in the area of multimodal human-computer interface. The proposed framework is based on publish / subscribe architecture, which allows developers and researchers to conveniently configure, test and

  5. [Porting Radiotherapy Software of Varian to Cloud Platform].

    Science.gov (United States)

    Zou, Lian; Zhang, Weisha; Liu, Xiangxiang; Xie, Zhao; Xie, Yaoqin

    2017-09-30

    To develop a low-cost private cloud platform for radiotherapy software. First, a private cloud platform based on OpenStack and virtual GPU hardware was built. Then, on the private cloud platform, all the Varian radiotherapy software modules were installed on the virtual machine and the corresponding functional configuration was completed. Finally, the software on the cloud could be accessed through a virtual desktop client. The functional test results show that a cloud workstation is equivalent to an isolated physical workstation, and any client on the LAN can use the cloud workstation smoothly. The cloud migration performed in this study is economical and practical. The project not only improves the utilization rate of the radiotherapy software, but also makes it possible for cloud computing technology to expand its applications into the field of radiation oncology.

  6. Designing flexible, "chemist-friendly" software to control a radiochemistry autosynthesizer

    International Nuclear Information System (INIS)

    Feliu, A.L.

    1989-01-01

    To enhance the utility of process control software to control radiochemistry autosynthesizers used with short-lived positron-emitting isotopes, a scheme is proposed by which routine executive-level tasks, hardware control operations, and chemical procedures have been segregated. This strategy can lead to chemist-friendly control programs for any desired hardware configuration, as illustrated in new software designed to exploit the features and flexibility of the CTI/Siemens Chemical Process Control Unit. (author)

  7. Possibilities and Limitations of Applying Software Reliability Growth Models to Safety- Critical Software

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Jang, Seung Cheol; Ha, Jae Joo

    2006-01-01

    As digital systems are gradually introduced into nuclear power plants (NPPs), the need to quantitatively analyze the reliability of these digital systems is also increasing. Kang and Sung identified (1) software reliability, (2) common-cause failures (CCFs), and (3) fault coverage as the three most critical factors in the reliability analysis of digital systems. For the reliability estimation of safety-critical software (the software used in safety-critical digital systems), Bayesian Belief Networks (BBNs) appear to be the most widely used approach. The use of BBNs in the reliability estimation of safety-critical software is basically a process of indirectly assigning a reliability based on various observed information and experts' opinions. When software testing results or software failure histories are available, the reliability of the software can be estimated directly using software reliability growth models such as the Jelinski-Moranda model and Goel-Okumoto's nonhomogeneous Poisson process (NHPP) model. Even though it is generally held that software reliability growth models cannot be applied to safety-critical software because of the small number of failures expected from testing such software, we try to identify the possibilities and corresponding limitations of applying software reliability growth models to safety-critical software
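
    The Goel-Okumoto model mentioned here has the well-known mean value function m(t) = a(1 - exp(-b t)), with a the expected total number of faults and b the fault detection rate. The sketch below fits it to an invented cumulative failure history with scipy, purely to show the mechanics; it is not a claim that such a fit is valid for safety-critical software, which is exactly the limitation the record discusses.

```python
# Fit the Goel-Okumoto NHPP mean value function m(t) = a*(1 - exp(-b*t)) to a
# hypothetical cumulative failure history; the data below are invented.
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    return a * (1.0 - np.exp(-b * t))

weeks = np.arange(1, 13)
cumulative_failures = np.array([4, 7, 10, 12, 13, 15, 16, 16, 17, 17, 18, 18])

(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cumulative_failures, p0=(20.0, 0.1))
print(a_hat, b_hat)                       # estimated total faults and detection rate
print(a_hat - cumulative_failures[-1])    # rough estimate of residual (undetected) faults
```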

  8. Verification and validation of control system software

    International Nuclear Information System (INIS)

    Munro, J.K. Jr.; Kisner, R.A.; Bhadtt, S.C.

    1991-01-01

    The following guidelines are proposed for verification and validation (V ampersand V) of nuclear power plant control system software: (a) use risk management to decide what and how much V ampersand V is needed; (b) classify each software application using a scheme that reflects what type and how much V ampersand V is needed; (c) maintain a set of reference documents with current information about each application; (d) use Program Inspection as the initial basic verification method; and (e) establish a deficiencies log for each software application. The following additional practices are strongly recommended: (a) use a computer-based configuration management system to track all aspects of development and maintenance; (b) establish reference baselines of the software, associated reference documents, and development tools at regular intervals during development; (c) use object-oriented design and programming to promote greater software reliability and reuse; (d) provide a copy of the software development environment as part of the package of deliverables; and (e) initiate an effort to use formal methods for preparation of Technical Specifications. The paper provides background information and reasons for the guidelines and recommendations. 3 figs., 3 tabs

  9. Evaluation of a novel compact shearography system with DOE configuration

    Science.gov (United States)

    da Silva, Fabio Aparecido Alves; Willemann, Daniel Pedro; Fantin, Analucia Vieira; Benedet, Mauro Eduardo; Gonçalves, Armando Albertazzi

    2018-05-01

    The most common optical configuration used to produce the laterally shifted images in a shearography system is the modified Michelson interferometer, because of its simplicity. Tests carried out in recent years have shown that the modified Michelson interferometer gives good results in a laboratory environment but still presents difficulties in the field. These difficulties were the main motivation for the development of a more robust system able to operate in unstable environments. This paper presents a new shearography configuration based on a Diffractive Optical Element (DOE). Unlike the diffractive common-path setups found in the literature, in the proposed configuration the DOE is positioned between the image sensor and the objective lens and mounted on a flexible holder, which plays an important role in the system's robustness. Another advantage of the proposed system concerns phase shifting, since it is insensitive to wavelength variations: the lateral movement of the DOE produces the phase shift in the shearography system. Since the pitch of the diffractive grating used is about 60 times greater than the wavelength of a green laser, the DOE configuration is much more robust to external influences than the Michelson interferometer configuration. This work also presents an evaluation of the proposed shearography system and some comparative results against a classical shearography system.

  10. How Well Can Existing Software Support Processes Accomplish Sustainment of a Non-Developmental Item-Based Acquisition Strategy

    Science.gov (United States)

    2017-04-06

    guidance to the PM regarding development and sustainment of software. The need for a strong application of software engineering principles is...on the battlefield by a government-developed network manager application. The configuration of this confluence of software will be jointly managed...How Well Can Existing Software-Support Processes Accomplish Sustainment of a Non-Developmental Item-Based Acquisition Strategy? Graciano

  11. ETICS the international software engineering service for the grid

    CERN Document Server

    Di Meglio, A; Couvares, P; Ronchieri, E; Takács, E

    2008-01-01

    The ETICS system is a distributed software configuration, build and test system designed to fulfil the needs of improving the quality, reliability and interoperability of distributed software in general and grid software in particular. The ETICS project is a consortium of five partners (CERN, INFN, Engineering Ingegneria Informatica, 4D Soft and the University of Wisconsin-Madison). The ETICS service consists of a build and test job execution system based on the Metronome software and an integrated set of web services and software engineering tools to design, maintain and control build and test scenarios. The ETICS system allows taking into account complex dependencies among applications and middleware components and provides a rich environment to perform static and dynamic analysis of the software and execute deployment, system and interoperability tests. This paper gives an overview of the system architecture and functionality set and then describes how the EC-funded EGEE, DILIGENT and OMII-Europe projects ...

  12. Patterns in the Pythagorean Configuration and Some Extensions: The Power of Interactive Geometry Software

    Science.gov (United States)

    Contreras, José

    2015-01-01

    In this paper I describe classroom experiences with pre-service secondary mathematics teachers (PSMTs) investigating and extending patterns embedded in the Pythagorean configuration. This geometric figure is a fruitful source of mathematical tasks to help students, including PSMTs, further develop habits of mind such as visualization,…

  13. New GPIB Control Software at Jefferson Lab

    International Nuclear Information System (INIS)

    Matthew Bickley; Pavel Chevtsov

    2005-01-01

    The control of GPIB devices at Jefferson Lab is based on the GPIB device/driver library. The library is a part of the device/driver development framework. It is activated with the use of the device configuration files that define all hardware components used in the control system to communicate with GPIB devices. As soon as the software is activated, it is ready to handle any device connected to these components and only needs to know the set of commands that the device can understand. The old GPIB control software at Jefferson Lab requires the definition of these commands in the form of a device control software module written in C for each device. Though such modules are relatively simple, they have to be created, successfully compiled, and supported for all control computer platforms. In the new version of GPIB control software all device communication commands are defined in device protocol (ASCII text) files. This makes the support of GPIB devices in the control system much easier
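
    The shift from per-device C modules to ASCII protocol files can be sketched as follows; the file format, the SCPI-like command strings and the stubbed-out bus write are all invented for illustration and are not the actual Jefferson Lab protocol file syntax.

```python
# Illustrative only: a toy parser for an ASCII "device protocol" file mapping
# logical commands to raw GPIB command strings; the format is hypothetical and
# the actual bus write is stubbed out.
EXAMPLE_PROTOCOL = """
# logical_name : GPIB command template
read_voltage  : MEAS:VOLT:DC?
set_voltage   : SOUR:VOLT {value}
identify      : *IDN?
"""

def load_protocol(text):
    table = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, template = (part.strip() for part in line.split(":", 1))
        table[name] = template
    return table

def build_command(table, logical_name, **params):
    command = table[logical_name].format(**params)
    # A real driver would write `command` to the GPIB bus here.
    return command

protocol = load_protocol(EXAMPLE_PROTOCOL)
print(build_command(protocol, "set_voltage", value=5.0))   # -> "SOUR:VOLT 5.0"
```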

  14. The Star, a dynamically configured dataflow director for realtime control

    International Nuclear Information System (INIS)

    Bickley, M.; Kewisch, J.

    1993-01-01

    The CEBAF accelerator is controlled by an automated system consisting of 50 computers connected to machine hardware and another 20 to 30 computers used for displaying machine data. The control system communication software must manage the inter-machine communication of these computers. The different segments of software that make up the machine control system are treated as data sources and data sinks, with a single process mediating the transfer of all data between any data source/data sink pair. The mediating process is called the Star. This dynamically configured process keeps track of all available machine data posted by data sources and of all data requested by data sinks. Data transmission rates through the Star are kept low by sending only data that is requested by other control software, and then only when the value of the data changes. The system is entirely response-driven, with the Star process taking action only at the request of either a data source or a sink. The communication software is written using standard C code and TCP/IP sockets, making it platform independent
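
    The request-driven, change-only forwarding described above can be sketched in a few lines of Python; this is a simplified single-process illustration with an invented callback interface, not the Star's actual C and TCP/IP socket implementation.

```python
# Simplified sketch of a Star-like mediator: sinks subscribe to named data items,
# sources post values, and a value is forwarded only when it changes.
class Mediator:
    def __init__(self):
        self._values = {}          # last known value per data item
        self._subscribers = {}     # item name -> list of sink callbacks

    def subscribe(self, item, callback):
        """A data sink asks to be notified about changes to `item`."""
        self._subscribers.setdefault(item, []).append(callback)
        if item in self._values:                 # deliver the current value at once
            callback(item, self._values[item])

    def post(self, item, value):
        """A data source posts a value; forward it only if it changed."""
        if self._values.get(item) == value:
            return                               # unchanged: keep traffic low
        self._values[item] = value
        for callback in self._subscribers.get(item, []):
            callback(item, value)

star = Mediator()
star.subscribe("beam_current", lambda name, value: print(name, value))
star.post("beam_current", 42.0)   # forwarded to the subscriber
star.post("beam_current", 42.0)   # suppressed: value unchanged
```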

  15. Software quality assurance handbook

    Energy Technology Data Exchange (ETDEWEB)

    1990-09-01

    There are two important reasons for Software Quality Assurance (SQA) at Allied-Signal Inc., Kansas City Division (KCD): First, the benefits from SQA make good business sense. Second, the Department of Energy has requested SQA. This handbook is one of the first steps in a plant-wide implementation of Software Quality Assurance at KCD. The handbook has two main purposes. The first is to provide information that you will need to perform software quality assurance activities. The second is to provide a common thread to unify the approach to SQA at KCD. 2 figs.

  16. Use of collaboration software to improve nuclear power plant outage management

    Energy Technology Data Exchange (ETDEWEB)

    Germain, Shawn

    2015-02-01

    Nuclear Power Plant (NPP) refueling outages create some of the most challenging activities the utilities face in both tracking and coordinating thousands of activities in a short period of time. Other challenges, including nuclear safety concerns arising from atypical system configurations and resource allocation issues, can create delays and schedule overruns, driving up outage costs. Today the majority of the outage communication is done using processes that do not take advantage of advances in modern technologies that enable enhanced communication, collaboration and information sharing. Some of the common practices include: runners that deliver paper-based requests for approval, radios, telephones, desktop computers, daily schedule printouts, and static whiteboards that are used to display information. Many gains have been made to reduce the challenges facing outage coordinators; however; new opportunities can be realized by utilizing modern technological advancements in communication and information tools that can enhance the collective situational awareness of plant personnel leading to improved decision-making. Ongoing research as part of the Light Water Reactor Sustainability Program (LWRS) has been targeting NPP outage improvement. As part of this research, various applications of collaborative software have been demonstrated through pilot project utility partnerships. Collaboration software can be utilized as part of the larger concept of Computer-Supported Cooperative Work (CSCW). Collaborative software can be used for emergent issue resolution, Outage Control Center (OCC) displays, and schedule monitoring. Use of collaboration software enables outage staff and subject matter experts (SMEs) to view and update critical outage information from any location on site or off.

  17. Database foundation for the configuration management of the CERN accelerator controls systems

    International Nuclear Information System (INIS)

    Zaharieva, Z.; Martin Marquez, M.; Peryt, M.

    2012-01-01

    The Controls Configuration Database (CCDB) and its interfaces have been developed over the last 25 years and have become the basis for the configuration management of the control system for all accelerators at CERN. The CCDB contains the data for all configuration items and their relationships that are required for the correct functioning of the control system. The configuration items are quite heterogeneous, covering different areas of the control system - ranging from 3000 front-end computers and 75000 software devices allowing remote control of the accelerators to the valid states of the accelerator timing system. The article describes the different areas of the CCDB, their inter-dependencies and the challenges of establishing the data model for such a diverse configuration management database serving a multitude of clients. The CCDB tracks the life of the configuration items by allowing their clear identification, triggering change management processes and providing status accounting and audits. This required the development and implementation of a combination of tailored processes and tools. The control system is data-driven - the data stored in the CCDB are extracted and propagated to the controls hardware in order to configure it remotely. Special attention is therefore placed on data security and data integrity, as an incorrectly configured item can have a direct impact on the operation of the accelerators. (authors)

  18. Exploratory research for the development of a computer aided software design environment with the software technology program

    Science.gov (United States)

    Hardwick, Charles

    1991-01-01

    Field studies were conducted by MCC to determine areas of research of mutual interest to MCC and JSC. NASA personnel from the Information Systems Directorate and research faculty from UHCL/RICIS visited MCC in Austin, Texas to examine tools and applications under development in the MCC Software Technology Program. MCC personnel presented workshops in hypermedia, design knowledge capture, and design recovery on site at JSC for ISD personnel. The following programs were installed on workstations in the Software Technology Lab, NASA/JSC: (1) GERM (Graphic Entity Relations Modeler); (2) gIBIS (Graphic Issues Based Information System); and (3) DESIRE (Design Recovery tool). These applications were made available to NASA for inspection and evaluation. Programs developed in the MCC Software Technology Program run on the SUN workstation. The programs do not require special configuration, but they will require larger than usual amounts of disk space and RAM to operate properly.

  19. ANALYSIS OF SPECIAL WASTE CONFIGURATIONS AT THE SRS WASTE MANAGEMENT FACILITIES

    International Nuclear Information System (INIS)

    Casella, V; Raymond Dewberry, R

    2007-01-01

    Job Control Waste (JCW) at the Savannah River Site (SRS) Solid Waste Management Facilities (SWMF) may be disposed of in special containers, and the analysis of these containers requires developing specific analysis methodologies. A method has been developed for the routine assay of prohibited items (liquids, etc.) contained in a 30-gallon drum that is then placed into a 55-gallon drum. Method development consisted of system calibration with a NIST standard at various drum-to-detector distances, method verification with a liquid sample containing a known amount of Pu-238, and modeling the inner container using Ortec Isotopic software. Using this method for measurement of the known standard in the drum-in-drum configuration produced excellent agreement (within 15%) with the known value. Savannah River Site Solid Waste Management also requested analysis of waste contained in large black boxes (commonly 18-feet x 12-feet x 7-feet) stored at the SWMF. These boxes are frequently stored in high background areas and background radiation must be considered for each analysis. A detection limit of less than 150 fissile-gram-equivalents (FGE) of TRU waste is required for the black-box analyses. There is usually excellent agreement for the measurements at different distances and measurement uncertainties of about 50% are obtained at distances of at least twenty feet from the box. This paper discusses the experimental setup, analysis and data evaluation for drum-in-drum and black box waste configurations at SRS

  20. Licensing of safety critical software for nuclear reactors. Common position of seven European nuclear regulators and authorised technical support organisations

    International Nuclear Information System (INIS)

    2010-01-01

    It is widely accepted that the assessment of software cannot be limited to verification and testing of the end product, i.e. the computer code. Other factors such as the quality of the processes and methods for specifying, designing and coding have an important impact on the implementation. Existing standards provide limited guidance on the regulatory and safety assessment of these factors. An undesirable consequence of this situation is that the licensing approaches taken by nuclear safety authorities and by technical support organisations are determined independently with only limited informal technical co-ordination and information exchange. It is notable that several software implementations of nuclear safety systems have been marred by costly delays caused by difficulties in co-ordinating the development and qualification process. It was thus felt necessary to compare the respective licensing approaches, to identify where a consensus already exists, and to see how greater consistency and more mutual acceptance could be introduced into current practices. This report is the result of the work of a group of regulator and safety authorities' experts. The 2007 version was completed at the invitation of the Western European Nuclear Regulators' Association (WENRA). The major result of the work is the identification of consensus and common technical positions on a set of important licensing issues raised by the design and operation of computer based systems used in nuclear power plants for the implementation of safety functions. The purpose is to introduce greater consistency and more mutual acceptance into current practices. To achieve these common positions, detailed consideration was paid to the licensing approaches followed in the different countries represented by the experts of the task force. The report is intended to be useful: - to coordinate regulators' and safety experts' technical viewpoints in the design of regulators' national policies and in revisions

  1. Licensing of safety critical software for nuclear reactors. Common position of seven European nuclear regulators and authorised technical support organisations

    Energy Technology Data Exchange (ETDEWEB)

    2010-07-01

    It is widely accepted that the assessment of software cannot be limited to verification and testing of the end product, i.e. the computer code. Other factors such as the quality of the processes and methods for specifying, designing and coding have an important impact on the implementation. Existing standards provide limited guidance on the regulatory and safety assessment of these factors. An undesirable consequence of this situation is that the licensing approaches taken by nuclear safety authorities and by technical support organisations are determined independently with only limited informal technical co-ordination and information exchange. It is notable that several software implementations of nuclear safety systems have been marred by costly delays caused by difficulties in co-ordinating the development and qualification process. It was thus felt necessary to compare the respective licensing approaches, to identify where a consensus already exists, and to see how greater consistency and more mutual acceptance could be introduced into current practices. This report is the result of the work of a group of regulator and safety authorities' experts. The 2007 version was completed at the invitation of the Western European Nuclear Regulators' Association (WENRA). The major result of the work is the identification of consensus and common technical positions on a set of important licensing issues raised by the design and operation of computer based systems used in nuclear power plants for the implementation of safety functions. The purpose is to introduce greater consistency and more mutual acceptance into current practices. To achieve these common positions, detailed consideration was paid to the licensing approaches followed in the different countries represented by the experts of the task force. The report is intended to be useful: - to coordinate regulators' and safety experts' technical viewpoints in the design of regulators' national

  2. Progress on Plant-Level Components for Nuclear Fuel Recycling: Commonality

    International Nuclear Information System (INIS)

    De Almeida, Valmor F.

    2011-01-01

    considering these activities. The exploitation of mathematical commonality for unit operations with the intent of developing generic and comprehensive software for nuclear reprocessing has not been considered to this author's knowledge. Past attention has been given to plant-level processes on an individual and isolated basis, which has led to various models and corresponding codes implementing non-systematic approaches based on elementary principles of chemical processing. This practice has built an initial knowledge base, but it was not fruitful in producing a lasting and extensible simulation capability. In contrast, a common, rigorous, mathematical modeling framework for all plant-level operations, as proposed here, has a tremendous practical and theoretical value because it will reduce the software implementation work, creates a well-defined modeling standard to compare past and future models, and, more importantly, opens the doors for scientific considerations of simulation fidelity; the latter has an obvious beneficial impact in supporting experimental validation programs. Therefore, the proposed framework is likely to generate a solid foundation for modeling plant-level processes for physicochemical nuclear (and non-nuclear) applications. Demonstration of concrete module implementation is the subject of future communications including prototypes for several modules, namely, voloxidation, dissolver, digester, accountability tank, and solvent extraction. Unit-operation commonality is a key aspect to be explored for a successful implementation of these modules aimed at realizing various flowsheets and plant configurations.

  3. Progress on Plant-Level Components for Nuclear Fuel Recycling: Commonality

    Energy Technology Data Exchange (ETDEWEB)

    de Almeida, Valmor F. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2011-08-15

    considering these activities. The exploitation of mathematical commonality for unit operations with the intent of developing generic and comprehensive software for nuclear reprocessing has not been considered to this author's knowledge. Past attention has been given to plant-level processes on an individual and isolated basis, which has led to various models and corresponding codes implementing non-systematic approaches based on elementary principles of chemical processing. This practice has built an initial knowledge base, but it was not fruitful in producing a lasting and extensible simulation capability. In contrast, a common, rigorous, mathematical modeling framework for all plant-level operations, as proposed here, has a tremendous practical and theoretical value because it will reduce the software implementation work, creates a well-defined modeling standard to compare past and future models, and, more importantly, opens the doors for scientific considerations of simulation fidelity; the latter has an obvious beneficial impact in supporting experimental validation programs. Therefore, the proposed framework is likely to generate a solid foundation for modeling plant-level processes for physicochemical nuclear (and non-nuclear) applications. Demonstration of concrete module implementation is the subject of future communications including prototypes for several modules, namely, voloxidation, dissolver, digester, accountability tank, and solvent extraction. Unit-operation commonality is a key aspect to be explored for a successful implementation of these modules aimed at realizing various flowsheets and plant configurations.

  4. Adaptive intrusion data system (AIDS) software routines

    International Nuclear Information System (INIS)

    Corlis, N.E.

    1980-07-01

    An Adaptive Intrusion Data System (AIDS) was developed to collect information from intrusion alarm sensors as part of an evaluation system to improve sensor performance. AIDS is a unique digital data-compression, storage, and formatting system; it also incorporates a capability for video selection and recording for assessment of the sensors monitored by the system. The system is software reprogrammable to numerous configurations that may be used for the collection of environmental, bilevel, analog, and video data. This report describes the software routines that control the different AIDS data-collection modes, the diagnostic programs to test the operating hardware, and the data format. Sample data printouts are also included

  5. Systems and software variability management concepts, tools and experiences

    CERN Document Server

    Capilla, Rafael; Kang, Kyo-Chul

    2013-01-01

    The success of product line engineering techniques in the last 15 years has popularized the use of software variability as a key modeling approach for describing the commonality and variability of systems at all stages of the software lifecycle. Software product lines enable a family of products to share a common core platform, while allowing product-specific functionality to be built on top of the platform. Many companies have exploited the concept of software product lines to increase the resources that focus on highly differentiating functionality and thus improve their competitiveness

  6. Model-Driven Development for PDS4 Software and Services

    Science.gov (United States)

    Hughes, J. S.; Crichton, D. J.; Algermissen, S. S.; Cayanan, M. D.; Joyner, R. S.; Hardman, S. H.; Padams, J. H.

    2018-04-01

    PDS4 data product labels provide the information necessary for processing the referenced digital object. However, significantly more information is available in the PDS4 Information Model. This additional information is made available for use, by both software and services, to configure, promote resiliency, and improve interoperability.

  7. Software Development and Test Methodology for a Distributed Ground System

    Science.gov (United States)

    Ritter, George; Guillebeau, Pat; McNair, Ann R. (Technical Monitor)

    2002-01-01

    The Marshall Space Flight Center's (MSFC) Payload Operations Center (POC) ground system has evolved over a period of about 10 years. During this time the software processes have migrated from more traditional to more contemporary development processes in an effort to minimize unnecessary overhead while maximizing process benefits. The Software processes that have evolved still emphasize requirements capture, software configuration management, design documenting, and making sure the products that have been developed are accountable to initial requirements. This paper will give an overview of how the Software Processes have evolved, highlighting the positives as well as the negatives. In addition, we will mention the COTS tools that have been integrated into the processes and how the COTS have provided value to the project.

  8. Conceptual study of calibration software for large scale input accountancy tank

    International Nuclear Information System (INIS)

    Uchikoshi, Seiji; Yasu, Kan-ichi; Watanabe, Yuichi; Matsuda, Yuji; Kawai, Akio; Tamura, Toshiyuki; Shimizu, Hidehiko.

    1996-01-01

    Demonstration experiments for a large scale input accountancy tank are going to be carried out by the Nuclear Material Control Center. Development of calibration software for an accountancy system with a dip-tube manometer is an important task in these experiments. A conceptual study of the software has been carried out in order to construct a high precision accountancy system; the study was based on ANSI N15.19-1989. The items of the study are the overall configuration, the correction method for the influence of bubble formation, the function model of calibration, and the fitting method for the calibration curve. The results of this study are as follows. 1) The overall configuration of the software was constructed. 2) It was shown by numerical solution that the influence of bubble formation can be corrected using the period of the pressure wave. 3) Two calibration function models, for the well capacity and for the inner structure volume, were prepared from the tank design, and the good fit of the model for the net capacity (the balance of both models) was confirmed by fitting to the designed shape of the tank. 4) The necessity of further consideration of the both-variables-in-error model and the cumulative-error model was recognized. We are going to develop practical software on the basis of these results and to verify it in the demonstration experiments. (author)
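
    A minimal sketch of the calibration-curve fitting step (ordinary least squares of volume against dip-tube level) is shown below; the data are invented, and the both-variables-in-error and cumulative-error treatments discussed in the record are deliberately not addressed.

```python
# Toy calibration-curve fit: cumulative volume added versus measured liquid level.
# Ordinary least squares only; the data are invented, and errors in both variables
# (as discussed in the record above) are not treated here.
import numpy as np

level_mm = np.array([102.0, 205.0, 297.0, 401.0, 498.0, 602.0])   # dip-tube readings
volume_l = np.array([50.0, 100.0, 148.0, 199.0, 247.0, 300.0])    # weighed increments

slope, intercept = np.polyfit(level_mm, volume_l, deg=1)
residuals = volume_l - (slope * level_mm + intercept)
print(slope, intercept, residuals.std())   # litres per mm, offset, scatter of the fit
```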

  9. SLIMarray: Lightweight software for microarray facility management

    Directory of Open Access Journals (Sweden)

    Marzolf Bruz

    2006-10-01

    Full Text Available Abstract Background Microarray core facilities are commonplace in biological research organizations, and need systems for accurately tracking various logistical aspects of their operation. Although these different needs could be handled separately, an integrated management system provides benefits in organization, automation and reduction in errors. Results We present SLIMarray (System for Lab Information Management of Microarrays, an open source, modular database web application capable of managing microarray inventories, sample processing and usage charges. The software allows modular configuration and is well suited for further development, providing users the flexibility to adapt it to their needs. SLIMarray Lite, a version of the software that is especially easy to install and run, is also available. Conclusion SLIMarray addresses the previously unmet need for free and open source software for managing the logistics of a microarray core facility.

  10. An open software system based on X Windows for process control and equipment monitoring

    International Nuclear Information System (INIS)

    Aimar, A.; Carlier, E.; Mertens, V.

    1992-01-01

    The construction and application of a configurable open software system for process control and equipment monitoring can speed up and simplify the development and maintenance of equipment specific software as compared to individual solutions. The present paper reports the status of such an approach for the distributed control systems of SPS and LEP beam transfer components, based on X Windows and the OSF/Motif tool kit and applying data modeling and software engineering methods. (author)

  11. ETICS: the international software engineering service for the grid

    Energy Technology Data Exchange (ETDEWEB)

    Meglio, A D; Begin, M-E [CERN (Switzerland); Couvares, P [University of Wisconsin-Madison (United States); Ronchieri, E [INFN CNAF (Italy); Takacs, E [4D SOFT Ltd (Hungary)], E-mail: alberto.di.meglio@cern.ch

    2008-07-15

    The ETICS system is a distributed software configuration, build and test system designed to fulfil the needs of improving the quality, reliability and interoperability of distributed software in general and grid software in particular. The ETICS project is a consortium of five partners (CERN, INFN, Engineering Ingegneria Informatica, 4D Soft and the University of Wisconsin-Madison). The ETICS service consists of a build and test job execution system based on the Metronome software and an integrated set of web services and software engineering tools to design, maintain and control build and test scenarios. The ETICS system allows taking into account complex dependencies among applications and middleware components and provides a rich environment to perform static and dynamic analysis of the software and execute deployment, system and interoperability tests. This paper gives an overview of the system architecture and functionality set and then describes how the EC-funded EGEE, DILIGENT and OMII-Europe projects are using the software engineering services to build, validate and distribute their software. Finally a number of significant use and test cases will be described to show how ETICS can be used in particular to perform interoperability tests of grid middleware using the grid itself.

  12. ETICS: the international software engineering service for the grid

    Science.gov (United States)

    Meglio, A. D.; Bégin, M.-E.; Couvares, P.; Ronchieri, E.; Takacs, E.

    2008-07-01

    The ETICS system is a distributed software configuration, build and test system designed to fulfil the needs of improving the quality, reliability and interoperability of distributed software in general and grid software in particular. The ETICS project is a consortium of five partners (CERN, INFN, Engineering Ingegneria Informatica, 4D Soft and the University of Wisconsin-Madison). The ETICS service consists of a build and test job execution system based on the Metronome software and an integrated set of web services and software engineering tools to design, maintain and control build and test scenarios. The ETICS system allows taking into account complex dependencies among applications and middleware components and provides a rich environment to perform static and dynamic analysis of the software and execute deployment, system and interoperability tests. This paper gives an overview of the system architecture and functionality set and then describes how the EC-funded EGEE, DILIGENT and OMII-Europe projects are using the software engineering services to build, validate and distribute their software. Finally a number of significant use and test cases will be described to show how ETICS can be used in particular to perform interoperability tests of grid middleware using the grid itself.

  13. ETICS: the international software engineering service for the grid

    International Nuclear Information System (INIS)

    Meglio, A D; Begin, M-E; Couvares, P; Ronchieri, E; Takacs, E

    2008-01-01

    The ETICS system is a distributed software configuration, build and test system designed to fulfil the needs of improving the quality, reliability and interoperability of distributed software in general and grid software in particular. The ETICS project is a consortium of five partners (CERN, INFN, Engineering Ingegneria Informatica, 4D Soft and the University of Wisconsin-Madison). The ETICS service consists of a build and test job execution system based on the Metronome software and an integrated set of web services and software engineering tools to design, maintain and control build and test scenarios. The ETICS system allows taking into account complex dependencies among applications and middleware components and provides a rich environment to perform static and dynamic analysis of the software and execute deployment, system and interoperability tests. This paper gives an overview of the system architecture and functionality set and then describes how the EC-funded EGEE, DILIGENT and OMII-Europe projects are using the software engineering services to build, validate and distribute their software. Finally a number of significant use and test cases will be described to show how ETICS can be used in particular to perform interoperability tests of grid middleware using the grid itself

  14. Configuration and application of He RFQ LLRF control system based on EPICS

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Tae-Sung; Jeong, Hae-Seong; Kim, Seong-Gu; Song, Young-Gi; Kim, Han-Sung; Seol, Kyung-Tae; Kwon, Hyeok-Jung; Cho, Yong-Sub [Korea Multipurpose Accelerator Complex, Gyeongju (Korea, Republic of)

    2015-10-15

    In the He RFQ device, the high-power radio frequency (RF) is critical because it is responsible for the stable delivery and efficient acceleration of the beam. A control system for the high-power RF must therefore be developed; this system is called the LLRF control system. The LLRF control system must hold the amplitude within a ±1 % error range, so a precise remote control system is required. This paper presents the configuration of the LLRF control system in terms of software layers based on EPICS, describes its application to the test environment (hardware), reports the test results and suggests future work. The configuration of the LLRF control system is complete on both the software side and the hardware modules: VxWorks operating system installation, EPICS Base compilation, module source code compilation, object file loading and execution on VxWorks, EPICS IOC operation check, etc. The application of the LLRF control system to the modules is implemented and verified: ADC module, DAC module, EPICS IOC test.
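
    A minimal sketch of the kind of remote amplitude check described above, assuming the pyepics client library and hypothetical process-variable names (this is not the KOMAC implementation):

        # Hedged illustration: write an RF amplitude setpoint over EPICS Channel
        # Access and verify the readback stays within the quoted +/-1 % band.
        # The PV names below are placeholders, not the real He RFQ LLRF PVs.
        import time
        from epics import caput, caget   # pyepics client library

        SETPOINT_PV = "LLRF:AMP:SETPOINT"   # hypothetical PV names
        READBACK_PV = "LLRF:AMP:READBACK"

        def set_amplitude(target, tolerance=0.01, settle_s=1.0):
            """Write the setpoint, wait for the loop to settle, check the readback."""
            caput(SETPOINT_PV, target, wait=True)
            time.sleep(settle_s)
            readback = caget(READBACK_PV)
            error = abs(readback - target) / target
            return readback, error, error <= tolerance

        readback, error, ok = set_amplitude(50.0)
        print(f"readback={readback:.3f}, error={error:.2%}, within tolerance: {ok}")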

  15. Analyzing, Modelling, and Designing Software Ecosystems

    DEFF Research Database (Denmark)

    Manikas, Konstantinos

    as the software development and distribution by a set of actors dependent on each other and the ecosystem. We commence on the hypothesis that the establishment of a software ecosystem on the telemedicine services of Denmark would address these issues and investigate how a software ecosystem can foster...... the development, implementation, and use of telemedicine services. We initially expand the theory of software ecosystems by contributing to the definition and understanding of software ecosystems, providing means of analyzing existing and designing new ecosystems, and defining and measuring the qualities...... of software ecosystems. We use these contributions to design a software ecosystem in the telemedicine services of Denmark with (i) a common platform that supports and promotes development from different actors, (ii) high software interaction, (iii) strong social network of actors, (iv) robust business...

  16. Synergy between Software Product Line and Intelligent Mobile Middleware

    DEFF Research Database (Denmark)

    Zhang, Weishan; Hansen, Klaus Marius

    2007-01-01

    with OWL ontology reasoning enhanced BDI (belief-desire-intention) agents in an ongoing research project called PLIMM (product line enabled intelligent mobile middleware), in which Frame based software product line techniques are applied. Besides the advantages of a software product line, our approach can...... handle ontology evolution and keep all related assets in a consistent state. Ontology evolution is a problem that has not been addressed by current mobile middleware. Another advantage is the ability to configure Jadex BDI agents for different purposes and enhance agent intelligence by adding logic...

  17. Ground test accelerator control system software

    International Nuclear Information System (INIS)

    Burczyk, L.; Dalesio, R.; Dingler, R.; Hill, J.; Howell, J.A.; Kerstiens, D.; King, R.; Kozubal, A.; Little, C.; Martz, V.; Rothrock, R.; Sutton, J.

    1988-01-01

    This paper reports on the GTA control system that provides an environment in which the automation of a state-of-the-art accelerator can be developed. It makes use of commercially available computers, workstations, computer networks, industrial I/O equipment, and software. This system has built-in supervisory control (like most accelerator control systems), tools to support continuous control (like the process control industry), and sequential control for automatic start-up and fault recovery (like few other accelerator control systems). Several software tools support these levels of control: a real-time operating system (VxWorks) with a real-time kernel (VRTX), a configuration database, a sequencer, and a graphics editor. VxWorks supports multitasking, fast context-switching, and preemptive scheduling. VxWorks/VRTX is a network-based development environment specifically designed to work in partnership with the UNIX operating system. A database provides the interface to the accelerator components. It consists of a run-time library and a database configuration and editing tool. A sequencer initiates and controls the operation of all sequence programs (expressed as state programs). A graphics editor gives the user the ability to create color graphic displays showing the state of the machine in either text or graphics form.

  18. SoftLab: A Soft-Computing Software for Experimental Research with Commercialization Aspects

    Science.gov (United States)

    Akbarzadeh-T, M.-R.; Shaikh, T. S.; Ren, J.; Hubbell, Rob; Kumbla, K. K.; Jamshidi, M

    1998-01-01

    SoftLab is a software environment for research and development in intelligent modeling/control using soft-computing paradigms such as fuzzy logic, neural networks, genetic algorithms, and genetic programs. SoftLab addresses the inadequacies of the existing soft-computing software by supporting comprehensive multidisciplinary functionalities from management tools to engineering systems. Furthermore, the built-in features help the user process/analyze information more efficiently by a friendly yet powerful interface, and will allow the user to specify user-specific processing modules, hence adding to the standard configuration of the software environment.

  19. Verification and validation of software

    International Nuclear Information System (INIS)

    Machiels, A.J.; Bhjatt, S.; May, R.S.; Woolley, J.A.

    1995-01-01

    Several U.S. nuclear utilities have embarked upon extensive programs to replace and upgrade their analog instrumentation and control (I&C) systems with digital technology. These new digital systems cover a wide range of applications, from safety-critical protection systems to nonsafety control systems, and their implementations range from customized, turnkey computer systems to utility-integrated configurations of programmable logic controllers. In all cases, the utility must address the issue of quality of the digital system and its software

  20. A flexible and configurable system to test accelerator magnets

    Energy Technology Data Exchange (ETDEWEB)

    Jerzy M. Nogiec et al.

    2001-07-20

    Fermilab's accelerator magnet R and D programs, including production of superconducting high gradient quadrupoles for the LHC insertion regions, require rigorous yet flexible magnetic measurement systems. Measurement systems must be capable of handling various types of hardware and extensible to all measurement technologies and analysis algorithms. A tailorable software system that satisfies these requirements is discussed. This single system, capable of distributed parallel signal processing, is built on top of a flexible component-based framework that allows for easy reconfiguration and run-time modification. Both core and domain-specific components can be assembled into various magnet test or analysis systems. The system configured for a rotating-coil harmonics measurement is presented. Technologies such as Java, OODB, XML, JavaBeans, a software bus and component-based architectures are used.

  1. Black-Box Fuzzing of the REDHAWK Software Communications Architecture

    OpenAIRE

    Sayed, Shereef

    2015-01-01

    As the complexity of software increases, so does the complexity of software testing. This challenge is especially true for modern military communications as radio functionality becomes more digital than analog. The Software Communications Architecture was introduced to manage the increased complexity of software radios. But the challenge of testing software radios still remains. A common methodology of software testing is the unit test. However, unit testing of software assumes that the ...

  2. A research on the application of software defined networking in satellite network architecture

    Science.gov (United States)

    Song, Huan; Chen, Jinqiang; Cao, Suzhi; Cui, Dandan; Li, Tong; Su, Yuxing

    2017-10-01

    Software defined networking (SDN) is a new type of network architecture that decouples the control plane from the data plane of the traditional network, offers flexible configuration and is a direction of next-generation terrestrial Internet development. The satellite network is an important part of the space-ground integrated information network, while the traditional satellite network suffers from difficult network topology maintenance and slow configuration. Applying SDN technology to the satellite network can solve these problems. At present, research on the application of SDN technology in satellite networks is still at a preliminary stage. In this paper, we start by introducing SDN technology and satellite network architecture. We then introduce software defined satellite network architectures, compare different software defined satellite network architectures, and discuss satellite network virtualization. Finally, the present research status and development trends of SDN technology in satellite networks are analyzed.
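
    As a toy illustration of the control-plane/data-plane split discussed above (not any specific SDN controller API, nor the architectures surveyed in the paper), a controller can install match-action rules that a satellite switch's data plane merely looks up:

        # Hypothetical identifiers; flow_table stands in for a switch's flow table.
        flow_table = {}                                # destination -> output link

        def controller_install(dst_satellite, out_link):
            flow_table[dst_satellite] = out_link       # control plane decides routes

        def data_plane_forward(packet):
            # the data plane only performs table lookups; misses go to the controller
            return flow_table.get(packet["dst"], "send_to_controller")

        controller_install("SAT-7", "isl-3")
        print(data_plane_forward({"dst": "SAT-7"}))    # -> 'isl-3'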

  3. Mathis software for controlling BCAM-based monitoring and alignment systems

    CERN Document Server

    Klumb, Francis; Kautzmann, Guillaume; CERN. Geneva. ATS Department

    2016-01-01

    The MATHIS Software (Monitoring and Alignment Tracking for HIE-Isolde Software) aims at providing 3D positions of physical components of the HIE-Isolde superconducting modules, accurately and permanently measured by well-designed networks of BCAM devices (Brandeis Camera Angle Monitoring). Although it is originally intended for the HIE-Isolde project, its architecture and its use cases have been extended and optimized for more general setups. Most of the configuration data are stored either within XML-formatted files or within databases. The adaptation of MATHIS for different BCAM monitoring systems therefore does not require any further code rewriting. Moreover, the software is fully cross-platform and can either be run on the specific Linux machines driving the accelerator electronic devices, or be used on independent Windows workstations as a stand-alone software. In the first case, the software mainly relies on FESA (Front End Software Architecture) which is an object-oriented real-time framework that ens...

  4. Holistic processing of face configurations and components.

    Science.gov (United States)

    Hayward, William G; Crookes, Kate; Chu, Ming Hon; Favelle, Simone K; Rhodes, Gillian

    2016-10-01

    Although many researchers agree that faces are processed holistically, we know relatively little about what information holistic processing captures from a face. Most studies that assess the nature of holistic processing do so with changes to the face affecting many different aspects of face information (e.g., different identities). Does holistic processing affect every aspect of a face? We used the composite task, a common means of examining the strength of holistic processing, with participants making same-different judgments about configuration changes or component changes to 1 portion of a face. Configuration changes involved changes in spatial position of the eyes, whereas component changes involved lightening or darkening the eyebrows. Composites were either aligned or misaligned, and were presented either upright or inverted. Both configuration judgments and component judgments showed evidence of holistic processing, and in both cases it was strongest for upright face composites. These results suggest that holistic processing captures a broad range of information about the face, including both configuration-based and component-based information. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  5. LHCb software strategy

    CERN Document Server

    Van Herwijnen, Eric

    1998-01-01

    This document describes the software strategy of the LHCb experiment. The main objective is to reuse designs and code wherever possible; We will implement an architecturally driven design process; This architectural process will be implemented using Object Technology; We aim for platform independence; try to take advantage of distributed computing and will use industry standards, commercial software and profit from HEP developments; We will implement a common software process and development environment. One of the major problems that we are immediately faced with is the conversion of our current code from Fortran into an Object Oriented language and the conversion of our current developers to Object technology. Some technical terms related to OO programming are defined in Annex A.1

  6. Strategies employed for LHC software performance studies

    CERN Document Server

    Nowak, A

    2010-01-01

    The objective of this work is to collect and assess the software performance related strategies employed by the major players in the LHC software arena: the four main experiments (ALICE, ATLAS, CMS and LHCb) and the two main software frameworks (Geant4 and ROOT). As the software used differs between the parties, so do the directions and methods in optimization, and their intensity. The common feeling shared by nearly all interviewed parties is that performance is not one of their top priorities and that maintaining it at a constant level is a satisfactory solution, given the resources at hand. In principle, despite some organized efforts, a less structured approach seems to be the dominant one, and opportunistic optimization prevails. Four out of six surveyed groups are investigating memory management related effects, deemed to be the primary cause of their performance issues. The most commonly used tools include Valgrind and homegrown software. All questioned groups expressed the desire for advanced tools, s...

  7. Requirements Engineering for Software Integrity and Safety

    Science.gov (United States)

    Leveson, Nancy G.

    2002-01-01

    Requirements flaws are the most common cause of errors and software-related accidents in operational software. Most aerospace firms list requirements as one of their most important outstanding software development problems and all of the recent NASA spacecraft losses related to software (including the highly publicized Mars Program failures) can be traced to requirements flaws. In light of these facts, it is surprising that relatively little research is devoted to requirements in contrast with other software engineering topics. The research proposed built on our previous work, including both criteria for determining whether a requirements specification is acceptably complete and a new approach to structuring system specifications called Intent Specifications. This grant was to fund basic research on how these ideas could be extended to leverage innovative approaches to the problems of (1) reducing the impact of changing requirements, (2) finding requirements specification flaws early through formal and informal analysis, and (3) avoiding common flaws entirely through appropriate requirements specification language design.

  8. The Company Approach to Software Engineering Project Courses

    Science.gov (United States)

    Broman, D.; Sandahl, K.; Abu Baker, M.

    2012-01-01

    Teaching larger software engineering project courses at the end of a computing curriculum is a way for students to learn some aspects of real-world jobs in industry. Such courses, often referred to as capstone courses, are effective for learning how to apply the skills they have acquired in, for example, design, test, and configuration management.…

  9. An Open-Source Tool Set Enabling Analog-Digital-Software Co-Design

    Directory of Open Access Journals (Sweden)

    Michelle Collins

    2016-02-01

    Full Text Available This paper presents an analog-digital hardware-software co-design environment for simulating and programming reconfigurable systems. The tool simulates and designs, as well as enabling experimental measurements after compiling to configurable systems, within the same integrated design tool framework. High-level software in Scilab/Xcos (open-source programs similar to MATLAB/Simulink) converts the high-level block description by the user to blif format (sci2blif), which acts as an input to the modified VPR tool, including the code vpr2swcs, encoding the specific platform through specific architecture files and resulting in a targetable switch list on the resulting configurable analog–digital system. The resulting tool uses an analog and mixed-signal library of components, giving users and future researchers access to the basic analog operations/computations that are possible.

  10. A software program for exchanging MR data

    DEFF Research Database (Denmark)

    Ring, P B; Jensen, J A; Henriksen, O

    1993-01-01

    of digital MR images of the human brain. Because there was no common data format, a software package was developed for data exchange. This article describes the basic features of the developed software. The software package was written in the C language and was successfully tested on an IBM-6150 UNIX...... workstation. The software is currently being tested on the following series of UNIX workstations: SUN SPARC, IBM RS6000, and HP 9000/700....

  11. Software quality assurance plan for void fraction instrument

    International Nuclear Information System (INIS)

    Gimera, M.

    1994-01-01

    Waste Tank SY-101 has been the focus of extensive characterization work over the past few years. The waste continually generates gases, most notably hydrogen, which are periodically released from the waste. Gas can be trapped in tank waste in three forms: as void gas (bubbles), dissolved gas, or absorbed gas. Void fraction is the volume percentage of a given sample that is comprised of void gas. The void fraction instrument (VFI) acquires the data necessary to calculate void fraction. This document covers the product, Void Fraction Data Acquisition Software. The void fraction software being developed will have the ability to control the void fraction instrument hardware and acquire data necessary to calculate the void fraction in samples. This document provides the software quality assurance plan, verification and validation plan, and configuration management plan for developing the software for the instrumentation that will be used to obtain void fraction data from Tank SY-101
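
    For reference, the quantity defined above has a one-line form (this illustrates only the definition, not the VFI data acquisition software): void fraction is the void-gas volume expressed as a percentage of the total sample volume.

        def void_fraction_percent(void_gas_volume, total_sample_volume):
            # volume percentage of the sample that is void gas (bubbles)
            return 100.0 * void_gas_volume / total_sample_volume

        # Example: 35 mL of trapped gas in a 500 mL sample gives a 7 % void fraction.
        print(void_fraction_percent(35.0, 500.0))   # 7.0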

  12. Toward an Agile Approach to Managing the Effect of Requirements on Software Architecture during Global Software Development

    OpenAIRE

    Alsahli, Abdulaziz; Khan, Hameed; Alyahya, Sultan

    2016-01-01

    Requirement change management (RCM) is a critical activity during software development because poor RCM results in defects and thereby in software failure. To achieve RCM, efficient impact analysis is mandatory. A common repository is a good approach to maintaining changed requirements, supporting reuse and reducing effort. Thus, a better approach is needed to tailor knowledge for better change management of requirements and architecture during global software development (GSD). The o...

  13. Application of industry-standard guidelines for the validation of avionics software

    Science.gov (United States)

    Hayhurst, Kelly J.; Shagnea, Anita M.

    1990-01-01

    The application of industry standards to the development of avionics software is discussed, focusing on verification and validation activities. It is pointed out that the procedures that guide the avionics software development and testing process are under increased scrutiny. The DO-178A guidelines, Software Considerations in Airborne Systems and Equipment Certification, are used by the FAA for certifying avionics software. To investigate the effectiveness of the DO-178A guidelines for improving the quality of avionics software, guidance and control software (GCS) is being developed according to the DO-178A development method. It is noted that, due to the extent of the data collection and configuration management procedures, any phase in the life cycle of a GCS implementation can be reconstructed. Hence, a fundamental development and testing platform has been established that is suitable for investigating the adequacy of various software development processes. In particular, the overall effectiveness and efficiency of the development method recommended by the DO-178A guidelines are being closely examined.

  14. Reducing the risk of failure: Software Quality assurance standards and methods

    International Nuclear Information System (INIS)

    Elphick, J.; Cope, H.

    1992-01-01

    An effective Software Quality Assurance (SQA) program provides an overall approach to software engineering and the establishment of proven methods for the production of reliable software. In the authors' experience, the overall costs over the software life are diminished by the application of quality methods, and the issues in implementing quality standards and practices are many. This paper addresses those issues as well as the lessons learned from developing and implementing a number of software quality assurance programs. The authors' experience includes the development and implementation of their own NRC-accepted SQA program and an SQA program for an engineering software developer, as well as developing SQA procedures, standards, and methods for utilities and medical and commercial clients. Some of the issues addressed in this paper are: setting goals and defining quality; applying the software life cycle; addressing organizational issues; providing flexibility and increasing productivity; producing effective documentation; maintaining quality records; imposing software configuration management; conducting reviews, audits, and controls; verification and validation; and controlling software procurement

  15. A Survey of Commonly Applied Methods for Software Process Improvement

    Science.gov (United States)

    1994-02-01

    Kaoru Ishikawa [Ishikawa 85] under the label "total quality control" or "TQC." TQM is in use in many software organizations, to a greater or lesser...destructive side, and that they are difficult to dislodge or modify once they are in place. Ishikawa [Ishikawa 85] notes that even when industrial...32 CMU/SEI-93-TR-27 [Ishikawa 85] Ishikawa, K., What is Total Quality Control? The Japanese Way. Translated by David J. Lu, Prentice-Hall, Englewood

  16. Cross-compilation of ATLAS online software to the PowerPC-VxWorks system

    International Nuclear Information System (INIS)

    Tian Yuren; Li Jin; Ren Zhengyu; Zhu Kejun

    2005-01-01

    BES III selected the ATLAS online software as the framework of its run-control system. BES III applies the PowerPC-VxWorks system in its front-end readout system, so it is necessary to cross-compile this software to the PowerPC-VxWorks system. The article discusses several aspects of this project, such as the structure and organization of the ATLAS online software, the application of the CMT tool during cross-compilation, the selection and configuration of the cross-compiler, and methods to solve various problems due to differences in compiler and operating system. After cross-compiling, the software runs normally and, together with the software running on the Linux system, makes up a complete run-control system. (authors)

  17. System engineering and configuration management in ITER

    International Nuclear Information System (INIS)

    Chiocchio, S.; Martin, E.; Barabaschi, P.; Bartels, Hans Werner; How, J.; Spears, W.

    2007-01-01

    The construction of ITER will represent a major challenge for the fusion community at large, because of the intrinsic complexity of the tokamak design, the large number of different systems which are all essential for its operation, the worldwide distribution of the design activities and the unusual procurement scheme based on a combination of in-kind and directly funded deliverables. A key requirement for the success of such a large project is that a systematic approach to ensure the consistency of the design with the required performance is adopted. Also, effective project management methods, tools and working practices must be deployed to facilitate the communication and collaboration among the institutions and industries involved in the project. The authors have been involved in the definition and practical implementation of the design integration and configuration control structure inside ITER and in the system engineering process during the selection and optimization of the machine configuration. In parallel, they have assessed design, drawing and documentation management software to be used for the construction phase. Here, they describe the experience gained in recent years, explain the drivers behind the selection of the documents and drawings management systems, and illustrate the scope and issues of the configuration management activities to ensure the congruence of the design, to control and track the design changes and to manage the interfaces among the ITER systems

  18. Magnetic configuration control of ITER plasmas

    International Nuclear Information System (INIS)

    Albanese, R.; Mattei, M.; Portone, A.; Ambrosino, G.; Artaserse, G.; Crisanti, F.; De Tommasi, G.; Fresa, R.; Sartori, F.; Villone, F.

    2007-01-01

    The aim of this paper is to present some new tools used to review the capability of the ITER Poloidal Field (PF) system in controlling the broad range of plasma configurations presently forecasted during ITER operation. The attention is focused on the axi-symmetric aspects of plasma magnetic configuration control since they pose the greatest challenges in terms of control power and they have the largest impact on machine capital cost. Some preliminary results obtained during ongoing activities in collaboration between ENEA/CREATE and EFDA are presented. The paper is divided in two main parts devoted, respectively, to the presentation of a procedure for the PF current optimisation during the scenario, and of a software environment for the study of the PF system capabilities using the plasma linearized response. The proposed PF current optimisation procedure is then used to assess Scenario 2 design, also taking into account the presence of axisymmetric eddy currents and possible variations of poloidal beta and internal inductance. The numerical linear model based tool derived from the JET oriented eXtreme Shape Controller (XSC) tools is finally used to obtain results on the strike point sweeping in ITER

  19. Using a Foundational Ontology for Reengineering a Software Enterprise Ontology

    Science.gov (United States)

    Perini Barcellos, Monalessa; de Almeida Falbo, Ricardo

    Knowledge about software organizations is considerably relevant to software engineers. The use of a common vocabulary for representing useful knowledge about software organizations involved in software projects is important for several reasons, such as supporting knowledge reuse and allowing communication and interoperability between tools. Domain ontologies can be used to define a common vocabulary for sharing and reusing knowledge about some domain. Foundational ontologies can be used for evaluating and re-designing domain ontologies, giving them real-world semantics. This paper presents an evaluation of a Software Enterprise Ontology that was reengineered using the Unified Foundation Ontology (UFO) as a basis.

  20. Self-adaptation in Software-intensive Cyber-physical Systems: From System Goals to Architecture Configurations

    Czech Academy of Sciences Publication Activity Database

    Gerostathopoulos, I.; Bureš, Tomáš; Hnětynka, P.; Keznikl, Jaroslav; Kit, M.; Plášil, F.; Plouzeau, N.

    2016-01-01

    Roč. 122, December (2016), s. 378-397 ISSN 0164-1212 Grant - others:GA MŠk(CZ) LD15051 Institutional support: RVO:67985807 Keywords : cyber–physical systems * self-adaptivity * dependability Subject RIV: JC - Computer Hardware ; Software Impact factor: 2.444, year: 2016

  1. Utilizing a Photo-Analysis Software for Content Identifying Method (CIM

    Directory of Open Access Journals (Sweden)

    Nejad Nasim Sahraei

    2015-01-01

    Full Text Available Content Identifying Methodology (CIM) was developed to measure public preferences in order to reveal the common characteristics of landscapes and aspects of underlying perceptions, including the individual's reactions to content and spatial configuration; it can therefore assist with the identification of factors that influence preference. For the analysis of landscape photographs through CIM, several studies have utilized image analysis software, such as Adobe Photoshop, to identify the physical contents of the scenes. This study evaluates the public's preferences for aesthetic qualities of pedestrian bridges in urban areas through a photo-questionnaire survey, in which respondents evaluated images of pedestrian bridges in urban areas. Two groups of images were evaluated as the most and least preferred scenes, i.e. those with the highest and lowest mean scores, respectively. These two groups were analyzed with CIM and also evaluated based on the respondents' descriptions of each group to reveal the pattern of preferences and the factors that may affect them. Digimizer software was employed to triangulate the two approaches and to determine the role of these factors in people's preferences. This study introduces useful software for image analysis that can measure the physical contents and also their spatial organization in the scenes. The findings reveal that Digimizer could be a useful tool in CIM approaches to preference studies that utilize photographs in place of the actual landscape to determine the most important factors in public preferences for pedestrian bridges in urban areas.
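
    A minimal sketch of a CIM-style content measurement, assuming a simple RGB threshold as the content rule (the cited studies used Adobe Photoshop and Digimizer; the "vegetation" mask below is purely illustrative):

        import numpy as np
        from PIL import Image

        def green_content_fraction(path):
            # fraction of pixels classified as vegetation-like by a crude RGB rule
            rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            mask = (g > r) & (g > b) & (g > 60)
            return mask.mean()                       # value in [0, 1]

        # print(green_content_fraction("bridge_scene.jpg"))   # hypothetical image file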

  2. Process mining software repositories: do developers work as expected?

    NARCIS (Netherlands)

    Poncin, W.; Serebrenik, A.; Brand, van den M.G.J.

    2012-01-01

    Modern software development commonly makes use of a multitude of software repositories. How can these help us to understand the on-going development process? Researchers of Eindhoven University of Technology design new methods revealing how software has been developed.

  3. Software upgradation of PXI based data acquisition for Aditya experiments

    International Nuclear Information System (INIS)

    Panchal, Vipul K.; Chavda, Chhaya; Patel, Vijay; Patel, Narendra; Ghosh, Joydeep

    2015-01-01

    The Aditya Data Acquisition and Control System is designed to acquire data from diagnostics such as loop voltage, Rogowski coils, magnetic probes and X-rays, and to control gas feed, gate valves, trigger pulse generation, etc. The CAMAC-based data acquisition system was updated with PXI-based multifunction modules. The system is interfaced to a PC over an optical link using a PCI-based controller module. Data are acquired through a LabVIEW graphical user interface (GUI) and stored on a server. The present GUI-based application lacks features such as module parameter configuration, analysis and webcasting, so new application software is being developed in LabVIEW with support for individual modules and programmable channel configuration - sampling rate, number of pre- and post-trigger samples, selection of active channels, etc. It will also provide access to the timer and counter functionality of the modules. The software will be scalable to more modules, channels and crates, with security based on different levels of user privileges. (author)

  4. [Optimization of electrode configuration in soil electrokinetic remediation].

    Science.gov (United States)

    Liu, Fang; Fu, Rong-Bing; Xu, Zhen

    2015-02-01

    Electric field distributions of several different electrode configurations in non-uniform electric field were simulated using MATLAB software, and the electrokinetic remediation device was constructed according to the best electrode configuration. The changes of soil pH and heavy metal residues in different parts of the device during the electrokinetic remediation were also studied. The results showed that, in terms of the effectiveness of the electric field strength, the square (1-D-1) and hexagonal (2-D-3) were the optimal electrode configurations for one-dimensional and two-dimensional respectively and the changes of soil pH, the removal of heavy metals and the distribution of electric field were closely related to one another. An acidic migration band, which could prevent premature precipitation of heavy metals to a certain extent and promote electrokinetic removal of heavy metals, was formed gradually along with the remediation in the whole hexagon device when the cathodic pH was controlled during the remediation of the four cationic metallic ions, Cd2+, Ni2+, Pb2+ and Cu2+. After 480-hour remediation, the total removals of Cd, Ni, Pb and Cu were 86.6%, 86.2%, 67.7% and 73.0%, respectively. Remediation duration and replacement frequency of the electrodes could be adjusted according to the repair target.
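
    An idealized sketch of the kind of field map compared in such studies, assuming 2D point electrodes in a homogeneous medium (this is not the authors' MATLAB model): the potentials of a hexagonal anode ring around a central cathode are superposed, and the field strength is taken as the gradient magnitude.

        import numpy as np

        def field_map(electrodes, charges, extent=0.5, n=200):
            xs = np.linspace(-extent, extent, n)
            X, Y = np.meshgrid(xs, xs)
            V = np.zeros_like(X)
            for (ex, ey), q in zip(electrodes, charges):
                r = np.hypot(X - ex, Y - ey) + 1e-6      # avoid division by zero
                V += -q * np.log(r)                      # 2D point-source potential
            Ey, Ex = np.gradient(-V, xs, xs)             # E = -grad(V)
            return V, np.hypot(Ex, Ey)

        # hexagonal ring of anodes (a 2-D-3 style layout) around a central cathode
        ring = [(0.3 * np.cos(a), 0.3 * np.sin(a)) for a in np.linspace(0, 2 * np.pi, 7)[:-1]]
        V, E = field_map(ring + [(0.0, 0.0)], [1.0] * 6 + [-6.0])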

  5. A highly versatile and easily configurable system for plant electrophysiology.

    Science.gov (United States)

    Gunsé, Benet; Poschenrieder, Charlotte; Rankl, Simone; Schröeder, Peter; Rodrigo-Moreno, Ana; Barceló, Juan

    2016-01-01

    In this study we present a highly versatile and easily configurable system for measuring plant electrophysiological parameters and ionic flow rates, connected to a computer-controlled, highly accurate positioning device. The modular software used allows easily customizable configurations for the measurement of electrophysiological parameters. Both the operational tests and the experiments already performed have been fully successful and rendered a low-noise and highly stable signal. Assembly, programming and configuration examples are discussed. The system is a powerful technique that not only gives precise measurement of plant electrophysiological status, but also allows easy development of ad hoc configurations that are not constrained to plant studies.
    • We developed a highly modular system for electrophysiology measurements that can be used either in organs or cells, performs either steady or dynamic intra- and extracellular measurements, and takes advantage of the ease of visual object-oriented programming.
    • High-precision data acquisition in electrically noisy environments allows the system to run even in a laboratory close to electrical equipment that produces electrical noise.
    • The system improves on currently used systems for monitoring and controlling high-precision measurements and micromanipulation, providing an open and customizable environment for multiple experimental needs.

  6. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    ...consolidate, collect and, if needed, develop common processes principles and other assets across the Agency in order to provide more consistency in software development and acquisition practices and to reduce the overall cost of maintaining or increasing current NASA CMMI maturity levels. 6. Provide additional support for small projects that includes: (a) guidance for appropriate tailoring of requirements for small projects, (b) availability of suitable tools, including support tool set-up and training, and (c) training for small project personnel, assurance personnel and technical authorities on the acceptable options for tailoring requirements and performing assurance on small projects. 7. Develop software training classes for the more experienced software engineers using on-line training, videos, or small separate modules of training that can be accommodated as needed throughout a project. 8. Create guidelines to structure non-classroom training opportunities such as mentoring, peer reviews, lessons learned sessions, and on-the-job training. 9. Develop a set of predictive software defect data and a process for assessing software testing metric data against it. 10. Assess Agency-wide licenses for commonly used software tools. 11. Fill the knowledge gap in common software engineering practices for new hires and co-ops. 12. Work through the Science, Technology, Engineering and Mathematics (STEM) program with universities in strengthening education in the use of common software engineering practices and standards. 13. Follow up this benchmark study with a deeper look into what both internal and external organizations perceive as the scope of software assurance, the value they expect to obtain from it, and the shortcomings they experience in the current practice. 14. Continue interactions with external software engineering environment through collaborations, knowledge sharing, and benchmarking.

  7. Workflow-Based Software Development Environment

    Science.gov (United States)

    Izygon, Michel E.

    2013-01-01

    The Software Developer's Assistant (SDA) helps software teams more efficiently and accurately conduct or execute software processes associated with NASA mission-critical software. SDA is a process enactment platform that guides software teams through project-specific standards, processes, and procedures. Software projects are decomposed into all of their required process steps or tasks, and each task is assigned to project personnel. SDA orchestrates the performance of work required to complete all process tasks in the correct sequence. The software then notifies team members when they may begin work on their assigned tasks and provides the tools, instructions, reference materials, and supportive artifacts that allow users to compliantly perform the work. A combination of technology components captures and enacts any software process use to support the software lifecycle. It creates an adaptive workflow environment that can be modified as needed. SDA achieves software process automation through a Business Process Management (BPM) approach to managing the software lifecycle for mission-critical projects. It contains five main parts: TieFlow (workflow engine), Business Rules (rules to alter process flow), Common Repository (storage for project artifacts, versions, history, schedules, etc.), SOA (interface to allow internal, GFE, or COTS tools integration), and the Web Portal Interface (collaborative web environment

  8. Importance measures in risk-informed decision making: Ranking, optimisation and configuration control

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, Jussi K., E-mail: jussi.vaurio@pp1.inet.fi [Prometh Solutions, Hiihtaejaenkuja 3K, 06100 Porvoo (Finland)

    2011-11-15

    This paper describes roles, extensions and applications of importance measures of components and configurations for making risk-informed decisions relevant to system operations, maintenance and safety. Basic importance measures and their relationships are described for independent and mutually exclusive events and for groups of events associated with common cause failures. The roles of importances are described mainly in two groups of activities: (a) ranking safety significance of systems, structures, components and human actions for preventive safety assurance activities, and (b) making decisions about permissible permanent and temporary configurations and allowed configuration times for regulation, technical specifications and for on-line risk monitoring. Criticality importance and sums of criticalities turn out to be appropriate measures for ranking and optimization. Several advantages are pointed out and consistent ranking of pipe segments for in-service inspection is provided as an example. Risk increase factor and its generalization risk gain are most appropriately used to assess corrective priorities and acceptability of a situation when components are already failed or when planning to take one or more components out of service for maintenance. Precise definitions are introduced for multi-failure configurations and it is shown how they can be assessed under uncertainties, in particular when common cause failures or success states may be involved. A general weighted average method is compared to other candidate methods in benchmark cases. It is the preferable method for prediction when a momentary configuration is known or only partially known. Potential applications and optimization of allowed outage times are described. The results show how to generalize and apply various importance measures to ranking and optimization and how to manage configurations in uncertain multi-failure situations. - Highlights: > Rigorous methods developed for using importances
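
    For reference, the basic measures named above have standard textbook forms; a small sketch for a simple risk model R(q) = q1*q2 + q3 (two redundant trains plus an independent contributor) illustrates them, without the paper's extensions to common cause failures and multi-failure configurations:

        def risk(q):
            q1, q2, q3 = q
            return q1 * q2 + q3

        def importances(q, i):
            base = risk(q)
            up, down = list(q), list(q)
            up[i], down[i] = 1.0, 0.0
            birnbaum = risk(up) - risk(down)        # Birnbaum importance, dR/dq_i
            criticality = birnbaum * q[i] / base    # fraction of risk due to component i
            risk_increase = risk(up) / base         # risk increase factor (RAW)
            return birnbaum, criticality, risk_increase

        q = (1e-3, 2e-3, 1e-5)
        for i in range(3):
            print(i, importances(q, i))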

  9. Importance measures in risk-informed decision making: Ranking, optimisation and configuration control

    International Nuclear Information System (INIS)

    Vaurio, Jussi K.

    2011-01-01

    This paper describes roles, extensions and applications of importance measures of components and configurations for making risk-informed decisions relevant to system operations, maintenance and safety. Basic importance measures and their relationships are described for independent and mutually exclusive events and for groups of events associated with common cause failures. The roles of importances are described mainly in two groups of activities: (a) ranking safety significance of systems, structures, components and human actions for preventive safety assurance activities, and (b) making decisions about permissible permanent and temporary configurations and allowed configuration times for regulation, technical specifications and for on-line risk monitoring. Criticality importance and sums of criticalities turn out to be appropriate measures for ranking and optimization. Several advantages are pointed out and consistent ranking of pipe segments for in-service inspection is provided as an example. Risk increase factor and its generalization risk gain are most appropriately used to assess corrective priorities and acceptability of a situation when components are already failed or when planning to take one or more components out of service for maintenance. Precise definitions are introduced for multi-failure configurations and it is shown how they can be assessed under uncertainties, in particular when common cause failures or success states may be involved. A general weighted average method is compared to other candidate methods in benchmark cases. It is the preferable method for prediction when a momentary configuration is known or only partially known. Potential applications and optimization of allowed outage times are described. The results show how to generalize and apply various importance measures to ranking and optimization and how to manage configurations in uncertain multi-failure situations. - Highlights: → Rigorous methods developed for using importances

  10. Software for analysis of waveforms acquired by digital Doppler broadening spectrometer

    International Nuclear Information System (INIS)

    Vlcek, M; Čížek, J; Procházka, I

    2013-01-01

    High-resolution digital spectrometer for coincidence measurement of Doppler broadening of positron annihilation radiation was recently developed and tested. In this spectrometer pulses from high purity Ge (HPGe) detectors are sampled in the real time by fast digitizers and subsequently analyzed off-line by software. We present description of the software routines used for pulse shape analysis in two spectrometer configurations: (i) semi-digital setup in which detector pulses shaped in spectroscopic amplifiers (SA's) are digitized; (ii) pure digital setup in which pulses from detector pre-amplifiers are digitized directly. Software developed in this work will be freely available in the form of source code and pre-compiled binaries.
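
    As a minimal sketch of one common off-line step in such digital spectrometers (the authors' pulse-shape analysis routines are more elaborate), the pulse height can be extracted from a digitized waveform after subtracting the pre-trigger baseline:

        import numpy as np

        def pulse_height(samples, n_baseline=100):
            baseline = np.mean(samples[:n_baseline])   # average of the pre-trigger region
            corrected = samples - baseline
            return corrected.max(), int(np.argmax(corrected))

        # waveform = np.load("digitized_pulse.npy")    # hypothetical digitizer record
        # height, peak_index = pulse_height(waveform)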

  11. Integrated conception of hardware/software mixed systems used in nuclear instrumentation

    International Nuclear Information System (INIS)

    Dias, Ailton F.; Sorel, Yves; Akil, Mohamed

    2002-01-01

    Hardware/software codesign carries out the design of systems composed by a hardware portion, with specific components, and a software portion, with microprocessor based architecture. This paper describes the Algorithm Architecture Adequation (AAA) design methodology - originally oriented to programmable multicomponent architectures, its extension to reconfigurable circuits and its application to design and development of nuclear instrumentation systems composed by programmable and configurable circuits. AAA methodology uses an unified model to describe algorithm, architecture and implementation, based on graph theory. The great advantage of AAA methodology is the utilization of a same model from the specification to the implementation of hardware/software systems, reducing the complexity and design time. (author)

  12. Honeywell Modular Automation System Computer Software Documentation for the Magnesium Hydroxide Precipitation Process

    International Nuclear Information System (INIS)

    STUBBS, A.M.

    2001-01-01

    The purpose of this Computer Software Document (CSWD) is to provide configuration control of the Honeywell Modular Automation System (MAS) in use at the Plutonium Finishing Plant (PFP) for the Magnesium Hydroxide Precipitation Process in Rm 230C/234-5Z. The magnesium hydroxide process control software Rev 0 is being updated to include control programming for a second hot plate. The process control programming was performed by the system administrator. Software testing for the additional hot plate was performed per PFP Job Control Work Package 2Z-00-1703. The software testing was verified by Quality Control to comply with OSD-Z-184-00044, Magnesium Hydroxide Precipitation Process

  13. A 'Toolbox' Equivalent Process for Safety Analysis Software

    International Nuclear Information System (INIS)

    O'Kula, K.R.; Eng, Tony

    2004-01-01

    Defense Nuclear Facilities Safety Board (DNFSB) Recommendation 2002-1 (Quality Assurance for Safety-Related Software) identified a number of quality assurance issues on the use of software in Department of Energy (DOE) facilities for analyzing hazards, and designing and operating controls that prevent or mitigate potential accidents. The development and maintenance of a collection, or 'toolbox', of multiple-site use, standard solution, Software Quality Assurance (SQA)-compliant safety software is one of the major improvements identified in the associated DOE Implementation Plan (IP). The DOE safety analysis toolbox will contain a set of appropriately quality-assured, configuration-controlled, safety analysis codes, recognized for DOE-broad, safety basis applications. Currently, six widely applied safety analysis computer codes have been designated for toolbox consideration. While the toolbox concept considerably reduces SQA burdens among DOE users of these codes, many users of unique, single-purpose, or single-site software may still have sufficient technical justification to continue use of their computer code of choice, but are thwarted by the multiple-site condition on toolbox candidate software. The process discussed here provides a roadmap for an equivalency argument, i.e., establishing satisfactory SQA credentials for single-site software that can be deemed "toolbox-equivalent". The process is based on the model established to meet IP Commitment 4.2.1.2: Establish SQA criteria for the safety analysis "toolbox" codes. Implementing criteria that establish the set of prescriptive SQA requirements are based on implementation plan/procedures from the Savannah River Site, also incorporating aspects of those from the Waste Isolation Pilot Plant (SNL component) and the Yucca Mountain Project. The major requirements are met with evidence of a software quality assurance plan, software requirements and design documentation, user's instructions, test report, a

  14. Understanding software faults and their role in software reliability modeling

    Science.gov (United States)

    Munson, John C.

    1994-01-01

    regression equation. Since most of the existing metrics have common elements and are linear combinations of these common elements, it seems reasonable to investigate the structure of the underlying common factors or components that make up the raw metrics. The technique we have chosen to use to explore this structure is a procedure called principal components analysis. Principal components analysis is a decomposition technique that may be used to detect and analyze collinearity in software metrics. When confronted with a large number of metrics measuring a single construct, it may be desirable to represent the set by some smaller number of variables that convey all, or most, of the information in the original set. Principal components are linear transformations of a set of random variables that summarize the information contained in the variables. The transformations are chosen so that the first component accounts for the maximal amount of variation of the measures of any possible linear transform; the second component accounts for the maximal amount of residual variation; and so on. The principal components are constructed so that they represent transformed scores on dimensions that are orthogonal. Through the use of principal components analysis, it is possible to have a set of highly related software attributes mapped into a small number of uncorrelated attribute domains. This definitively solves the problem of multi-collinearity in subsequent regression analysis. There are many software metrics in the literature, but principal component analysis reveals that there are few distinct sources of variation, i.e. dimensions, in this set of metrics. It would appear perfectly reasonable to characterize the measurable attributes of a program with a simple function of a small number of orthogonal metrics each of which represents a distinct software attribute domain.
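
    A short sketch of the principal-components reduction described above, using synthetic metric values for illustration: a matrix of correlated raw metrics (rows are modules, columns are metrics) is standardized and projected onto a few orthogonal components.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        size = rng.integers(50, 500, size=(30, 1)).astype(float)     # lines of code per module
        metrics = np.hstack([
            size,                                                    # raw size metric
            size * 0.20 + rng.normal(0, 5, (30, 1)),                 # e.g. statement count
            size * 0.05 + rng.normal(0, 2, (30, 1)),                 # e.g. cyclomatic complexity
        ])

        pca = PCA(n_components=2).fit(StandardScaler().fit_transform(metrics))
        print(pca.explained_variance_ratio_)   # most variance loads on the first component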

  15. Permutation-invariant distance between atomic configurations

    Science.gov (United States)

    Ferré, Grégoire; Maillet, Jean-Bernard; Stoltz, Gabriel

    2015-09-01

    We present a permutation-invariant distance between atomic configurations, defined through a functional representation of atomic positions. This distance enables us to directly compare different atomic environments with an arbitrary number of particles, without going through a space of reduced dimensionality (i.e., fingerprints) as an intermediate step. Moreover, this distance is naturally invariant through permutations of atoms, avoiding the time consuming associated minimization required by other common criteria (like the root mean square distance). Finally, the invariance through global rotations is accounted for by a minimization procedure in the space of rotations solved by Monte Carlo simulated annealing. A formal framework is also introduced, showing that the distance we propose verifies the property of a metric on the space of atomic configurations. Two examples of applications are proposed. The first one consists in evaluating faithfulness of some fingerprints (or descriptors), i.e., their capacity to represent the structural information of a configuration. The second application concerns structural analysis, where our distance proves to be efficient in discriminating different local structures and even classifying their degree of similarity.
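
    A minimal sketch of a distance built from a functional representation, assuming each configuration is mapped to a sum of Gaussians centered on its atoms and compared in the L2 norm (the authors' exact functional may differ, and the rotation minimization is omitted here); permutation invariance follows directly from the sum:

        import numpy as np

        def _overlap(A, B, sigma):
            # closed-form integral of the product of two Gaussian mixtures
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            dim = A.shape[1]
            return ((4 * np.pi * sigma**2) ** (-dim / 2) * np.exp(-d2 / (4 * sigma**2))).sum()

        def config_distance(R1, R2, sigma=0.5):
            """L2 distance between Gaussian-smeared atomic configurations."""
            return np.sqrt(_overlap(R1, R1, sigma) + _overlap(R2, R2, sigma)
                           - 2.0 * _overlap(R1, R2, sigma))

        a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
        b = a[::-1] + 0.05                      # permuted and slightly displaced copy
        print(config_distance(a, b))            # small, and independent of atom ordering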

  16. Permutation-invariant distance between atomic configurations

    International Nuclear Information System (INIS)

    Ferré, Grégoire; Maillet, Jean-Bernard; Stoltz, Gabriel

    2015-01-01

    We present a permutation-invariant distance between atomic configurations, defined through a functional representation of atomic positions. This distance enables us to directly compare different atomic environments with an arbitrary number of particles, without going through a space of reduced dimensionality (i.e., fingerprints) as an intermediate step. Moreover, this distance is naturally invariant through permutations of atoms, avoiding the time consuming associated minimization required by other common criteria (like the root mean square distance). Finally, the invariance through global rotations is accounted for by a minimization procedure in the space of rotations solved by Monte Carlo simulated annealing. A formal framework is also introduced, showing that the distance we propose verifies the property of a metric on the space of atomic configurations. Two examples of applications are proposed. The first one consists in evaluating faithfulness of some fingerprints (or descriptors), i.e., their capacity to represent the structural information of a configuration. The second application concerns structural analysis, where our distance proves to be efficient in discriminating different local structures and even classifying their degree of similarity

  17. Programming Language Software For Graphics Applications

    Science.gov (United States)

    Beckman, Brian C.

    1993-01-01

    New approach reduces repetitive development of features common to different applications. High-level programming language and interactive environment with access to graphical hardware and software created by adding graphical commands and other constructs to standardized, general-purpose programming language, "Scheme". Designed for use in developing other software incorporating interactive computer-graphics capabilities into application programs. Provides alternative to programming entire applications in C or FORTRAN, specifically ameliorating design and implementation of complex control and data structures typifying applications with interactive graphics. Enables experimental programming and rapid development of prototype software, and yields high-level programs serving as executable versions of software-design documentation.

  18. Flow Characteristics Near to Stent Strut Configurations on Femoropopliteal Artery

    Science.gov (United States)

    Paisal, Muhammad Sufyan Amir; Fadhil Syed Adnan, Syed; Taib, Ishkrizat; Ismail, Al Emran; Kamil Abdullah, Mohammad; Nordin, Normayati; Seri, Suzairin Md; Darlis, Nofrizalidris

    2017-08-01

    Femoropopliteal artery stenting is a common procedure suggested by medical experts, especially for patients diagnosed with severe stenosis. Many researchers have reported that the growth of stenosis is significantly related to the geometry of the stent strut configuration: different stent geometries produce different flow patterns and re-circulation in the stented femoropopliteal artery. The blood flow characteristics near the stent geometry are used to predict the possibility of thrombosis and atherosclerosis forming, as well as increased growth of the stenosis. Thus, this study aims to determine the flow characteristics near the stent strut configuration based on different hemodynamic parameters. Three-dimensional models of the stent and a simplified femoropopliteal artery are built using computer aided design (CAD) software. Three different stent strut shapes - hexagon, circle and rectangle - are simulated using a computational fluid dynamics (CFD) method, and a parametric study is carried out to predict stent performance from the hemodynamic differences. The hemodynamic parameters considered are pressure, velocity, low wall shear stress (WSSlow) and wall shear stress (WSS). Flow re-circulation is formed for all simulated stent models, with the proximal region showing the most severe vortices. However, the rectangular stent strut shape (Type P3) shows the lowest WSSlow and the highest WSS within the range of 4 dyne/cm2 to 70 dyne/cm2, and the best hemodynamic stent performance overall. In conclusion, Type P3 gives a favourable hemodynamic result that predicts a lower probability of thrombosis and atherosclerosis forming, as well as reduced growth of restenosis.
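
    A minimal sketch of the wall-shear-stress criterion quoted above (not the CFD post-processing used in the study), assuming a Newtonian near-wall estimate tau_w = mu * du/dy, converted to dyne/cm2 and checked against the 4-70 dyne/cm2 band:

        MU_BLOOD = 0.0035   # Pa*s, a common Newtonian approximation for blood viscosity

        def wall_shear_stress_dyne(u_near_wall, wall_distance, mu=MU_BLOOD):
            tau_pa = mu * u_near_wall / wall_distance    # first-order velocity gradient at the wall
            return tau_pa * 10.0                         # 1 Pa = 10 dyne/cm2

        tau = wall_shear_stress_dyne(u_near_wall=0.05, wall_distance=1e-4)   # m/s and m
        print(tau, 4.0 <= tau <= 70.0)   # 17.5 dyne/cm2, within the favourable band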

  19. A Software Implementation of a Satellite Interface Message Processor.

    Science.gov (United States)

    Eastwood, Margaret A.; Eastwood, Lester F., Jr.

    A design for network control software for a computer network is described in which some nodes are linked by a communications satellite channel. It is assumed that the network has an ARPANET-like configuration; that is, that specialized processors at each node are responsible for message switching and network control. The purpose of the control…

  20. Strengthening Software Authentication with the ROSE Software Suite

    International Nuclear Information System (INIS)

    White, G

    2006-01-01

    Many recent nonproliferation and arms control software projects include a software authentication regime. These include U.S. Government-sponsored projects both in the United States and in the Russian Federation (RF). This trend toward requiring software authentication is only accelerating. Demonstrating assurance that software performs as expected without hidden ''backdoors'' is crucial to a project's success. In this context, ''authentication'' is defined as determining that a software package performs only its intended purpose and performs said purpose correctly and reliably over the planned duration of an agreement. In addition to visual inspections by knowledgeable computer scientists, automated tools are needed to highlight suspicious code constructs, both to aid visual inspection and to guide program development. While many commercial tools are available for portions of the authentication task, they are proprietary and not extensible. An open-source, extensible tool can be customized to the unique needs of each project (projects can have both common and custom rules to detect flaws and security holes). Any such extensible tool has to be based on a complete language compiler. ROSE is precisely such a compiler infrastructure developed within the Department of Energy (DOE) and targeted at the optimization of scientific applications and user-defined libraries within large-scale applications (typically applications of a million lines of code). ROSE is a robust, source-to-source analysis and optimization infrastructure currently addressing large, million-line DOE applications in C and C++ (handling the full C, C99, C++ languages and with current collaborations to support Fortran90). We propose to extend ROSE to address a number of security-specific requirements, and apply it to software authentication for nonproliferation and arms control projects
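    ROSE itself is a C++ source-to-source compiler infrastructure; purely to illustrate the kind of extensible, rule-based check the report argues for, the sketch below uses Python's standard ast module as a stand-in and flags calls that execute arbitrary strings. The rule name and the flagged construct are invented for illustration, not taken from ROSE.

```python
import ast

class SuspiciousCallRule(ast.NodeVisitor):
    """Toy authentication rule: flag calls that execute arbitrary strings,
    a construct a reviewer would want highlighted for manual inspection."""
    FLAGGED = {"eval", "exec", "system"}   # hypothetical rule configuration

    def __init__(self):
        self.findings = []

    def visit_Call(self, node):
        # Handles both bare names (eval(...)) and attribute calls (os.system(...)).
        name = getattr(node.func, "id", getattr(node.func, "attr", None))
        if name in self.FLAGGED:
            self.findings.append((node.lineno, name))
        self.generic_visit(node)

source = """
def update(cmd):
    import os
    os.system(cmd)   # would be reported for review
"""
rule = SuspiciousCallRule()
rule.visit(ast.parse(source))
print(rule.findings)   # [(4, 'system')]
```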

  1. Configuration management manual as a tool for improving plant change controls

    International Nuclear Information System (INIS)

    Craig, L.L.

    1991-01-01

    Early vintage plants, such as Turkey Point at Florida Power and Light (FP and L) Company, were not provided with as much design documentation as later plants. At FP and L, programs were initiated to reconstruct the design bases, correct and update drawings at Turkey Point, and develop an overall configuration management program for both Turkey Point and St. Lucie plants. This paper discusses the Configuration Management Manual developed by plant and engineering personnel, which is used to train personnel to a common language and achieve better understanding of individual impact on configuration management

  2. Specificity of foot configuration during bipedal stance in ballet dancers.

    Science.gov (United States)

    Casabona, Antonino; Leonardi, Giuseppa; Aimola, Ettore; La Grua, Giovanni; Polizzi, Cristina Maria; Cioni, Matteo; Valle, Maria Stella

    2016-05-01

    Learning highly specialized upright postures may be of benefit for more common as well as for novel stances. In this study, we asked whether this generalization occurs with foot configurations previously trained or depends on a generic increase in balance difficulty. We also explored the possibility that the benefit may concern not only the level of postural performance but also the structural organization of the upright standing. Ten elite professional ballet dancers were compared to ten untrained subjects, measuring the motion of the center of pressure (COP) across a set of five stances with different foot configurations. The balance stability was measured by computing the area, the sway path, and the root mean square of the COP motion, whereas the structure of the postural control was assessed by computing approximate entropy, fractal dimension and the mean power frequency. The foot positions included common and challenging stances, with the level of difficulty varying across the configurations. Among these conditions, only one foot configuration was familiar to the dancers. Statistically significant differences between the two groups, for all the parameters, were observed only for the stance with the foot position familiar to the dancers. Stability and structural parameters exhibited comparable differences. We concluded that the benefit from classical ballet is limited to a specific foot configuration, regardless of the level of stance difficulty or the component of postural control. Copyright © 2016 Elsevier B.V. All rights reserved.
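    As an illustration of the stability measures named in the abstract (sway area, sway path and root mean square of the COP motion), the sketch below computes them from a COP trajectory. The 95% confidence-ellipse estimator used for the area is one common choice and is an assumption here, not necessarily the definition used in the study.

```python
import numpy as np

def cop_stability_measures(cop):
    """cop: array of shape (n_samples, 2) with medio-lateral and
    antero-posterior COP coordinates (e.g. in mm)."""
    centered = cop - cop.mean(axis=0)
    # Sway path: total length of the COP trajectory.
    sway_path = np.sum(np.linalg.norm(np.diff(cop, axis=0), axis=1))
    # Root mean square of the resultant distance from the mean position.
    rms = np.sqrt(np.mean(np.sum(centered**2, axis=1)))
    # 95% confidence ellipse area from the covariance of the two coordinates
    # (a common estimator; the paper may define the sway area differently).
    eigvals = np.linalg.eigvalsh(np.cov(centered.T))
    area_95 = np.pi * 5.991 * np.sqrt(np.prod(eigvals))
    return {"sway_path": sway_path, "rms": rms, "area_95": area_95}

# Example: 30 s of quiet stance sampled at 100 Hz (synthetic random-walk data).
rng = np.random.default_rng(1)
cop = np.cumsum(rng.normal(scale=0.05, size=(3000, 2)), axis=0)
print(cop_stability_measures(cop))
```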

  3. Core component integration tests for the back-end software sub-system in the ATLAS data acquisition and event filter prototype -1 project

    International Nuclear Information System (INIS)

    Badescu, E.; Caprini, M.; Niculescu, M.; Radu, A.

    2000-01-01

    The ATLAS data acquisition (DAQ) and Event Filter (EF) prototype -1 project was intended to produce a prototype system for evaluating candidate technologies and architectures for the final ATLAS DAQ system on the LHC accelerator at CERN. Within the prototype project, the back-end sub-system encompasses the software for configuring, controlling and monitoring the DAQ. The back-end sub-system includes core components and detector integration components. The core components provide the basic functionality and had priority in terms of time-scale for development in order to have a baseline sub-system that can be used for integration with the data-flow sub-system and event filter. The following components are considered to be the core of the back-end sub-system: - Configuration databases, describe a large number of parameters of the DAQ system architecture, hardware and software components, running modes and status; - Message reporting system (MRS), allows all software components to report messages to other components in the distributed environment; - Information service (IS) allows the information exchange for software components; - Process manager (PMG), performs basic job control of software components (start, stop, monitoring the status); - Run control (RC), controls the data taking activities by coordinating the operations of the DAQ sub-systems, back-end software and external systems. Performance and scalability tests have been made for individual components. The back-end subsystem integration tests bring together all the core components and several trigger/DAQ/detector integration components to simulate the control and configuration of data taking sessions. For back-end integration tests a test plan was provided. The tests have been done using a shell script that goes through different phases as follows: - starting the back-end server processes to initialize communication services and PMG; - launching configuration specific processes via DAQ supervisor as

  4. ClinGen Pathogenicity Calculator: a configurable system for assessing pathogenicity of genetic variants.

    Science.gov (United States)

    Patel, Ronak Y; Shah, Neethu; Jackson, Andrew R; Ghosh, Rajarshi; Pawliczek, Piotr; Paithankar, Sameer; Baker, Aaron; Riehle, Kevin; Chen, Hailin; Milosavljevic, Sofia; Bizon, Chris; Rynearson, Shawn; Nelson, Tristan; Jarvik, Gail P; Rehm, Heidi L; Harrison, Steven M; Azzariti, Danielle; Powell, Bradford; Babb, Larry; Plon, Sharon E; Milosavljevic, Aleksandar

    2017-01-12

    The success of the clinical use of sequencing based tests (from single gene to genomes) depends on the accuracy and consistency of variant interpretation. Aiming to improve the interpretation process through practice guidelines, the American College of Medical Genetics and Genomics (ACMG) and the Association for Molecular Pathology (AMP) have published standards and guidelines for the interpretation of sequence variants. However, manual application of the guidelines is tedious and prone to human error. Web-based tools and software systems may not only address this problem but also document reasoning and supporting evidence, thus enabling transparency of evidence-based reasoning and resolution of discordant interpretations. In this report, we describe the design, implementation, and initial testing of the Clinical Genome Resource (ClinGen) Pathogenicity Calculator, a configurable system and web service for the assessment of pathogenicity of Mendelian germline sequence variants. The system allows users to enter the applicable ACMG/AMP-style evidence tags for a specific allele with links to supporting data for each tag and generate guideline-based pathogenicity assessment for the allele. Through automation and comprehensive documentation of evidence codes, the system facilitates more accurate application of the ACMG/AMP guidelines, improves standardization in variant classification, and facilitates collaborative resolution of discordances. The rules of reasoning are configurable with gene-specific or disease-specific guideline variations (e.g. cardiomyopathy-specific frequency thresholds and functional assays). The software is modular, equipped with robust application program interfaces (APIs), and available under a free open source license and as a cloud-hosted web service, thus facilitating both stand-alone use and integration with existing variant curation and interpretation systems. The Pathogenicity Calculator is accessible at http
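    The calculator combines ACMG/AMP evidence tags under configurable rules. As a deliberately simplified illustration, the sketch below hard-codes a small fragment of the published combining rules (for example, pathogenic when a very-strong criterion such as PVS1 is accompanied by at least one strong criterion); the real rule set, its benign-evidence arm and its gene-specific variations are far richer than this.

```python
from collections import Counter

def classify(tags):
    """Very simplified fragment of the ACMG/AMP combining rules, for illustration
    only: it covers a handful of pathogenic / likely pathogenic combinations and
    ignores benign evidence, conflicting evidence and gene-specific variations."""
    n = Counter(tag[:3] if tag.startswith("PVS") else tag[:2] for tag in tags)
    pvs, ps, pm, pp = n["PVS"], n["PS"], n["PM"], n["PP"]
    if pvs >= 1 and (ps >= 1 or pm >= 2 or (pm == 1 and pp >= 1) or pp >= 2):
        return "Pathogenic"
    if ps >= 2:
        return "Pathogenic"
    if (pvs >= 1 and pm == 1) or (ps == 1 and pm >= 1) or (ps == 1 and pp >= 2) or pm >= 3:
        return "Likely pathogenic"
    return "Uncertain significance"

# Example: a null variant (PVS1) plus a de novo observation (PS2).
print(classify(["PVS1", "PS2"]))   # Pathogenic
print(classify(["PM2", "PP3"]))    # Uncertain significance
```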

  5. Reactor Systems Technology Division code development and configuration/quality control procedures

    International Nuclear Information System (INIS)

    Johnson, E.C.

    1985-06-01

    Procedures are prescribed for executing a code development task and implementing the resulting coding in an official version of a computer code. The responsibilities of the project manager, development staff members, and the Code Configuration/Quality Control Group are defined. Examples of forms, logs, computer job control language, and suggested outlines for reports associated with software production and implementation are included in Appendix A. 1 ref., 2 figs

  6. Control software for the CBM readout chain

    Energy Technology Data Exchange (ETDEWEB)

    Loizeau, Pierre-Alain [GSI Helmholtzzentrum fuer Schwerionenforschung GmbH (Germany)

    2016-07-01

    The Compressed Baryonic Matter (CBM) experiment, which will be built at FAIR, will use free-streaming readout electronics to acquire high-statistics data-sets of physics probes in fixed target heavy-ion collisions. Since no simple signatures suitable for a hardware trigger are available for most of them, reconstruction and selection of the interesting collisions will be done in software, in a computer farm called the First Level Event Selector (FLES). The raw data coming from the detectors are pre-processed, pre-calibrated and aggregated in an FPGA-based layer called Data Preprocessing Boards (DPB). IPbus will be used to communicate with the DPBs and through them with the elements of the readout chain closer to the detectors. A slow control environment based on this software is being developed by CBM to configure the DPBs as well as the Front-End Electronics in an efficient way and to monitor their performance. This contribution presents the layout planned for the slow control software, its first implementation and corresponding test results.

  7. Automated Cryocooler Monitor and Control System Software

    Science.gov (United States)

    Britchcliffe, Michael J.; Conroy, Bruce L.; Anderson, Paul E.; Wilson, Ahmad

    2011-01-01

    This software is used in an automated cryogenic control system developed to monitor and control the operation of small-scale cryocoolers. The system was designed to automate the cryogenically cooled low-noise amplifier system described in "Automated Cryocooler Monitor and Control System" (NPO-47246), NASA Tech Briefs, Vol. 35, No. 5 (May 2011), page 7a. The software contains algorithms necessary to convert non-linear output voltages from the cryogenic diode-type thermometers and vacuum pressure and helium pressure sensors, to temperature and pressure units. The control function algorithms use the monitor data to control the cooler power, vacuum solenoid, vacuum pump, and electrical warm-up heaters. The control algorithms are based on a rule-based system that activates the required device based on the operating mode. The external interface is Web-based. It acts as a Web server, providing pages for monitor, control, and configuration. No client software from the external user is required.
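    As a rough illustration of the two functions described (conversion of non-linear sensor voltages and rule-based control of the cooler hardware), the sketch below interpolates a diode-thermometer calibration table and applies simple operating-mode rules. The calibration points, thresholds and device names are hypothetical, not taken from the NASA system.

```python
import numpy as np

# Hypothetical diode-thermometer calibration table: voltage (V) vs temperature (K).
DIODE_V = np.array([0.50, 0.60, 0.80, 1.00, 1.10, 1.60])
DIODE_T = np.array([300.0, 250.0, 150.0, 77.0, 40.0, 4.2])

def diode_temperature(voltage):
    """Convert the non-linear diode voltage to kelvin by table interpolation."""
    return float(np.interp(voltage, DIODE_V, DIODE_T))

def control_step(mode, temp_k, vacuum_mbar, devices):
    """One pass of a rule-based controller: activate devices according to the
    operating mode and the monitored values (thresholds are illustrative)."""
    if mode == "cooldown":
        devices["vacuum_pump"] = vacuum_mbar > 1e-4      # pump until rough vacuum reached
        devices["cooler_power"] = temp_k > 15.0          # run the cryocooler toward setpoint
        devices["warmup_heater"] = False
    elif mode == "warmup":
        devices["cooler_power"] = False
        devices["warmup_heater"] = temp_k < 290.0        # heat until near ambient
    return devices

state = control_step("cooldown", diode_temperature(1.05), 2e-3,
                     {"vacuum_pump": False, "cooler_power": False, "warmup_heater": False})
print(state)
```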

  8. A perspective on software quality management using microcomputers in safety-related activities

    International Nuclear Information System (INIS)

    Braudt, T.E.; Pratl, M.J.

    1992-01-01

    Software Quality Management, often referred to as Software Quality Assurance or SQA, is a belief or mindset in establishing and protecting the value of software as a corporate asset. It is often expressed in terms of a basic methodology for ensuring adequate controls to maintain the integrity of the configuration of a software system. SQA applies to all activities germane to the acquisition, installation, operation and maintenance of software systems and is key to calculational accuracy and completeness in an Engineering and/or Scientific arena. Simply, it is a vital management tool for ensuring cost-effective utilization of information management resources. The basic principles of SQA apply equally to software applications in microcomputer environments and mainframe environments alike. Regardless of the nature of the computing environment, divisions of responsibilities or logistical difficulties, quality measures must be established to ensure accuracy, completeness, reliability, and reproducibility of the results of the software application. The extent to which these measures are applied should be based upon regulation, economics and practicality

  9. NIF Projects Controls and Information Systems Software Quality Assurance Plan

    Energy Technology Data Exchange (ETDEWEB)

    Fishler, B

    2011-03-18

    Quality achievement for the National Ignition Facility (NIF) and the National Ignition Campaign (NIC) is the responsibility of the NIF Projects line organization as described in the NIF and Photon Science Directorate Quality Assurance Plan (NIF QA Plan). This Software Quality Assurance Plan (SQAP) is subordinate to the NIF QA Plan and establishes quality assurance (QA) activities for the software subsystems within Controls and Information Systems (CIS). This SQAP implements an activity level software quality assurance plan for NIF Projects as required by the LLNL Institutional Software Quality Assurance Program (ISQAP). Planned QA activities help achieve, assess, and maintain appropriate quality of software developed and/or acquired for control systems, shot data systems, laser performance modeling systems, business applications, industrial control and safety systems, and information technology systems. The objective of this SQAP is to ensure that appropriate controls are developed and implemented for management planning, work execution, and quality assessment of the CIS organization's software activities. The CIS line organization places special QA emphasis on rigorous configuration control, change management, testing, and issue tracking to help achieve its quality goals.

  10. NIF Projects Controls and Information Systems Software Quality Assurance Plan

    International Nuclear Information System (INIS)

    Fishler, B.

    2011-01-01

    Quality achievement for the National Ignition Facility (NIF) and the National Ignition Campaign (NIC) is the responsibility of the NIF Projects line organization as described in the NIF and Photon Science Directorate Quality Assurance Plan (NIF QA Plan). This Software Quality Assurance Plan (SQAP) is subordinate to the NIF QA Plan and establishes quality assurance (QA) activities for the software subsystems within Controls and Information Systems (CIS). This SQAP implements an activity level software quality assurance plan for NIF Projects as required by the LLNL Institutional Software Quality Assurance Program (ISQAP). Planned QA activities help achieve, assess, and maintain appropriate quality of software developed and/or acquired for control systems, shot data systems, laser performance modeling systems, business applications, industrial control and safety systems, and information technology systems. The objective of this SQAP is to ensure that appropriate controls are developed and implemented for management planning, work execution, and quality assessment of the CIS organization's software activities. The CIS line organization places special QA emphasis on rigorous configuration control, change management, testing, and issue tracking to help achieve its quality goals.

  11. Software ecosystems – a systematic literature review

    DEFF Research Database (Denmark)

    Manikas, Konstantinos; Hansen, Klaus Marius

    2013-01-01

    A software ecosystem is the interaction of a set of actors on top of a common technological platform that results in a number of software solutions or services. Arguably, software ecosystems are gaining importance with the advent of, e.g., the Google Android, Apache, and Salesforce.com ecosystems. However, there exists no systematic overview of the research done on software ecosystems from a software engineering perspective. We performed a systematic literature review of software ecosystem research, analyzing 90 papers on the subject taken from a gross collection of 420. Our main conclusions are that while research on software ecosystems is increasing (a) there is little consensus on what constitutes a software ecosystem, (b) few analytical models of software ecosystems exist, and (c) little research is done in the context of real-world ecosystems. This work provides an overview of the field, while...

  12. Migration of nuclear criticality safety software from a mainframe to a workstation environment

    International Nuclear Information System (INIS)

    Bowie, L.J.; Robinson, R.C.; Cain, V.R.

    1993-01-01

    The Nuclear Criticality Safety Department (NCSD), Oak Ridge Y-12 Plant, has made the transition from executing the Martin Marietta Energy Systems Nuclear Criticality Safety Software (NCSS) on IBM mainframes to executing it on a Hewlett-Packard (HP) 9000/730 workstation (NCSSHP). NCSSHP contains the following configuration-controlled modules and cross-section libraries: BONAMI, CSAS, GEOMCHY, ICE, KENO IV, KENO Va, MODIIFY, NITAWL SCALE, SLTBLIB, XSDRN, UNIXLIB, albedos library, weights library, 16-Group HANSEN-ROACH master library, 27-Group ENDF/B-IV master library, and standard composition library. This paper will discuss the method used to choose the workstation, the hardware setup of the chosen workstation, an overview of Y-12 software quality assurance and configuration control methodology, code validation, difficulties encountered in migrating the codes, and advantages of migrating to a workstation environment

  13. Using MDA for integration of heterogeneous components in software supply chains

    NARCIS (Netherlands)

    Hartmann, Johan Herman; Keren, Mila; Matsinger, Aart; Rubin, Julia; Trew, Tim; Yatzkar-Haham, Tali

    2013-01-01

    Software product lines are increasingly built using components from specialized suppliers. A company that is in the middle of a supply chain has to integrate components from its suppliers and offer (partially configured) products to its customers. To satisfy both the variability required by each

  14. Design of a smart textile mat to study pressure distribution on multiple foam material configurations

    NARCIS (Netherlands)

    Donselaar, van R.; Chen, W.

    2011-01-01

    In this paper, we present a design of a smart textile pressure mat to study the pressure distribution with multiple foam material configurations for neonatal monitoring at Neonatal Intensive Care Units (NICU). A smart textile mat with 64 pressure sensors has been developed including software at the

  15. Optimizing infrastructure for software testing using virtualization

    International Nuclear Information System (INIS)

    Khalid, O.; Shaikh, A.; Copy, B.

    2012-01-01

    Virtualization technology and cloud computing have brought a paradigm shift in the way we utilize, deploy and manage computer resources. They allow fast deployment of multiple operating systems as containers on physical machines which can be either discarded after use or check-pointed for later re-deployment. At the European Organization for Nuclear Research (CERN), we have been using virtualization technology to quickly set up virtual machines for our developers with pre-configured software to enable them to quickly test/deploy a new version of a software patch for a given application. This paper reports both on the techniques that have been used to set up a private cloud on commodity hardware and also presents the optimization techniques we used to remove deployment-specific performance bottlenecks. (authors)

  16. Development of software for estimating clear sky solar radiation in Indonesia

    Science.gov (United States)

    Ambarita, H.

    2017-01-01

    Research on solar energy applications in Indonesia has received increasing attention in recent years. Solar radiation is harvested by a solar collector or a solar cell, which converts the energy into useful forms such as heat and/or electricity. In order to provide a better configuration of a solar collector or a solar cell, clear sky radiation should be estimated properly. In this study, in-house software for estimating clear sky radiation is developed. The governing equations are solved simultaneously. The software is tested in Medan city by performing solar radiation measurements. For clear sky radiation, the software results and the measurements show good agreement. However, for cloudy sky conditions it cannot predict the solar radiation. This software can be used to estimate the clear sky radiation in Indonesia.
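    The abstract does not state which governing equations the software solves, so the sketch below is only an assumed stand-in: Hottel's widely used clear-sky beam-transmittance correlation with tropical-climate correction factors and a Liu-Jordan diffuse estimate. The coefficients and the single-point example are illustrative, not the paper's model.

```python
import numpy as np

GSC = 1367.0  # solar constant, W/m^2

def hottel_clear_sky(day_of_year, cos_zenith, altitude_km=0.03, climate=(0.95, 0.98, 1.02)):
    """Clear-sky beam and diffuse irradiance on a horizontal surface using
    Hottel's transmittance correlation (tropical climate correction factors
    by default; valid for altitudes below about 2.5 km)."""
    r0, r1, rk = climate
    a0 = r0 * (0.4237 - 0.00821 * (6.0 - altitude_km) ** 2)
    a1 = r1 * (0.5055 + 0.00595 * (6.5 - altitude_km) ** 2)
    k = rk * (0.2711 + 0.01858 * (2.5 - altitude_km) ** 2)
    g_on = GSC * (1.0 + 0.033 * np.cos(2.0 * np.pi * day_of_year / 365.0))
    tau_b = a0 + a1 * np.exp(-k / cos_zenith)          # beam transmittance
    tau_d = 0.271 - 0.294 * tau_b                      # Liu-Jordan diffuse estimate
    beam_h = g_on * tau_b * cos_zenith
    diffuse_h = g_on * tau_d * cos_zenith
    return beam_h, diffuse_h, beam_h + diffuse_h

# Example: near-equatorial site around solar noon (zenith angle ~ 10 degrees).
print(hottel_clear_sky(day_of_year=80, cos_zenith=np.cos(np.radians(10.0))))
```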

  17. Microsurgical Bypass Training Rat Model: Part 2-Anastomosis Configurations.

    Science.gov (United States)

    Tayebi Meybodi, Ali; Lawton, Michael T; Yousef, Sonia; Mokhtari, Pooneh; Gandhi, Sirin; Benet, Arnau

    2017-11-01

    Mastery of microsurgical anastomosis is key to achieving good outcomes in cerebrovascular bypass procedures. Animal models (especially rodents) provide an optimal preclinical bypass training platform. However, the existing models for practicing different anastomosis configurations have several limitations. We sought to optimize the use of the rat's abdominal aorta and common iliac arteries (CIA) for practicing the 3 main anastomosis configurations commonly used in cerebrovascular surgery. Thirteen male Sprague-Dawley rats underwent inhalant anesthesia. The abdominal aorta and the CIAs were exposed. The distances between the major branches of the aorta were measured to find the optimal location for an end-to-end anastomosis. Also, the feasibility of performing side-to-side and end-to-side anastomoses between the CIAs was assessed. All bypass configurations could be performed between the left renal artery and the CIA bifurcation. The longest segments of the aorta without major branches were 1) between the left renal and left iliolumbar arteries (16.9 mm ± 4.6), and 2) between the right iliolumbar artery and the aortic bifurcation (9.7 mm ± 4.7). The CIAs could be juxtaposed for an average length of 7.6 mm ± 1.3, for a side-to-side anastomosis. The left CIA could be successfully reimplanted on to the right CIA at an average distance of 9.1 mm ± 1.6 from the aortic bifurcation. Our results show that rat's abdominal aorta and CIAs may be effectively used for all the anastomosis configurations used in cerebral revascularization procedures. We also provide technical nuances and anatomic descriptions to plan for practicing each bypass configuration. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Real-time scheduling of software tasks

    International Nuclear Information System (INIS)

    Hoff, L.T.

    1995-01-01

    When designing real-time systems, it is often desirable to schedule execution of software tasks based on the occurrence of events. The events may be clock ticks, interrupts from a hardware device, or software signals from other software tasks. If the nature of the events is well understood, this scheduling is normally a static part of the system design. If the nature of the events is not completely understood, or is expected to change over time, it may be necessary to provide a mechanism for adjusting the scheduling of the software tasks. RHIC front-end computers (FECs) provide such a mechanism. The goals in designing this mechanism were to be as independent as possible of the underlying operating system, to allow for future expansion of the mechanism to handle new types of events, and to allow easy configuration. Some considerations which steered the design were programming paradigm (object oriented vs. procedural), programming language, and whether events are merely interesting moments in time, or whether they intrinsically have data associated with them. The design also needed to address performance and robustness tradeoffs involving shared task contexts, task priorities, and use of interrupt service routine (ISR) contexts vs. task contexts. This paper will explore these considerations and tradeoffs
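    The RHIC mechanism itself is not detailed in the abstract; purely as an illustration of scheduling task execution from named events with adjustable priorities, the sketch below maps events to prioritized callbacks and dispatches them when an event is posted. All names and priorities are hypothetical.

```python
import heapq
from collections import defaultdict

class EventScheduler:
    """Toy event-to-task dispatcher: tasks register interest in named events and
    run in priority order when the event is posted (lower number = higher priority)."""

    def __init__(self):
        self._bindings = defaultdict(list)   # event name -> heap of (priority, seq, task)
        self._seq = 0                        # tie-breaker keeps registration order stable

    def bind(self, event, task, priority=10):
        heapq.heappush(self._bindings[event], (priority, self._seq, task))
        self._seq += 1

    def post(self, event, payload=None):
        for _priority, _seq, task in sorted(self._bindings[event]):
            task(payload)                    # a real system would run this in a task context

sched = EventScheduler()
sched.bind("clock_tick", lambda p: print("log housekeeping"), priority=20)
sched.bind("clock_tick", lambda p: print("update setpoints"), priority=5)
sched.bind("hw_interrupt", lambda p: print("read device", p))
sched.post("clock_tick")
sched.post("hw_interrupt", payload="ADC0")
```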

  19. Software engineering methodologies and tools

    Science.gov (United States)

    Wilcox, Lawrence M.

    1993-01-01

    Over the years many engineering disciplines have developed, including chemical, electronic, etc. Common to all engineering disciplines is the use of rigor, models, metrics, and predefined methodologies. Recently, a new engineering discipline has appeared on the scene, called software engineering. For over thirty years computer software has been developed and the track record has not been good. Software development projects often miss schedules, are over budget, do not give the user what is wanted, and produce defects. One estimate is there are one to three defects per 1000 lines of deployed code. More and more systems are requiring larger and more complex software for support. As this requirement grows, the software development problems grow exponentially. It is believed that software quality can be improved by applying engineering principles. Another compelling reason to bring the engineering disciplines to software development is productivity. It has been estimated that productivity of producing software has only increased one to two percent a year in the last thirty years. Ironically, the computer and its software have contributed significantly to the industry-wide productivity, but computer professionals have done a poor job of using the computer to do their job. Engineering disciplines and methodologies are now emerging supported by software tools that address the problems of software development. This paper addresses some of the current software engineering methodologies as a backdrop for the general evaluation of computer assisted software engineering (CASE) tools from actual installation of and experimentation with some specific tools.

  20. Large Scale Software Building with CMake in ATLAS

    Science.gov (United States)

    Elmsheuser, J.; Krasznahorkay, A.; Obreshkov, E.; Undrus, A.; ATLAS Collaboration

    2017-10-01

    The offline software of the ATLAS experiment at the Large Hadron Collider (LHC) serves as the platform for detector data reconstruction, simulation and analysis. It is also used in the detector’s trigger system to select LHC collision events during data taking. The ATLAS offline software consists of several million lines of C++ and Python code organized in a modular design of more than 2000 specialized packages. Because of different workflows, many stable numbered releases are in parallel production use. To accommodate specific workflow requests, software patches with modified libraries are distributed on top of existing software releases on a daily basis. The different ATLAS software applications also require a flexible build system that strongly supports unit and integration tests. Within the last year this build system was migrated to CMake. A CMake configuration has been developed that allows one to easily set up and build the above mentioned software packages. This also makes it possible to develop and test new and modified packages on top of existing releases. The system also allows one to detect and execute partial rebuilds of the release based on single package changes. The build system makes use of CPack for building RPM packages out of the software releases, and CTest for running unit and integration tests. We report on the migration and integration of the ATLAS software to CMake and show working examples of this large scale project in production.

  1. Hepsoft - an approach for up to date multi-platform deployment of HEP specific software

    International Nuclear Information System (INIS)

    Roiser, S

    2011-01-01

    LHC experiments depend on a rich palette of software components to build their specific applications. These underlying software components include the ROOT analysis framework, the Geant4 simulation toolkit, Monte Carlo generators, grid middleware, graphics libraries, scripting languages, databases, tools, etc., which are provided centrally in up-to-date versions on multiple platforms (Linux, Mac, Windows). Until recently this set of packages has been tested and released in a tree-like structure as a consistent set of versions across operating systems, architectures and compilers for LHC experiments only. Because of the tree-like deployment these releases were only usable in connection with a configuration management tool which provided the proper build and run-time environments, and this was hindering other parties outside the LHC from easily using this palette of packages. In a new approach the releases will be grouped in a 'flat structure' such that interested parties can start using them without configuration management, retaining all the above-mentioned advantages. In addition to increased usability the software shall also be distributed via system-provided package deployment systems (rpm, apt, etc.). The approach to software deployment follows the idea of providing a wide range of HEP-specific software packages and tools in a coherent, up-to-date and modular way on multiple platforms. The target audience for such software deployments is individual developers or smaller development groups / experiments who don't have the resources to maintain this kind of infrastructure. This new software deployment strategy has already been successfully implemented for groups at CERN.

  2. Industrial Application of Configurators: From Motivations to Realized Benefits

    DEFF Research Database (Denmark)

    Kristjansdottir, Katrin; Shafiee, Sara; Hvam, Lars

    Manufacturing companies are increasingly seeking to gain the benefits from mass customization strategies as a response to increased customers’ demand for customized products. To automate the process of generating products’ specifications and guide the sales process, configurators are commonly used to support companies applying mass customization strategies. This article analyses the relationship between the initial motivations manufacturing companies have for implementing configurators and the realized benefits from the application of configurators. The results presented in this paper are based on a survey followed by interviews in 22 industrial companies. The findings show that the main motivations can be grouped into seven categories, where the success in achieving the targeted benefits varies between the individual categories. Furthermore, the results highlight that substantial benefits...

  3. Diversity requirements for safety critical software-based automation systems

    International Nuclear Information System (INIS)

    Korhonen, J.; Pulkkinen, U.; Haapanen, P.

    1998-03-01

    System vendors nowadays propose software-based systems even for the most critical safety functions in nuclear power plants. Due to the nature and mechanisms of influence of software faults, new methods are needed for the safety and reliability evaluation of these systems. In the research project 'Programmable automation systems in nuclear power plants (OHA)' various safety assessment methods and tools for software-based systems are developed and evaluated. This report first discusses the (common cause) failure mechanisms in software-based systems, then defines fault-tolerant system architectures to avoid common cause failures, then studies the various alternatives to apply diversity and their influence on system reliability. Finally, a method for the assessment of diversity is described. Other recently published reports in the OHA report series handle the statistical reliability assessment of software-based systems (STUK-YTO-TR 119), usage models in reliability assessment of software-based systems (STUK-YTO-TR 128) and handling of programmable automation in plant PSA-studies (STUK-YTO-TR 129)

  4. Eprints Institutional Repository Software: A Review

    Directory of Open Access Journals (Sweden)

    Mike R. Beazley

    2011-01-01

    Setting up an institutional repository (IR) can be a daunting task. There are many software packages out there, some commercial, some open source, all of which offer different features and functionality. This article will provide some thoughts about one of these software packages: Eprints. Eprints was one of the first IR software packages to appear and has been available for 10 years. It is under continual development by its creators at the University of Southampton and the current version is v3.2.3. Eprints is open-source, meaning that anyone can download and make use of the software for free and the software can be modified however the user likes. This presents clear advantages for institutions with smaller budgets and also for institutions that have programmers on staff. Eprints requires some additional software to run: Linux, Apache, MySQL, and Perl. This software is all open-source and already present on the servers of many institutions. There is now a version of Eprints that will run on Windows servers as well, which will make the adoption of Eprints even easier for some. In brief, Eprints is an excellent choice for any institution looking to get an IR up and running quickly and easily. Installation is straightforward as is the initial configuration. Once the IR is up and running, users may upload documents and provide the necessary metadata for the records by filling out a simple web form. Embargoes on published documents are handled elegantly by the software, and the software links to the SHERPA/RoMEO database so authors can easily verify their rights regarding IR submissions. Eprints has some drawbacks, which will be discussed later in the review, but on the whole it is easy to recommend to anyone looking to start an IR. However, it is less clear that an institution with an existing IR based on another software package should migrate to Eprints.

  5. Software Defined Cyberinfrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Foster, Ian; Blaiszik, Ben; Chard, Kyle; Chard, Ryan

    2017-07-17

    Within and across thousands of science labs, researchers and students struggle to manage data produced in experiments, simulations, and analyses. Largely manual research data lifecycle management processes mean that much time is wasted, research results are often irreproducible, and data sharing and reuse remain rare. In response, we propose a new approach to data lifecycle management in which researchers are empowered to define the actions to be performed at individual storage systems when data are created or modified: actions such as analysis, transformation, copying, and publication. We term this approach software-defined cyberinfrastructure because users can implement powerful data management policies by deploying rules to local storage systems, much as software-defined networking allows users to configure networks by deploying rules to switches. We argue that this approach can enable a new class of responsive distributed storage infrastructure that will accelerate research innovation by allowing any researcher to associate data workflows with data sources, whether local or remote, for such purposes as data ingest, characterization, indexing, and sharing. We report on early experiments with this approach in the context of experimental science, in which a simple if-trigger-then-action (IFTA) notation is used to define rules.
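    The exact IFTA notation is not reproduced in the abstract, so the sketch below only guesses at a minimal form: a rule is a trigger predicate over a data lifecycle event plus an action, and the rules deployed at a storage system are evaluated whenever data are created or modified. Event fields, paths and actions are invented.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """If-trigger-then-action rule attached to a storage endpoint."""
    trigger: Callable[[dict], bool]   # predicate over a data lifecycle event
    action: Callable[[dict], None]    # what to do when the predicate holds

def dispatch(event, rules):
    """Evaluate every rule against a data lifecycle event (e.g. file created)."""
    for rule in rules:
        if rule.trigger(event):
            rule.action(event)

rules = [
    # If a new raw dataset appears, index it for later discovery.
    Rule(trigger=lambda e: e["type"] == "created" and e["path"].endswith(".raw"),
         action=lambda e: print(f"index {e['path']} in the catalog")),
    # If a processed result is modified, replicate it to a publication endpoint.
    Rule(trigger=lambda e: e["type"] == "modified" and "/results/" in e["path"],
         action=lambda e: print(f"copy {e['path']} to the sharing service")),
]

dispatch({"type": "created", "path": "/detector/run42/scan01.raw"}, rules)
dispatch({"type": "modified", "path": "/analysis/results/fit.json"}, rules)
```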

  6. Software Toolchain for Large-Scale RE-NFA Construction on FPGA

    Directory of Open Access Journals (Sweden)

    Yi-Hua E. Yang

    2009-01-01

    and O(n×m) memory by our software. A large number of RE-NFAs are placed onto a two-dimensional staged pipeline, allowing scalability to thousands of RE-NFAs with linear area increase and little clock rate penalty due to scaling. On a PC with a 2 GHz Athlon64 processor and 2 GB memory, our prototype software constructs hundreds of RE-NFAs used by Snort in less than 10 seconds. We also designed a benchmark generator which can produce RE-NFAs with configurable pattern complexity parameters, including state count, state fan-in, loop-back and feed-forward distances. Several regular expressions with various complexities are used to test the performance of our RE-NFA construction software.
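    The record's own construction algorithm is not given in the abstract; as background, the sketch below shows the classic Thompson construction of an NFA from a postfix regular expression (with '.' as an explicit concatenation operator) together with a plain software simulation of the result, the kind of structure that a toolchain like this would then map onto hardware pipelines.

```python
from dataclasses import dataclass, field

@dataclass
class State:
    edges: dict = field(default_factory=dict)   # input symbol -> set of target states
    eps: set = field(default_factory=set)       # epsilon transitions

def postfix_to_nfa(postfix):
    """Thompson construction over a postfix regex ('.' = concatenation, '|', '*')."""
    states, stack = [], []
    def new_state():
        states.append(State())
        return len(states) - 1
    for c in postfix:
        if c == '.':
            s2, a2 = stack.pop(); s1, a1 = stack.pop()
            states[a1].eps.add(s2)
            stack.append((s1, a2))
        elif c == '|':
            s2, a2 = stack.pop(); s1, a1 = stack.pop()
            s, a = new_state(), new_state()
            states[s].eps.update({s1, s2})
            states[a1].eps.add(a); states[a2].eps.add(a)
            stack.append((s, a))
        elif c == '*':
            s1, a1 = stack.pop()
            s, a = new_state(), new_state()
            states[s].eps.update({s1, a})
            states[a1].eps.update({s1, a})
            stack.append((s, a))
        else:                                    # literal symbol
            s, a = new_state(), new_state()
            states[s].edges.setdefault(c, set()).add(a)
            stack.append((s, a))
    start, accept = stack.pop()
    return states, start, accept

def matches(states, start, accept, text):
    """Simulate the NFA on a string (software reference for the hardware version)."""
    def closure(ids):
        todo, seen = list(ids), set(ids)
        while todo:
            i = todo.pop()
            for j in states[i].eps - seen:
                seen.add(j); todo.append(j)
        return seen
    current = closure({start})
    for ch in text:
        nxt = set()
        for i in current:
            nxt |= states[i].edges.get(ch, set())
        current = closure(nxt)
    return accept in current

nfa = postfix_to_nfa('ab*.c|')   # pattern 'ab*|c' written in postfix form
print(matches(*nfa, 'abbb'), matches(*nfa, 'c'), matches(*nfa, 'ba'))   # True True False
```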

  7. An object-oriented software interface to VME

    International Nuclear Information System (INIS)

    Thomas, Timothy L; Gottlieb, Eric; Gold, Michael

    1996-01-01

    In the next millennium, data acquisition tasks for high energy physics will increasingly rely on distributed processing and the VME bus. To provide transparent, general-purpose access to VME hardware modules through a VME-embedded processor, we have created a simple, portable, easily configured object-oriented interface to the VME bus. This software is particularly well-suited for hardware development, providing rapid engineering level access to the VME interface of prototype modules. (author)
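    The article's actual class design is not described in the abstract, so the sketch below is a hypothetical illustration of what an object-oriented view of a VME module can look like: the module object holds a base address and named register offsets and delegates raw accesses to a bus object, so prototype hardware can be exercised without manual address arithmetic. All names and addresses are invented.

```python
class VmeModule:
    """Hypothetical object-oriented view of a VME board: named registers on top of
    a base address, with raw 32-bit accesses delegated to a bus object."""

    def __init__(self, bus, base_address, registers):
        self.bus = bus                      # object providing read32/write32(address)
        self.base = base_address
        self.registers = registers          # register name -> offset

    def read(self, name):
        return self.bus.read32(self.base + self.registers[name])

    def write(self, name, value):
        self.bus.write32(self.base + self.registers[name], value)

class FakeBus:
    """In-memory stand-in for the real VME driver, for testing the interface."""
    def __init__(self):
        self.mem = {}
    def read32(self, addr):
        return self.mem.get(addr, 0)
    def write32(self, addr, value):
        self.mem[addr] = value

adc = VmeModule(FakeBus(), base_address=0xA000_0000,
                registers={"control": 0x00, "status": 0x04, "data": 0x10})
adc.write("control", 0x1)
print(hex(adc.read("control")))
```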

  8. Enhancing operability and reliability through configuration management

    International Nuclear Information System (INIS)

    Hancock, L.R.

    1993-01-01

    This paper describes the evolution of plant design control techniques from the early 1970's to today's operating environment that demands accurate, up-to-date design data. This evolution of design control is responsible for the increasingly troublesome scenario of design data being very difficult to locate and, when found, of questionable credibility. The design information could be suspect because there are discrepancies between two or more source documents or there is a difference between the design documents and the physical configuration of the plant. This paper discusses the impact these design control problems are having on plant operations and presents common-sense solutions for improving configuration management techniques to ultimately enhance operability and reliability

  9. Software Construction and Composition Tools for Petascale Computing SCW0837 Progress Report

    Energy Technology Data Exchange (ETDEWEB)

    Epperly, T W; Hochstein, L

    2011-09-12

    The majority of scientific software is distributed as source code. As the number of library dependencies and supported platforms increases, so does the complexity of describing the rules for configuring and building software. In this project, we have performed an empirical study of the magnitude of the build problem by examining the development history of two DOE-funded scientific software projects. We have developed MixDown, a meta-build tool, to simplify the task of building applications that depend on multiple third-party libraries. The results of this research indicate that the effort scientific programmers spend on building software takes a significant fraction of the total development effort and that the use of MixDown can significantly simplify the task of building software with multiple dependencies.
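    MixDown's own input format is not shown in the abstract; as a generic illustration of the core task any meta-build tool must solve, the sketch below derives a valid build order from declared third-party dependencies by topological sorting. The package names and the dependency graph are invented.

```python
from graphlib import TopologicalSorter

# Hypothetical dependency declarations for an application and its third-party libraries.
dependencies = {
    "myapp": {"hdf5", "petsc"},
    "petsc": {"blas", "mpi"},
    "hdf5":  {"zlib", "mpi"},
    "blas":  set(),
    "mpi":   set(),
    "zlib":  set(),
}

# A meta-build tool must configure and build each package after its prerequisites.
build_order = list(TopologicalSorter(dependencies).static_order())
print(build_order)   # e.g. ['blas', 'mpi', 'zlib', 'petsc', 'hdf5', 'myapp']
```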

  10. Evaluating the Potential of Commercial GIS for Accelerator Configuration Management

    International Nuclear Information System (INIS)

    Larrieu, T.L.; Roblin, Y.R.; White, K.; Slominski, R.

    2005-01-01

    The Geographic Information System (GIS) is a tool used by industries needing to track information about spatially distributed assets. A water utility, for example, must know not only the precise location of each pipe and pump, but also the respective pressure rating and flow rate of each. In many ways, an accelerator such as CEBAF (Continuous Electron Beam Accelerator Facility) can be viewed as an ''electron utility''. Whereas the water utility uses pipes and pumps, the ''electron utility'' uses magnets and RF cavities. At Jefferson lab we are exploring the possibility of implementing ESRI's ArcGIS as the framework for building an all-encompassing accelerator configuration database that integrates location, configuration, maintenance, and connectivity details of all hardware and software. The possibilities of doing so are intriguing. From the GIS, software such as the model server could always extract the most-up-to-date layout information maintained by the Survey and Alignment for lattice modeling. The Mechanical Engineering department could use ArcGIS tools to generate CAD drawings of machine segments from the same database. Ultimately, the greatest benefit of the GIS implementation could be to liberate operators and engineers from the limitations of the current system-by-system view of machine configuration and allow a more integrated regional approach. The commercial GIS package provides a rich set of tools for database-connectivity, versioning, distributed editing, importing and exporting, and graphical analysis and querying, and therefore obviates the need for much custom development. However, formidable challenges to implementation exist and these challenges are not only technical and manpower issues, but also organizational ones. The GIS approach would crosscut organizational boundaries and require departments, which heretofore have had free reign to manage their own data, to cede some control and agree to a centralized framework

  11. Evaluating the Potential of Commercial GIS for Accelerator Configuration Management

    Energy Technology Data Exchange (ETDEWEB)

    T.L. Larrieu; Y.R. Roblin; K. White; R. Slominski

    2005-10-10

    The Geographic Information System (GIS) is a tool used by industries needing to track information about spatially distributed assets. A water utility, for example, must know not only the precise location of each pipe and pump, but also the respective pressure rating and flow rate of each. In many ways, an accelerator such as CEBAF (Continuous Electron Beam Accelerator Facility) can be viewed as an ''electron utility''. Whereas the water utility uses pipes and pumps, the ''electron utility'' uses magnets and RF cavities. At Jefferson lab we are exploring the possibility of implementing ESRI's ArcGIS as the framework for building an all-encompassing accelerator configuration database that integrates location, configuration, maintenance, and connectivity details of all hardware and software. The possibilities of doing so are intriguing. From the GIS, software such as the model server could always extract the most-up-to-date layout information maintained by the Survey & Alignment for lattice modeling. The Mechanical Engineering department could use ArcGIS tools to generate CAD drawings of machine segments from the same database. Ultimately, the greatest benefit of the GIS implementation could be to liberate operators and engineers from the limitations of the current system-by-system view of machine configuration and allow a more integrated regional approach. The commercial GIS package provides a rich set of tools for database-connectivity, versioning, distributed editing, importing and exporting, and graphical analysis and querying, and therefore obviates the need for much custom development. However, formidable challenges to implementation exist and these challenges are not only technical and manpower issues, but also organizational ones. The GIS approach would crosscut organizational boundaries and require departments, which heretofore have had free reign to manage their own data, to cede some control and agree to a

  12. A Multi-Time Scale Morphable Software Milieu for Polymorphous Computing Architectures (PCA) - Composable, Scalable Systems

    National Research Council Canada - National Science Library

    Skjellum, Anthony

    2004-01-01

    Polymorphous Computing Architectures (PCA) rapidly "morph" (reorganize) software and hardware configurations in order to achieve high performance on computation styles ranging from specialized streaming to general threaded applications...

  13. A New Generation of Telecommunications for Mars: The Reconfigurable Software Radio

    Science.gov (United States)

    Adams, J.; Horne, W.

    2000-01-01

    Telecommunications is a critical component for any mission at Mars as it is an enabling function that provides connectivity back to Earth and provides a means for conducting science. New developments in telecommunications, specifically in software-configurable radios, expand the possible approaches for science missions at Mars. These radios provide a flexible and re-configurable platform that can evolve with the mission and that provides an integrated approach to communications and science data processing. Deep space telecommunication faces challenges not normally faced by terrestrial and near-Earth communications. Radiation, thermal, highly constrained mass, volume, packaging and reliability all are significant issues. Additionally, once the spacecraft leaves Earth, there is no way to go out and upgrade or replace radio components. The reconfigurable software radio is an effort to provide not only a product that is immediately usable in the harsh space environment but also to develop a radio that will stay current as the years pass and technologies evolve.

  14. Verification and validation of software related to nuclear power plant control and instrumentation

    International Nuclear Information System (INIS)

    Wall, N.; Kossilov, A.

    1994-01-01

    There has always been significant concern with the introduction of software in industry, and the nuclear industry is no different from any other sector, save that its safety demands are among the most onerous. The problems associated with software have led to the well-documented difficulties in the introduction of computer-based systems. An important area of concern with software in systems is the process of Verification and Validation. One of the many activities the IAEA is currently engaged in is the preparation of a document on the process of verification and validation of software. The document follows the safety classification of IEC 1226 but includes software important to plant operation to establish three levels of assurance. The software that might be deployed on a plant was then identified as one of four types: new software, existing software for which full access to the code and documentation is possible, existing software of a proprietary nature and finally configurable software. The document attempts to identify the appropriate methods and tools for conducting the verification and validation processes. (author). 5 refs, 5 figs, 7 tabs

  15. Software safety analysis techniques for developing safety critical software in the digital protection system of the LMR

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jang Soo; Cheon, Se Woo; Kim, Chang Hoi; Sim, Yun Sub

    2001-02-01

    This report has described the software safety analysis techniques and the engineering guidelines for developing safety critical software to identify the state of the art in this field and to give the software safety engineer a trail map between the code and standards layer and the design methodology and documents layer. We have surveyed the management aspects of software safety activities during the software lifecycle in order to improve the safety. After identifying the conventional safety analysis techniques for systems, we have surveyed in detail the software safety analysis techniques, software FMEA (Failure Mode and Effects Analysis), software HAZOP (Hazard and Operability Analysis), and software FTA (Fault Tree Analysis). We have also surveyed the state of the art in the software reliability assessment techniques. The most important results from the reliability techniques are not the specific probability numbers generated, but the insights into the risk importance of software features. To defend against potential common-mode failures, high quality, defense-in-depth, and diversity are considered to be key elements in digital I and C system design. To minimize the possibility of CMFs and thus increase the plant reliability, we have provided D-in-D and D analysis guidelines.

  16. Software safety analysis techniques for developing safety critical software in the digital protection system of the LMR

    International Nuclear Information System (INIS)

    Lee, Jang Soo; Cheon, Se Woo; Kim, Chang Hoi; Sim, Yun Sub

    2001-02-01

    This report has described the software safety analysis techniques and the engineering guidelines for developing safety critical software to identify the state of the art in this field and to give the software safety engineer a trail map between the code and standards layer and the design methodology and documents layer. We have surveyed the management aspects of software safety activities during the software lifecycle in order to improve the safety. After identifying the conventional safety analysis techniques for systems, we have surveyed in detail the software safety analysis techniques, software FMEA (Failure Mode and Effects Analysis), software HAZOP (Hazard and Operability Analysis), and software FTA (Fault Tree Analysis). We have also surveyed the state of the art in the software reliability assessment techniques. The most important results from the reliability techniques are not the specific probability numbers generated, but the insights into the risk importance of software features. To defend against potential common-mode failures, high quality, defense-in-depth, and diversity are considered to be key elements in digital I and C system design. To minimize the possibility of CMFs and thus increase the plant reliability, we have provided D-in-D and D analysis guidelines

  17. Knowledge Engineering for Embedded Configuration

    DEFF Research Database (Denmark)

    Oddsson, Gudmundur Valur

    2008-01-01

    This thesis presents a way to simplify setup of complex product systems with the help of embedded configuration. To achieve this, one has to focus on what subsystems need to communicate between themselves. The required internal knowledge is then structured at three abstraction levels... into the system the knowledge needed to achieve them. In order to understand the system, one draws simplified functional streams and identifies archetypes from the product assortment, and then one maps the two together into a system breakdown model. The system model indicates how many encapsulation models (EMs..., and predefined relation types are suggested. The models are stringent and thought out so they can be implemented in software. They should allow both import and export of product knowledge from the knowledge-based system. The purpose of this work is to simplify the installation process of product systems...

  18. Performance comparison between ISCSI and other hardware and software solutions

    CERN Document Server

    Gug, M

    2003-01-01

    We report on our investigations into some technologies that can be used to build disk servers and networks of disk servers using commodity hardware and software solutions. The report focuses on the performance that can be achieved by these systems and gives measured figures for different configurations. It is divided into two parts: iSCSI and other technologies, and hardware and software RAID solutions. The first part studies different technologies that can be used by clients to access disk servers using a gigabit Ethernet network. It covers block access technologies (iSCSI, hyperSCSI, ENBD). Experimental figures are given for different numbers of clients and servers. The second part compares a system based on 3ware hardware RAID controllers, a system using Linux software RAID and IDE cards, and a system mixing both hardware RAID and software RAID. Performance measurements for reading and writing are given for different RAID levels.

  19. Method of V ampersand V for safety-critical software in NPPs

    International Nuclear Information System (INIS)

    Kim, Jang-Yeol; Lee, Jang-Soo; Kwon, Kee-Choon

    1997-01-01

    Safety-critical software is software used in systems in which a failure could affect personal or equipment safety or result in large financial or social loss. Examples of systems using safety-critical software are systems such as plant protection systems in nuclear power plants (NPPs), process control systems in chemical plants, and medical instruments such as the Therac-25 medical accelerator. This paper presents verification and validation (V&V) methodology for safety-critical software in NPP safety systems. In addition, it addresses issues related to NPP safety systems, such as independence parameters, software safety analysis (SSA) concepts, commercial off-the-shelf (COTS) software evaluation criteria, and interrelationships among software and system assurance organizations. It includes the concepts of existing industrial standards on software V&V, Institute of Electrical and Electronics Engineers (IEEE) Standards 1012 and 1059. This safety-critical software V&V methodology covers V&V scope, a regulatory framework as part of its acceptance criteria, V&V activities and task entrance and exit criteria, reviews and audits, testing and quality assurance records of V&V material, configuration management activities related to V&V, and software V&V (SVV) plan (SVVP) production

  20. The KNICS approach for verification and validation of safety software

    International Nuclear Information System (INIS)

    Cha, Kyung Ho; Sohn, Han Seong; Lee, Jang Soo; Kim, Jang Yeol; Cheon, Se Woo; Lee, Young Joon; Hwang, In Koo; Kwon, Kee Choon

    2003-01-01

    This paper presents the verification and validation (VV) approach for the safety software of the POSAFE-Q Programmable Logic Controller (PLC) prototype and the Plant Protection System (PPS) prototype, which consists of the Reactor Protection System (RPS) and the Engineered Safety Features-Component Control System (ESF-CCS), in the development of the Korea Nuclear Instrumentation and Control System (KNICS). The SVV criteria and requirements are selected from IEEE Std. 7-4.3.2, IEEE Std. 1012, IEEE Std. 1028 and BTP-14, and they have been considered as the acceptance framework provided within the SVV procedures. SVV techniques, including Review and Inspection (R and I), Formal Verification and Theorem Proving, and Automated Testing, are applied to the safety software, and automated SVV tools support the SVV tasks. Software Inspection Support and Requirement Traceability (SIS-RT) supports R and I and traceability analysis; a New Symbolic Model Verifier (NuSMV), Statemate MAGNUM (STM) ModelCertifier, and Prototype Verification System (PVS) are used for formal verification; and McCabe and Cantata++ are utilized for static and dynamic software testing. In addition, dedication of Commercial-Off-The-Shelf (COTS) software and firmware, Software Safety Analysis (SSA), and evaluation of Software Configuration Management (SCM) are being performed for the PPS prototype in the software requirements phase

  1. Advanced Transport Operating System (ATOPS) utility library software description

    Science.gov (United States)

    Clinedinst, Winston C.; Slominski, Christopher J.; Dickson, Richard W.; Wolverton, David A.

    1993-01-01

    The individual software processes used in the flight computers on-board the Advanced Transport Operating System (ATOPS) aircraft have many common functional elements. A library of commonly used software modules was created for general use among the processes. The library includes modules for mathematical computations, data formatting, system database interfacing, and condition handling. The modules available in the library and their associated calling requirements are described.

  2. Numerical study of canister filters with alternative filter cap configurations

    Science.gov (United States)

    Mohammed, A. N.; Daud, A. R.; Abdullah, K.; Seri, S. M.; Razali, M. A.; Hushim, M. F.; Khalid, A.

    2017-09-01

    Air filtration systems and filters play an important role in supplying good-quality air to turbomachinery such as gas turbines. The filtration system and filter improve the quality of the air and protect gas turbine parts from contaminants which could cause damage. While separating contaminants from the air, a pressure drop cannot be avoided, but it can be minimized, which helps to reduce the intake losses of the engine [1]. This study focuses on the configuration of the filter in order to obtain the minimum pressure drop along the filter. The configurations used are the basic filter geometry provided by Salutary Avenue Manufacturing Sdn. Bhd. and two modified canister filter caps designed from the basic filter model. The filter geometries are generated using SOLIDWORKS software, and Computational Fluid Dynamics (CFD) software is used to analyse and simulate the flow through the filter. In this study, the inlet velocities are 0.032 m/s, 0.063 m/s, 0.094 m/s and 0.126 m/s. The total pressure drops produced by the basic filter, modified filter 1 and modified filter 2 are 292.3 Pa, 251.11 Pa and 274.7 Pa, respectively. The pressure drop reduction for modified filter 1 is 41.19 Pa, 14.1% lower than the basic filter, and the reduction for modified filter 2 is 17.6 Pa, 6.02% lower than the basic filter. The pressure drop for the basic filter differs slightly from that of the Salutary Avenue filter owing to limited data and experiment details. CFD software is very reliable for running simulations rather than producing prototypes and conducting experiments, thus reducing the overall time and cost of this study.
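
    The quoted reductions follow directly from the total pressure drops; the short calculation below simply reproduces the percentages (to rounding) from the figures given in the abstract.

```python
# Total pressure drops reported in the abstract (Pa).
basic, modified_1, modified_2 = 292.3, 251.11, 274.7

for name, value in [("modified filter 1", modified_1), ("modified filter 2", modified_2)]:
    reduction = basic - value
    percent = 100.0 * reduction / basic
    print(f"{name}: {reduction:.2f} Pa lower ({percent:.2f}% below the basic filter)")
# modified filter 1: 41.19 Pa lower (14.09% below the basic filter)
# modified filter 2: 17.60 Pa lower (6.02% below the basic filter)
```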

  3. Software-based annunciator replacement: a tale of two projects

    International Nuclear Information System (INIS)

    Simmons, G.T.

    2015-01-01

    Annunciator upgrade projects are often included as parts of operating plant life extension projects as the systems are old and replacement parts are difficult to source. This paper contains case studies of the software-based annunciator replacement projects at the Westinghouse SNUPPS training simulator in Pennsylvania and the Axpo Beznau nuclear power plant in Switzerland. Software-based annunciator systems can offer a number of feature enhancements including improved readability and operator awareness, easy configuration, alarm suppression features, and alarm management at operator workstations. This paper provides an overview of each project and discusses advantages, challenges, and lessons learned from both annunciator-replacement projects. (author)

  4. Software-based annunciator replacement: a tale of two projects

    Energy Technology Data Exchange (ETDEWEB)

    Simmons, G.T., E-mail: simmongt@westinghouse.com [Westinghouse Electric Company LLC, Cranberry Township, PA (United States)]

    2015-07-01

    Annunciator upgrade projects are often included as parts of operating plant life extension projects as the systems are old and replacement parts are difficult to source. This paper contains case studies of the software-based annunciator replacement projects at the Westinghouse SNUPPS training simulator in Pennsylvania and the Axpo Beznau nuclear power plant in Switzerland. Software-based annunciator systems can offer a number of feature enhancements including improved readability and operator awareness, easy configuration, alarm suppression features, and alarm management at operator workstations. This paper provides an overview of each project and discusses advantages, challenges, and lessons learned from both annunciator-replacement projects. (author)

  5. Writing testable software requirements

    Energy Technology Data Exchange (ETDEWEB)

    Knirk, D. [Sandia National Labs., Albuquerque, NM (United States)]

    1997-11-01

    This tutorial identifies common problems in analyzing requirements in the problem domain and constructing a written specification of what the software is to do. It deals with two main problem areas: identifying and describing problem requirements, and analyzing and describing behavior specifications.

  6. Experimental analysis of specification language impact on NPP software diversity

    International Nuclear Information System (INIS)

    Yoo, Chang Sik; Seong, Poong Hyun

    1998-01-01

    When redundancy and diversity are applied in NPP digital computer systems, diversification of the system software may be critical for the dependability of the entire system. As a means of enhancing software diversity, specification language diversity is suggested in this study. We set up a simple hypothesis for the impact of specification language on common errors, and an experiment based on an NPP protection system application was performed. The experimental results showed that this hypothesis could be justified and that specification language diversity is effective in overcoming the software common-mode failure problem.

  7. Enterprise network with software Asterisk PBX based on the PLC technology

    Directory of Open Access Journals (Sweden)

    Michal Maar

    2017-01-01

    This article presents the design of a software Asterisk PBX solution in an enterprise PLC (Power Line Communication) network. The design includes a description of the installation and configuration of the software Asterisk PBX. The secure interconnection of two enterprise PLC networks is implemented via a telecommunication tunnel, with security provided by Cisco routers. The connection between the two Asterisk PBXs is designed in the context of establishing this tunnel. The article also covers the cross-connection of the software Asterisk PBX with a hardware IP PBX, the Panasonic K-NS500.

  8. SSE software test management STM capability: Using STM in the Ground Systems Development Environment (GSDE)

    Science.gov (United States)

    Church, Victor E.; Long, D.; Hartenstein, Ray; Perez-Davila, Alfredo

    1992-01-01

    This report is one of a series discussing configuration management (CM) topics for Space Station ground systems software development. It provides a description of the Software Support Environment (SSE)-developed Software Test Management (STM) capability and discusses the possible use of this capability for managing developed software during testing performed on target platforms. It is intended to supplement the formal documentation of STM provided by the SSE Project. How STM can be used to integrate contractor CM and formal CM for software before delivery to operations is described. STM provides a level of control that is flexible enough to support integration and debugging, but sufficiently rigorous to ensure the integrity of the testing process.

  9. Upgrade and integration of the configuration and monitoring tools for the ATLAS Online farm

    CERN Document Server

    Ballestrero, S; The ATLAS collaboration; Darlea, G L; Dumitru, I; Scannicchio, DA; Twomey, M S; Valsan, M L; Zaytsev, A

    2012-01-01

    The ATLAS Online farm is a non-homogeneous cluster of nearly 3000 PCs which run the data acquisition, trigger and control of the ATLAS detector. The systems are configured and monitored by a combination of open-source tools, such as Quattor and Nagios, and tools developed in-house, such as ConfDB. We report on the ongoing introduction of new provisioning and configuration tools, Puppet and ConfDB v2, which are more flexible and allow automation for previously uncovered needs, and on the upgrade and integration of the monitoring and alerting tools, including the interfacing of these with the TDAQ Shifter Assistant software and their integration with configuration tools. We discuss the selection of the tools and the assessment of their functionality and performance, and how they enabled the introduction of virtualization for selected services.

  10. Upgrade and integration of the configuration and monitoring tools for the ATLAS Online farm

    International Nuclear Information System (INIS)

    Ballestrero, S; Darlea, G–L; Twomey, M S; Brasolin, F; Dumitru, I; Valsan, M L; Scannicchio, D A; Zaytsev, A

    2012-01-01

    The ATLAS Online farm is a non-homogeneous cluster of nearly 3000 systems which run the data acquisition, trigger and control of the ATLAS detector. The systems are configured and monitored by a combination of open-source tools, such as Quattor and Nagios, and tools developed in-house, such as ConfDB. We report on the ongoing introduction of new provisioning and configuration tools, Puppet and ConfDB v2, which are more flexible and allow automation for previously uncovered needs, and on the upgrade and integration of the monitoring and alerting tools, including the interfacing of these with the TDAQ Shifter Assistant software and their integration with configuration tools. We discuss the selection of the tools and the assessment of their functionality and performance, and how they enabled the introduction of virtualization for selected services.

  11. Lessons Learned in Software Testing A Context-Driven Approach

    CERN Document Server

    Kaner, Cem; Pettichord, Bret

    2008-01-01

    Decades of software testing experience condensed into the most important lessons learned. The world's leading software testing experts lend you their wisdom and years of experience to help you avoid the most common mistakes in testing software. Each lesson is an assertion related to software testing, followed by an explanation or example that shows you the how, when, and why of the testing lesson. More than just tips, tricks, and pitfalls to avoid, Lessons Learned in Software Testing speeds you through the critical testing phase of the software development project without the extensive trial and error.

  12. Design of Wind Turbine Simulator Based on DC Speed Control System and Real-Time Configuration Software

    Institute of Scientific and Technical Information of China (English)

    张计科; 王生铁; 刘广忱

    2013-01-01

    In order to meet the requirements of research on small-scale wind power generation systems and to improve the reliability and development efficiency of software and hardware, a wind turbine simulator based on a DC speed control system and a real-time configuration environment was designed and implemented according to the principles of standardized hardware and configurable software, and experiments were performed. The experimental results show that the wind turbine simulator can accurately simulate the static output characteristics of a wind turbine and has good dynamic response, so that it behaves almost the same as an actual wind turbine. It can also be configured for both off-line simulation and real-time operation in the MATLAB/Simulink environment, allows convenient adjustment of the wind turbine and wind speed models and parameters, and offers high software development efficiency. The wind turbine simulator presented in this paper is of significant value for the investigation of small-scale wind power generation in a laboratory.
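
    The static output characteristic such a simulator reproduces is commonly written as P = 0.5·ρ·πR²·Cp(λ, β)·v³, with the power coefficient Cp given by an empirical fit. The sketch below is a minimal illustration using one widely quoted Cp approximation; the coefficients and turbine parameters are generic textbook values, not those of the cited design.

```python
import math

def power_coefficient(tsr, pitch_deg):
    """Empirical Cp(lambda, beta) approximation commonly used in simulation studies."""
    inv_li = 1.0 / (tsr + 0.08 * pitch_deg) - 0.035 / (pitch_deg ** 3 + 1.0)
    return 0.5176 * (116.0 * inv_li - 0.4 * pitch_deg - 5.0) * math.exp(-21.0 * inv_li) \
        + 0.0068 * tsr

def turbine_power(wind_speed, rotor_speed, radius=2.5, pitch_deg=0.0, rho=1.225):
    """Static mechanical power (W) of a small turbine at a given operating point."""
    tsr = rotor_speed * radius / wind_speed          # tip-speed ratio
    swept_area = math.pi * radius ** 2
    cp = max(power_coefficient(tsr, pitch_deg), 0.0)
    return 0.5 * rho * swept_area * cp * wind_speed ** 3

if __name__ == "__main__":
    rotor_speed = 20.0  # rad/s, held constant for this static sweep
    for v in range(4, 13, 2):
        print(f"v = {v:2d} m/s  P = {turbine_power(float(v), rotor_speed):8.1f} W")
```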

  13. Software hazard analysis for nuclear digital protection system by Colored Petri Net

    International Nuclear Information System (INIS)

    Bai, Tao; Chen, Wei-Hua; Liu, Zhen; Gao, Feng

    2017-01-01

    Highlights: a dynamic hazard analysis method is proposed for safety-critical software; the mechanism relies on Colored Petri Net; complex interactions between software and hardware are captured properly; common failure modes in software are identified effectively. Abstract: The software safety of a nuclear digital protection system is critical for the safety of nuclear power plants, as any software defect may result in severe damage. In order to ensure the safety and reliability of safety-critical digital system products and their applications, software hazard analysis is required to be performed during the software development lifecycle. A dynamic software hazard modeling and analysis method based on Colored Petri Net is proposed and applied to the safety-critical control software of the nuclear digital protection system in this paper. The analysis results show that the proposed method can explain the complex interactions between software and hardware and can identify potential common cause failures in software properly and effectively. Moreover, the method can find the dominant software-induced hazards to safety control actions, which aids in increasing software quality.
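
    Colored Petri Nets extend ordinary place/transition nets with typed (colored) tokens; the hazard model maps software states, hardware states and their interactions onto such a net. The sketch below shows only the uncolored core of the formalism, places holding tokens and transitions that fire when every input place is marked; the net and its names are illustrative, not the model from the paper.

```python
class PetriNet:
    """Tiny place/transition net: a transition fires when every input place has a token."""

    def __init__(self, marking):
        self.marking = dict(marking)            # place -> token count
        self.transitions = {}                   # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (list(inputs), list(outputs))

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Illustrative fragment: a trip demand is only acted on while the software task is healthy.
net = PetriNet({"trip_demand": 1, "software_ok": 1})
net.add_transition("issue_trip", ["trip_demand", "software_ok"],
                   ["trip_signal_sent", "software_ok"])
if net.enabled("issue_trip"):
    net.fire("issue_trip")
print(net.marking)   # {'trip_demand': 0, 'software_ok': 1, 'trip_signal_sent': 1}
```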

  14. Development of Methods and Means of Configuration Data Transfer For Use in an FPGA Based Trigger Controller Device

    International Nuclear Information System (INIS)

    2010-01-01

    To determine if klystrons will perform to the specifications of the LCLS (Linac Coherent Light Source) project, a new digital trigger controller is needed for the Klystron/Microwave Department Test Laboratory. The controller needed to be programmed, and Windows-based user interface software needed to be written to interface with the device over USB (Universal Serial Bus). Programming the device consisted of writing logic in VHDL (VHSIC (Very High Speed Integrated Circuits) hardware description language), and the Windows interface software was written in C++. Xilinx ISE (Integrated Software Environment) was used to compile the VHDL code and program the device, and Microsoft Visual Studio 2005 was used to compile the C++-based Windows software. The device was programmed in such a way as to easily allow read/write operations using a simple addressing model, and Windows software was developed to interface with the device over a USB connection. A method of setting configuration registers in the trigger device is absolutely necessary for the development of a new triggering system, and the method developed fulfills this need adequately. More work is needed before the new trigger system is ready for use. The configuration registers in the device need to be fully integrated with the logic that will generate the RF signals, and the system will need to be tested extensively to determine whether it meets the requirements for low-noise trigger outputs.
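
    A simple addressing model of the kind described typically maps each configuration register to a fixed address and exchanges small address/data packets over the link. The sketch below illustrates that idea in Python for readability (the actual host software was written in C++); the register names, addresses and packet layout are invented for the example.

```python
import struct

# Hypothetical register map for a trigger controller (addresses are invented).
REGISTERS = {
    "TRIGGER_DELAY": 0x01,   # delay before the trigger pulse, in clock ticks
    "PULSE_WIDTH":   0x02,   # width of the trigger pulse, in clock ticks
    "OUTPUT_ENABLE": 0x03,   # per-channel output enable mask
}

class TriggerController:
    """Host-side wrapper for a simple address/data register protocol."""

    def __init__(self, transport):
        self.transport = transport   # any object with write(bytes) and read(n) methods

    def write_register(self, name, value):
        # Packet: 'W', 1-byte register address, 32-bit little-endian data word.
        packet = struct.pack("<cBI", b"W", REGISTERS[name], value)
        self.transport.write(packet)

    def read_register(self, name):
        # Packet: 'R', 1-byte register address; device replies with a 32-bit word.
        self.transport.write(struct.pack("<cB", b"R", REGISTERS[name]))
        return struct.unpack("<I", self.transport.read(4))[0]
```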

  15. Byte evolution: software transforming oilpatch operations

    International Nuclear Information System (INIS)

    Roche, P.

    2000-01-01

    Changes in the nature of computer software for tracking exploration and production companies' assets are discussed. One prediction is that 'industry-specific' software will replace the common electronic spreadsheet, while another foresees business-to-business electronic transactions and the outsourcing of software purchasing and maintenance to 'application service providers' (ASPs). To date, at least two companies have launched their own ASPs; if the trend continues, clients will pay just one monthly fee to the ASP, which will assume the headaches and hassles of software installation, upgrades and maintenance. That would spell the end of in-house networks and information technology staff. It is also suggested that in due course business-to-business e-commerce will far exceed in importance the consumer-oriented e-commerce of today. Procurement is a commonly cited example where the electronic exchange of funds and data could replace scores of manual processes. The idea is to simplify business processes through automatic routing among companies via the Internet, with ASPs serving as the central hub of the information flow. The experiences, current products and services, and future plans of the two existing ASP companies, Applied Terravision Systems Inc. and QByte Services Ltd., are reviewed.

  16. Evolution of Secondary Software Businesses: Understanding Industry Dynamics

    Science.gov (United States)

    Tyrväinen, Pasi; Warsta, Juhani; Seppänen, Veikko

    The primary software industry originates from IBM's decision to unbundle software-related computer system development activities to external partners. Outsourcing of this kind, from an enterprise's internal software development activity, is a common means of starting a new software business serving a vertical software market; it combines knowledge of the vertical market process with competence in software development. In this research, we present and analyze the key figures of the Finnish secondary software industry in order to quantify its interaction with the primary software industry during the period 2000-2003. On the basis of the empirical data, we present a model for the evolution of a secondary software business which makes the industry dynamics explicit. It represents the shift from internal software developed for competitive advantage to the development of products supporting standard business processes on top of standardized technologies. We also discuss the implications for software business strategies in each phase.

  17. Numerical Study of Traffic Pollutant Dispersion within Different Street Canyon Configurations

    OpenAIRE

    Yucong Miao; Shuhua Liu; Yijia Zheng; Shu Wang; Yuan Li

    2014-01-01

    The objective of this study is to numerically investigate flow and traffic exhaust dispersion in urban street canyons with different configurations, in order to identify urban-planning strategies that ease air pollution. The Computational Fluid Dynamics (CFD) model used in this study, the Open Source Field Operation and Manipulation (OpenFOAM) software package, was first validated against wind-tunnel experiment data using three different k-ε turbulence models. And then the patterns of flow and dispe...

  18. Why and how Mastering an Incremental and Iterative Software Development Process

    Science.gov (United States)

    Dubuc, François; Guichoux, Bernard; Cormery, Patrick; Mescam, Jean Christophe

    2004-06-01

    One of the key issues regularly mentioned in the current software crisis of the space domain is related to the software development process that must be performed while the system definition is not yet frozen. This is especially true for complex systems like launchers or space vehicles. Several more or less mature solutions are under study by EADS SPACE Transportation and are going to be presented in this paper. The basic principle is to develop the software through an iterative and incremental process instead of the classical waterfall approach, with the following advantages:
    - It permits systematic management and incorporation of requirements changes over the development cycle with a minimal cost. As far as possible, the most dimensioning requirements are analyzed and developed in priority, for validating the architecture concept very early without the details.
    - A software prototype is very quickly available. It improves the communication between system and software teams, as it enables checking very early and efficiently the common understanding of the system requirements.
    - It allows the software team to complete a whole development cycle very early, and thus to become quickly familiar with the software development environment (methodology, technology, tools...). This is particularly important when the team is new, or when the environment has changed since the previous development. In any case, it greatly improves the learning curve of the software team.
    These advantages seem very attractive, but mastering an iterative development process efficiently is not so easy and induces a lot of difficulties, such as:
    - How to freeze one configuration of the system definition as a development baseline, while most of the system requirements are completely and naturally unstable?
    - How to distinguish stable/unstable and dimensioning/standard requirements?
    - How to plan the development of each increment?
    - How to link classical waterfall development milestones with an iterative approach: when

  19. Configuration Management

    International Nuclear Information System (INIS)

    Morcos, A.; Taylor, H. S.

    1989-01-01

    This paper will briefly discuss the reason for and content of configuration management, both for new plants and, when adapted, for older plants. It will then address three types of activities a utility may undertake as part of a nuclear CAM program and with which Sargent and Leyden has been actively involved. The first activity is a methodology for preparing design-basis documentation. The second is the identification of essential data required to be kept by the utility in support of the operation of a nuclear plant. The third is a computerized classification system for plant components, allowing ready identification of plant functional and physical characteristics. Plant configuration documentation describes plant components, the ways they are arranged to interact, and the ways they are enabled to interact. Configuration management, on the other hand, is more than the control of such documentation. It is a dynamic process for ensuring that a plant configuration meets all relevant requirements for safety and economy, even while the configuration changes and even while the requirements change. Configuration management for a nuclear plant is so complex that it must be implemented in phases and modules. It takes advantage of and integrates existing programs. Managing complexity and streamlining the change process become important additional objectives of configuration management. The example activities fulfill essential goals of an overall CAM program: definition of the design baseline, definition of essential plant data, and classification of plant components.

  20. Network, system, and status software enhancements for the autonomously managed electrical power system breadboard. Volume 1: Project summary

    Science.gov (United States)

    Mckee, James W.

    1990-01-01

    This volume (1 of 4) gives a summary of the original AMPS software system configuration, points out some of the problem areas in the original software design that this project is to address, and collects in the appendix all the bimonthly status reports. The purpose of AMPS is to provide a self-reliant system to control the generation and distribution of power in the space station. The software in the AMPS breadboard can be divided into three levels: the operating environment software, the protocol software, and the station-specific software. This project deals only with the operating environment software and the protocol software. The present station-specific software will not change except as necessary to conform to new data formats.