WorldWideScience

Sample records for legacy software systems

  1. Revisiting Legacy Software System Modernization

    NARCIS (Netherlands)

    Khadka, R.

    2016-01-01

Legacy software systems are those that significantly resist modification and evolution while still being valuable to their stakeholders, to the extent that their failure has a detrimental impact on business. Despite several drawbacks, legacy software systems are still extensively used in

  2. Traceability of Software Safety Requirements in Legacy Safety Critical Systems

    Science.gov (United States)

    Hill, Janice L.

    2007-01-01

    How can traceability of software safety requirements be created for legacy safety critical systems? Requirements in safety standards are imposed most times during contract negotiations. On the other hand, there are instances where safety standards are levied on legacy safety critical systems, some of which may be considered for reuse for new applications. Safety standards often specify that software development documentation include process-oriented and technical safety requirements, and also require that system and software safety analyses are performed supporting technical safety requirements implementation. So what can be done if the requisite documents for establishing and maintaining safety requirements traceability are not available?

  3. Software Safety Risk in Legacy Safety-Critical Computer Systems

    Science.gov (United States)

    Hill, Janice L.; Baggs, Rhoda

    2007-01-01

Safety standards contain technical and process-oriented safety requirements. Technical requirements specify "must work" and "must not work" functions in the system. Process-oriented requirements are software engineering and safety management process requirements. Some standards address the system perspective and some cover just the software in the system; the NASA-STD-8719.13B Software Safety Standard is the current standard of interest. NASA programs/projects will have their own set of safety requirements derived from the standard. Safety cases: a) a documented demonstration that a system complies with the specified safety requirements; b) evidence is gathered on the integrity of the system and put forward as an argued case [Gardener (ed.)]; c) problems occur when trying to meet safety standards, and thus make retrospective safety cases, in legacy safety-critical computer systems.

  4. Architecture-driven Migration of Legacy Systems to Cloud-enabled Software

    DEFF Research Database (Denmark)

    Ahmad, Aakash; Babar, Muhammad Ali

    2014-01-01

With the widespread adoption of cloud computing, an increasing number of organizations view it as an important business strategy to evolve their legacy applications to cloud-enabled infrastructures. We present a framework, named Legacy-to-Cloud Migration Horseshoe, for supporting the migration...... of legacy systems to cloud computing. The framework leverages the software reengineering concepts that aim to recover the architecture from legacy source code. Then the framework exploits the software evolution concepts to support architecture-driven migration of legacy systems to cloud-based architectures....... The Legacy-to-Cloud Migration Horseshoe comprises four processes: (i) architecture migration planning, (ii) architecture recovery and consistency, (iii) architecture transformation and (iv) architecture-based development of cloud-enabled software. We aim to discover, document and apply the migration...

  5. Why Replacing Legacy Systems Is So Hard in Global Software Development: An Information Infrastructure Perspective

    DEFF Research Database (Denmark)

    Matthiesen, Stina; Bjørn, Pernille

    2015-01-01

    to be obvious explanations for why GSD tasks fail to reach completion; however, we account for the difficulties within the technical nature of software system task. We use the framework of information infrastructure to show how replacing a legacy system in governmental information infrastructures includes...... the work of tracing back to knowledge concerning law, technical specifications, as well as how information infrastructures have dynamically evolved over time. Not easily carried out in a GSD setup is the work around technical tasks that requires careful examination of mundane technical aspects, standards......, and bureaucratic forms, as well as the excavation work that keeps the information infrastructure afloat....

  6. Space Shuttle Program Primary Avionics Software System (PASS) Success Legacy - Quality and Reliability Data

    Science.gov (United States)

    Orr, James K.; Peltier, Daryl

    2010-01-01

This slide presentation reviews the avionics software system on board the Space Shuttle, with particular emphasis on quality and reliability. The Primary Avionics Software System (PASS) provides automatic and fly-by-wire control of critical shuttle systems and executes in redundant computers. Charts show the number of Space Shuttle flights versus time, PASS's development history, and other indicators of the reliability of the system's development. The system's reliability is also compared to its predicted reliability.

  7. Exploring legacy systems using types

    NARCIS (Netherlands)

    A. van Deursen (Arie); L.M.F. Moonen (Leon)

    2000-01-01

We show how hypertext-based program understanding tools can achieve new levels of abstraction by using inferred type information for cases where the subject software system is written in a weakly typed language. We propose TypeExplorer, a tool for browsing COBOL legacy systems based on
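The kind of type inference used for weakly typed legacy code can be sketched as follows: variables that flow into each other (through assignments or comparisons) are forced into the same inferred type, which is an equivalence-class computation. This is an illustrative sketch only; the union-find representation and the COBOL-like variable names are invented for the example, not taken from TypeExplorer.

```python
# Toy type inference for weakly typed code: variables related by
# assignment or comparison share an inferred type (union-find).

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            # Path halving keeps the trees shallow.
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# Pairs of variables related by assignment/comparison in COBOL-like code
# (hypothetical statements, e.g. MOVE AMOUNT TO TOTAL).
related = [("AMOUNT", "TOTAL"), ("TOTAL", "GRAND-TOTAL"),
           ("DATE-IN", "DATE-OUT")]

uf = UnionFind()
for a, b in related:
    uf.union(a, b)

# Variables in the same equivalence class receive the same inferred type.
print(uf.find("AMOUNT") == uf.find("GRAND-TOTAL"))  # same type class
print(uf.find("AMOUNT") == uf.find("DATE-IN"))      # different type class
```

The resulting equivalence classes are what a browsing tool can present as the inferred "types" of otherwise untyped variables.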

  8. Developing a TTCN-3 Test Harness for Legacy Software

    DEFF Research Database (Denmark)

    Okika, Joseph C.; Ravn, Anders Peter; Siddalingaiah, Lokesh

    2006-01-01

We describe a prototype test harness for an embedded system, namely the control software for a modern marine diesel engine. The operation of such control software requires complete certification. We adopt the Testing and Test Control Notation (TTCN-3) to define test cases for this purpose. The main challenge in developing the test harness is to interface a generic test driver to the legacy software and provide a suitable interface for test engineers. The main contribution of this paper is a demonstration of a suitable design for such a test harness. It includes: a TTCN-3 test driver in C++, the legacy control software in C, a Graphical User Interface (GUI) and the connectors in Java. Our experience shows that it is feasible to use TTCN-3 in developing a test harness for legacy software in an embedded system, even when it involves different heterogeneous components.

  10. The ATLAS Trigger Simulation with Legacy Software

    CERN Document Server

    Bernius, Catrin; The ATLAS collaboration

    2017-01-01

    Physics analyses at the LHC require accurate simulations of the detector response and the event selection processes, generally done with the most recent software releases. The trigger response simulation is crucial for determination of overall selection efficiencies and signal sensitivities and should be done with the same software release with which data were recorded. This requires potentially running with software dating many years back, the so-called legacy software. Therefore having a strategy for running legacy software in a modern environment becomes essential when data simulated for past years start to present a sizeable fraction of the total. The requirements and possibilities for such a simulation scheme within the ATLAS software framework were examined and a proof-of-concept simulation chain has been successfully implemented. One of the greatest challenges was the choice of a data format which promises long term compatibility with old and new software releases. Over the time periods envisaged, data...

  11. A methodology based on openEHR archetypes and software agents for developing e-health applications reusing legacy systems.

    Science.gov (United States)

    Cardoso de Moraes, João Luís; de Souza, Wanderley Lopes; Pires, Luís Ferreira; do Prado, Antonio Francisco

    2016-10-01

In Pervasive Healthcare, novel information and communication technologies are applied to support the provision of health services anywhere, at any time and to anyone. Since health systems may offer their health records in different electronic formats, the openEHR Foundation prescribes the use of archetypes for describing clinical knowledge in order to achieve semantic interoperability between these systems. Software agents have been applied to simulate human skills in some healthcare procedures. This paper presents a methodology, based on the use of openEHR archetypes and agent technology, which aims to overcome the weaknesses typically found in legacy healthcare systems, thereby adding value to the systems. This methodology was applied in the design of an agent-based system, which was used in a realistic healthcare scenario in which a medical staff meeting to prepare a cardiac surgery was supported. We conducted experiments with this system in a distributed environment composed of three cardiology clinics and a center of cardiac surgery, all located in the city of Marília (São Paulo, Brazil). We evaluated this system according to the Technology Acceptance Model. The case study confirmed the acceptance of our agent-based system by healthcare professionals and patients, who reacted positively with respect to the usefulness of this system in particular, and with respect to task delegation to software agents in general. The case study also showed that a software agent-based interface and a tools-based alternative must be provided to the end users, allowing them to perform the tasks themselves or to delegate them to other people. A Pervasive Healthcare model requires efficient and secure information exchange between healthcare providers.
The proposed methodology allows designers to build communication systems for the message exchange among heterogeneous healthcare systems, and to shift from systems that rely on informal communication of actors to

  12. The ATLAS Trigger Simulation with Legacy Software

    CERN Document Server

    Bernius, Catrin; The ATLAS collaboration

    2017-01-01

    Physics analyses at the LHC which search for rare physics processes or measure Standard Model parameters with high precision require accurate simulations of the detector response and the event selection processes. The accurate simulation of the trigger response is crucial for determination of overall selection efficiencies and signal sensitivities. For the generation and the reconstruction of simulated event data, generally the most recent software releases are used to ensure the best agreement between simulated data and real data. For the simulation of the trigger selection process, however, the same software release with which real data were taken should be ideally used. This requires potentially running with software dating many years back, the so-called legacy software. Therefore having a strategy for running legacy software in a modern environment becomes essential when data simulated for past years start to present a sizeable fraction of the total. The requirements and possibilities for such a simulatio...

  13. Types and concept analysis for legacy systems

    NARCIS (Netherlands)

    T. Kuipers (Tobias); L.M.F. Moonen (Leon)

    2000-01-01

We combine type inference and concept analysis in order to gain insight into legacy software systems. Type inference for Cobol yields the types for variables and program parameters. These types are used to perform mathematical concept analysis on legacy systems. We have developed
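The mathematical concept analysis mentioned above can be sketched in miniature: given a binary relation between objects (e.g. programs) and attributes (e.g. inferred types they use), a formal concept is a maximal pair of an object set and the attribute set they all share. The toy relation and names below are invented for illustration, not taken from the paper; the enumeration is the naive closure-of-every-attribute-subset approach, suitable only for tiny relations.

```python
# Naive formal concept analysis over a toy program/type usage relation.
from itertools import combinations

def concepts(objects, attributes, relation):
    """Enumerate all formal concepts (extent, intent) of the relation.

    relation: set of (object, attribute) pairs.
    A concept pairs O = all objects sharing attribute set A with
    A = all attributes shared by O (a maximal "rectangle").
    """
    def common_attrs(objs):
        return {a for a in attributes
                if all((o, a) in relation for o in objs)}

    def common_objs(attrs):
        return {o for o in objects
                if all((o, a) in relation for a in attrs)}

    result = set()
    # Close every attribute subset: extent of intent, then intent of extent.
    for r in range(len(attributes) + 1):
        for attrs in combinations(sorted(attributes), r):
            objs = common_objs(set(attrs))
            result.add((frozenset(objs), frozenset(common_attrs(objs))))
    return result

# Which (hypothetical) programs use which inferred types.
programs = {"P1", "P2", "P3"}
types = {"date", "amount", "account"}
uses = {("P1", "date"), ("P1", "amount"),
        ("P2", "amount"), ("P2", "account"),
        ("P3", "date"), ("P3", "amount")}

for extent, intent in concepts(programs, types, uses):
    print(sorted(extent), sorted(intent))
```

The resulting concept lattice groups programs by the types they manipulate, which is the kind of structural insight the abstract describes.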

  14. Legacy System Wrapping for Department of Defense Information System Modernization

    National Research Council Canada - National Science Library

    Jordan, Kathleen

    1995-01-01

    This document explains the activities, benefits, problems, and issues in using the object-oriented technique of software wrapping to support the migration from legacy information systems to modernized systems...
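The wrapping technique above can be illustrated with a minimal, hypothetical sketch: an object-oriented facade placed over a legacy routine so that modern code sees fields and methods instead of raw records. The legacy function, the fixed-width record layout, and all names are invented for the example.

```python
# Sketch of object-oriented wrapping of a legacy routine.

def legacy_lookup(record: str) -> str:
    """Stand-in for a legacy routine returning a fixed-width record.

    Assumed layout: columns 0-9 customer id, columns 10-29 name.
    """
    return record.ljust(30)

class CustomerWrapper:
    """Modern facade: parses the legacy fixed-width record into fields,
    hiding the legacy call and layout from new code."""

    def __init__(self, raw: str):
        record = legacy_lookup(raw)
        self.customer_id = record[0:10].strip()
        self.name = record[10:30].strip()

record = "42".ljust(10) + "Ada Lovelace"
c = CustomerWrapper(record)
print(c.customer_id, c.name)  # fields extracted from the legacy format
```

New applications depend only on the wrapper's interface, so the legacy internals can later be replaced without touching the callers, which is the point of wrapping during migration.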

  15. The Legacy of Space Shuttle Flight Software

    Science.gov (United States)

    Hickey, Christopher J.; Loveall, James B.; Orr, James K.; Klausman, Andrew L.

    2011-01-01

    The initial goals of the Space Shuttle Program required that the avionics and software systems blaze new trails in advancing avionics system technology. Many of the requirements placed on avionics and software were accomplished for the first time on this program. Examples include comprehensive digital fly-by-wire technology, use of a digital databus for flight critical functions, fail operational/fail safe requirements, complex automated redundancy management, and the use of a high-order software language for flight software development. In order to meet the operational and safety goals of the program, the Space Shuttle software had to be extremely high quality, reliable, robust, reconfigurable and maintainable. To achieve this, the software development team evolved a software process focused on continuous process improvement and defect elimination that consistently produced highly predictable and top quality results, providing software managers the confidence needed to sign each Certificate of Flight Readiness (COFR). This process, which has been appraised at Capability Maturity Model (CMM)/Capability Maturity Model Integration (CMMI) Level 5, has resulted in one of the lowest software defect rates in the industry. This paper will present an overview of the evolution of the Primary Avionics Software System (PASS) project and processes over thirty years, an argument for strong statistical control of software processes with examples, an overview of the success story for identifying and driving out errors before flight, a case study of the few significant software issues and how they were either identified before flight or slipped through the process onto a flight vehicle, and identification of the valuable lessons learned over the life of the project.

  16. Solving the Software Legacy Problem with RISA

    Science.gov (United States)

    Ibarra, A.; Gabriel, C.

    2012-09-01

Nowadays hardware and system infrastructure evolve on time scales much shorter than the typical duration of space astronomy missions. Data processing software capabilities have to evolve to preserve the scientific return during the entire lifetime of the experiment. Software preservation is a key issue that has to be tackled before the end of the project to keep the data usable over many years. We present RISA (Remote Interface to Science Analysis) as a solution to decouple data processing software and infrastructure life-cycles, using JAVA applications and web-services wrappers to existing software. This architecture employs embedded SAS in virtual machines, assuring a homogeneous job execution environment. We will also present the first studies to reactivate the data processing software of the EXOSAT mission, the first ESA X-ray astronomy mission launched in 1983, using the generic RISA approach.

  17. Transforming Cobol Legacy Software to a Generic Imperative Model

    National Research Council Canada - National Science Library

Moraes, Dina L.

    1999-01-01

    .... This research develops a transformation system to convert COBOL code into a generic imperative model, recapturing the initial design and deciphering the requirements implemented by the legacy code...

  18. Integrating commercial and legacy systems with EPICS

    International Nuclear Information System (INIS)

    Hill, J.O.; Kasemir, K.U.

    1997-01-01

    The Experimental Physics and Industrial Control System (EPICS) is a software toolkit, developed by a worldwide collaboration, which significantly reduces the level of effort required to implement a new control system. Recent developments now also significantly reduce the level of effort required to integrate commercial, legacy and/or site-authored control systems with EPICS. This paper will illustrate with an example both the level and type of effort required to use EPICS with other control system components as well as the benefits that may arise

  19. Multicore Considerations for Legacy Flight Software Migration

    Science.gov (United States)

    Vines, Kenneth; Day, Len

    2013-01-01

    In this paper we will discuss potential benefits and pitfalls when considering a migration from an existing single core code base to a multicore processor implementation. The results of this study present options that should be considered before migrating fault managers, device handlers and tasks with time-constrained requirements to a multicore flight software environment. Possible future multicore test bed demonstrations are also discussed.

  20. Software exorcism a handbook for debugging and optimizing legacy code

    CERN Document Server

    Blunden, Bill

    2013-01-01

Software Exorcism: A Handbook for Debugging and Optimizing Legacy Code takes an unflinching, no-nonsense look at behavioral problems in the software engineering industry, shedding much-needed light on the social forces that make it difficult for programmers to do their job. Do you have a co-worker who perpetually writes bad code that you are forced to clean up? This is your book. While there are plenty of books on the market that cover debugging and short-term workarounds for bad code, Reverend Bill Blunden takes a revolutionary step beyond them by bringing our atten

  1. A Heuristic for Improving Legacy Software Quality during Maintenance: An Empirical Case Study

    Science.gov (United States)

    Sale, Michael John

    2017-01-01

    Many organizations depend on the functionality of mission-critical legacy software and the continued maintenance of this software is vital. Legacy software is defined here as software that contains no testing suite, is often foreign to the developer performing the maintenance, lacks meaningful documentation, and over time, has become difficult to…

  2. Legacy system retirement plan for HANDI 2000 business management system

    Energy Technology Data Exchange (ETDEWEB)

    Adams, D.E.

    1998-09-29

The implementation of the Business Management System (BMS) will replace a number of systems currently in use at Hanford. These systems will be retired when the replacement is complete and the data from the old systems adequately stored and/or converted to the new system. The replacement is due to a number of factors: (1) Year 2000 conversion: Most of the systems being retired are not year 2000 compliant. Estimates on making these systems compliant approach the costs of replacing them with the enterprise system. (2) Many redundant custom-made systems: Maintenance costs on the aging custom-developed systems are high. The systems also have overlapping functionality. Replacement with an enterprise system is expected to lower the maintenance costs. (3) Shift inefficient/complex work processes to commercial standards: Many business practices have been developed in isolation from competitive pressures and without a good business foundation. Replacement of the systems allows an opportunity to upgrade the business practices to conform to a market-driven approach. (4) Questionable legacy data: A significant amount of data contained within the legacy systems is of questionable origin and value. Replacement of the systems allows for a new beginning with a clean slate and stronger data validation rules. A number of the systems being retired depend on hardware and software technologies that are no longer adequately supported in the market place. The IRM Application Software System Life Cycle Standards, HNF-PRO-2778, and the Data Systems Review Board (DSRB) define a system retirement process which involves the removal of an existing system from active support or use either by: ceasing its operation or support; or replacing it with a new system; or replacing it with an upgraded version of the existing system. It is important to note that activities associated with the recovery of the system, once archived, relate to the ability for authorized personnel to gain access to the data and

  3. Software system safety

    Science.gov (United States)

    Uber, James G.

    1988-01-01

    Software itself is not hazardous, but since software and hardware share common interfaces there is an opportunity for software to create hazards. Further, these software systems are complex, and proven methods for the design, analysis, and measurement of software safety are not yet available. Some past software failures, future NASA software trends, software engineering methods, and tools and techniques for various software safety analyses are reviewed. Recommendations to NASA are made based on this review.

  4. The Political Legacy of School Accountability Systems

    Directory of Open Access Journals (Sweden)

    Sherman Dorn

    1998-01-01

The recent battle reported from Washington about a proposed national testing program does not tell the most important political story about high-stakes tests. Politically popular school accountability systems in many states already revolve around statistical results of testing in high-stakes environments. The future of high-stakes tests thus does not depend on what happens on Capitol Hill. Rather, the existence of tests depends largely on the political culture of published test results. Most critics of high-stakes testing do not talk about that culture, however. They typically focus on the practice legacy of testing, the ways in which testing creates perverse incentives against good teaching. More important may be the political legacy, or how testing defines legitimate discussion about school politics. The consequence of statistical accountability systems will be the narrowing of purpose for schools, impatience with reform, and the continuing erosion of political support for publicly funded schools. Dissent from the high-stakes accountability regime that has developed around standardized testing, including proposals for professionalism and performance assessment, commonly fails to consider these political legacies. Alternatives to standardized testing which do not also connect schooling with the public at large will not be politically viable.

  5. Software systems as cities

    OpenAIRE

    Wettel, Richard; Lanza, Michele

    2010-01-01

    Software understanding takes up a large share of the total cost of a software system. The high costs attributed to software understanding activities are caused by the size and complexity of software systems, by the continuous evolution that these systems are subject to, and by the lack of physical presence which makes software intangible. Reverse engineering helps practitioners deal with the intrinsic complexity of software, by providing a broad range of patterns and techniques. One of...

  6. Application Programmer's Interface (API) for Heterogeneous Language Environment and Upgrading the Legacy Embedded Software

    National Research Council Canada - National Science Library

    Moua, Theng

    2001-01-01

    .... The shortage of original software designs, lack of corporate knowledge and software design documentation, unsupported programming languages, and obsolete real-time operating system and development...

  7. From legacy and client/server systems to components in healthcare information systems in Finland.

    Science.gov (United States)

    Mykkänen, J; Korpela, M; Eerola, A; Porrasmaa, J; Ruonamaa, H; Sormunen, M

    2001-01-01

A strategy and toolset (FixIT) for migrating a specific type of legacy systems--based on the FileMan DBMS of the U.S. Department of Veterans Affairs--to a two-tier client/server and web browser-based architecture was presented in MEDINFO'98. In the current paper we discuss the further migration to a multitier software component architecture. A literature survey and industry contacts were used to specify an open, component-based target architecture for healthcare information systems to be reached by the year 2005, as well as a phased migration strategy from the present FileMan/FixIT-based systems towards the target. The target architecture is based on large-grained business components and accommodates heterogeneous elements on the intra-component, intra-application, intra-organization and inter-organizational levels. Four logical tiers are identified within a business component. Three migration paths are specified for different cases: the tier-by-tier, piece-by-piece, and web-wrapping paths. It is argued that the architecture, supported by off-the-shelf toolsets, application frameworks and a new software development process, makes it possible to turn legacy systems into a valuable asset, split monolithic applications into reusable components, and ultimately replace the legacy parts at a feasible pace.

  8. Software systems for astronomy

    CERN Document Server

    Conrad, Albert R

    2014-01-01

    This book covers the use and development of software for astronomy. It describes the control systems used to point the telescope and operate its cameras and spectrographs, as well as the web-based tools used to plan those observations. In addition, the book also covers the analysis and archiving of astronomical data once it has been acquired. Readers will learn about existing software tools and packages, develop their own software tools, and analyze real data sets.

  9. Deprogramming Large Software Systems

    OpenAIRE

    Coppel, Yohann; Candea, George

    2008-01-01

    Developers turn ideas, designs and patterns into source code, then compile the source code into executables. Decompiling turns executables back into source code, and deprogramming turns code back into designs and patterns. In this paper we introduce DeP, a tool for deprogramming software systems. DeP abstracts code into a dependency graph and mines this graph for patterns. It also gives programmers visual means for manipulating the program. We describe DeP's use in several software engineerin...

  10. Implementing Provenance Collection in a Legacy Data Product Generation System

    Science.gov (United States)

    Conover, H.; Ramachandran, R.; Kulkarni, A.; Beaumont, B.; McEniry, M.; Graves, S. J.; Goodman, H.

    2012-12-01

    NASA has been collecting, storing, archiving and distributing vast amounts of Earth science data derived from satellite observations for several decades now. The raw data collected from the different sensors undergoes many different transformations before it is distributed to the science community as climate-research-quality data products. These data transformations include calibration, geolocation, and conversion of the instrument counts into meaningful geophysical parameters, and may include reprojection and/or spatial and temporal averaging as well. In the case of many Earth science data systems, the science algorithms and any ancillary data files used for these transformations are delivered as a "black box" to be integrated into the data system's processing framework. In contrast to an experimental workflow that may vary with each iteration, such systems use consistent, well-engineered processes to apply the same science algorithm to each well-defined set of inputs in order to create standard data products. Even so, variability is inevitably introduced. There may be changes made to the algorithms, different ancillary datasets may be used, underlying hardware and software may get upgraded, etc. Furthermore, late-arriving input data, operator error, or other processing anomalies may necessitate regeneration and replacement of a particular set of data files and any downstream products. These variations need to be captured, documented and made accessible to the scientific community so they can be properly accounted for in analyses. This presentation describes an approach to provenance capture, storage and dissemination implemented at the NASA Science Investigator-led Processing System (SIPS) for the AMSR-E (Advanced Microwave Scanning Radiometer - Earth Observing System) instrument. Key considerations in adding provenance capabilities to this legacy data system include: (1) granularity of provenance information captured, (2) additional context information needed
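The kind of per-product provenance described above, which algorithm version, which inputs, and which environment produced a given output file, can be sketched as a small record builder. The field names and values below are purely illustrative, not the actual AMSR-E SIPS provenance schema.

```python
# Hypothetical sketch: capture a provenance record for one output product.
import hashlib
import json
import platform
from datetime import datetime, timezone

def provenance_record(output_name, inputs, algorithm_version):
    """Build a provenance record for one generated product.

    inputs: list of (filename, bytes) pairs; hashing the input bytes
    lets a later regeneration be checked against the original inputs.
    """
    return {
        "output": output_name,
        "algorithm_version": algorithm_version,
        "inputs": [{"name": name,
                    "sha256": hashlib.sha256(data).hexdigest()}
                   for name, data in inputs],
        "environment": {"python": platform.python_version()},
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

# Invented example granule and input.
rec = provenance_record("tb_2012_200.h5",
                        [("raw_counts.bin", b"\x00\x01\x02")],
                        algorithm_version="v12")
print(json.dumps(rec, indent=2))
```

Storing such records alongside each product is what allows a reprocessed file to be distinguished from the original and lets analysts account for algorithm or input changes.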

  11. Forging Links between the Web and Legacy Systems.

    Science.gov (United States)

    Chapman, Noleen

    1998-01-01

    Discusses why information technology managers are exploring the economic and strategic advantages of Web technology and finding that legacy systems still have an important role. Presents benefits: centralized management, reduced cost of ownership, wide user access; models of Web-to-host access; the Citrix thin client model; and future of…

  12. The PANIC software system

    Science.gov (United States)

    Ibáñez Mengual, José M.; Fernández, Matilde; Rodríguez Gómez, Julio F.; García Segura, Antonio J.; Storz, Clemens

    2010-07-01

    PANIC is the Panoramic Near Infrared Camera for the 2.2m and 3.5m telescopes at Calar Alto observatory. The aim of the project is to build a wide-field general purpose NIR camera. In this paper we describe the software system of the instrument, which comprises four main packages: GEIRS for the instrument control and the data acquisition; the Observation Tool (OT), the software used for detailed definition and pre-planning the observations, developed in Java; the Quick Look tool (PQL) for easy inspection of the data in real-time and a scientific pipeline (PAPI), both based on the Python programming language.

  13. Evolvable Neural Software System

    Science.gov (United States)

    Curtis, Steven A.

    2009-01-01

The Evolvable Neural Software System (ENSS) is composed of sets of Neural Basis Functions (NBFs), which can be totally autonomously created and removed according to the changing needs and requirements of the software system. The resulting structure is both hierarchical and self-similar in that a given set of NBFs may have a ruler NBF, which in turn communicates with other sets of NBFs. These sets of NBFs may function as nodes to a ruler node, which are also NBF constructs. In this manner, the synthetic neural system can exhibit the complexity, three-dimensional connectivity, and adaptability of biological neural systems. An added advantage of ENSS over a natural neural system is its ability to modify its core genetic code in response to environmental changes as reflected in needs and requirements. The neural system is fully adaptive and evolvable and is trainable before release. It continues to rewire itself while on the job. The NBF is a unique, bilevel intelligence neural system composed of a higher-level heuristic neural system (HNS) and a lower-level, autonomic neural system (ANS). Taken together, the HNS and the ANS give each NBF the complete capabilities of a biological neural system to match sensory inputs to actions. Another feature of the NBF is the Evolvable Neural Interface (ENI), which links the HNS and ANS. The ENI solves the interface problem between these two systems by actively adapting and evolving from a primitive initial state (a Neural Thread) to a complicated, operational ENI and successfully adapting to a training sequence of sensory input. This simulates the adaptation of a biological neural system in a developmental phase. Within the greater multi-NBF and multi-node ENSS, self-similar ENIs provide the basis for inter-NBF and inter-node connectivity.

  14. Software Intensive Systems

    National Research Council Canada - National Science Library

    Horvitz, E; Katz, D. J; Rumpf, R. L; Shrobe, H; Smith, T. B; Webber, G. E; Williamson, W. E; Winston, P. H; Wolbarsht, James L

    2006-01-01

    .... Recommend that DoN create a software acquisition specialty, mandate basic schooling for software acquisition specialists, close certain acquisition loopholes that permit poor development practices...

  15. BLTC control system software

    Energy Technology Data Exchange (ETDEWEB)

    Logan, J.B., Fluor Daniel Hanford

    1997-02-10

    This is a direct revision to Rev. 0 of the BLTC Control System Software. The entire document is being revised and released as HNF-SD-FF-CSWD-025, Rev 1. The changes incorporated by this revision include the addition of a feature to automate the sodium drain when removing assemblies from sodium-wetted facilities. Other changes eliminate locked-in alarms during cold operation and improve the function of the Oxygen Analyzer. See FCN-620498 for further details regarding these changes. Note the change in the document number prefix, in accordance with HNF-MD-003.

  16. Software for microcircuit systems

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1978-10-01

    Modern Large Scale Integration (LSI) microcircuits are meant to be programmed in order to control the function that they perform. The basics of microprogramming and new microcircuits have already been discussed. In this course, the methods of developing software for these microcircuits are explored. This generally requires a package of support software in order to assemble the microprogram, and also some amount of support software to test the microprograms and to test the microprogrammed circuit itself. 15 figures, 2 tables
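The kind of support software described above can be illustrated with a minimal sketch: a toy assembler that packs symbolic micro-operations into fixed-width microinstruction words. The instruction format, field widths, and mnemonic tables below are invented for illustration and do not correspond to any particular microcircuit.

```python
# Toy microprogram assembler: pack 'OP SRC DST' mnemonics into 8-bit
# microinstruction words with a hypothetical layout [op:4][src:2][dst:2].

ALU_OPS = {"NOP": 0, "ADD": 1, "SUB": 2, "AND": 3}
REGS = {"R0": 0, "R1": 1, "R2": 2, "R3": 3}

def assemble(line):
    """Assemble one symbolic micro-operation into a microinstruction word."""
    op, src, dst = line.split()
    return (ALU_OPS[op] << 4) | (REGS[src] << 2) | REGS[dst]

program = ["ADD R0 R1", "SUB R2 R3"]
words = [assemble(line) for line in program]
print([f"{w:08b}" for w in words])
```

A real micro-assembler would add symbolic labels, field validation, and output in the loader format expected by the target hardware.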

  17. Legacy Vehicle Fuel System Testing with Intermediate Ethanol Blends

    Energy Technology Data Exchange (ETDEWEB)

    Davis, G. W.; Hoff, C. J.; Borton, Z.; Ratcliff, M. A.

    2012-03-01

    The effects of E10 and E17 on legacy fuel system components from three common mid-1990s vintage vehicle models (Ford, GM, and Toyota) were studied. The fuel systems comprised a fuel sending unit with pump, a fuel rail and integrated pressure regulator, and the fuel injectors. The fuel system components were characterized and then installed and tested in sample aging test rigs to simulate the exposure and operation of the fuel system components in an operating vehicle. The fuel injectors were cycled with varying pulse widths during pump operation. Operational performance, such as fuel flow and pressure, was monitored during the aging tests. Both of the Toyota fuel pumps demonstrated some degradation in performance during testing. Six injectors were tested in each aging rig. The Ford and GM injectors showed little change over the aging tests. Overall, based on the results of both the fuel pump testing and the fuel injector testing, no major failures were observed that could be attributed to E17 exposure. The unknown fuel component histories add a large uncertainty to the aging tests. Acquiring fuel system components from operational legacy vehicles would reduce the uncertainty.

  18. Recommendation systems in software engineering

    CERN Document Server

    Robillard, Martin P; Walker, Robert J; Zimmermann, Thomas

    2014-01-01

    With the growth of public and private data stores and the emergence of off-the-shelf data-mining technology, recommendation systems have emerged that specifically address the unique challenges of navigating and interpreting software engineering data. This book collects, structures and formalizes knowledge on recommendation systems in software engineering. It adopts a pragmatic approach with an explicit focus on system design, implementation, and evaluation. The book is divided into three parts: "Part I - Techniques" introduces basics for building recommenders in software engineering, including techniques for collecting and processing software engineering data, but also for presenting recommendations to users as part of their workflow. "Part II - Evaluation" summarizes methods and experimental designs for evaluating recommendations in software engineering. "Part III - Applications" describes needs, issues and solution concepts involved in entire recommendation systems for specific software engineering tasks, fo...

  19. System support software for TSTA

    International Nuclear Information System (INIS)

    Claborn, G.W.; Mann, L.W.; Nielson, C.W.

    1987-01-01

    The software at the Tritium Systems Test Assembly (TSTA) is logically broken into two parts, the system support software and the subsystem software. The purpose of the system support software is to isolate the subsystem software from the physical hardware. In this sense the system support software forms the kernel of the software at TSTA. The kernel software performs several functions. It gathers data from CAMAC modules and makes that data available for subsystem processes. It services requests to send commands to CAMAC modules. It provides a system of logging functions and provides for a system-wide global program state that allows highly structured interaction between subsystem processes. The kernel's most visible function is to provide the Man-Machine Interface (MMI). The MMI allows the operators a window into the physical hardware and subsystem process state. Finally the kernel provides a data archiving and compression function that allows archival data to be accessed and plotted. Such kernel software as developed and implemented at TSTA is described
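The kernel's mediation role described above can be sketched minimally: a kernel object polls data sources, caches readings for subsystem processes, and services command requests, so subsystems never touch the hardware directly. The `CamacModule` class and its methods are hypothetical stand-ins for real CAMAC I/O.

```python
# Sketch of a kernel isolating subsystem code from physical hardware,
# in the spirit of the TSTA system support software. All names are invented.

class CamacModule:
    """Stand-in for a hardware module; a real driver would do bus I/O."""
    def __init__(self, value):
        self._value = value
    def read(self):
        return self._value
    def write(self, value):
        self._value = value

class Kernel:
    """Gathers data from modules and services subsystem command requests."""
    def __init__(self, modules):
        self._modules = modules
        self._cache = {}
    def poll(self):
        # Gather data from all modules and make it available to subsystems.
        for name, module in self._modules.items():
            self._cache[name] = module.read()
    def get(self, name):
        # Subsystems read cached values rather than touching hardware.
        return self._cache[name]
    def command(self, name, value):
        # Service a subsystem request to send a command to a module.
        self._modules[name].write(value)

kernel = Kernel({"pressure": CamacModule(1.2)})
kernel.poll()
print(kernel.get("pressure"))
kernel.command("pressure", 2.5)
kernel.poll()
print(kernel.get("pressure"))
```

The real kernel adds logging, a system-wide global state, the operator MMI, and data archiving on top of this mediation layer.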

  20. Modeling software systems by domains

    Science.gov (United States)

    Dippolito, Richard; Lee, Kenneth

    1992-01-01

    The Software Architectures Engineering (SAE) Project at the Software Engineering Institute (SEI) has developed engineering modeling techniques that both reduce the complexity of software for domain-specific computer systems and result in systems that are easier to build and maintain. These techniques allow maximum freedom for system developers to apply their domain expertise to software. We have applied these techniques to several types of applications, including training simulators operating in real time, engineering simulators operating in non-real time, and real-time embedded computer systems. Our modeling techniques result in software that mirrors both the complexity of the application and the domain knowledge requirements. We submit that the proper measure of software complexity reflects neither the number of software component units nor the code count, but the locus of and amount of domain knowledge. As a result of using these techniques, domain knowledge is isolated by fields of engineering expertise and removed from the concern of the software engineer. In this paper, we will describe kinds of domain expertise, describe engineering by domains, and provide relevant examples of software developed for simulator applications using the techniques.

  1. Software Intensive Systems

    Science.gov (United States)

    2006-07-01

    Mr. Carl Siel - CHENG Executive Secretaries: • Dr. William Bail, MITRE • Ms. Cathy Ricketts, PEO - IWS • Mr. Fred Heinemann, EDO Study...computer producers by location China US Japan Globalizing of Software and Hardware In order to fulfill the growing needs, companies have been...computer manufacturers, the trend towards offshoring has been significant. In a three-year period, the proportion of 300mm fabrication plants in the U.S

  2. Software Build and Delivery Systems

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-10

    This presentation deals with the hierarchy of software build and delivery systems. One of the goals is to maximize the success rate of new users and developers when first trying your software. First impressions are important. Early successes are important. This also reduces critical documentation costs. This is a presentation focused on computer science and goes into detail about code documentation.

  3. System Software 7 Macintosh

    CERN Multimedia

    1991-01-01

    System 7 is a single-user graphical user interface-based operating system for Macintosh computers and was part of the classic Mac OS line of operating systems. It was introduced on May 13, 1991, by Apple Computer. It succeeded System 6, and was the main Macintosh operating system until it was succeeded by Mac OS 8 in 1997. Features added with the System 7 release included virtual memory, personal file sharing, QuickTime, QuickDraw 3D, and an improved user interface. This is the first real major evolution of the Macintosh system, bringing a significant improvement in the user interface, improved stability and many new features such as the ability to use multiple applications at the same time. "System 7" is the last operating system name of the Macintosh that contains the word "system". Macintosh operating systems were later called "Mac OS" (for Macintosh Operating System).

  4. On the ergodic capacity of legacy systems in the presence of next generation interference

    KAUST Repository

    Mahmood, Nurul Huda

    2011-11-01

    Next generation wireless systems facilitating better utilization of the scarce radio spectrum have emerged as a response to inefficient rigid spectrum assignment policies. These systems comprise intelligent radio nodes that opportunistically operate in the radio spectrum of existing legacy systems; yet unwanted interference at the legacy receivers is unavoidable. In order to design efficient next generation systems and to minimize their harmful consequences, it is necessary to realize their impact on the performance of legacy systems. In this work, a generalized framework for the ergodic capacity analysis of such legacy systems in the presence of interference from next generation systems is presented. The analysis is built around a model developed for the statistical representation of the interference at the legacy receivers, which is then used to evaluate the ergodic capacity of the legacy system. Moreover, this analysis is not limited to the context of legacy systems, and is in fact applicable to any interference-limited system. Findings of analytical performance analyses are confirmed through selected computer-based Monte-Carlo simulations. © 2011 IEEE.
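In analyses of this kind, the ergodic capacity of the legacy link is typically the expectation, over the fading distributions, of the instantaneous capacity with the next-generation interference treated as noise. A generic form (the symbols here are generic, not the paper's exact notation) is:

```latex
C_{\mathrm{erg}} = \mathbb{E}\!\left[\log_2\!\left(1 + \frac{P\,|h|^2}{N_0 + \sum_{i} P_i\,|g_i|^2}\right)\right]
```

where \(P\,|h|^2\) is the received power of the legacy signal, \(N_0\) the noise power, and \(\sum_i P_i\,|g_i|^2\) the aggregate interference power from the next-generation transmitters; the statistical interference model in the paper characterizes the distribution of this aggregate term.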

  5. Software archeology: a case study in software quality assurance and design

    Energy Technology Data Exchange (ETDEWEB)

    Macdonald, John M [Los Alamos National Laboratory; Lloyd, Jane A [Los Alamos National Laboratory; Turner, Cameron J [COLORADO SCHOOL OF MINES

    2009-01-01

    Ideally, quality is designed into software, just as quality is designed into hardware. However, when dealing with legacy systems, demonstrating that the software meets required quality standards may be difficult to achieve. As the need to demonstrate the quality of existing software was recognized at Los Alamos National Laboratory (LANL), an effort was initiated to uncover and demonstrate that legacy software met the required quality standards. This effort led to the development of a reverse engineering approach referred to as software archaeology. This paper documents the software archaeology approaches used at LANL to document legacy software systems. A case study for the Robotic Integrated Packaging System (RIPS) software is included.

  6. Legacy effects in linked ecological-soil-geomorphic systems of drylands

    Science.gov (United States)

    Monger, Curtis; Sala, Osvaldo E.; Duniway, Michael C.; Goldfus, Haim; Meir, Isaac A.; Poch, Rosa M.; Throop, Heather L.; Vivoni, Enrique R.

    2015-01-01

    A legacy effect refers to the impacts that previous conditions have on current processes or properties. Legacies have been recognized by many disciplines, from physiology and ecology to anthropology and geology. Within the context of climatic change, ecological legacies in drylands (e.g., vegetative patterns) result from feedbacks between biotic, soil, and geomorphic processes that operate at multiple spatial and temporal scales. Legacy effects depend on (1) the magnitude of the original phenomenon, (2) the time since the occurrence of the phenomenon, and (3) the sensitivity of the ecological-soil-geomorphic system to change. Here we present a conceptual framework for legacy effects at short-term (days to months), medium-term (years to decades), and long-term (centuries to millennia) timescales, which reveals the ubiquity of such effects in drylands across research disciplines.

  7. The primary protection system software

    International Nuclear Information System (INIS)

    Tooley, P.A.

    1992-01-01

    This paper continues the detailed description of the Primary Protection System for Sizewell-B by providing an overview of design and implementation of the software, including the features of the design process which ensure that quality is delivered by the contractor. The Nuclear Electric software assessment activities are also described. The argument for the excellence of the software is made on the basis of a quality product delivered by the equipment supplier's design process, and the confirmation of this provided by the Nuclear Electric assessment process, which is as searching and complete an examination as is reasonably practicable to achieve. (author)

  8. Resilient Software Systems

    Science.gov (United States)

    2015-06-01

    cluster. The cluster resource managers guarantee that this information is available from all CRM instances on all nodes. The CRM also provides information...not used widely in space systems. With the advent of recent developments to adopt many small and cheap fractionated satellites in place of the

  9. SPINning parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-01-01

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes, and the connections among them, are created and destroyed as MPD initializes, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin

  10. Computer systems and software engineering

    Science.gov (United States)

    Mckay, Charles W.

    1988-01-01

    The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.

  11. Experimental research control software system

    International Nuclear Information System (INIS)

    Cohn, I A; Kovalenko, A G; Vystavkin, A N

    2014-01-01

    A software system, intended for the automation of small-scale research, has been developed. The software allows one to control equipment and to acquire and process data by means of simple scripts. The main purpose of the development is to make experiment automation easier, significantly reducing the effort needed to automate an experimental setup. In particular, minimal programming skills are required, and the scripts are easy for supervisors to review. Interactions between scripts and equipment are managed automatically, allowing multiple scripts to run simultaneously. Unlike well-known commercial data acquisition software systems, control is performed through an imperative scripting language. This approach eases the implementation of complex control and data acquisition algorithms. A modular interface library handles interaction with external interfaces. While the most widely used interfaces are already implemented, a simple framework is provided for fast implementation of new software and hardware interfaces. Although the software is under continuous development, with new features still being implemented, it is already used in our laboratory to automate control and data acquisition for a helium-3 cryostat. The software is open source and distributed under the GNU Public License.
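The script-driven control style described above can be sketched as an imperative measurement script run against a simple device interface. The `Instrument` class, its methods, and the sweep values are hypothetical, not part of the system described.

```python
# Sketch of imperative, script-driven instrument control: an experiment is
# a short script against a device interface. All names here are invented.

class Instrument:
    """Stand-in for a hardware interface module."""
    def __init__(self):
        self.setpoint = 0.0
    def set_temperature(self, kelvin):
        self.setpoint = kelvin
    def read_temperature(self):
        # A real driver would query the hardware; here we echo the setpoint.
        return self.setpoint

def run_script(instrument, temperatures):
    """Measurement script: sweep setpoints and log readings."""
    log = []
    for target in temperatures:
        instrument.set_temperature(target)
        log.append((target, instrument.read_temperature()))
    return log

cryostat = Instrument()
data = run_script(cryostat, [4.2, 1.5, 0.3])
print(data)
```

Because each script only calls the interface library, several such scripts can be scheduled concurrently against different instruments without interfering with one another.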

  12. Experimental research control software system

    Science.gov (United States)

    Cohn, I. A.; Kovalenko, A. G.; Vystavkin, A. N.

    2014-05-01

    A software system, intended for the automation of small-scale research, has been developed. The software allows one to control equipment and to acquire and process data by means of simple scripts. The main purpose of the development is to make experiment automation easier, significantly reducing the effort needed to automate an experimental setup. In particular, minimal programming skills are required, and the scripts are easy for supervisors to review. Interactions between scripts and equipment are managed automatically, allowing multiple scripts to run simultaneously. Unlike well-known commercial data acquisition software systems, control is performed through an imperative scripting language. This approach eases the implementation of complex control and data acquisition algorithms. A modular interface library handles interaction with external interfaces. While the most widely used interfaces are already implemented, a simple framework is provided for fast implementation of new software and hardware interfaces. Although the software is under continuous development, with new features still being implemented, it is already used in our laboratory to automate control and data acquisition for a helium-3 cryostat. The software is open source and distributed under the GNU Public License.

  13. Virtual Exercise Training Software System

    Science.gov (United States)

    Vu, L.; Kim, H.; Benson, E.; Amonette, W. E.; Barrera, J.; Perera, J.; Rajulu, S.; Hanson, A.

    2018-01-01

    The purpose of this study was to develop and evaluate a virtual exercise training software system (VETSS) capable of providing real-time instruction and exercise feedback during exploration missions. A resistive exercise instructional system was developed using a Microsoft Kinect depth-camera device, which provides markerless 3-D whole-body motion capture at a small form factor and minimal setup effort. It was hypothesized that subjects using the newly developed instructional software tool would perform the deadlift exercise with more optimal kinematics and consistent technique than those without the instructional software. Following a comprehensive evaluation in the laboratory, the system was deployed for testing and refinement in the NASA Extreme Environment Mission Operations (NEEMO) analog.

  14. Remote Evaluation of the Coherence of Indirect Manipulation Interface Systems For Agent-Mediated Legacy Data

    National Research Council Canada - National Science Library

    Schafer, Joseph

    2000-01-01

    Many information systems depend heavily on distributed legacy data sources. These data sources introduce a number of significant problems, especially when the sources must be combined and displayed to remote users...

  15. Analysis of Architecture Pattern Usage in Legacy System Architecture Documentation

    NARCIS (Netherlands)

    Harrison, Neil B.; Avgeriou, Paris

    2008-01-01

    Architecture patterns are an important tool in architectural design. However, while many architecture patterns have been identified, there is little in-depth understanding of their actual use in software architectures. For instance, there is no overview of how many patterns are used per system or

  16. Packaging of control system software

    International Nuclear Information System (INIS)

    Zagar, K.; Kobal, M.; Saje, N.; Zagar, A.; Sabjan, R.; Di Maio, F.; Stepanov, D.

    2012-01-01

    Control system software consists of several parts - the core of the control system, drivers for integration of devices, configuration for user interfaces, alarm system, etc. Once the software is developed and configured, it must be installed on the computers where it runs. Usually, it is installed on an operating system whose services it needs, and in some cases it dynamically links with the libraries the operating system provides. An operating system can be quite complex itself - for example, a typical Linux distribution consists of several thousand packages. To manage this complexity, we have decided to rely on the Red Hat Package Management system (RPM) to package control system software, and also to ensure it is properly installed (i.e., that dependencies are also installed, and that scripts are run after installation if any additional actions need to be performed). As dozens of RPM packages need to be prepared, we are reducing the amount of effort and improving consistency between packages through a Maven-based infrastructure that assists in packaging (e.g., automated generation of RPM SPEC files, including automated identification of dependencies). So far, we have used it to package EPICS, Control System Studio (CSS) and several device drivers. We perform extensive testing on Red Hat Enterprise Linux 5.5, but we have also verified that packaging works on CentOS and Scientific Linux. In this article, we describe in greater detail the systematic system of packaging we are using, and its particular application for the ITER CODAC Core System. (authors)
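The automated SPEC-file generation mentioned above can be sketched as follows: package metadata and dependencies are collected programmatically and rendered into a skeletal RPM SPEC file. The package names, version, and field values below are illustrative, not the actual ITER CODAC packages.

```python
# Sketch of automated RPM SPEC generation: render package metadata and a
# dependency list into a minimal SPEC file. All field values are invented.

def render_spec(name, version, requires):
    """Render a skeletal RPM SPEC file from package metadata."""
    lines = [
        f"Name: {name}",
        f"Version: {version}",
        "Release: 1%{?dist}",
        f"Summary: {name} control system package",
        "License: GPL",
    ]
    # One Requires: line per dependency lets RPM enforce install order.
    lines += [f"Requires: {dep}" for dep in requires]
    lines += ["", "%description", f"Auto-generated package for {name}."]
    return "\n".join(lines)

spec = render_spec("css-alarm", "3.1.0", ["epics-base", "css-core"])
print(spec)
```

A production infrastructure (such as the Maven-based one described) would additionally derive the dependency list automatically and fill in the %build, %install, and %files sections.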

  17. Software for graphic display systems

    International Nuclear Information System (INIS)

    Karlov, A.A.

    1978-01-01

    In this paper some aspects of graphic display systems are discussed. The design of a display subroutine library is described, with an example, and graphic dialogue software is considered primarily from the point of view of the programmer who uses a high-level language. (Auth.)

  18. Software metrics: Software quality metrics for distributed systems. [reliability engineering

    Science.gov (United States)

    Post, J. V.

    1981-01-01

    Software quality metrics were extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.

  19. The economics of information systems and software

    CERN Document Server

    Veryard, Richard

    2014-01-01

    The Economics of Information Systems and Software focuses on the economic aspects of information systems and software, including advertising, evaluation of information systems, and software maintenance. The book first elaborates on value and values, software business, and scientific information as an economic category. Discussions focus on information products and information services, special economic properties of information, culture and convergence, hardware and software products, materiality and consumption, technological progress, and software flexibility. The text then takes a look at a

  20. Automating software design system DESTA

    Science.gov (United States)

    Lovitsky, Vladimir A.; Pearce, Patricia D.

    1992-01-01

    'DESTA' is the acronym for the Dialogue Evolutionary Synthesizer of Turnkey Algorithms by means of a natural language (Russian or English) functional specification of algorithms or software being developed. DESTA represents the computer-aided and/or automatic artificial intelligence 'forgiving' system which provides users with software tools support for algorithm and/or structured program development. The DESTA system is intended to provide support for the higher levels and earlier stages of engineering design of software in contrast to conventional Computer Aided Design (CAD) systems which provide low level tools for use at a stage when the major planning and structuring decisions have already been taken. DESTA is a knowledge-intensive system. The main features of the knowledge are procedures, functions, modules, operating system commands, batch files, their natural language specifications, and their interlinks. The specific domain for the DESTA system is a high level programming language like Turbo Pascal 6.0. The DESTA system is operational and runs on an IBM PC computer.

  1. Product Engineering Class in the Software Safety Risk Taxonomy for Building Safety-Critical Systems

    Science.gov (United States)

    Hill, Janice; Victor, Daniel

    2008-01-01

    When software safety requirements are imposed on legacy safety-critical systems, retrospective safety cases need to be formulated as part of recertifying the systems for further use, and risks must be documented and managed to give confidence for reusing the systems. The SEI Software Development Risk Taxonomy [4] focuses on general software development issues. It does not, however, cover all the safety risks. The Software Safety Risk Taxonomy [8] was developed to provide a construct for eliciting and categorizing software safety risks in a straightforward manner. In this paper, we present extended work on the taxonomy for safety that incorporates the additional issues inherent in the development and maintenance of safety-critical systems with software. An instrument called a Software Safety Risk Taxonomy Based Questionnaire (TBQ) is generated containing questions addressing each safety attribute in the Software Safety Risk Taxonomy. Software safety risks are surfaced using the new TBQ and then analyzed. In this paper we give the definitions for the specialized Product Engineering Class within the Software Safety Risk Taxonomy. At the end of the paper, we present the tool known as the 'Legacy Systems Risk Database Tool' that is used to collect and analyze the data required to show traceability to a particular safety standard.

  2. THEMIS Data and Software Systems

    Science.gov (United States)

    Goethel, C.; Angelopoulos, V.

    2009-12-01

    THEMIS consists of five spacecraft and 31 ground observatories, including 10 education and public outreach sites. The spacecraft carry a comprehensive suite of particle and field instruments providing measurements with different sampling rates and modes, including survey and burst collection. The distributed array of ground based observatories equipped with 21 all-sky imagers and 31 ground magnetometers provide continuous monitoring of aurora and magnetic field variations from Alaska to Greenland. Data are automatically processed within hours of receipt, stored in daily Common Data Format (CDF) files, plotted and distributed along with corresponding calibration files via a central site at SSL/UCB and several mirror sites worldwide. THEMIS software is an open source, platform independent, IDL-based library of utilities. The system enables downloads of calibrated (L2) or raw (L1) data, data analysis, ingestion of data from other missions and ground stations, and production of publication quality plots. The combination of a user-friendly graphical user interface and a command line interface support a wide range of users. In addition, IDL scripts (crib sheets) are provided for manipulation of THEMIS and ancillary data sets. The system design philosophy will be described along with examples to demonstrate the software capabilities in streamlining data/software distribution and exchange, thereby further enhancing science productivity.

  3. Domain management OSSs: bridging the gap between legacy and standards-based network management systems

    Science.gov (United States)

    Lemley, Todd A.

    1996-11-01

    The rapid change in the telecommunications environment is forcing carriers to re-assess not only their service offerings, but also their network management philosophy. The competitive carrier environment has taken away the luxury of throwing technology at a problem by using legacy and proprietary systems and architectures. A more flexible management environment is necessary to effectively gain, and maintain, operating margins in the new market era. Competitive forces are driving change, which gives carriers more choices than are available in legacy and standards-based solutions alone. However, creating an operational support system (OSS) that spans this gap between legacy and standards has become as dynamic as the services it supports. A philosophy which helps to integrate the legacy and standards systems is domain management. Domain management relates to a specific service or market 'domain,' and its associated operational support requirements. It supports a company's definition of its business model, which drives the definition of each domain. It also attempts to maximize current investment while injecting newly available technology in a practical way. The following paragraphs offer an overview of legacy systems, standards-based philosophy, and the potential of domain management to help bridge the gap between the two types of systems.

  4. Automatic Generation of Machine Emulators: Efficient Synthesis of Robust Virtual Machines for Legacy Software Migration

    DEFF Research Database (Denmark)

    Franz, Michael; Gal, Andreas; Probst, Christian

    2006-01-01

    As older mainframe architectures become obsolete, the corresponding legacy software is increasingly executed via platform emulators running on top of more modern commodity hardware. These emulators are virtual machines that often include a combination of interpreters and just-in-time compilers. Implementing interpreters and compilers for each combination of emulated and target platform independently of each other is a redundant and error-prone task. We describe an alternative approach that automatically synthesizes specialized virtual-machine interpreters and just-in-time compilers, which
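The synthesis idea can be sketched in miniature: describe the legacy instruction set as a specification table, and derive the interpreter's dispatch loop from that table rather than hand-writing it per platform. The three-instruction accumulator machine below is invented for illustration and is far simpler than any real mainframe ISA.

```python
# Sketch of deriving an emulator from a machine specification: the
# instruction set is a table of semantics, and a generic loop interprets it.
# The toy ISA (LOAD/ADD/STORE on an accumulator machine) is invented.

SPEC = {
    "LOAD":  lambda state, arg: state.__setitem__("acc", arg),
    "ADD":   lambda state, arg: state.__setitem__("acc", state["acc"] + arg),
    "STORE": lambda state, arg: state["mem"].__setitem__(arg, state["acc"]),
}

def make_interpreter(spec):
    """Synthesize an interpreter specialized to the given instruction table."""
    def run(program):
        state = {"acc": 0, "mem": {}}
        for opcode, operand in program:
            spec[opcode](state, operand)   # dispatch via the spec table
        return state
    return run

emulate = make_interpreter(SPEC)
final = emulate([("LOAD", 2), ("ADD", 40), ("STORE", 0)])
print(final["mem"][0])
```

In the approach the paper describes, the same machine specification would also drive generation of a just-in-time compiler, so interpreter and compiler cannot drift apart.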

  5. Implementation Challenges of an Enterprise System and Its Advantages over Legacy Systems

    OpenAIRE

    Dr. Nabie Y. Conteh; M. Jalil Akhtar

    2015-01-01

    This paper explores the implementation challenges of Enterprise Resource Planning in the industry and its advantages over legacy systems. The paper depicts the historical background of ERPs and their significance in facilitating coordination among the functional areas of organizations in the industry. It also discusses the role it plays in promoting the activities of Supply Chain Management (SCM) and Customer Relationship Management (CRM). The paper presents empirical data on ERPs and their c...

  6. High Confidence Software and Systems Research Needs

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This White Paper presents a survey of high confidence software and systems research needs. It has been prepared by the High Confidence Software and Systems...

  7. Source Code Vulnerabilities in IoT Software Systems

    Directory of Open Access Journals (Sweden)

    Saleh Mohamed Alnaeli

    2017-08-01

    Full Text Available An empirical study that examines the usage of known vulnerable statements in software systems developed in C/C++ and used for IoT is presented. The study is conducted on 18 open source systems comprising millions of lines of code and thousands of files. Static analysis methods are applied to each system to determine the number of unsafe commands (e.g., strcpy, strcmp, and strlen) that are well known among research communities to cause potential risks and security concerns, thereby decreasing a system's robustness and quality. These unsafe statements are banned by many companies (e.g., Microsoft). The use of these commands should be avoided from the start when writing code and should be removed from legacy code over time, as recommended by new C/C++ language standards. Each system is analyzed and the distribution of the known unsafe commands is presented. Historical trends in the usage of the unsafe commands in 7 of the systems are presented to show how the studied systems evolved over time with respect to the vulnerable code. The results show that the most prevalent unsafe command used in most systems is memcpy, followed by strlen. These results can be used to help train software developers on secure coding practices so that they can write higher quality software systems.
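
    The counting step of such a static scan can be sketched as follows. The regular-expression approach and the sample snippet are illustrative assumptions, not the study's actual tooling or corpus.

```python
# Sketch of the counting step of a static scan for known-unsafe C/C++
# string functions. The function list and sample snippet are illustrative,
# not the study's actual tooling or corpus.
import re
from collections import Counter

UNSAFE_CALLS = ("strcpy", "strcat", "sprintf", "gets", "strcmp", "strlen", "memcpy")

def count_unsafe_calls(source):
    """Count call sites of each unsafe function in C/C++ source text."""
    counts = Counter()
    for name in UNSAFE_CALLS:
        # Require the bare identifier followed by '(' so that safer
        # variants such as strncpy or strcpy_s are not counted.
        counts[name] = len(re.findall(rf"\b{name}\s*\(", source))
    return counts

sample = """
    char buf[16];
    strcpy(buf, input);           /* unsafe: no bounds check      */
    n = strlen(input);            /* unsafe per the study's list  */
    strncpy(buf, input, 15);      /* safer variant, not counted   */
"""
```

    Running such a counter over every file of a system and summing the results yields the per-command distributions the study reports.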

  8. Harvesting software systems for MDA-based reengineering

    NARCIS (Netherlands)

    Reus, T.; Geers, H.; Van Deursen, A.

    2006-01-01

    In this paper we report on a feasibility study in reengineering legacy systems towards a model-driven architecture (MDA). Steps in our approach consist of (1) parsing the source code of the legacy system according to a grammar; (2) mapping the abstract syntax trees thus obtained to a grammar model

  9. A Variability Viewpoint for Enterprise Software Systems

    NARCIS (Netherlands)

    Galster, Matthias; Avgeriou, Paris

    2012-01-01

    Many of today’s enterprise software systems are subject to variability. For example, enterprise software systems often run in different business units of an organization, with each unit having its own detailed requirements. Systematic handling of variability allows a software system to be adjusted

  10. Space Flight Software Development Software for Intelligent System Health Management

    Science.gov (United States)

    Trevino, Luis C.; Crumbley, Tim

    2004-01-01

    The slide presentation examines the Marshall Space Flight Center Flight Software Branch, including software development projects, mission critical space flight software development, software technical insight, advanced software development technologies, and continuous improvement in the software development processes and methods.

  11. Software Quality Assurance for Nuclear Safety Systems

    International Nuclear Information System (INIS)

    Sparkman, D R; Lagdon, R

    2004-01-01

    The US Department of Energy has undertaken an initiative to improve the quality of software used to design and operate its nuclear facilities across the United States. One aspect of this initiative is to revise or create new directives and guides associated with quality practices for the safety software in its nuclear facilities. Safety software includes the software and firmware of safety structures, systems, and components, as well as support software and the design and analysis software used to ensure the safety of the facility. DOE nuclear facilities are unique when compared to commercial nuclear or other industrial activities in terms of the types and quantities of hazards that must be controlled to protect workers, the public and the environment. Because of these differences, DOE must develop an approach to software quality assurance that ensures appropriate risk mitigation by developing a framework of requirements that accomplishes the following goals: (1) ensures that the software processes developed to address nuclear safety in the design, operation, construction and maintenance of its facilities are safe; (2) considers the larger system that uses the software and its impacts; and (3) ensures that software failures do not create unsafe conditions. Software designers for nuclear systems and processes must reduce risks in software applications by incorporating processes that recognize, detect, and mitigate software failure in safety-related systems. They must also ensure that fail-safe modes and component testing are incorporated into software design. For nuclear facilities, the consideration of risk is not necessarily sufficient to ensure safety. Systematic evaluation, independent verification and system safety analysis must be considered for software design, implementation, and operation. The software industry primarily uses risk analysis to determine the appropriate level of rigor applied to software practices. This risk-based approach distinguishes safety

  12. OpenMI: the essential concepts and their implications for legacy software

    Science.gov (United States)

    Gregersen, J. B.; Gijsbers, P. J. A.; Westen, S. J. P.; Blind, M.

    2005-08-01

    Information & Communication Technology (ICT) tools such as computational models are very helpful in designing river basin management plans (RBMPs). However, in the scientific world there is consensus that a single integrated modelling system to support, e.g., the implementation of the Water Framework Directive cannot be developed, and that integrated systems need to be very much tailored to the local situation. As a consequence there is an urgent need to increase the flexibility of modelling systems, such that dedicated model systems can be developed from available building blocks. The HarmonIT project aims at precisely that. Its objective is to develop and implement a standard interface for modelling components and other relevant tools: the Open Modelling Interface (OpenMI) standard. The OpenMI standard has been completed and documented. It relies entirely on the "pull" principle, where data are pulled by one model from the previous model in the chain. This paper gives an overview of the OpenMI standard and explains the foremost concepts and the rationale behind them.
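
    The "pull" principle can be sketched as follows. The class and method names are illustrative stand-ins; the actual OpenMI interfaces (linkable components exchanging values via a GetValues-style call) are considerably richer.

```python
# Sketch of the OpenMI "pull" principle: a downstream model requests values
# from its upstream model at the time it needs them. Class and method names
# are illustrative; the actual OpenMI interfaces are richer.

class RainfallModel:
    def get_values(self, time):
        # Stand-in computation: constant 2.0 mm of rain at any time step.
        return 2.0

class RunoffModel:
    def __init__(self, upstream, runoff_coefficient=0.5):
        self.upstream = upstream
        self.coefficient = runoff_coefficient

    def get_values(self, time):
        # Pull rainfall from the upstream model on demand; no central
        # scheduler pushes data through the chain.
        rainfall = self.upstream.get_values(time)
        return self.coefficient * rainfall

chain = RunoffModel(RainfallModel())
runoff = chain.get_values(time=0.0)   # pulling here drives the whole chain
```

    Because each component only pulls what it needs when it needs it, dedicated model systems can be assembled from independently developed building blocks, which is precisely the flexibility the abstract calls for.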

  13. The Chroma Software System for Lattice QCD

    International Nuclear Information System (INIS)

    Robert Edwards; Balint Joo

    2004-01-01

    We describe aspects of the Chroma software system for lattice QCD calculations. Chroma is an open source C++ based software system developed using the software infrastructure of the US SciDAC initiative. Chroma interfaces with output from the BAGEL assembly generator for optimized lattice fermion kernels on some architectures. It can be run on workstations, clusters and the QCDOC supercomputer

  14. Software Architecture Patterns for System Administration Support

    NARCIS (Netherlands)

    Wiebe Wiersema; Ronald Bijvank; Christian Köppe

    2013-01-01

    Many quality aspects of software systems are addressed in the existing literature on software architecture patterns. But the aspect of system administration seems to be a bit overlooked, even though it is an important aspect too. In this work we present three software architecture patterns that,

  15. Management information systems software evaluation

    International Nuclear Information System (INIS)

    Al-Tunisi, N.; Ghazzawi, A.; Gruyaert, F.; Clarke, D.

    1995-01-01

    In November 1993, Saudi Aramco management endorsed a proposal to coordinate the development of the Management Information Systems (MISs) of four concurrent projects for its facilities Controls Modernization Program. The affected projects were the Ras Tanura Refinery Upgrade Project, the Abqaiq Plant Controls Modernization and the Shedgum and Uthmaniyah Gas Plants Control Upgrade Projects. All of these projects had a significant requirement for MISs in their scope. Under the leadership of the Process and Control Systems Department, an MIS Coordination Team was formed with representatives of several departments. An MIS Applications Evaluation procedure was developed based on the Kepner-Tregoe Decision Analysis Process, and general questionnaires were sent to over a hundred potential vendors. The applications were divided into several categories, such as: Data Capture and Historization, Human User Interface, Trending, Reporting, Graphic Displays, Data Reconciliation, Statistical Analysis, Expert Systems, Maintenance Applications, Document Management and Operations Planning and Scheduling. For each of the MIS application areas, detailed follow-up questionnaires were used to short-list the candidate products. In May and June 1994, selected vendors were invited to Saudi Arabia for an exhibition which was open to all Saudi Aramco employees. In conjunction with this, the vendors were subjected to a rigorous product testing exercise by independent teams of testers. The paper will describe the methods used and the lessons learned in this extensive software evaluation phase, which was a first for Saudi Aramco

  16. Management information systems software evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Al-Tunisi, N.; Ghazzawi, A.; Gruyaert, F.; Clarke, D. [Saudi Aramco, Dhahran (Saudi Arabia). Process and Control Systems Dept.

    1995-11-01

    In November 1993, Saudi Aramco management endorsed a proposal to coordinate the development of the Management Information Systems (MISs) of four concurrent projects for its facilities Controls Modernization Program. The affected projects were the Ras Tanura Refinery Upgrade Project, the Abqaiq Plant Controls Modernization and the Shedgum and Uthmaniyah Gas Plants Control Upgrade Projects. All of these projects had a significant requirement for MISs in their scope. Under the leadership of the Process and Control Systems Department, an MIS Coordination Team was formed with representatives of several departments. An MIS Applications Evaluation procedure was developed based on the Kepner-Tregoe Decision Analysis Process, and general questionnaires were sent to over a hundred potential vendors. The applications were divided into several categories, such as: Data Capture and Historization, Human User Interface, Trending, Reporting, Graphic Displays, Data Reconciliation, Statistical Analysis, Expert Systems, Maintenance Applications, Document Management and Operations Planning and Scheduling. For each of the MIS application areas, detailed follow-up questionnaires were used to short-list the candidate products. In May and June 1994, selected vendors were invited to Saudi Arabia for an exhibition which was open to all Saudi Aramco employees. In conjunction with this, the vendors were subjected to a rigorous product testing exercise by independent teams of testers. The paper will describe the methods used and the lessons learned in this extensive software evaluation phase, which was a first for Saudi Aramco.

  17. The current system of higher education in India inherits the legacy of ...

    Indian Academy of Sciences (India)

    The current system of higher education in India inherits the legacy of colonial proposals and legislations dating back to the early 19th century. The social sciences and humanities still carry disciplinary burdens that need revisiting as we attempt to think of new educational strategies for the 21st century.

  19. Resurrecting Legacy Code Using Ontosoft Knowledge-Sharing and Digital Object Management to Revitalize and Reproduce Software for Groundwater Management Research

    Science.gov (United States)

    Kwon, N.; Gentle, J.; Pierce, S. A.

    2015-12-01

    Software code developed for research is often used for a relatively short period of time before it is abandoned, lost, or becomes outdated. This unintentional abandonment of code is a real problem in 21st-century scientific practice, hindering widespread reusability and increasing the effort needed to develop research software. Potentially important assets, these legacy codes may be resurrected and documented digitally for long-term reuse, often with modest effort. Furthermore, the revived code may be made openly accessible in a public repository for researchers to reuse or improve. For this study, the research team has begun to revive the codebase of the Groundwater Decision Support System (GWDSS), originally developed for participatory decision making to aid urban planning and groundwater management, though it may serve multiple use cases beyond those originally envisioned. GWDSS was designed as a Java-based wrapper with loosely federated commercial and open source components. If successfully revitalized, GWDSS will be useful both for practical applications, as a teaching tool and case study for groundwater management, and for informing theoretical research. Using the knowledge-sharing approaches documented by the NSF-funded Ontosoft project, digital documentation of GWDSS is underway, from conception to development, deployment, characterization, integration, composition, and dissemination through open source communities and geosciences modeling frameworks. Information assets, documentation, and examples are shared using open platforms for data sharing and assigned digital object identifiers. Two instances of GWDSS version 3.0 are being created: 1) a virtual machine instance of the original case study to serve as a live demonstration of the decision support tool, assuring the original version is usable, and 2) an open version of the codebase, executable installation files, and developer guide available via an open repository, assuring the source for the

  20. Software dependability in the Tandem GUARDIAN system

    Science.gov (United States)

    Lee, Inhwan; Iyer, Ravishankar K.

    1995-01-01

    Based on extensive field failure data for Tandem's GUARDIAN operating system, this paper discusses the evaluation of the dependability of operational software. Software faults considered are major defects that result in processor failures and invoke backup processes to take over. The paper categorizes the underlying causes of software failures and evaluates the effectiveness of the process-pair technique in tolerating software faults. A model to describe the impact of software faults on the reliability of an overall system is proposed. The model is used to evaluate the significance of key factors that determine software dependability and to identify areas for improvement. An analysis of the data shows that about 77% of processor failures that are initially considered due to software are confirmed as software problems. The analysis shows that the use of process pairs to provide checkpointing and restart (originally intended for tolerating hardware faults) allows the system to tolerate about 75% of reported software faults that result in processor failures. The loose coupling between processors, which results in the backup execution (the processor state and the sequence of events) being different from the original execution, is a major reason for the measured software fault tolerance. Over two-thirds (72%) of measured software failures are recurrences of previously reported faults. Modeling based on the data shows that, in addition to reducing the number of software faults, software dependability can be enhanced by reducing the recurrence rate.
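
    The checkpointing-and-takeover mechanism the paper evaluates can be caricatured in a few lines. This single-process simulation is only a sketch of the idea; actual Tandem process pairs run on separate processors with message-based checkpointing, and the names below are invented for illustration.

```python
# Caricature of the process-pair technique: the primary checkpoints its
# state after each unit of work; on a failure, the backup takes over from
# the last checkpoint. Real Tandem pairs run on separate processors with
# message-based checkpointing; this is a single-process sketch.
import copy

class Backup:
    def __init__(self):
        self.checkpoint = None

    def take_checkpoint(self, state):
        self.checkpoint = copy.deepcopy(state)   # snapshot, fully decoupled

    def take_over(self):
        return copy.deepcopy(self.checkpoint)

def run_primary(items, backup, fail_at=None):
    """Process items one by one, checkpointing after each; optionally fail."""
    state = {"done": [], "total": 0}
    for i, item in enumerate(items):
        if i == fail_at:
            raise RuntimeError("primary processor failure")
        state["done"].append(item)
        state["total"] += item
        backup.take_checkpoint(state)
    return state

items = [10, 20, 30]
backup = Backup()
try:
    state = run_primary(items, backup, fail_at=2)
except RuntimeError:
    state = backup.take_over()                 # backup resumes here
    for item in items[len(state["done"]):]:    # redo only unfinished work
        state["done"].append(item)
        state["total"] += item
```

    Because the takeover restarts from the checkpointed state rather than replaying the identical execution, many software faults that crashed the primary do not recur in the backup, which is the effect the paper measures.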

  1. The Chroma Software System for Lattice QCD

    International Nuclear Information System (INIS)

    Edwards, Robert G.; Joo, Balint

    2005-01-01

    We describe aspects of the Chroma software for lattice QCD calculations. Chroma is an open source C++ based software system developed using the software infrastructure of the US SciDAC initiative. Chroma interfaces with output from the BAGEL assembly generator for optimised lattice fermion kernels on some architectures. It can be run on workstations, clusters and the QCDOC supercomputer

  2. Software Architecture for Big Data Systems

    Science.gov (United States)

    2014-03-27

    Slide-deck extract: "Software Architecture for Big Data Systems," Carnegie Mellon University (SEI), 2014. The recoverable fragments survey the NoSQL landscape and outline a technology-selection approach: identify the architecturally significant requirements and decision criteria, then evaluate candidate technologies against quality...

  3. Creation and implementation of the international information system for radiation legacy of the USSR 'RADLEG'

    International Nuclear Information System (INIS)

    Iskra, A.A.

    2002-01-01

    Addressing the radiological problems of the radiation legacy of the Soviet and Russian military and civil nuclear fuel cycle programs became possible after the end of the Cold War. The objective of the RADLEG project is the 'development of a sophisticated computer-based data system for evaluation of the radiation legacy of the former USSR and setting priorities on remediation and prevention policy'. The goal of Phase 1 of the RADLEG project was the creation of a simple operational database, linked to a GIS, describing currently available information on the radiation legacy of the former USSR. During Phase 2 a publicly accessible database linked to the GIS was developed. This GIS data system, containing comprehensive information on the radiation legacy of the former Soviet Union, has been developed to aid policy makers in two principal areas: to identify and set priorities on radiation safety problems, and to provide guidance for the development of technically, economically and socially sound policies to reduce the health and environmental impact of radioactively contaminated sites. (author)

  4. Honeywell Modular Automation System Computer Software Documentation

    International Nuclear Information System (INIS)

    STUBBS, A.M.

    2000-01-01

    The purpose of this Computer Software Document (CSWD) is to provide configuration control of the Honeywell Modular Automation System (MAS) in use at the Plutonium Finishing Plant (PFP). This CSWD describes the hardware and the PFP-developed software for control of stabilization furnaces. The Honeywell software can generate configuration reports for the developed control software. These reports are described in the following section and are attached as addenda. This plan applies to the PFP Engineering Manager, Thermal Stabilization Cognizant Engineers, and the Shift Technical Advisors responsible for the Honeywell MAS software/hardware and administration of the Honeywell system

  5. A Software Development Platform for Mechatronic Systems

    DEFF Research Database (Denmark)

    Guan, Wei

    Software has become increasingly determinative for the development of mechatronic systems, which underscores the importance of demands for shortened time-to-market, increased productivity, higher quality, and improved dependability. As the complexity of systems is dramatically increasing, these demands present a challenge to practitioners who adopt a conventional software development approach. An effective approach towards industrial production of software for mechatronic systems is needed. This approach requires a disciplined engineering process that encompasses model-driven engineering and component-based software engineering, whereby we enable incremental software development using component models to address the essential design issues of real-time embedded systems. To this end, this dissertation presents a software development platform that provides an incremental model-driven development process based...

  6. Veterans Affairs Information Technology: Management Attention Needed to Improve Critical System Modernizations, Consolidate Data Centers, and Retire Legacy Systems

    Science.gov (United States)

    2017-02-07

    Department of Veterans Affairs, Volume 1: Integrated Report (Washington, D.C.: Sept. 1, 2015). This assessment was conducted in response to a requirement in the Veterans...

  7. Getting Objects Methods and Interactions by Extracting Business Rules from Legacy Systems

    Directory of Open Access Journals (Sweden)

    Omar El Beggar

    2014-08-01

    Full Text Available The maintenance of legacy systems becomes extremely complex and highly expensive over the years due to the incessant changes in company activities and policies. In this case, a new or an improved system must replace the previous one. However, replacing those systems completely from scratch is also very expensive and represents a huge risk. The optimal scenario is evolving those systems by profiting from the valuable knowledge embedded in them. This paper aims to present an approach for knowledge acquisition from existing legacy systems by extracting business rules from source code. In fact, the business rules are extracted and assigned to the corresponding domain entities in order to generate object methods and interactions in an object-oriented platform. Furthermore, a translation of the rules into natural language is given. The aim is to advance a solution for re-engineering legacy systems, minimize the cost of their modernization, and keep the gap between the company's business and the renovated systems very small.
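
    One simple form of such rule extraction, conditionals mined from an abstract syntax tree and rendered as IF-THEN sentences, can be sketched as follows. Python's ast module stands in for the legacy-language parser such an approach would require; the sample code and rule wording are illustrative assumptions, not the paper's method verbatim.

```python
# Sketch of mining candidate business rules from code: walk the AST and
# render each conditional as an IF-THEN sentence. Python's ast module
# stands in for a legacy-language parser; the snippet and rule format
# are illustrative assumptions.
import ast

def extract_rules(source):
    """Return one 'IF <condition> THEN <first action>' line per conditional."""
    rules = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.If):
            condition = ast.unparse(node.test)
            action = ast.unparse(node.body[0])
            rules.append(f"IF {condition} THEN {action}")
    return rules

legacy_like = """
if order_total > 1000:
    discount = 0.1
if customer_age < 18:
    reject_order()
"""
```

    Rendering the mined rules in natural language, as here, lets domain experts validate them before they are assigned to domain entities as object methods.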

  8. A NEW EXHAUST VENTILATION SYSTEM DESIGN SOFTWARE

    Directory of Open Access Journals (Sweden)

    H. Asilian Mahabady

    2007-09-01

    Full Text Available A Microsoft Windows based ventilation software package was developed to reduce the time-consuming and tedious procedure of exhaust ventilation system design. The program assures accurate and reliable air-pollution-control-related calculations. Herein, the package is tentatively named Exhaust Ventilation Design Software; it was developed in the VB6 programming environment. The most important features of Exhaust Ventilation Design Software that are ignored in formerly developed packages are collector design and fan dimension data calculations. Automatic system balancing is another feature of this package. The Exhaust Ventilation Design Software design algorithm is based on two methods: balance by design (static pressure balance) and design by blast gate. The most important section of the software is a spreadsheet designed based on the American Conference of Governmental Industrial Hygienists calculation sheets. Exhaust Ventilation Design Software is developed so that engineers familiar with the American Conference of Governmental Industrial Hygienists datasheet can easily employ it for ventilation system design. Other sections include a collector design section (settling chamber, cyclone, and packed tower), a fan geometry and dimension data section, a unit converter section (that helps engineers deal with units), a hood design section and a Persian HTML help. Psychrometric correction is also considered in Exhaust Ventilation Design Software. In the Exhaust Ventilation Design Software design process, efforts were focused on improving the GUI (graphical user interface) and on the use of programming standards in software design. The reliability of the software has been evaluated and results show acceptable accuracy.
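
    The static-pressure-balance step such a package automates can be illustrated with the flow-correction formula commonly used in the ACGIH design method: where two branches meet, the flow in the branch with the lower static pressure loss is increased by the square root of the pressure ratio. The function below is a sketch under that assumption; the numbers in the example are made up for illustration.

```python
# Sketch of the "balance by design" (static pressure balance) correction:
# where two branches meet, the branch with the lower static pressure loss
# has its design flow increased so both paths reach the junction at equal
# static pressure. Formula per the common ACGIH design method; example
# numbers are made up for illustration.
import math

def balance_branch(q_design, sp_branch, sp_governing):
    """Corrected volume flow for the lower-loss branch.

    q_design      design flow of the lower-loss branch
    sp_branch     its calculated static pressure loss at the junction
    sp_governing  the higher loss of the two merging paths
    """
    if sp_governing < sp_branch:
        raise ValueError("sp_governing must be the larger loss")
    return q_design * math.sqrt(sp_governing / sp_branch)

# Branch loses 400 Pa, governing path loses 484 Pa -> raise flow by 10%.
corrected = balance_branch(1.0, sp_branch=400.0, sp_governing=484.0)
```

    Applying this correction at every junction, working from the branches toward the fan, is what "automatic system balance" amounts to in a balance-by-design tool; the blast-gate method instead leaves the flows fixed and balances with dampers.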

  9. Expert System Software Assistant for Payload Operations

    Science.gov (United States)

    Rogers, Mark N.

    1997-01-01

    The broad objective of this expert-system-based software application was to demonstrate the enhancements and cost savings that can be achieved through expert system software utilization in a spacecraft ground control center. Spacelab provided a valuable proving ground for this advanced software technology, a technology that will be exploited and expanded for future ISS operations. Our specific focus was on demonstrating payload cadre command and control efficiency improvements through the use of "smart" software which monitors flight telemetry, provides enhanced schematic-based data visualization, and performs advanced engineering data analysis.

  10. Tools for Embedded Computing Systems Software

    Science.gov (United States)

    1978-01-01

    A workshop was held to assess the state of tools for embedded systems software and to determine directions for tool development. A synopsis of each workshop presentation and its key figures, together with the chairmen's summaries, are presented. The presentations covered four major areas: (1) tools and the software environment (development and testing); (2) tools and software requirements, design, and specification; (3) tools and language processors; and (4) tools and verification and validation (analysis and testing). The utility and contribution of existing tools and research results for the development and testing of embedded computing systems software are described and assessed.

  11. An Analysis of Electronic Commerce Acquisition Systems: Comparison of a New Pure Electronic Purchasing and Exchange System (Electronic Storefront) and Other Legacy On-line Purchasing Systems

    National Research Council Canada - National Science Library

    Rowe, Arthur

    2002-01-01

    ... as they relate to contracting and purchasing of supplies and services. The issues and concerns with legacy on-line procurement systems will be compared to a newly developed Pure Electronic Ordering System...

  12. Achieving Critical System Survivability Through Software Architectures

    National Research Council Canada - National Science Library

    Knight, John C; Strunk, Elisabeth A

    2006-01-01

    .... In a system with a survivability architecture, under adverse conditions such as system damage or software failures, some desirable function will be eliminated but critical services will be retained...

  13. Gas characterization system software acceptance test report

    International Nuclear Information System (INIS)

    Vo, C.V.

    1996-01-01

    This document details the results of software acceptance testing of gas characterization systems. The gas characterization systems will be used to monitor the vapor spaces of waste tanks known to contain measurable concentrations of flammable gases

  14. Software Reliability Issues Concerning Large and Safety Critical Software Systems

    Science.gov (United States)

    Kamel, Khaled; Brown, Barbara

    1996-01-01

    This research was undertaken to provide NASA with a survey of state-of-the-art techniques used in industry and academia to provide safe, reliable, and maintainable software to drive large systems. Such systems must match the complexity and strict safety requirements of NASA's shuttle system. In particular, the Launch Processing System (LPS) is being considered for replacement. The LPS is responsible for monitoring and commanding the shuttle during test, repair, and launch phases. NASA built this system in the 1970s using mostly hardware techniques to provide for increased reliability, but it did so often using custom-built equipment, which has not been able to keep up with current technologies. This report surveys the major techniques used in industry and academia to ensure reliability in large and critical computer systems.

  15. Software engineering practices for control system reliability

    International Nuclear Information System (INIS)

    S. K. Schaffner; K. S White

    1999-01-01

    This paper will discuss software engineering practices used to improve control system reliability. The authors begin with a brief discussion of the Software Engineering Institute's Capability Maturity Model (CMM), which is a framework for evaluating and improving key practices used to enhance software development and maintenance capabilities. The software engineering processes developed and used by the Controls Group at the Thomas Jefferson National Accelerator Facility (Jefferson Lab), using the Experimental Physics and Industrial Control System (EPICS) for accelerator control, are described. Examples are given of how these procedures have been used to minimize control system downtime and improve reliability. While the examples are primarily drawn from experience with EPICS, these practices are equally applicable to any control system. Specific issues addressed include resource allocation, developing reliable software lifecycle processes and risk management

  16. Requirements engineering for software and systems

    CERN Document Server

    Laplante, Phillip A

    2014-01-01

    Solid requirements engineering has increasingly been recognized as the key to improved, on-time and on-budget delivery of software and systems projects. This book provides practical teaching for graduate and professional systems and software engineers. It uses extensive case studies and exercises to help students grasp concepts and techniques. With a focus on software-intensive systems, this text provides a probing and comprehensive review of recent developments in intelligent systems, soft computing techniques, and their diverse applications in manufacturing. The second edition contains 100% revised content and approximately 30% new material

  17. Identifying dependability requirements for space software systems

    Directory of Open Access Journals (Sweden)

    Edgar Toshiro Yano

    2010-09-01

    Full Text Available Computer systems are increasingly used in space, whether in launch vehicles, satellites, ground support or payload systems. Software applications used in these systems have become more complex, mainly due to the high number of features to be met, thus contributing to a greater probability of hazards related to software faults. Therefore, it is fundamental that the requirements specification activity has a decisive role in the effort to obtain systems with high quality and safety standards. In critical systems like the embedded software of the Brazilian Satellite Launcher, ambiguity, incompleteness, and lack of good requirements can cause serious accidents with economic, material and human losses. One way to assure quality, with safety, reliability and other dependability attributes, may be the use of safety analysis techniques during the initial phases of the project in order to identify the most adequate dependability requirements to minimize possible fault or failure occurrences during the subsequent phases. This paper presents a structured software dependability requirements analysis process that uses system software requirement specifications and traditional safety analysis techniques. The main goal of the process is to help identify a set of essential software dependability requirements which can be added to the software requirements previously specified for the system. The final results are more complete, consistent, and reliable specifications.

  18. The use of unmanned aerial systems for the mapping of legacy uranium mines.

    Science.gov (United States)

    Martin, P G; Payton, O D; Fardoulis, J S; Richards, D A; Scott, T B

    2015-05-01

    Historical mining of uranium mineral veins within Cornwall, England, has resulted in a significant amount of legacy radiological contamination spread across numerous long disused mining sites. Factors including the poorly documented and aged condition of these sites as well as the highly localised nature of radioactivity limit the success of traditional survey methods. A newly developed terrain-independent unmanned aerial system [UAS] carrying an integrated gamma radiation mapping unit was used for the radiological characterisation of a single legacy mining site. Using this instrument to produce high-spatial-resolution maps, it was possible to determine the radiologically contaminated land areas and to rapidly identify and quantify the degree of contamination and its isotopic nature. The instrument was demonstrated to be a viable tool for the characterisation of similar sites worldwide. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Software quality assurance: in large scale and complex software-intensive systems

    NARCIS (Netherlands)

    Mistrik, I.; Soley, R.; Ali, N.; Grundy, J.; Tekinerdogan, B.

    2015-01-01

    Software Quality Assurance in Large Scale and Complex Software-intensive Systems presents novel, high-quality research approaches that relate the quality of software architecture to system requirements, system architecture and enterprise architecture, or software testing. Modern software

  20. REDLetr: Workflow and tools to support the migration of legacy clinical data capture systems to REDCap.

    Science.gov (United States)

    Dunn, William D; Cobb, Jake; Levey, Allan I; Gutman, David A

    2016-09-01

    A memory clinic at an academic medical center has relied on several ad hoc data capture systems, including Microsoft Access and Excel, for cognitive assessments over the last several years. However, these solutions are challenging to maintain and limit the potential for hypothesis-driven or longitudinal research. REDCap, a secure web application based on PHP and MySQL, is a practical solution for improving data capture and organization. Here, we present a workflow and toolset to facilitate legacy data migration and real-time clinical research data collection into REDCap, as well as the challenges encountered. Legacy data consisted of neuropsychological tests stored in over 4000 Excel workbooks. Functions for data extraction, norm scoring, conversion to REDCap-compatible formats, access to the REDCap API, and clinical report generation were developed and executed in Python. Over 400 unique data points for each workbook were migrated and integrated into our REDCap database. Moving forward, our REDCap-based system replaces the Excel-based data collection method and eases integration into the standard clinical research workflow and the Electronic Health Record. In the age of growing data, efficient organization and storage of clinical and research data are critical for advancing research and providing efficient patient care. We believe that the workflow and tools described in this work to promote legacy data integration as well as real-time data collection into REDCap ultimately facilitate these goals. Published by Elsevier Ireland Ltd.
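    The record above describes Python functions for norm scoring and conversion of legacy rows into REDCap-compatible records. A minimal sketch of what such a step might look like follows; the field names, norm table, and record layout are hypothetical placeholders, not the memory clinic's actual schema.

```python
import json

# Hypothetical norm table: (mean, standard deviation) per neuropsychological test.
NORMS = {"moca_total": (26.0, 2.5), "trails_b_sec": (75.0, 30.0)}

def norm_score(field, raw):
    """Convert a raw score to a z-score using the (assumed) norm table."""
    mean, sd = NORMS[field]
    return round((raw - mean) / sd, 2)

def to_redcap_record(record_id, legacy_row):
    """Build one flat, REDCap-compatible record with raw and normed values."""
    record = {"record_id": record_id}
    for field, raw in legacy_row.items():
        record[field] = raw
        if field in NORMS:
            record[field + "_z"] = norm_score(field, raw)
    return record

# REDCap's record-import API accepts a JSON array of flat records.
payload_data = json.dumps([to_redcap_record("1001", {"moca_total": 21})])
```

    A JSON array of this shape is what REDCap's record-import endpoint expects in the `data` parameter of an API request; authentication and the actual HTTP call are omitted here.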

  1. Trend Monitoring System (TMS) graphics software

    Science.gov (United States)

    Brown, J. S.

    1979-01-01

    A prototype bus communications system, which is being used to support the Trend Monitoring System (TMS) and to evaluate the bus concept, is considered. A set of FORTRAN-callable graphics subroutines for the host MODCOMP computer, and an approach to splitting graphics work between the host and the system's intelligent graphics terminals, are described. The graphics software in the MODCOMP and the operating software package written for the graphics terminals are included.

  2. Software Development Standard for Mission Critical Systems

    Science.gov (United States)

    2014-03-17

    Aerospace Report No. TR-RS-2015-00012: Software Development Standard for Mission Critical Systems, March 17, 2014; Richard J. Adams, Suellen ...; Contract FA8802-14-C-0001. The standard covers, among other topics, system requirements definition based on the analysis of user needs, the operational concepts, and other...

  3. Software design for resilient computer systems

    CERN Document Server

    Schagaev, Igor

    2016-01-01

    This book addresses the question of how system software should be designed to account for faults, and which fault tolerance features it should provide for highest reliability. The authors first show how the system software interacts with the hardware to tolerate faults. They analyze and further develop the theory of fault tolerance to understand the different ways to increase the reliability of a system, with special attention on the role of system software in this process. They further develop the general algorithm of fault tolerance (GAFT) with its three main processes: hardware checking, preparation for recovery, and the recovery procedure. For each of the three processes, they analyze the requirements and properties theoretically and give possible implementation scenarios and system software support required. Based on the theoretical results, the authors derive an Oberon-based programming language with direct support of the three processes of GAFT. In the last part of this book, they introduce a simulator...

  4. Enterprise Framework for the Disciplined Evolution of Legacy Systems

    National Research Council Canada - National Science Library

    Bergey, John

    1997-01-01

    .... This report describes an enterprise framework that characterizes the global environment in which system evolution takes place and provides insight into the activities, processes, and work products...

  5. Migrating Legacy Systems in the Global Merger & Acquisition Environment

    Science.gov (United States)

    Katerattanakul, Pairin; Kam, Hwee-Joo; Lee, James J.; Hong, Soongoo

    2009-01-01

    The MetaFrame system migration project at WorldPharma, while driven by merger and acquisition, had faced complexities caused by both technical challenges and organizational issues in the climate of uncertainties. However, WorldPharma still insisted on instigating this post-merger system migration project. This project served to (1) consolidate the…

  6. Driver education program status report : software system.

    Science.gov (United States)

    1981-01-01

    In April of 1980, a joint decision between Research Council personnel and representatives of the Department of Education was reached, and a project was undertaken by the Research Council to provide a software system to process the annual Driver Educa...

  7. Coordination Approaches for Complex Software Systems

    NARCIS (Netherlands)

    Bosse, T.; Hoogendoorn, M.; Treur, J.

    2006-01-01

    This document presents the results of a collaboration between the Vrije Universiteit Amsterdam, Department of Artificial Intelligence and Force Vision to investigate coordination approaches for complex software systems. The project was funded by Force Vision.

  8. Assessing Resistance to Change During Shifting from Legacy to Open Web-Based Systems in the Air Transport Industry

    Science.gov (United States)

    Brewer, Denise

    The air transport industry (ATI) is a dynamic, communal, international, and intercultural environment in which the daily operations of airlines, airports, and service providers are dependent on information technology (IT). Many of the IT legacy systems are more than 30 years old, and current regulations and the globally distributed workplace have brought profound changes to the way the ATI community interacts. The purpose of the study was to identify the areas of resistance to change in the ATI community and the corresponding factors in change management requirements that minimize product development delays and lead to a successful and timely shift from legacy to open web-based systems in upgrading ATI operations. The research questions centered on product development team processes as well as the members' perceived need for acceptance of change. A qualitative case study approach rooted in complexity theory was employed, using a single case of an intercultural product development team dispersed globally. Qualitative data gathered from questionnaires were organized using NVivo software, which coded the words and themes. Once coded, themes emerged identifying the areas of resistance within the product development team. Results of follow-up interviews with team members suggest that intercultural relationship building prior to and during project execution, focus on common team goals, and development of relationships to enhance interpersonal respect, understanding and overall communication help overcome resistance to change. Positive social change in the form of intercultural group effectiveness, evidenced in increased team functioning during major project transitions, is likely to result when global managers devote time to cultural understanding.

  9. Corruption in Russia - Historic Legacy and Systemic Nature

    OpenAIRE

    Schulze, Günther G.; Zakharov, Nikita

    2018-01-01

    This paper argues that corruption in Russia is systemic in nature. Low wage levels of public officials provide strong incentives to engage in corruption. As corruption is illegal, corrupt officials can be exposed any time, which enforces loyalty towards the powers that be; thus corruption is a method of governance. We trace the systemic corruption back to the Mongolian empire and demonstrate its persistence to the current regime. We show the geographic distribution of contemporary corruption ...

  10. Architecting Fault-Tolerant Software Systems

    NARCIS (Netherlands)

    Sözer, Hasan

    2009-01-01

    The increasing size and complexity of software systems makes it hard to prevent or remove all possible faults. Faults that remain in the system can eventually lead to a system failure. Fault tolerance techniques are introduced for enabling systems to recover and continue operation when they are

  11. DCE and Legacy Systems - An Experience Report What Really Happened

    Science.gov (United States)

    Diehl, J.; Parlier, R.; Graham, T.

    1994-01-01

    The Multimission Ground Data System (MGDS) in use at the Jet Propulsion Laboratory (JPL) was developed in the latter half of the 1980s. It was a major departure from the one-of-a-kind, non-distributed ground data system previously employed. Today, a project is underway to determine if the Distributed Computing Environment (DCE) has a place in MGDS. The initial component targeted for replacement is an application layer built on top of TCP/IP which handles the MGDS message-passing requirements.

  12. Adaptive intrusion data system (AIDS) software routines

    International Nuclear Information System (INIS)

    Corlis, N.E.

    1980-07-01

    An Adaptive Intrusion Data System (AIDS) was developed to collect information from intrusion alarm sensors as part of an evaluation system to improve sensor performance. AIDS is a unique digital data-compression, storage, and formatting system; it also incorporates a capability for video selection and recording for assessment of the sensors monitored by the system. The system is software reprogrammable to numerous configurations that may be used for the collection of environmental, bilevel, analog, and video data. This report describes the software routines that control the different AIDS data-collection modes, the diagnostic programs to test the operating hardware, and the data format. Sample data printouts are also included

  13. Re-Engineering Complex Legacy Systems at NASA

    Science.gov (United States)

    Ruszkowski, James; Meshkat, Leila

    2010-01-01

    The Flight Production Process (FPP) Re-engineering project has established a Model-Based Systems Engineering (MBSE) methodology and the technological infrastructure for the design and development of a reference, product-line architecture as well as an integrated workflow model for the Mission Operations System (MOS) for human space exploration missions at NASA Johnson Space Center. The design and architectural artifacts have been developed based on the expertise and knowledge of numerous Subject Matter Experts (SMEs). The technological infrastructure developed by the FPP Re-engineering project has enabled the structured collection and integration of this knowledge and further provides simulation and analysis capabilities for optimization purposes. A key strength of this strategy has been the judicious combination of COTS products with custom coding. The lean management approach that has led to the success of this project is based on having a strong vision for the whole lifecycle of the project and its progress over time, a goal-based design and development approach, a small team of highly specialized people in areas that are critical to the project, and an interactive approach for infusing new technologies into existing processes. This project, which has had a relatively small amount of funding, is on the cutting edge with respect to the utilization of model-based design and systems engineering. An overarching challenge that was overcome by this project was to convince upper management of the needs and merits of giving up more conventional design methodologies (such as paper-based documents and unwieldy and unstructured flow diagrams and schedules) in favor of advanced model-based systems engineering approaches.

  14. Importance Of Penetration Testing For Legacy Operating System

    OpenAIRE

    Poorvi Bhatt

    2017-01-01

    Penetration testing is a very important technique for finding vulnerabilities in commercial networks. There are various techniques for ethical hacking via penetration testing. This report explains a white hat hacker approach to penetration testing. I have performed this test on a private network where three PCs are connected through a LAN via a switch, without a firewall. This network is not connected to the Internet. All the PCs run the Windows operating system. The attacker host has Windows Server 2003 wi...

  15. Importance Of Penetration Testing For Legacy Operating System

    Directory of Open Access Journals (Sweden)

    Poorvi Bhatt

    2017-12-01

    Full Text Available Penetration testing is a very important technique for finding vulnerabilities in commercial networks. There are various techniques for ethical hacking via penetration testing. This report explains a white hat hacker approach to penetration testing. I have performed this test on a private network where three PCs are connected through a LAN via a switch, without a firewall. This network is not connected to the Internet. All the PCs run the Windows operating system. The attacker host has Windows Server 2003 with Service Pack 1, the second host has Windows XP with Service Pack 2, and the third host has Windows 2000 with Service Pack 4.

  16. Knowledge systems and the colonial legacies in African science education

    Science.gov (United States)

    Ziegler, John R.; Lehner, Edward

    2017-10-01

    This review surveys Femi Otulaja and Meshach Ogunniyi's, Handbook of research in science education in sub-Saharan Africa, Sense, Rotterdam, 2017, noting the significance of the theoretically rich content and how this book contributes to the field of education as well as to the humanities more broadly. The volume usefully outlines the ways in which science education and scholarship in sub-Saharan Africa continue to be impacted by the region's colonial history. Several of the chapters also enumerate proposals for teaching and learning science and strengthening academic exchange. Concerns that recur across many of the chapters include inadequate implementation of reforms; a lack of resources, such as for classroom materials and teacher training; and the continued and detrimental linguistic, financial, and ideological domination of African science education by the West. After a brief overview of the work and its central issues, this review closely examines two salient chapters that focus on scholarly communications and culturally responsive pedagogy. The scholarly communication section addresses the ways in which African science education research may in fact be too closely mirroring Western knowledge constructions without fully integrating indigenous knowledge systems in the research process. The chapter on pedagogy makes a similar argument for integrating Western and indigenous knowledge systems into teaching approaches.

  17. Software Management in the LHCb Online System

    CERN Document Server

    Neufeld, N; Brarda, L; Closier, J; Moine, G; Degaudenzi, H

    2009-01-01

    LHCb has a large online IT infrastructure with thousands of servers and embedded systems, network routers and switches, databases and storage appliances. These systems run a large number of different applications on various operating systems. The dominant operating systems are Linux and MS-Windows. This large heterogeneous environment, operated by a small number of administrators, requires that new software or updates can be pushed quickly, reliably and in a manner as automated as possible. We present here the general design of LHCb's software management along with the main tools, LinuxFC/Quattor and Microsoft SMS, describe how they have been adapted and integrated, and discuss experiences and problems.

  18. MPS [Multiparticle Spectrometer] data acquisition software system

    International Nuclear Information System (INIS)

    Saulys, A.C.; Etkin, A.; Foley, K.J.

    1989-01-01

    A description of the software for a FASTBUS based data acquisition system in use at the Brookhaven National Laboratory Multiparticle Spectrometer is presented. Data reading and formatting is done by the SLAC Scanner Processors (SSP's) resident in the FASTBUS system. A multiprocess software system on VAX computers is used to communicate with the SSP's, record the data, and monitor on-line the progress of high energy and heavy ion experiments. The structure and the performance of this system are discussed. 4 refs., 1 fig

  19. Concept of software interface for BCI systems

    Science.gov (United States)

    Svejda, Jaromir; Zak, Roman; Jasek, Roman

    2016-06-01

    Brain Computer Interface (BCI) technology is intended to control an external system by brain activity. One of the main parts of such a system is the software interface, which is responsible for clear communication between the brain and either the computer or additional devices connected to the computer. This paper is organized as follows. Firstly, current knowledge about the human brain is briefly summarized to point out its complexity. Secondly, a concept of a BCI system is described, which is then used to build an architecture for the proposed software interface. Finally, disadvantages of the sensing technology discovered during the sensing part of our research are mentioned.

  20. Verification and validation of control system software

    International Nuclear Information System (INIS)

    Munro, J.K. Jr.; Kisner, R.A.; Bhadtt, S.C.

    1991-01-01

    The following guidelines are proposed for verification and validation (V&V) of nuclear power plant control system software: (a) use risk management to decide what and how much V&V is needed; (b) classify each software application using a scheme that reflects what type and how much V&V is needed; (c) maintain a set of reference documents with current information about each application; (d) use Program Inspection as the initial basic verification method; and (e) establish a deficiencies log for each software application. The following additional practices are strongly recommended: (a) use a computer-based configuration management system to track all aspects of development and maintenance; (b) establish reference baselines of the software, associated reference documents, and development tools at regular intervals during development; (c) use object-oriented design and programming to promote greater software reliability and reuse; (d) provide a copy of the software development environment as part of the package of deliverables; and (e) initiate an effort to use formal methods for preparation of Technical Specifications. The paper provides background information and reasons for the guidelines and recommendations. 3 figs., 3 tabs
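    Guidelines (a) and (b) above call for risk-driven classification of each application. A toy sketch of how such a scheme might be encoded follows; the thresholds, class names, and activity lists are illustrative assumptions, not taken from the paper.

```python
def classify(severity, likelihood):
    """Assign a risk class from failure severity and likelihood, each rated 1-5."""
    risk = severity * likelihood
    if risk >= 15:
        return "safety_critical"
    return "operational" if risk >= 6 else "support"

# The required V&V activity set grows with the risk class (illustrative lists only).
VV_ACTIVITIES = {
    "safety_critical": ["program_inspection", "independent_testing",
                        "formal_specification_review", "deficiency_log"],
    "operational": ["program_inspection", "regression_testing",
                    "deficiency_log"],
    "support": ["program_inspection", "deficiency_log"],
}

# A reactor control loop (high severity, moderate likelihood) gets the full set.
activities = VV_ACTIVITIES[classify(severity=5, likelihood=4)]
```

    Encoding the scheme as data rather than prose makes guideline (a) auditable: each application's classification and resulting V&V scope can be logged alongside the deficiencies log of guideline (e).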

  1. Software fault tolerance in computer operating systems

    Science.gov (United States)

    Iyer, Ravishankar K.; Lee, Inhwan

    1994-01-01

    This chapter provides data and analysis of the dependability and fault tolerance for three operating systems: the Tandem/GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Based on measurements from these systems, basic software error characteristics are investigated. Fault tolerance in operating systems resulting from the use of process pairs and recovery routines is evaluated. Two levels of models are developed to analyze error and recovery processes inside an operating system and interactions among multiple instances of an operating system running in a distributed environment. The measurements show that the use of process pairs in Tandem systems, which was originally intended for tolerating hardware faults, allows the system to tolerate about 70% of defects in system software that result in processor failures. The loose coupling between processors which results in the backup execution (the processor state and the sequence of events occurring) being different from the original execution is a major reason for the measured software fault tolerance. The IBM/MVS system fault tolerance almost doubles when recovery routines are provided, in comparison to the case in which no recovery routines are available. However, even when recovery routines are provided, there is almost a 50% chance of system failure when critical system jobs are involved.

  2. Building Blocks for Control System Software

    NARCIS (Netherlands)

    Broenink, Johannes F.; Hilderink, G.H.; Amerongen van, J.; Jonker, B.; Regtien, P.P.L

    2001-01-01

    Software implementation of control laws for industrial systems seems straightforward, but it is not. The computer code stemming from the control laws is mostly not more than 10 to 30% of the total. A building-block approach for embedded control system development is advocated to enable a fast and

  3. Honeywell Modular Automation System Computer Software Documentation

    International Nuclear Information System (INIS)

    CUNNINGHAM, L.T.

    1999-01-01

    This document provides a Computer Software Documentation for a new Honeywell Modular Automation System (MAS) being installed in the Plutonium Finishing Plant (PFP). This system will be used to control new thermal stabilization furnaces in HA-211 and vertical denitration calciner in HC-230C-2

  4. Agile: From Software to Mission Systems

    Science.gov (United States)

    Trimble, Jay; Shirley, Mark; Hobart, Sarah

    2017-01-01

    To maximize efficiency and flexibility in Mission Operations System (MOS) design, we are evolving principles from agile and lean methods for software, to the complete mission system. This allows for reduced operational risk at reduced cost, and achieves a more effective design through early integration of operations into mission system engineering and flight system design. The core principles are assessment of capability through demonstration, risk reduction through targeted experiments, early test and deployment, and maturation of processes and tools through use.

  5. Software quality assessment for health care systems.

    Science.gov (United States)

    Braccini, G; Fabbrini, F; Fusani, M

    1997-01-01

    The problem of defining a quality model to be used in the evaluation of the software components of a Health Care System (HCS) is addressed. The model, based on the ISO/IEC 9126 standard, has been interpreted to fit the requirements of some classes of applications representative of Health Care Systems, on the basis of the experience gained both in the field of medical Informatics and assessment of software products. The values resulting from weighing the quality characteristics according to their criticality outline a set of quality profiles that can be used both for evaluation and certification.
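    The weighting of quality characteristics by criticality described above can be sketched as follows; the rating scale, the weights, and the reduction to the six top-level ISO/IEC 9126 characteristics are illustrative assumptions rather than the paper's actual values.

```python
# The six top-level ISO/IEC 9126 quality characteristics.
ISO9126 = ["functionality", "reliability", "usability",
           "efficiency", "maintainability", "portability"]

def quality_profile(ratings, weights):
    """Weighted score per characteristic and a weight-normalized overall score."""
    profile = {c: ratings[c] * weights[c] for c in ISO9126}
    overall = sum(profile.values()) / sum(weights.values())
    return profile, overall

# A safety-relevant clinical module might weight reliability most heavily.
ratings = {c: 3 for c in ISO9126}        # each characteristic rated 0-4 (assumed scale)
weights = dict.fromkeys(ISO9126, 1.0)
weights["reliability"] = 3.0             # criticality-driven weight
profile, overall = quality_profile(ratings, weights)
```

    Different weight vectors then yield the different "quality profiles" the abstract mentions, one per class of Health Care System application.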

  6. Model-integrating software components engineering flexible software systems

    CERN Document Server

    Derakhshanmanesh, Mahdi

    2015-01-01

    In his study, Mahdi Derakhshanmanesh builds on the state of the art in modeling by proposing to integrate models into running software on the component-level without translating them to code. Such so-called model-integrating software exploits all advantages of models: models implicitly support a good separation of concerns, they are self-documenting and thus improve understandability and maintainability and in contrast to model-driven approaches there is no synchronization problem anymore between the models and the code generated from them. Using model-integrating components, software will be

  7. Isolating crosscutting concerns in system software

    NARCIS (Netherlands)

    M. Bruntink (Magiel); A. van Deursen (Arie); T. Tourwé (Tom)

    2005-01-01

    This paper reports upon our experience in automatically migrating the crosscutting concerns of a large-scale software system, written in C, to an aspect-oriented implementation. We zoom in on one particular crosscutting concern, and show how detailed information about it is extracted

  8. Software system for reducing PAM-2 data

    Science.gov (United States)

    Pepin, T. J.

    1982-01-01

    A software system for reducing PAM-II data was constructed. The data reduction process concatenates data tapes; determines ephemeris; and inverts full sun extinction data. Tests of this data reduction process show that PAM-II data can be compared with data from other, similar satellites.

  9. Consys Linear Control System Design Software Package

    International Nuclear Information System (INIS)

    Diamantidis, Z.

    1987-01-01

    This package is created in order to help engineers, researchers, students and all who work on linear control systems. The software includes all time- and frequency-domain analyses, spectral analyses, and design aids for networks, active filters and regulators. The programmes are written on a Hewlett-Packard computer in Basic 4.0

  10. Hotel software-comprehensive hotel systems

    OpenAIRE

    Šilhová, Lenka

    2010-01-01

    This bachelor's thesis deals with the usage of computer systems in the hotel industry. First part is focused on history, development and integration of technology into this field. Second part is dedicated to concrete products of the company Micros-Fidelio, which is the leader of hotel software market in the Czech Republic.

  11. From legacy systems via client/server to web browser technology in hospital informatics in Finland.

    Science.gov (United States)

    Korpela, M

    1998-01-01

    The majority of hospital information system installations in Finland are based on a legacy technology from the U.S. Department of Veterans Affairs (VA). This paper presents an architecture and a tool set which provide a migration path from terminal-based to client/server technology, conserving much of the investments in existing applications. It is argued, though, that a new technological revolution is required in the form of extending the web browser/server technology to operational information systems in hospitals. A blueprint is presented for a further migration path from client/server to browser/server technology. The browser technology is regarded as a major challenge to hospital information systems in the next few years.

  12. Reliable Software Development for Machine Protection Systems

    CERN Document Server

    Anderson, D; Dragu, M; Fuchsberger, K; Garnier, JC; Gorzawski, AA; Koza, M; Krol, K; Misiowiec, K; Stamos, K; Zerlauth, M

    2014-01-01

    The controls software for the Large Hadron Collider (LHC) at CERN, with more than 150 million lines of code, is among the largest known code bases in the world. Industry has been applying Agile software engineering techniques for more than two decades now, and the advantages of these techniques can no longer be ignored when managing the code base for large projects within the accelerator community. Furthermore, CERN is a particular environment due to the high personnel turnover and manpower limitations, where applying Agile processes can improve both the management of the code base and its quality. This paper presents the successful application of the Agile software development process Scrum for machine protection systems at CERN, the quality standards and infrastructure introduced together with the Agile process, as well as the challenges encountered in adapting it to the CERN environment.

  13. Software Defined Common Processing System (SDCPS), Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Coherent Logix, Incorporated proposes the Software Defined Common Processing System (SDCPS) program to facilitate the development of a Software Defined Radio...

  14. Software tools for microprocessor based systems

    CERN Document Server

    Halatsis, C

    1981-01-01

    After a short review of the hardware and/or software tools for the development of single-chip, fixed-instruction-set microprocessor-based systems, the author focuses on the software tools for designing systems based on microprogrammed bit-sliced microprocessors. Emphasis is placed on meta-microassemblers and simulation facilities at the register-transfer level and architecture level. The author reviews available meta-microassemblers, giving their most important features, advantages and disadvantages. He also covers extensions to higher-level microprogramming languages and associated systems specifically developed for bit-slices. In the area of simulation facilities, the author first discusses the simulation objectives and the criteria for choosing the right simulation language. He concentrates on simulation facilities already used in bit-slice projects and discusses the experience gained, and concludes by describing the way the Signetics meta-microassembler and the ISPS simulation tool have been employed in the ...

  15. A multichannel analyzer software system realized by C Language

    International Nuclear Information System (INIS)

    Zheng Lifang; Xue Liudong

    1995-01-01

    The special features of a multichannel analyzer software system implemented in the C language are introduced. Because of its superior performance, the software has bright prospects for applications. The functions of the software are also introduced

  16. A software Event Summation System for MDSplus

    International Nuclear Information System (INIS)

    Davis, W.M.; Mastrovito, D.M.; Roney, P.G.; Sichta, P.

    2008-01-01

    The MDSplus data acquisition and management system uses software events for communication among interdependent processes anywhere on the network. Actions can then be triggered, such as running a data-acquisition routine or notifying analysis or display programs waiting for data. A small amount of data, such as a shot number, can be passed with these events. Since programs sometimes need more than one data set, we developed a system on NSTX to declare composite events using logical AND and OR operations. The system is written in the IDL language, so it can be run on Linux, Macintosh or Windows platforms. Like MDSplus, the Experimental Physics and Industrial Control System (EPICS) is a core component of the NSTX software environment. The Event Summation System provides an IDL-based interface to EPICS. This permits EPICS-aware processes to be synchronized with MDSplus-aware processes, to provide, for example, engineering operators information about physics data acquisition and analysis. Reliability was a more important design consideration than performance for this system; the system's architecture includes features to support this. The system has run for weeks at a time without requiring manual intervention. Hundreds of incoming events per second can be handled reliably. All incoming and declared events are logged with a timestamp. The system can be configured easily through a single, easy-to-read text file
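    The composite-event idea described above (implemented in IDL in the original system) can be sketched in Python: a composite fires when its member events satisfy an AND or OR condition. The class, event names, and callback interface here are illustrative, not the NSTX code.

```python
class CompositeEvent:
    """Fire a callback when member events satisfy an AND/OR condition."""

    def __init__(self, name, members, op, callback):
        self.name = name
        self.members = set(members)   # member event names that may contribute
        self.op = op                  # "AND" or "OR"
        self.seen = {}                # member -> last payload (e.g. shot number)
        self.callback = callback

    def declare(self, event, payload=None):
        """Record an incoming member event; fire the composite when satisfied."""
        if event not in self.members:
            return False
        self.seen[event] = payload
        satisfied = (set(self.seen) == self.members if self.op == "AND"
                     else bool(self.seen))
        if satisfied:
            self.callback(self.name, dict(self.seen))
            self.seen.clear()         # re-arm for the next shot
        return satisfied

fired = []
combo = CompositeEvent("ready_for_analysis", ["mag_data", "thomson_data"],
                       "AND", lambda name, data: fired.append((name, data)))
combo.declare("mag_data", payload=135000)        # first member: not yet satisfied
combo.declare("thomson_data", payload=135000)    # both present: composite fires
```

    Clearing the seen-event table after firing mirrors the per-shot cycle of a pulsed experiment: each composite re-arms automatically for the next shot.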

  17. Assessing Resistance to Change during Shifting from Legacy to Open Web-Based Systems in the Air Transport Industry

    Science.gov (United States)

    Brewer, Denise

    2012-01-01

    The air transport industry (ATI) is a dynamic, communal, international, and intercultural environment in which the daily operations of airlines, airports, and service providers are dependent on information technology (IT). Many of the IT legacy systems are more than 30 years old, and current regulations and the globally distributed workplace have…

  18. Creating the next generation control system software

    International Nuclear Information System (INIS)

    Schultz, D.E.

    1989-01-01

    A new 1980s-style support package for future accelerator control systems is proposed. It provides a way to create accelerator applications software without traditional programming. Visual Interactive Applications (VIA) is designed to meet the needs of expanded accelerator complexes in a more cost-effective way than past experience with procedural languages, by using technology from the personal computer and artificial intelligence communities. 4 refs

  19. The legacy of pesticide pollution: An overlooked factor in current risk assessments of freshwater systems

    DEFF Research Database (Denmark)

    Rasmussen, Jes Jessen; Wiberg-Larsen, Peter; Baattrup-Pedersen, Annette

    2015-01-01

    and suspended sediment samples exceeded safety thresholds in 50% of the samples and the average contribution of legacy pesticides to the SumTUC.riparius was >90%. Our results suggest that legacy pesticides can be highly significant contributors to the current toxic exposure of stream biota, especially...

  20. Application software for new BEPC interlock system

    International Nuclear Information System (INIS)

    Tang Shuming; Na Xiangyin; Chen Jiansong; Yu Yulan

    1997-01-01

    The new BEPC (Beijing Electron Positron Collider) interlock system has been built in order to improve the reliability of personnel safety and interlock functions. Moreover, the system updates the BEPC operation message once every 6 seconds, which is displayed on TV screens at the major entrances. Since March of 1996, the new BEPC interlock system has been operating reliably. The hardware of the system is based on Programmable Logic Controllers (PLC). A multimedia IBM/PC-586, as the host computer of the PLCs, monitors the PLC system via serial port COM2. The PC communicates with the central computer VAX-4500 of the BEPC control system and gets operating messages of the accelerator through serial port COM3. The application software on the host computer has been developed. Visual C++ for MS-Windows 3.2 TM is selected as the workbench. It provides nice tools for building programs, such as APP STUDIO, CLASS WIZARD, APP WIZARD and a debugger tool. The author describes the design idea and the structure of the application software. Error tolerance is taken into consideration. The author also presents a small database and its data structure for the application.
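
The 6-second display-update cycle lends itself to a simple poll-and-format loop. A hedged Python sketch, with the serial-port reads stubbed out and all status fields invented for illustration:

```python
# Hypothetical sketch of the host-PC side of an interlock display update:
# poll PLC status, format an operation message, refresh every 6 seconds.
# Serial I/O is stubbed out; format_message is the testable part.

import time

def format_message(status):
    """Render a PLC status dict into the text shown at the entrances."""
    beam = "BEAM ON" if status.get("beam_on") else "BEAM OFF"
    doors = "DOORS SECURED" if status.get("doors_locked") else "DOOR OPEN"
    return f"{beam} | {doors}"

def poll_loop(read_status, display, interval=6.0, cycles=1):
    """Read status, display the message, repeat every `interval` seconds."""
    for _ in range(cycles):
        display(format_message(read_status()))
        if cycles > 1:
            time.sleep(interval)

shown = []
poll_loop(lambda: {"beam_on": True, "doors_locked": True},
          shown.append, cycles=1)
assert shown == ["BEAM ON | DOORS SECURED"]
```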

  1. Ground test accelerator control system software

    International Nuclear Information System (INIS)

    Burczyk, L.; Dalesio, R.; Dingler, R.; Hill, J.; Howell, J.A.; Kerstiens, D.; King, R.; Kozubal, A.; Little, C.; Martz, V.; Rothrock, R.; Sutton, J.

    1988-01-01

    This paper reports on the GTA control system, which provides an environment in which the automation of a state-of-the-art accelerator can be developed. It makes use of commercially available computers, workstations, computer networks, industrial I/O equipment, and software. This system has built-in supervisory control (like most accelerator control systems), tools to support continuous control (like the process control industry), and sequential control for automatic start-up and fault recovery (like few other accelerator control systems). Several software tools support these levels of control: a real-time operating system (VxWorks) with a real-time kernel (VRTX), a configuration database, a sequencer, and a graphics editor. VxWorks supports multitasking, fast context switching, and preemptive scheduling. VxWorks/VRTX is a network-based development environment specifically designed to work in partnership with the UNIX operating system. A database provides the interface to the accelerator components. It consists of a run-time library and a database configuration and editing tool. A sequencer initiates and controls the operation of all sequence programs (expressed as state programs). A graphics editor gives the user the ability to create color graphic displays showing the state of the machine in either text or graphics form.
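
A sequencer that drives "state programs" can be illustrated with a small table-driven state machine. This is only a Python sketch (the GTA sequencer's actual notation is not shown here), and the state and event names are invented:

```python
# Minimal sketch of a sequencer stepping a "state program" through an
# automatic start-up with a fault-recovery path; names are illustrative.

def run_state_program(states, start, inputs):
    """Step through states; each state maps an input to the next state."""
    state, trace = start, [start]
    for event in inputs:
        state = states[state].get(event, state)  # ignore irrelevant events
        trace.append(state)
    return trace

startup = {
    "IDLE":       {"start": "VACUUM_OK?"},
    "VACUUM_OK?": {"vacuum_good": "RF_RAMP", "vacuum_bad": "FAULT"},
    "RF_RAMP":    {"rf_ready": "BEAM_ON"},
    "BEAM_ON":    {"fault": "FAULT"},
    "FAULT":      {"reset": "IDLE"},   # fault-recovery path
}

trace = run_state_program(startup, "IDLE",
                          ["start", "vacuum_good", "rf_ready"])
assert trace[-1] == "BEAM_ON"
```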

  2. The Upgrade Path from Legacy VME to VXS Dual Star Connectivity for Large Scale Data Acquisition and Trigger Systems

    Energy Technology Data Exchange (ETDEWEB)

    Cuevas, C; Barbosa, F J; Dong, H; Gu, W; Jastrzembski, E; Kaneta, S R; Moffitt, B; Nganga, N; Raydo, B J; Somov, A; Taylor, W M

    2011-10-01

    New instrumentation modules have been designed by Jefferson Lab and to take advantage of the higher performance and elegant backplane connectivity of the VITA 41 VXS standard. These new modules are required to meet the 200KHz trigger rates envisioned for the 12GeV experimental program. Upgrading legacy VME designs to the high speed gigabit serial extensions that VXS offers, comes with significant challenges, including electronic engineering design, plus firmware and software development issues. This paper will detail our system design approach including the critical system requirement stages, and explain the pipeline design techniques and selection criteria for the FPGA that require embedded Gigabit serial transceivers. The entire trigger system is synchronous and operates at 250MHz clock with synchronization signals, and the global trigger signals distributed to each front end readout crate via the second switch slot in the 21 slot, dual star VXS backplane. The readout of the buffered detector signals relies on 2eSST over the standard VME64x path at >200MB/s. We have achieved 20Gb/s transfer rate of trigger information within one VXS crate and will present results using production modules in a two crate test configuration with both VXS crates fully populated. The VXS trigger modules that reside in the front end crates, will be ready for production orders by the end of the 2011 fiscal year. VXS Global trigger modules are in the design stage now, and will be complete to meet the installation schedule for the 12GeV Physics program.

  3. Migration Performance for Legacy Data Access

    Directory of Open Access Journals (Sweden)

    Kam Woods

    2008-12-01

    Full Text Available We present performance data relating to the use of migration in a system we are creating to provide web access to heterogeneous document collections in legacy formats. Our goal is to enable sustained access to collections such as these when faced with increasing obsolescence of the necessary supporting applications and operating systems. Our system allows searching and browsing of the original files within their original contexts, utilizing binary images of the original media. The system uses static and dynamic file migration to enhance collection browsing, and emulation to support both the use of legacy programs to access data and long-term preservation of the migration software. While we provide an overview of the architectural issues in building such a system, the focus of this paper is an in-depth analysis of file migration using data gathered from testing our software on 1,885 CD-ROMs and DVDs. These media are among the thousands of collections of social and scientific data distributed by the United States Government Printing Office (GPO) on legacy media (CD-ROM, DVD, floppy disk) under the Federal Depository Library Program (FDLP) over the past 20 years.
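
The static/dynamic distinction above can be illustrated briefly: static migration converts files ahead of time, while dynamic migration converts on first access and caches the result. A minimal Python sketch of the dynamic case, with an invented stand-in converter (real converters and formats are far more involved):

```python
# Sketch of on-demand ("dynamic") file migration with caching; the
# converter and the ".leg" format are invented stand-ins.

def to_text_upper(data):
    return data.upper()          # stand-in for a real format converter

CONVERTERS = {".leg": to_text_upper}

class MigratingStore:
    def __init__(self, files):
        self.files = files       # name -> original content
        self.cache = {}          # name -> migrated copy

    def fetch(self, name):
        """Serve a migrated copy, converting on first access only."""
        if name not in self.cache:
            ext = name[name.rfind("."):]
            self.cache[name] = CONVERTERS[ext](self.files[name])
        return self.cache[name]

store = MigratingStore({"doc1.leg": "legacy content"})
assert store.fetch("doc1.leg") == "LEGACY CONTENT"
assert "doc1.leg" in store.cache   # second access hits the cache
```

Static migration would simply run the same converters over the whole collection in advance, trading storage for access latency.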

  4. Software Engineering and Swarm-Based Systems

    Science.gov (United States)

    Hinchey, Michael G.; Sterritt, Roy; Pena, Joaquin; Rouff, Christopher A.

    2006-01-01

    We discuss two software engineering aspects in the development of complex swarm-based systems. NASA researchers have been investigating various possible concept missions that would greatly advance future space exploration capabilities. The concept mission that we have focused on exploits the principles of autonomic computing as well as being based on the use of intelligent swarms, whereby a (potentially large) number of similar spacecraft collaborate to achieve mission goals. The intent is that such systems not only can be sent to explore remote and harsh environments but also are endowed with greater degrees of protection and longevity to achieve mission goals.

  5. The VAXONLINE software system at Fermilab

    International Nuclear Information System (INIS)

    White, V.; Heinicke, P.; Berman, E.

    1987-06-01

    The VAXONLINE software system, started in late 1984, is now in use at 12 experiments at Fermilab, with at least one VAX or MicroVax. Data acquisition features now provide for the collection and combination of data from one or more sources, via a list-driven Event Builder program. Supported sources include CAMAC, FASTBUS, Front-end PDP-11's, Disk, Tape, DECnet, and other processors running VAXONLINE. This paper describes the functionality provided by the VAXONLINE system, gives performance figures, and discusses the ongoing program of enhancements

  6. Digital PIV (DPIV) Software Analysis System

    Science.gov (United States)

    Blackshire, James L.

    1997-01-01

    A software package was developed to provide a Digital PIV (DPIV) capability for NASA LaRC. The system provides an automated image capture, test correlation, and autocorrelation analysis capability for the Kodak Megaplus 1.4 digital camera system for PIV measurements. The package includes three separate programs that, when used together with the PIV data validation algorithm, constitutes a complete DPIV analysis capability. The programs are run on an IBM PC/AT host computer running either Microsoft Windows 3.1 or Windows 95 using a 'quickwin' format that allows simple user interface and output capabilities to the windows environment.
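
The heart of a PIV correlation analysis is finding the displacement that maximizes the correlation between two interrogation windows. Real DPIV correlates 2-D image windows, typically via FFT; the following toy 1-D Python version only illustrates the principle:

```python
# Toy 1-D illustration of the correlation step in PIV analysis: find the
# shift that best aligns two intensity traces. Real DPIV correlates 2-D
# image windows (often with FFTs); this brute-force sketch is only
# meant to show the idea.

def best_shift(a, b, max_shift):
    """Return the shift of b relative to a with the highest correlation."""
    best, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        score = sum(a[i] * b[i + s]
                    for i in range(len(a))
                    if 0 <= i + s < len(b))
        if score > best_score:
            best, best_score = s, score
    return best

# A bright particle at index 3 in frame 1 moves to index 5 in frame 2.
frame1 = [0, 0, 0, 9, 0, 0, 0, 0]
frame2 = [0, 0, 0, 0, 0, 9, 0, 0]
assert best_shift(frame1, frame2, max_shift=3) == 2
```

The recovered shift, divided by the inter-frame time, gives the velocity estimate for that interrogation window.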

  7. The architecture of a reliable software monitoring system for embedded software systems

    International Nuclear Information System (INIS)

    Munson, J.; Krings, A.; Hiromoto, R.

    2006-01-01

    We develop the notion of a measurement-based methodology for embedded software systems to ensure properties of reliability, survivability and security, not only under benign faults but under malicious and hazardous conditions as well. The driving force is the need to develop a dynamic run-time monitoring system for use in these embedded mission-critical systems. These systems must run reliably, must be secure, and they must fail gracefully. That is, they must continue operating in the face of departures from their nominal operating scenarios, the failure of one or more system components due to normal hardware and software faults, as well as malicious acts. To ensure the integrity of embedded software systems, the activity of these systems must be monitored as they operate. For each of these systems, it is possible to establish a very succinct representation of nominal system activity. Furthermore, it is possible to detect departures from the nominal operating scenario in a timely fashion. Such departures may be due to various circumstances, e.g., an assault from an outside agent, thus forcing the system to operate in an off-nominal environment for which it was neither tested nor certified, or a hardware/software component that has ceased to operate in a nominal fashion. A well-designed system will have the property of graceful degradation. It must continue to run even though some of the functionality may have been lost. This involves the intelligent re-mapping of system functions. Those functions that are impacted by the failure of a system component must be identified and isolated. Thus, a system must be designed so that its basic operations may be re-mapped onto system components still operational. That is, the mission objectives of the software must be reassessed in terms of the current operational capabilities of the software system. By integrating the mechanisms to support observation and detection directly into the design methodology, we propose to shift
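
The idea of a "succinct representation of nominal system activity" plus timely detection of departures can be sketched as a frequency profile and a distance threshold. Everything below (the profile form, the L1 distance, the threshold) is an illustrative choice for this sketch, not the authors' actual method:

```python
# Sketch of nominal-profile monitoring: learn how often each module
# executes in nominal runs, then flag activity that departs from that
# profile. Distance measure and threshold are illustrative choices.

def profile(events):
    """Relative execution frequency of each module in an event stream."""
    counts = {}
    for e in events:
        counts[e] = counts.get(e, 0) + 1
    total = len(events)
    return {k: v / total for k, v in counts.items()}

def departure(nominal, observed):
    """L1 distance between nominal and observed activity profiles."""
    keys = set(nominal) | set(observed)
    return sum(abs(nominal.get(k, 0) - observed.get(k, 0)) for k in keys)

nominal = profile(["net", "crypto", "log", "net", "net", "log"])
ok      = profile(["net", "net", "log", "net", "crypto", "log"])
attack  = profile(["shell", "shell", "net", "shell", "shell", "shell"])
THRESHOLD = 0.5   # illustrative; would be calibrated on nominal runs
assert departure(nominal, ok) < THRESHOLD
assert departure(nominal, attack) > THRESHOLD
```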

  8. A legacy endures. A Maine system emphasizes its sponsor's mission in all aspects of its work.

    Science.gov (United States)

    Stapleton, Marguerite

    2005-01-01

    The Sisters of Charity Health System, Lewiston, ME, a member of Covenant Health Systems, Lexington, MA, remains deeply committed to the mission of service begun by its foundress, St. Marguerite d'Youville. Although St. Marguerite experienced a hard life, her resilience and her commitment to the poor and disadvantaged serve as an inspiration to those who continue her legacy of compassionate care. The founding work of St. Marguerite and the sisters has helped to foster a culture in which the mission of service thrives among the system's 2,000 employees. This culture can be attributed to two things: the system's organizational values of compassion, stewardship, respect, and excellence; and the recognition of those employees whose work embodies these values. From the boardroom to the patient room, mission is integrated into each decision and action. Every two years, each of Covenant Health System's member facilities engages in a mission assessment process that examines various aspects of mission, including Catholic identity, holistic care, care for the poor, mission values integration, ethics and employee relations. In addition, the Sisters of Charity Health System's board has its own standing Mission and Community Committee, which looks strategically at how creatively and faithfully the system is continuing to live its mission.

  9. Deriving the Cost of Software Maintenance for Software Intensive Systems

    Science.gov (United States)

    2011-08-29

    …about software engineering program management and giving me a gentle nudge in the right direction when needed. A tremendous amount of thanks to Dr… mean of our known y (Nussbaum, 2010). The coefficient of determination can be further explained by the adjusted R², which removes one degree of freedom and… New (added): the number of new human-generated SLOC added to the new version or release. Auto-generated: the number of auto-generated SLOC added to…

  10. Legacy question

    International Nuclear Information System (INIS)

    Healy, J.W.

    1977-01-01

    The legacy question discussed refers to the definition of appropriate actions in this generation to provide a world that will allow future generations to use the earth without excessive limitations caused by our use and disposal of potentially hazardous materials

  11. Data acquisition and test system software

    International Nuclear Information System (INIS)

    Bourgeois, N.A. Jr.

    1979-03-01

    Sandia Laboratories has been assigned the task by the Base and Installation Security Systems (BISS) Program Office to develop various aspects of perimeter security systems. One part of this effort involves the development of advanced signal processing techniques to reduce the false and nuisance alarms from sensor systems while improving the probability of intrusion detection. The need existed for both data acquisition hardware and software. Also, the hardware is used to implement and test the signal processing algorithms in real time. The hardware developed for this signal processing task is the Data Acquisition and Test System (DATS). The programs developed for use on DATS are described. The descriptions are taken directly from the documentation included within the source programs themselves

  12. Agile: From Software to Mission System

    Science.gov (United States)

    Trimble, Jay; Shirley, Mark H.; Hobart, Sarah Groves

    2016-01-01

    The Resource Prospector (RP) is an in-situ resource utilization (ISRU) technology demonstration mission, designed to search for volatiles at the Lunar South Pole. This is NASA's first near real time tele-operated rover on the Moon. The primary objective is to search for volatiles at one of the Lunar Poles. The combination of short mission duration, a solar powered rover, and the requirement to explore shadowed regions makes for an operationally challenging mission. To maximize efficiency and flexibility in Mission System design and thus to improve the performance and reliability of the resulting Mission System, we are tailoring Agile principles that we have used effectively in ground data system software development and applying those principles to the design of elements of the mission operations system.

  13. 14 CFR 417.123 - Computing systems and software.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...

  14. Data systems and computer science: Software Engineering Program

    Science.gov (United States)

    Zygielbaum, Arthur I.

    1991-01-01

    An external review of the Integrated Technology Plan for the Civil Space Program is presented. This review is specifically concerned with the Software Engineering Program. The goals of the Software Engineering Program are as follows: (1) improve NASA's ability to manage development, operation, and maintenance of complex software systems; (2) decrease NASA's cost and risk in engineering complex software systems; and (3) provide technology to assure safety and reliability of software in mission critical applications.

  15. User systems guidelines for software projects

    Energy Technology Data Exchange (ETDEWEB)

    Abrahamson, L. (ed.)

    1986-04-01

    This manual presents guidelines for software standards which were developed so that software project-development teams and management involved in approving the software could have a generalized view of all phases in the software production procedure and the steps involved in completing each phase. Guidelines are presented for six phases of software development: project definition, building a user interface, designing software, writing code, testing code, and preparing software documentation. The discussions for each phase include examples illustrating the recommended guidelines. 45 refs. (DWL)

  16. The SOFIA Mission Control System Software

    Science.gov (United States)

    Heiligman, G. M.; Brock, D. R.; Culp, S. D.; Decker, P. H.; Estrada, J. C.; Graybeal, J. B.; Nichols, D. M.; Paluzzi, P. R.; Sharer, P. J.; Pampell, R. J.; Papke, B. L.; Salovich, R. D.; Schlappe, S. B.; Spriestersbach, K. K.; Webb, G. L.

    1999-05-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) will be delivered with a computerized mission control system (MCS). The MCS communicates with the aircraft's flight management system and coordinates the operations of the telescope assembly, mission-specific subsystems, and the science instruments. The software for the MCS must be reliable and flexible. It must be easily usable by many teams of observers with widely differing needs, and it must support non-intrusive access for education and public outreach. The technology must be appropriate for SOFIA's 20-year lifetime. The MCS software development process is an object-oriented, use-case-driven approach. The process is iterative: delivery will be phased over four "builds"; each build will be the result of many iterations; and each iteration will include analysis, design, implementation, and test activities. The team is geographically distributed, coordinating its work via Web pages, teleconferences, T.120 remote collaboration, and CVS (for Internet-enabled configuration management). The MCS software architectural design is derived in part from other observatories' experience. Some important features of the MCS are: distributed computing over several UNIX and VxWorks computers; fast throughput of time-critical data; use of third-party components, such as the Adaptive Communications Environment (ACE) and the Common Object Request Broker Architecture (CORBA); extensive configurability via stored, editable configuration files; and use of several computer languages so developers have "the right tool for the job". C++, Java, scripting languages, Interactive Data Language (from Research Systems, Int'l.), XML, and HTML will all be used in the final deliverables. This paper reports on work in progress, with the final product scheduled for delivery in 2001. This work was performed for Universities Space Research Association for NASA under contract NAS2-97001.
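
The "stored, editable configuration files" feature is a common pattern: operators adjust behavior without rebuilding the software. A hedged Python sketch using an INI-style file; the section and key names are invented for illustration, not SOFIA's actual configuration schema:

```python
# Sketch of editable stored configuration with built-in defaults;
# section/key names are invented, not the MCS's real schema.

import configparser

DEFAULTS = {"update_rate_hz": "10", "units": "arcsec"}

def load_subsystem_config(text, section):
    """Parse an INI-style config, falling back to DEFAULTS per key."""
    cp = configparser.ConfigParser(defaults=DEFAULTS)
    cp.read_string(text)
    return dict(cp[section])

cfg_text = """
[telescope_assembly]
update_rate_hz = 50
"""
cfg = load_subsystem_config(cfg_text, "telescope_assembly")
assert cfg["update_rate_hz"] == "50"   # overridden in the file
assert cfg["units"] == "arcsec"        # fell back to the default
```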

  17. System Risk Balancing Profiles: Software Component

    Science.gov (United States)

    Kelly, John C.; Sigal, Burton C.; Gindorf, Tom

    2000-01-01

    The Software QA / V&V guide will be reviewed and updated based on feedback from NASA organizations and others with a vested interest in this area. Hardware, EEE Parts, Reliability, and Systems Safety are a sample of the future guides that will be developed. Cost Estimates, Lessons Learned, Probability of Failure and PACTS (Prevention, Avoidance, Control or Test) are needed to provide a more complete risk management strategy. This approach to risk management is designed to help balance the resources and program content for risk reduction for NASA's changing environment.

  18. Automated remedial assessment methodology software system

    International Nuclear Information System (INIS)

    Whiting, M.; Wilkins, M.; Stiles, D.

    1994-11-01

    The Automated Remedial Analysis Methodology (ARAM) software system has been developed by the Pacific Northwest Laboratory to assist the U.S. Department of Energy (DOE) in evaluating cleanup options for over 10,000 contaminated sites across the DOE complex. The automated methodology comprises modules for decision logic diagrams, technology applicability and effectiveness rules, mass balance equations, cost and labor estimating factors and equations, and contaminant stream routing. ARAM is used to select technologies for meeting cleanup targets; determine the effectiveness of the technologies in destroying, removing, or immobilizing contaminants; decide the nature and amount of secondary waste requiring further treatment; and estimate the cost and labor involved when applying technologies

  19. Fielding a structural health monitoring system on legacy military aircraft: A business perspective

    International Nuclear Information System (INIS)

    Bos, Marcel J.

    2015-01-01

    An important trend in the sustainment of military aircraft is the transition from preventative maintenance to condition based maintenance (CBM). For CBM, it is essential that the actual system condition can be measured and the measured condition can be reliably extrapolated to a convenient moment in the future in order to facilitate the planning process while maintaining flight safety. Much research effort is currently being made for the development of technologies that enable CBM, including structural health monitoring (SHM) systems. Great progress has already been made in sensors, sensor networks, data acquisition, models and algorithms, data fusion/mining techniques, etc. However, the transition of these technologies into service is very slow. This is because business cases are difficult to define and the certification of the SHM systems is very challenging. This paper describes a possibility for fielding a SHM system on legacy military aircraft with a minimum amount of certification issues and with a good prospect of a positive return on investment. For appropriate areas in the airframe the application of SHM will reconcile the fail-safety and slow crack growth damage tolerance approaches that can be used for safeguarding the continuing airworthiness of these areas, combining the benefits of both approaches and eliminating the drawbacks

  20. Education System for Software Engineers in the Mitsubishi Electric Group

    Science.gov (United States)

    Seo, Katsuhiko; Murata, Hiroshi; Yamaguchi, Yoshikazu; Hosoi, Machio

    With the progress of digitalization, software has come to have a big influence on the development of embedded systems such as electronic equipment and control devices. Increasing the number of software engineers, improving their technical skills, and training project leaders have become the main subjects that must be tackled immediately as the scope of software and the scale of applications expand. This paper describes the concept of the software education system for software engineers in the Mitsubishi Electric group. It reports on the outline and the results of the software freshman training course and the software project leader training course newly constructed based on this concept.

  1. Software Configuration Management Plan for the Sodium Removal System

    Energy Technology Data Exchange (ETDEWEB)

    HILL, L.F.

    2000-03-06

    This document establishes the Software Configuration Management Plan (SCMP) for the software associated with the control system of the Sodium Removal System (SRS) located in the Interim Examination and Maintenance (IEM Cell) Facility of the FFTF Flux Test.

  2. Software Configuration Management Plan for the Sodium Removal System

    International Nuclear Information System (INIS)

    HILL, L.F.

    2000-01-01

    This document establishes the Software Configuration Management Plan (SCMP) for the software associated with the control system of the Sodium Removal System (SRS) located in the Interim Examination and Maintenance (IEM Cell) Facility of the FFTF Flux Test

  3. Software qualification for digital safety system in KNICS project

    International Nuclear Information System (INIS)

    Kwon, Kee-Choon; Lee, Dong-Young; Choi, Jong-Gyun

    2012-01-01

    In order to achieve technical self-reliance in the area of nuclear instrumentation and control, the Korea Nuclear Instrumentation and Control System (KNICS) project had been running for seven years from 2001. The safety-grade Programmable Logic Controller (PLC) and the digital safety system were developed by KNICS project. All the software of the PLC and digital safety system were developed and verified following the software development life cycle Verification and Validation (V and V) procedure. The main activities of the V and V process are preparation of software planning documentations, verification of the Software Requirement Specification (SRS), Software Design Specification (SDS) and codes, and a testing of the software components, the integrated software, and the integrated system. In addition, a software safety analysis and a software configuration management are included in the activities. For the software safety analysis at the SRS and SDS phases, the software Hazard Operability (HAZOP) was performed and then the software fault tree analysis was applied. The software fault tree analysis was applied to a part of software module with some critical defects identified by the software HAZOP in SDS phase. The software configuration management was performed using the in-house tool developed in the KNICS project. (author)

  4. Data mining : open systems drill through layers of legacy data to manage the flow of information

    International Nuclear Information System (INIS)

    Polczer, S.

    1999-01-01

    Information management challenges facing the petroleum and natural gas industry are discussed in conjunction with the increasing difficulty of accessing information because of the sheer volume of it, plus the fact that most data systems are proprietary 'closed' systems. In this context, reference is made to a newly developed software system named PetroDesk, developed by Merak Petroleum. PetroDesk is a geographical information browser used for integration and analysis of public, proprietary and personal data under a common interface. The software can be used to plot land position, chart productivity of wells, and produce graphs of decline rates, reserves and production. The software, which was originally designed for engineering data, also has been found useful in determining costs, revenue projections and other information needed to obtain a real-time net present worth of a company, and also in identifying business opportunities. 2 figs

  5. Assessing waste management systems using REGINALT software

    International Nuclear Information System (INIS)

    Meshkov, N.K.; Camasta, S.F.; Gilbert, T.L.

    1988-03-01

    A method for assessing management systems for low-level radioactive waste is being developed for the US Department of Energy. The method is based on benefit-cost-risk analysis. Waste management is broken down into its component steps, which are generation, treatment, packaging, storage, transportation, and disposal. Several different alternatives available for each waste management step are described. A particular waste management system consists of a feasible combination of alternatives for each step. Selecting an optimal waste management system would generally proceed as follows: (1) qualitative considerations are used to narrow down the choice of waste management system alternatives to a manageable number; (2) the costs and risks for each of these system alternatives are evaluated; (3) the number of alternatives is further reduced by eliminating alternatives with similar risks but higher costs, or those with similar costs but higher risks; (4) a trade-off factor between cost and risk is chosen and used to compute the objective function (the sum of the cost and the risk weighted by the trade-off factor); and (5) the selection of the optimal waste management system among the remaining alternatives is made by choosing the alternative with the smallest value for the objective function. The authors propose that the REGINALT software system, developed by EG and G Idaho, Inc., as an aid for managers of low-level commercial waste, be augmented for application to the management of DOE-generated waste. Specific recommendations for modification of the REGINALT system are made. 51 refs., 3 figs., 2 tabs
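
Steps (4) and (5) of the selection procedure reduce to minimizing cost plus a trade-off factor times risk. A short Python sketch with invented alternatives and numbers:

```python
# The selection rule from steps (4)-(5): objective = cost + tradeoff * risk,
# and the alternative with the smallest objective wins. All names and
# figures below are invented for illustration.

def objective(cost, risk, tradeoff):
    """Objective function: cost plus trade-off factor times risk."""
    return cost + tradeoff * risk

def select_optimal(alternatives, tradeoff):
    """Return the name of the alternative minimizing the objective."""
    return min(alternatives,
               key=lambda name: objective(*alternatives[name], tradeoff))

# (cost, risk) pairs in arbitrary invented units.
alternatives = {
    "shallow_land_burial": (10.0, 8.0),
    "engineered_vault":    (18.0, 2.0),
    "above_ground_silo":   (25.0, 1.5),
}
# With a trade-off factor of 2, objectives are 26, 22, and 28.
assert select_optimal(alternatives, tradeoff=2.0) == "engineered_vault"
```

Raising the trade-off factor shifts the optimum toward lower-risk, higher-cost alternatives, which is exactly the lever step (4) provides.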

  6. Software tools for microprocessor based systems

    International Nuclear Information System (INIS)

    Halatsis, C.

    1981-01-01

    After a short review of the hardware and/or software tools for the development of single-chip, fixed-instruction-set microprocessor-based systems, we focus on the software tools for designing systems based on microprogrammed bit-sliced microprocessors. Emphasis is placed on meta-microassemblers and simulation facilities at the register-transfer level and architecture level. We review available meta-microassemblers, giving their most important features, advantages and disadvantages. We also consider extensions to higher-level microprogramming languages and associated systems specifically developed for bit-slices. In the area of simulation facilities we first discuss the simulation objectives and the criteria for choosing the right simulation language. We concentrate on simulation facilities already used in bit-slice projects and discuss the experience gained. We conclude by describing the way the Signetics meta-microassembler and the ISPS simulation tool have been employed in the design of a fast microprogrammed machine, called MICE, made out of ECL bit-slices. (orig.)

  7. AIRMaster: Compressed air system audit software

    International Nuclear Information System (INIS)

    Wheeler, G.M.; Bessey, E.G.; McGill, R.D.; Vischer, K.

    1997-01-01

    The project goal was to develop a software tool, AIRMaster, and a methodology for performing compressed air system audits. AIRMaster and supporting manuals are designed for general auditors or plant personnel to evaluate compressed air system operation with simple instrumentation during a short-term audit. AIRMaster provides a systematic approach to auditing compressed air systems, analyzing collected data, and reporting results. AIRMaster focuses on inexpensive Operation and Maintenance (O and M) measures, such as fixing air leaks and improving controls, that can significantly improve the performance and reliability of the compressed air system without significant risk to production. An experienced auditor can perform an audit, analyze collected data, and produce results in 2--3 days. AIRMaster reduces the cost of an audit, thus freeing funds to implement recommendations. The AIRMaster package includes an Audit Manual, Software and User's Manual, Analysis Methodology Manual, and a Case Studies summary report. It also includes a Self-Guided Tour booklet to help users quickly screen a plant for efficiency improvement potential, and an Industrial Compressed Air Systems Energy Efficiency Guidebook. AIRMaster proved to be a fast and effective audit tool. In several audits AIRMaster identified energy savings of 4,056,000 kWh, or 49.2% of annual compressor energy use, for a cost savings of $152,000. Total implementation costs were $94,700, for a project payback period of 0.6 years. Available airflow increased between 11% and 51% of plant compressor capacity, leading to potential capital benefits from 40% to 230% of first-year energy savings
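
The reported payback period follows from simple payback arithmetic (implementation cost divided by annual cost savings), which reproduces the 0.6-year figure from the $94,700 cost and $152,000 annual savings quoted above:

```python
# Simple payback arithmetic, reproducing the figure quoted in the audit
# summary: $94,700 implementation cost against $152,000/yr savings.

def payback_years(implementation_cost, annual_savings):
    """Simple payback period, ignoring discounting."""
    return implementation_cost / annual_savings

assert round(payback_years(94_700, 152_000), 1) == 0.6
```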

  8. Software Defined Radios - Architectures, Systems and Functions

    Science.gov (United States)

    Sims, Herb

    2017-01-01

    Software Defined Radio is an industry term describing a method of utilizing a minimum amount of Radio Frequency (RF)/analog electronics before digitization takes place. Upon digitization, all other functions are performed in software/firmware. There are as many different types of SDRs as there are data systems. Software Defined Radio (SDR) technology has been proven in the commercial sector since the early 1990s. Today's rapid advancement in mobile telephone reliability and power management capabilities exemplifies the effectiveness of SDR technology for the modern communications market. In contrast, the foundations of transponder technology presently qualified for satellite applications were developed during the early space program of the 1960s. SDR technology offers the potential to revolutionize satellite transponder technology by increasing science data through-put capability by at least an order of magnitude. While the SDR is adaptive in nature and is "one-size-fits-all" by design, conventional transponders are built to a specific platform and must be redesigned for every new bus. The SDR uses a minimum amount of analog/Radio Frequency components to up/down-convert the RF signal to/from a digital format. Once analog data is digitized, all processing is performed using hardware logic. Typical SDR processes include filtering, modulation, up/down conversion and demodulation. This presentation will show how the emerging SDR market has leveraged the existing commercial sector to provide a path to a radiation-tolerant SDR transponder. These innovations will reduce transceiver cost, decrease power requirements, and bring a commensurate reduction in volume. A second pay-off is the increased flexibility of the SDR, allowing the same hardware to implement multiple transponder types by altering hardware logic - no change of analog hardware is required - all of which can ultimately be accomplished in orbit. This in turn would provide high capability at low cost.
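    As a rough illustration of the "digitize early, process in software" idea, the sketch below mixes a digitized carrier down to baseband and low-pass filters it in pure Python. The sample rate, carrier frequency, and boxcar filter are illustrative choices, not taken from the presentation:

```python
import cmath
import math

def digital_downconvert(samples, fs, f_carrier):
    """Mix real samples against a complex oscillator to shift the carrier to DC."""
    return [s * cmath.exp(-2j * math.pi * f_carrier * n / fs)
            for n, s in enumerate(samples)]

def boxcar_lowpass(iq, taps):
    """Crude moving-average low-pass filter to reject the image at 2*f_carrier."""
    return [sum(iq[max(0, n - taps + 1): n + 1]) / min(n + 1, taps)
            for n in range(len(iq))]

# A pure carrier at f_carrier should land near DC (amplitude 0.5) after
# mixing and filtering.
fs, fc = 48_000, 6_000
rf = [math.cos(2 * math.pi * fc * n / fs) for n in range(480)]
baseband = boxcar_lowpass(digital_downconvert(rf, fs, fc), taps=8)
print(abs(baseband[-1]))  # ≈ 0.5
```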

  9. System software of the CERN proton synchrotron control system

    International Nuclear Information System (INIS)

    Carpenter, B.E.; Cailliau, R.; Cuisinier, G.; Remmer, W.

    1984-01-01

    The PS complex consists of 10 different interconnected accelerators or storage rings, mainly controlled by the same distributed system of NORD-10 and ND-100 minicomputers. After a brief outline of the hardware, this report gives a detailed description of the system software, which is based on the SINTRAN III operating system. It describes the general layout of the software, the network, CAMAC access, programming languages, program development, and microprocessor support. It concludes with reviews of performance, documentation, organization and methods, and future prospects. (orig.)

  10. Physics detector simulation facility system software description

    International Nuclear Information System (INIS)

    Allen, J.; Chang, C.; Estep, P.; Huang, J.; Liu, J.; Marquez, M.; Mestad, S.; Pan, J.; Traversat, B.

    1991-12-01

    Large and costly detectors will be constructed during the next few years to study the interactions produced by the SSC. Efficient, cost-effective designs for these detectors will require careful thought and planning. Because it is not possible to test fully a proposed design in a scaled-down version, the adequacy of a proposed design will be determined by a detailed computer model of the detectors. Physics and detector simulations will be performed on the computer model using high-powered computing systems at the Physics Detector Simulation Facility (PDSF). The SSCL has particular computing requirements for high-energy physics (HEP) Monte Carlo calculations for the simulation of SSCL physics and detectors. The numerical calculations to be performed in each simulation are lengthy and detailed; they could require many months per run on a VAX 11/780 computer and may produce several gigabytes of data per run. Consequently, a distributed computing environment of several networked high-speed computing engines is envisioned to meet these needs. These networked computers will form the basis of a centralized facility for SSCL physics and detector simulation work. Our computer planning groups have determined that the most efficient, cost-effective way to provide these high-performance computing resources at this time is with RISC-based UNIX workstations. The modeling and simulation application software that will run on the computing system is usually written by physicists in the FORTRAN language and may need thousands of hours of supercomputing time. The system software is the ''glue'' which integrates the distributed workstations and allows them to be managed as a single entity. This report will address the computing strategy for the SSC.

  11. Improving system quality through software evaluation.

    Science.gov (United States)

    McDaniel, James G

    2002-05-01

    The role of evaluation is examined with respect to the quality of software in healthcare. Of particular note is the failure of the Therac-25 radiation therapy machine. This example provides evidence of several types of defects which could have been detected and corrected using appropriate evaluation procedures. The field of software engineering has developed metrics and guidelines to assist in software evaluation, but this example indicates that software evaluation must be extended beyond the formally defined interfaces of the software to its real-life operating context.

  12. Developing Dependable Software for a System-of-Systems

    Science.gov (United States)

    2005-03-01

    Internet and intelligent transportation systems (e.g., advanced traveler information services and advanced traffic control systems). He believes that the...Software, Pisa, Italy: Consorzio Universitario in Ingegneria della Qualita (Venice, Mar. 1998). [79] Vinu, G. and Vaughn, R. Application of

  13. Information Management System Supporting a Multiple Property Survey Program with Legacy Radioactive Contamination.

    Science.gov (United States)

    Stager, Ron; Chambers, Douglas; Wiatzka, Gerd; Dupre, Monica; Callough, Micah; Benson, John; Santiago, Erwin; van Veen, Walter

    2017-04-01

    The Port Hope Area Initiative is a project mandated and funded by the Government of Canada to remediate properties with legacy low-level radioactive waste contamination in the Town of Port Hope, Ontario. The management and use of large amounts of data from surveys of some 4800 properties is a significant task critical to the success of the project. A large amount of information is generated through the surveys, including scheduling individual field visits to the properties, capture of field data, laboratory sample tracking, QA/QC, property report generation and project management reporting. Web-mapping tools were used to track and display temporal progress of various tasks and facilitated consideration of spatial associations of contamination levels. The IM system facilitated the management and integrity of the large amounts of information collected, evaluation of spatial associations, automated report reproduction and consistent application and traceable execution for this project. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  14. A communication-channel-based representation system for software

    NARCIS (Netherlands)

    Demirezen, Zekai; Tanik, Murat M.; Aksit, Mehmet; Skjellum, Anthony

    We observed that before initiating software development the objectives are minimally organized and developers introduce comparatively higher organization throughout the design process. To be able to formally capture this observation, a new communication channel representation system for software is

  15. Analyzing Software Errors in Safety-Critical Embedded Systems

    Science.gov (United States)

    Lutz, Robyn R.

    1994-01-01

    This paper analyzes the root causes of safety-related software faults and finds that faults identified as potentially hazardous to the system are distributed somewhat differently over the set of possible error causes than non-safety-related software faults.

  16. The Utility of Open Source Software in Military Systems

    National Research Council Canada - National Science Library

    Esperon, Agustin I; Munoz, Jose P; Tanneau, Jean M

    2005-01-01

    .... The companies involved were THALES and GMV. The MILOS project aimed to demonstrate benefits of Open Source Software in large software based military systems, by casting off constraints inherent to traditional proprietary COTS and by taking...

  17. Command and Control System Software Development

    Science.gov (United States)

    Velasquez, Ricky

    2017-01-01

    Kennedy Space Center has been the heart of human space flight for decades. From the Apollo Program to the Space Shuttle Program, and now to the coming Space Launch System (SLS) and Orion, NASA will be a leader in deep space exploration for mankind. Before any rockets blast off, there is significant work to be done in preparation for launch. People working on all aspects of spaceflight must contribute by developing new technology that has yet to participate in a successful launch, and which can work with technology already proven in flight. These innovations, whether hardware or software, must be tried and true, and include the projects to which interns contribute. For this internship, the objective was to create a data recording system for the developers of an LCS section that records certain messages in the traffic of the system. Developers would then be able to use these recordings for analysis later on, either manually or by an automated test. The tool would be of convenience to a developer, as it would be used when the system's main data recorder was not available for tests.

  18. Visual software system for memory interleaving simulation

    Directory of Open Access Journals (Sweden)

    Milenković Katarina

    2017-01-01

    This paper describes the visual software system for memory interleaving simulation (VSMIS), implemented for the purpose of the course Computer Architecture and Organization 1, at the School of Electrical Engineering, University of Belgrade. The simulator enables students to expand their knowledge through practical work in the laboratory, as well as through independent work at home. VSMIS gives users the possibility to initialize parts of the system and to control simulation steps. The user has the ability to monitor the simulation through a graphical representation. It is possible to navigate through the entire hierarchy of the system using simple navigation. During the simulation the user can observe and set the values of memory locations. At any time, the user can reset the simulation of the system and observe it for different memory states; in addition, it is possible to save the current state of the simulation and continue with its execution later. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. III44009]
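    The kind of mapping such a simulator visualizes can be sketched in a few lines. Low-order interleaving, a common scheme (not necessarily the exact one VSMIS implements), assigns consecutive addresses to consecutive banks:

```python
def interleave(address, num_banks):
    """Low-order interleaving: consecutive addresses map to consecutive
    banks, so sequential accesses can overlap across banks."""
    return address % num_banks, address // num_banks  # (bank, row within bank)

# With 4 banks, addresses 0..7 cycle through banks 0,1,2,3,0,1,2,3.
banks = [interleave(a, 4)[0] for a in range(8)]
print(banks)  # → [0, 1, 2, 3, 0, 1, 2, 3]
```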

  19. Engineering Software Suite Validates System Design

    Science.gov (United States)

    2007-01-01

    EDAptive Computing Inc.'s (ECI) EDAstar engineering software tool suite, created to capture and validate system design requirements, was significantly funded by NASA's Ames Research Center through five Small Business Innovation Research (SBIR) contracts. These programs specifically developed Syscape, used to capture executable specifications of multi-disciplinary systems, and VectorGen, used to automatically generate tests to ensure system implementations meet specifications. According to the company, the VectorGen tests considerably reduce the time and effort required to validate implementation of components, thereby ensuring their safe and reliable operation. EDASHIELD, an additional product offering from ECI, can be used to diagnose, predict, and correct errors after a system has been deployed using EDAstar-created models. Initial commercialization for EDAstar included application by a large prime contractor in a military setting, and customers include various branches within the U.S. Department of Defense, industry giants like the Lockheed Martin Corporation, Science Applications International Corporation, and Ball Aerospace and Technologies Corporation, as well as NASA's Langley and Glenn Research Centers.

  20. In water thermal imaging comparison of the Alcon legacy and AMO sovereign phacoemulsification systems.

    Science.gov (United States)

    M Miller, Kevin; D Olson, Michael

    2008-02-15

    To compare the temperature profiles of 2 popular phacoemulsification units under similar operating conditions in water. The phacoemulsification probes of the Sovereign WhiteStar and Legacy AdvanTec were capped with water-filled test chambers and imaged side-by-side using a thermal camera. The highest temperature of each chamber was measured at several time points after power application. Testing was performed under conditions capable of producing a corneal burn. The Legacy was operated in pulse mode at 15 Hz; a 50% duty cycle; and console power settings of 10, 30, 50 and 100%. The Sovereign was operated at the same console settings in WhiteStar C/F pulse mode at 56 Hz and a 33% duty cycle. Under all conditions (powers of 10, 30, 50 and 100%; with or without irrigation/aspiration flow; and with or without sleeve compression), the Sovereign generated higher temperatures than the Legacy. At irrigation/aspiration flow rates ≥ 5 cc/min, the temperature profiles of the 2 units were indistinguishable. The Sovereign WhiteStar ran hotter than the Legacy AdvanTec under a variety of controlled low flow operating conditions. The Sovereign WhiteStar is more likely than the Legacy AdvanTec to produce a corneal burn under low flow conditions.

  1. Software defect, feature and requirements management system

    OpenAIRE

    Indriūnas, Paulius

    2006-01-01

    Software development is an iterative process which is based on teamwork and information exchange. In order to keep this process running, proper information flow control techniques have to be applied in a software development company. As the number of employees grows, manual control of this process becomes ineffective and automated solutions take over this task. The most common informational units in the software development process are defects, new features and requirements. This paper addresses...

  2. System support software for TSTA [Tritium Systems Test Assembly

    International Nuclear Information System (INIS)

    Claborn, G.W.; Mann, L.W.; Nielson, C.W.

    1987-10-01

    The fact that the Tritium Systems Test Assembly (TSTA) is an experimental facility makes it impossible and undesirable to try to forecast the exact software requirements. Thus the software had to be written in a manner that would allow modifications without compromising the safety requirements imposed by the handling of tritium. This suggested a multi-level approach to the software. In this approach (much like the ISO network model) each level is isolated from the levels below and above by cleanly defined interfaces. For example, the subsystem support level interfaces with the subsystem hardware through the software support level. Routines in the software support level provide operations like ''OPEN VALVE'' and ''CLOSE VALVE'' to the subsystem level. This isolates the subsystem level from the actual hardware. This is advantageous because changes can occur in any level without the need for propagating the change to any other level. The TSTA control system consists of the hardware level, the data conversion level, the operator interface level, and the subsystem process level. These levels are described
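    A minimal sketch of the layering described above, with hypothetical class and register names (the TSTA system itself is not written in Python): the subsystem level sees only operations like OPEN VALVE, never the hardware.

```python
class HardwareLevel:
    """Stand-in for the hardware/CAMAC level; register names are invented."""
    def __init__(self):
        self.registers = {}

    def write(self, reg, value):
        self.registers[reg] = value

class SoftwareSupportLevel:
    """Offers operations like OPEN VALVE / CLOSE VALVE, isolating the
    subsystem level from the actual hardware."""
    def __init__(self, hw):
        self._hw = hw

    def open_valve(self, valve_id):
        self._hw.write(("valve", valve_id), 1)

    def close_valve(self, valve_id):
        self._hw.write(("valve", valve_id), 0)

class SubsystemLevel:
    """Process logic written only against the support-level interface, so
    hardware changes do not propagate upward."""
    def __init__(self, support):
        self._support = support

    def purge(self, valve_id):
        self._support.open_valve(valve_id)
        self._support.close_valve(valve_id)

hw = HardwareLevel()
SubsystemLevel(SoftwareSupportLevel(hw)).purge("V-101")
print(hw.registers[("valve", "V-101")])  # → 0 (closed after the purge)
```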

  3. Tank monitor and control system (TMACS) software configuration management plan

    International Nuclear Information System (INIS)

    GLASSCOCK, J.A.

    1999-01-01

    This Software Configuration Management Plan (SCMP) describes the methodology for control of computer software developed and supported by the Systems Development and Integration (SD and I) organization of Lockheed Martin Services, Inc. (LMSI) for the Tank Monitor and Control System (TMACS). This plan controls changes to the software and configuration files used by TMACS. The controlled software includes the Gensym software package, Gensym knowledge base files developed for TMACS, C-language programs used by TMACS, the operating system on the production machine, language compilers, and all Windows NT commands and functions which affect the operating environment. The configuration files controlled include the files downloaded to the Acromag and Westronic field instruments

  4. Systems and software quality the next step for industrialisation

    CERN Document Server

    Wieczorek, Martin; Bons, Heinz

    2014-01-01

    Software and systems quality is playing an increasingly important role in the growth of almost all - profit and non-profit - organisations. Quality is vital to the success of enterprises in their markets. Most small trade and repair businesses use software systems in their administration and marketing processes. Every doctor's surgery is managing its patients using software. Banking is no longer conceivable without software. Aircraft, trucks and cars use more and more software to handle their increasingly complex technical systems. Innovation, competition and cost pressure are always present i

  5. The contribution of instrumentation and control software to system reliability

    International Nuclear Information System (INIS)

    Fryer, M.O.

    1984-01-01

    Advanced instrumentation and control systems are usually implemented using computers that monitor the instrumentation and issue commands to control elements. The control commands are based on instrument readings and software control logic. The reliability of the total system will be affected by the software design. When comparing software designs, an evaluation of how each design can contribute to the reliability of the system is desirable. Unfortunately, the science of reliability assessment of combined hardware and software systems is in its infancy. Reliability assessment of combined hardware/software systems is often based on over-simplified assumptions about software behavior. A new method of reliability assessment of combined software/hardware systems is presented. The method is based on a procedure called fault tree analysis which determines how component failures can contribute to system failure. Fault tree analysis is a well developed method for reliability assessment of hardware systems and produces quantitative estimates of failure probability based on component failure rates. It is shown how software control logic can be mapped into a fault tree that depicts both software and hardware contributions to system failure. The new method is important because it provides a way for quantitatively evaluating the reliability contribution of software designs. In many applications, this can help guide designers in producing safer and more reliable systems. An application to the nuclear power research industry is discussed
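    The mapping can be illustrated with the two basic gate formulas for independent failures. The tree and probabilities below are hypothetical, not taken from the paper:

```python
def and_gate(probs):
    """All inputs must fail: independent failure probabilities multiply."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """Any input failing triggers the event: 1 minus the product of survivals."""
    survive = 1.0
    for q in probs:
        survive *= (1.0 - q)
    return 1.0 - survive

# Hypothetical tree: the system fails if the sensor fails, OR if both the
# software control-logic branch and its hardware backup fail.
p_sensor, p_sw_check, p_hw_backup = 1e-3, 1e-2, 1e-3
p_system = or_gate([p_sensor, and_gate([p_sw_check, p_hw_backup])])
print(p_system)  # ≈ 1.01e-3
```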

  6. A microcomputer software system for conformation therapy

    International Nuclear Information System (INIS)

    Akanuma, Atsuo; Aoki, Yukimasa; Nakagawa, Keiichi; Hosoi, Yoshio; Onogi, Yuzou; Muta, Nobuharu; Sakata, Koichi; Karasawa, Katsuyuki; Iio, Masahiro

    1987-01-01

    The effectiveness of radiotherapy in the treatment of malignant tumors has increased gradually and steadily since the discovery of ionising radiation, aided greatly by technological and industrial developments. Improved radiotherapy machines have allowed ever higher radiation energies, and more penetrating radiation delivers a higher dose to deep-seated tumors with a markedly decreased integral dose, which rapidly broadened the indications for malignant tumor therapy. The benefits of increased penetrating power now appear saturated. Instead, developments in automated processing have made computers readily available for radiotherapy. Computer applications to radiotherapy have enabled the very frequent employment of the conformation technique, which was invented in this Far Eastern country. For convenience of computer application to radiotherapy, a microcomputer set was chosen and a software system for the conformation technique is being developed on it. The system consists of a main program for maintenance and for switching among job programs. Digitizer input of body and inhomogeneity contours is employed. Currently no dose distribution output is intended; dose calculation at selected points is performed instead. (author)

  7. Six Sigma software development

    CERN Document Server

    Tayntor, Christine B

    2002-01-01

    Since Six Sigma has had marked success in improving quality in other settings, and since the quality of software remains poor, it seems a natural evolution to apply the concepts and tools of Six Sigma to system development and the IT department. Until now however, there were no books available that applied these concepts to the system development process. Six Sigma Software Development fills this void and illustrates how Six Sigma concepts can be applied to all aspects of the evolving system development process. It includes the traditional waterfall model and in the support of legacy systems,

  8. Integrated testing and verification system for research flight software

    Science.gov (United States)

    Taylor, R. N.

    1979-01-01

    The MUST (Multipurpose User-oriented Software Technology) program is being developed to cut the cost of producing research flight software through a system of software support tools. An integrated verification and testing capability was designed as part of MUST. Documentation, verification and test options are provided with special attention on real-time, multiprocessing issues. The needs of the entire software production cycle were considered, with effective management and reduced lifecycle costs as foremost goals.

  9. Progressive retry for software error recovery in distributed systems

    Science.gov (United States)

    Wang, Yi-Min; Huang, Yennun; Fuchs, W. K.

    1993-01-01

    In this paper, we describe a method of execution retry for bypassing software errors based on checkpointing, rollback, message reordering and replaying. We demonstrate how rollback techniques, previously developed for transient hardware failure recovery, can also be used to recover from software faults by exploiting message reordering to bypass software errors. Our approach intentionally increases the degree of nondeterminism and the scope of rollback when a previous retry fails. Examples from our experience with telecommunications software systems illustrate the benefits of the scheme.
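    A toy sketch of the idea: roll back to a checkpoint and replay, perturbing the message order a little more on each attempt. The rotation used here is a simple stand-in for the paper's message-reordering mechanism, and all names are ours:

```python
def progressive_retry(step, checkpoint, max_level=3):
    """Replay from the checkpoint, reordering the replayed messages a bit
    more on each attempt (a rotation stands in for message reordering)."""
    for level in range(max_level):
        messages = checkpoint[level:] + checkpoint[:level]  # wider reorder per level
        try:
            return step(messages)
        except RuntimeError:
            continue  # escalate: retry with more nondeterminism
    raise RuntimeError("all retry levels exhausted")

# Hypothetical transient bug: the step fails whenever message "b" arrives first.
def step(msgs):
    if msgs[0] == "b":
        raise RuntimeError("error triggered by message ordering")
    return msgs

result = progressive_retry(step, ["b", "a", "c"])
print(result)  # → ['a', 'c', 'b'] (the second, reordered attempt succeeds)
```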

  10. Telemetry and Science Data Software System

    Science.gov (United States)

    Bates, Lakesha; Hong, Liang

    2011-01-01

    The Telemetry and Science Data Software System (TSDSS) was designed to validate the operational health of a spacecraft, ease test verification, assist in debugging system anomalies, and provide trending data and advanced science analysis. In doing so, the system parses, processes, and organizes raw data from the Aquarius instrument both on the ground and while in space. In addition, it provides a user-friendly telemetry viewer, and an instant pushbutton test report generator. Existing ground data systems can parse and provide simple data processing, but have limitations in advanced science analysis and instant report generation. The TSDSS functions as an offline data analysis system during I&T (integration and test) and mission operations phases. After raw data are downloaded from an instrument, TSDSS ingests the data files, parses, converts telemetry to engineering units, and applies advanced algorithms to produce science level 0, 1, and 2 data products. Meanwhile, it automatically schedules upload of the raw data to a remote server and archives all intermediate and final values in a MySQL database in time order. All data saved in the system can be straightforwardly retrieved, exported, and migrated. Using TSDSS's interactive data visualization tool, a user can conveniently choose any combination and mathematical computation of interesting telemetry points from a large range of time periods (life cycle of mission ground data and mission operations testing), and display a graphical and statistical view of the data. With this graphical user interface (GUI), graphs of the queried data can be exported and saved in multiple formats. This GUI is especially useful in trending data analysis, debugging anomalies, and advanced data analysis. At the request of the user, mission-specific instrument performance assessment reports can be generated with a simple click of a button on the GUI. From instrument level to observatory level, the TSDSS has been operating supporting

  11. A control system verifier using automated reasoning software

    International Nuclear Information System (INIS)

    Smith, D.E.; Seeman, S.E.

    1985-08-01

    An on-line, automated reasoning software system for verifying the actions of other software or human control systems has been developed. It was demonstrated by verifying the actions of an automated procedure generation system. The verifier uses an interactive theorem prover as its inference engine with the rules included as logical axioms. Operation of the verifier is generally transparent except when the verifier disagrees with the actions of the monitored software. Testing with an automated procedure generation system demonstrates the successful application of automated reasoning software for verification of logical actions in a diverse, redundant manner. A higher degree of confidence may be placed in the verified actions of the combined system

  12. Software V ampersand V methods for digital plant protection system

    International Nuclear Information System (INIS)

    Kim, Hung-Jun; Han, Jai-Bok; Chun, Chong-Son; Kim, Sung; Kim, Kern-Joong.

    1997-01-01

    Careful thought must be given to software design in the development of digital-based systems that play a critical role in the successful operation of nuclear power plants. To evaluate software verification and validation methods, as well as to verify system performance capabilities for the upgraded instrumentation and control system in future Korean nuclear power plants, a prototype Digital Plant Protection System (DPPS) based on the Programmable Logic Controller (PLC) has been constructed. The system design description and features are briefly presented, and the software design and software verification and validation methods are the focus. 6 refs., 2 figs

  13. The waveform correlation event detection system global prototype software design

    Energy Technology Data Exchange (ETDEWEB)

    Beiriger, J.I.; Moore, S.G.; Trujillo, J.R.; Young, C.J.

    1997-12-01

    The WCEDS prototype software system was developed to investigate the usefulness of waveform correlation methods for CTBT monitoring. The WCEDS prototype performs global seismic event detection and has been used in numerous experiments. This report documents the software system design, presenting an overview of the system operation, describing the system functions, tracing the information flow through the system, discussing the software structures, and describing the subsystem services and interactions. The effectiveness of the software design in meeting project objectives is considered, as well as opportunities for code reuse and lessons learned from the development process. The report concludes with recommendations for modifications and additions envisioned for a regional waveform-correlation-based detector.
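    At its core, waveform correlation slides a template over a trace and flags offsets where the normalized correlation peaks. A minimal pure-Python version (illustrative only, not the WCEDS implementation):

```python
import math

def correlate(trace, template):
    """Normalized cross-correlation of a template against a longer trace;
    a score near 1.0 marks a candidate event at that lag."""
    tnorm = math.sqrt(sum(t * t for t in template))
    scores = []
    for lag in range(len(trace) - len(template) + 1):
        window = trace[lag:lag + len(template)]
        wnorm = math.sqrt(sum(w * w for w in window)) or 1.0  # avoid 0/0
        scores.append(sum(w * t for w, t in zip(window, template)) / (wnorm * tnorm))
    return scores

template = [0.0, 1.0, -1.0, 0.5]
trace = [0.0] * 5 + template + [0.0] * 5  # template buried at offset 5
scores = correlate(trace, template)
best = max(range(len(scores)), key=scores.__getitem__)
print(best, round(scores[best], 3))  # → 5 1.0
```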

  14. A multi-layered software architecture model for building software solutions in an urbanized information system

    Directory of Open Access Journals (Sweden)

    Sana Guetat

    2013-01-01

    The concept of Information Systems urbanization has been proposed since the late 1990s in order to help organizations build agile information systems. Nevertheless, despite the advantages of this concept, it remains too descriptive and presents many weaknesses. In particular, there is a lack of useful architecture models dedicated to defining software solutions compliant with information systems urbanization principles and rules. Moreover, well-known software architecture models do not provide sufficient resources to address the requirements and constraints of urbanized information systems. In this paper, we draw on the “information city” framework to propose a model of software architecture - called the 5+1 Software Architecture Model - which is compliant with information systems urbanization principles and helps organizations build urbanized software solutions. This framework improves on well-established software architecture models and allows the integration of new architectural paradigms. Furthermore, the proposed model contributes to the implementation of information systems urbanization in several ways. On the one hand, this model devotes a specific layer to applications integration and software reuse. On the other hand, it contributes to information system agility and scalability due to its conformity to the separation of concerns principle.

  15. Next Generation Waste Tracking: Linking Legacy Systems with Modern Networking Technologies

    International Nuclear Information System (INIS)

    Walker, Randy M.; Resseguie, David R.; Shankar, Mallikarjun; Gorman, Bryan L.; Smith, Cyrus M.; Hill, David E.

    2010-01-01

    of existing legacy hazardous, radioactive and related informational databases and systems using emerging Web 2.0 technologies. These capabilities were used to interoperate ORNL's waste generation, packaging, transportation and disposal activities with those of other DOE ORO waste management contractors. Importantly, the DOE EM objectives were accomplished in a cost-effective manner without altering existing information systems. A path forward is to demonstrate and share these technologies with DOE EM, contractors and stakeholders. This approach will not alter existing DOE assets, i.e. Automated Traffic Management Systems (ATMS), Transportation Tracking and Communications System (TRANSCOM), the Argonne National Laboratory (ANL) demonstrated package tracking system, etc.

  16. Summary of the International Conference on Software and System Processes

    DEFF Research Database (Denmark)

    Kuhrmann, Marco; O'Connor, Rory V.; Perry, Dewayne E.

    2016-01-01

    The International Conference on Software and Systems Process (ICSSP), continuing the success of the Software Process Workshop (SPW), the Software Process Modeling and Simulation Workshop (ProSim) and the International Conference on Software Process (ICSP) conference series, has become the established premier event in the field of software and systems engineering processes. It provides a leading forum for the exchange of research outcomes and industrial best practices in process development from the software and systems disciplines. ICSSP 2016 was held in Austin, Texas, from 14-15 May 2016, co-located with the 38th International Conference on Software Engineering (ICSE). The theme of ICSSP 2016 was studying "Process(es) in Action", recognizing that the AS-Planned and AS-Practiced processes can be quite different in many ways, including their flows, their complexity and the evolving needs of stakeholders...

  17. Using a scripted data entry process to transfer legacy immunization data while transitioning between electronic medical record systems.

    Science.gov (United States)

    Michel, J; Hsiao, A; Fenick, A

    2014-01-01

    Transitioning between Electronic Medical Records (EMR) can result in patient data being stranded in legacy systems with subsequent failure to provide appropriate patient care. Manual chart abstraction is labor intensive, error-prone, and difficult to institute for immunizations on a systems level in a timely fashion. We sought to transfer immunization data from two of our health system's soon-to-be-replaced EMRs to the future EMR using a single process instead of separate interfaces for each facility. We used scripted data entry, a process where a computer automates manual data entry, to insert data into the future EMR. Using the Centers for Disease Control's CVX immunization codes we developed a bridge between immunization identifiers within our system's EMRs. We performed a two-step process evaluation of the data transfer using automated data comparison and manual chart review. We completed the data migration from two facilities in 16.8 hours with no data loss or corruption. We successfully populated the future EMR with 99.16% of our legacy immunization data - 500,906 records - just prior to our EMR transition date. A subset of immunizations, first recognized during clinical care, had not originally been extracted from the legacy systems. Once identified, this data - 1,695 records - was migrated using the same process with minimal additional effort. Scripted data entry for immunizations is more accurate than published estimates for manual data entry, and we completed our data transfer in 1.2% of the total time we predicted for manual data entry. Performing this process before EMR conversion helped identify obstacles to data migration. Drawing upon this work, we will reuse this process for other healthcare facilities in our health system as they transition to the future EMR.
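
The CVX-based bridge the abstract describes can be sketched as follows. The legacy identifiers, mapping tables, and target-EMR codes below are illustrative assumptions, not the site's actual tables; only the CVX codes themselves (141 = seasonal injectable influenza, 106 = DTaP-5) come from the CDC code set.

```python
# Sketch of a CVX-based bridge between legacy EMR immunization
# identifiers and a target EMR. The legacy and target identifiers
# below are hypothetical; the CVX codes are from the CDC code set.

# Each legacy system maps its internal identifier to a CDC CVX code.
LEGACY_A_TO_CVX = {"FLU-INJ": "141", "DTAP5": "106"}
LEGACY_B_TO_CVX = {"influenza_seasonal": "141", "dtap": "106"}

# The future EMR maps CVX codes back to its own identifiers.
CVX_TO_TARGET = {"141": "IMM_FLU_SEASONAL", "106": "IMM_DTAP"}

def bridge(legacy_id, legacy_map):
    """Translate a legacy immunization identifier to the target
    EMR identifier via its CVX code; None if unmappable."""
    cvx = legacy_map.get(legacy_id)
    return CVX_TO_TARGET.get(cvx) if cvx else None

# Records from two different legacy systems land on one target code.
assert bridge("FLU-INJ", LEGACY_A_TO_CVX) == "IMM_FLU_SEASONAL"
assert bridge("influenza_seasonal", LEGACY_B_TO_CVX) == "IMM_FLU_SEASONAL"
```

Using one shared pivot vocabulary, rather than a pairwise interface per facility, is what lets a single migration process serve multiple legacy systems.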

  18. SWEPP Gamma-Ray Spectrometer System software design description

    International Nuclear Information System (INIS)

    Femec, D.A.; Killian, E.W.

    1994-08-01

    To assist in the characterization of the radiological contents of contact-handled waste containers at the Stored Waste Examination Pilot Plant (SWEPP), the SWEPP Gamma-Ray Spectrometer (SGRS) System has been developed by the Radiation Measurements and Development Unit of the Idaho National Engineering Laboratory. The SGRS system software controls turntable and detector system activities. In addition to determining the concentrations of gamma-ray-emitting radionuclides, this software also calculates attenuation-corrected isotopic mass ratios of specific interest. This document describes the software design for the data acquisition and analysis software associated with the SGRS system.
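
The kind of attenuation-corrected mass-ratio calculation the abstract mentions can be sketched from the standard activity equation for a photopeak. All numeric inputs below (efficiencies, yields, attenuation factors, half-lives) are illustrative placeholders, not SWEPP calibration values, and the real SGRS analysis is certainly more elaborate.

```python
import math

# Sketch of an attenuation-corrected isotopic mass-ratio calculation
# of the kind the SGRS abstract describes. All numeric inputs are
# illustrative placeholders, not actual SWEPP calibration data.

N_A = 6.02214076e23  # Avogadro's number, atoms/mol

def activity_bq(net_counts, live_time_s, efficiency, gamma_yield,
                attenuation_factor):
    """Activity from a net photopeak area, corrected for detector
    efficiency, gamma emission probability, and matrix attenuation."""
    return net_counts / (live_time_s * efficiency * gamma_yield
                         * attenuation_factor)

def mass_g(activity, half_life_s, molar_mass_g):
    """Mass from activity: m = A * M / (lambda * N_A)."""
    decay_const = math.log(2) / half_life_s
    return activity * molar_mass_g / (decay_const * N_A)

# Ratio of two isotopes from their peak areas (placeholder data).
a1 = activity_bq(12500, 600, 0.012, 0.36, 0.85)
a2 = activity_bq(8300, 600, 0.015, 0.51, 0.80)
m1 = mass_g(a1, 7.6e11, 239.05)   # placeholder half-life, molar mass
m2 = mass_g(a2, 2.1e11, 241.06)
ratio = m1 / m2
```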

  19. SWEPP Gamma-Ray Spectrometer System software design description

    Energy Technology Data Exchange (ETDEWEB)

    Femec, D.A.; Killian, E.W.

    1994-08-01

    To assist in the characterization of the radiological contents of contract-handled waste containers at the Stored Waste Examination Pilot Plant (SWEPP), the SWEPP Gamma-Ray Spectrometer (SGRS) System has been developed by the Radiation Measurements and Development Unit of the Idaho National Engineering Laboratory. The SGRS system software controls turntable and detector system activities. In addition to determining the concentrations of gamma-ray-emitting radionuclides, this software also calculates attenuation-corrected isotopic mass ratios of-specific interest. This document describes the software design for the data acquisition and analysis software associated with the SGRS system.

  20. Software Defined Common Processing System (SDCPS), Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Coherent Logix, Incorporated (CLX) proposes the development of a Software Defined Common Processing System (SDCPS) that leverages the inherent advantages of an...

  1. Analyzing Software Requirements Errors in Safety-Critical, Embedded Systems

    Science.gov (United States)

    Lutz, Robyn R.

    1993-01-01

    This paper analyzes the root causes of safety-related software errors in safety-critical, embedded systems. The results show that software errors identified as potentially hazardous to the system tend to be produced by different error mechanisms than non- safety-related software errors. Safety-related software errors are shown to arise most commonly from (1) discrepancies between the documented requirements specifications and the requirements needed for correct functioning of the system and (2) misunderstandings of the software's interface with the rest of the system. The paper uses these results to identify methods by which requirements errors can be prevented. The goal is to reduce safety-related software errors and to enhance the safety of complex, embedded systems.

  2. Flight test of a resident backup software system

    Science.gov (United States)

    Deets, Dwain A.; Lock, Wilton P.; Megna, Vincent A.

    1987-01-01

    A new fault-tolerant system software concept employing the primary digital computers as host for the backup software portion has been implemented and flight tested in the F-8 digital fly-by-wire airplane. The system was implemented in such a way that essentially no transients occurred in transferring from primary to backup software. This was accomplished without a significant increase in the complexity of the backup software. The primary digital system was frame synchronized, which provided several advantages in implementing the resident backup software system. Since the time of the flight tests, two other flight vehicle programs have made a commitment to incorporate resident backup software similar in nature to the system described here.

  3. Digital image processing software system using an array processor

    International Nuclear Information System (INIS)

    Sherwood, R.J.; Portnoff, M.R.; Journeay, C.H.; Twogood, R.E.

    1981-01-01

    A versatile array processor-based system for general-purpose image processing was developed. At the heart of this system is an extensive, flexible software package that incorporates the array processor for effective interactive image processing. The software system is described in detail, and its application to a diverse set of applications at LLNL is briefly discussed. 4 figures, 1 table

  4. 36 CFR 1194.21 - Software applications and operating systems.

    Science.gov (United States)

    2010-07-01

    ... operating systems. 1194.21 Section 1194.21 Parks, Forests, and Public Property ARCHITECTURAL AND... Standards § 1194.21 Software applications and operating systems. (a) When software is designed to run on a... shall not disrupt or disable activated features of any operating system that are identified as...

  5. User and system considerations for the TCSTEK software library

    Energy Technology Data Exchange (ETDEWEB)

    Gray, W.H.

    1979-08-01

    This report documents the idiosyncrasies of the Tektronix PLOT 10 Terminal Control System level 3.3 software as it currently exists on the ORNL Fusion Energy Division DECsystem-10 computer. It is intended to serve as a reference for future Terminal Control System updates in order that continuity between releases of Terminal Control System PLOT 10 software may be maintained.

  6. User and system considerations for the TCSTEK software library

    International Nuclear Information System (INIS)

    Gray, W.H.

    1979-08-01

    This report documents the idiosyncrasies of the Tektronix PLOT 10 Terminal Control System level 3.3 software as it currently exists on the ORNL Fusion Energy Division DECsystem-10 computer. It is intended to serve as a reference for future Terminal Control System updates in order that continuity between releases of Terminal Control System PLOT 10 software may be maintained

  7. Understanding Legacy Features with Featureous

    DEFF Research Database (Denmark)

    Olszak, Andrzej; Jørgensen, Bo Nørregaard

    2011-01-01

    Feature-centric comprehension of source code is essential during software evolution. However, such comprehension is oftentimes difficult to achieve due to the discrepancies between structural and functional units of object-oriented programs. We present a tool for feature-centric analysis of legacy...

  8. Compiling software for a hierarchical distributed processing system

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-12-31

    Compiling software for a hierarchical distributed processing system including providing to one or more compiling nodes software to be compiled, wherein at least a portion of the software to be compiled is to be executed by one or more nodes; compiling, by the compiling node, the software; maintaining, by the compiling node, any compiled software to be executed on the compiling node; selecting, by the compiling node, one or more nodes in a next tier of the hierarchy of the distributed processing system in dependence upon whether any compiled software is for the selected node or the selected node's descendents; sending to the selected node only the compiled software to be executed by the selected node or selected node's descendent.
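
The selection step in the claim above, where a parent forwards to each child only the compiled software destined for that child or its descendants, can be sketched as a tree walk. The tree layout and artifact names are hypothetical.

```python
# Sketch of the distribution step described above: a compiling node
# keeps its own compiled software and forwards to each child only the
# artifacts needed by that child or its descendants. The tree layout
# and artifact names are hypothetical.

TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": [], "a1": [], "a2": []}
ARTIFACTS = {"root": "svc.root", "a1": "svc.a1", "a2": "svc.a2", "b": "svc.b"}

def descendants(node):
    """All nodes in the subtree rooted at node, excluding node itself."""
    out = []
    for child in TREE.get(node, []):
        out.append(child)
        out.extend(descendants(child))
    return out

def payload_for(child):
    """Compiled software a parent sends to one child: anything destined
    for the child itself or for any of its descendants."""
    targets = [child] + descendants(child)
    return sorted(ARTIFACTS[t] for t in targets if t in ARTIFACTS)

# The compiling node keeps svc.root and sends each child only its share.
assert payload_for("a") == ["svc.a1", "svc.a2"]
assert payload_for("b") == ["svc.b"]
```

Pruning the payload per subtree is what keeps traffic proportional to each branch's needs rather than shipping every binary to every node.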

  9. Software reliability growth model for safety systems of nuclear reactor

    International Nuclear Information System (INIS)

    Thirugnana Murthy, D.; Murali, N.; Sridevi, T.; Satya Murty, S.A.V.; Velusamy, K.

    2014-01-01

    The demand for complex software systems has increased more rapidly than the ability to design, implement, test, and maintain them, and the reliability of software systems has become a major concern for modern society. Software failures have impaired several high-visibility programs in the space, telecommunications, defense and health industries; besides the costs involved, such failures set back the projects. This paper discusses the need for systematic approaches for measuring and assuring software reliability, which consumes a major share of project development resources, and ways of quantifying reliability and using it for improvement and control of the software development and maintenance process. It covers reliability models with a focus on 'Reliability Growth'. It includes data collection on reliability, statistical estimation and prediction, metrics and attributes of product architecture, design, software development, and the operational environment. Besides its use for operational decisions like deployment, reliability assessment can guide software architecture, development, testing, and verification and validation. (author)

  10. A fault-tolerant software strategy for digital systems

    Science.gov (United States)

    Hitt, E. F.; Webb, J. J.

    1984-01-01

    Techniques developed for producing fault-tolerant software are described. Tolerance is required because of the impossibility of defining fault-free software. Faults are caused by humans and can appear anywhere in the software life cycle. Tolerance is effected through error detection, damage assessment, recovery, and fault treatment, followed by return of the system to service. Multiversion software comprises two or more versions of the software yielding solutions which are examined by a decision algorithm. Errors can also be detected by extrapolation from previous results or by the acceptability of results. Violations of timing specifications can reveal errors, or the system can roll back to an error-free state when a defect is detected. The software, when used in flight control systems, must not impinge on time-critical responses. Efforts are still needed to reduce the costs of developing the fault-tolerant systems.
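
The multiversion scheme the abstract describes, with two or more versions whose solutions are examined by a decision algorithm, can be sketched as a majority voter. The versions, tolerance, and the deliberately faulty variant below are illustrative stand-ins, not the paper's flight-control software.

```python
# Sketch of a multiversion (N-version) decision algorithm as described
# above: run independently developed versions, group results agreeing
# within a tolerance, and accept the majority. The versions and the
# tolerance are illustrative stand-ins.

TOLERANCE = 1e-6

def version_a(x): return x * x
def version_b(x): return x ** 2
def version_c(x): return x * x + 0.5   # deliberately faulty version

def vote(results, tol=TOLERANCE):
    """Return the value agreed on by a strict majority of versions,
    or raise if no majority exists (error detected, recovery needed)."""
    for candidate in results:
        agreeing = [r for r in results if abs(r - candidate) <= tol]
        if len(agreeing) * 2 > len(results):
            return sum(agreeing) / len(agreeing)
    raise RuntimeError("no majority: faults detected in multiple versions")

results = [v(3.0) for v in (version_a, version_b, version_c)]
assert abs(vote(results) - 9.0) <= TOLERANCE  # faulty version outvoted
```

The raise branch corresponds to the abstract's damage-assessment and recovery path: when no majority exists, the system must fall back (e.g., roll back to an error-free state) rather than emit an answer.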

  11. A software system for laser design and analysis

    Science.gov (United States)

    Cross, P. L.; Barnes, N. P.; Filer, E. D.

    1990-01-01

    A laser-material database and laser-modeling software system for designing lasers for laser-based Light Detection And Ranging (LIDAR) systems are presented. The software system consists of three basic sections: the database, laser models, and interface software. The database contains the physical parameters of laser, optical, and nonlinear materials required by laser models. The models include efficiency calculations, electrooptical component models, resonator, amplifier, and oscillator models, and miscellaneous models. The interface software provides a user-friendly interface between the user and his personal data files, the database, and models. The structure of the software system is essentially in place, while future plans call for upgrading the computer hardware and software in order to support a multiuser multitask environment.

  12. Model-driven dependability assessment of software systems

    CERN Document Server

    Bernardi, Simona; Petriu, Dorina C

    2013-01-01

    In this book, the authors present cutting-edge model-driven techniques for modeling and analysis of software dependability. Most of them are based on the use of UML as software specification language. From the software system specification point of view, such techniques exploit the standard extension mechanisms of UML (i.e., UML profiling). UML profiles enable software engineers to add non-functional properties to the software model, in addition to the functional ones. The authors detail the state of the art on UML profile proposals for dependability specification and rigorously describe the t

  13. Towards plug-and-play integration of archetypes into legacy electronic health record systems: the ArchiMed experience.

    Science.gov (United States)

    Duftschmid, Georg; Chaloupka, Judith; Rinner, Christoph

    2013-01-22

    The dual model approach represents a promising solution for achieving semantically interoperable standardized electronic health record (EHR) exchange. Its acceptance, however, will depend on the effort required for integrating archetypes into legacy EHR systems. We propose a corresponding approach that: (a) automatically generates entry forms in legacy EHR systems from archetypes; and (b) allows the immediate export of EHR documents that are recorded via the generated forms and stored in the EHR systems' internal format as standardized and archetype-compliant EHR extracts. As a prerequisite for applying our approach, we define a set of basic requirements for the EHR systems. We tested our approach with an EHR system called ArchiMed and were able to successfully integrate 15 archetypes from a test set of 27. For 12 archetypes, the form generation failed owing to a particular type of complex structure (multiple repeating subnodes), which was prescribed by the archetypes but not supported by ArchiMed's data model. Our experiences show that archetypes should be customized based on the planned application scenario before their integration. This would allow problematic structures to be dissolved and irrelevant optional archetype nodes to be removed. For customization of archetypes, openEHR templates or specialized archetypes may be employed. Gaps in the data types or terminological features supported by an EHR system will often not preclude integration of the relevant archetypes. More work needs to be done on the usability of the generated forms.
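
The form-generation step and its failure mode (archetypes prescribing multiple repeating subnodes that the legacy data model cannot represent) can be sketched as a tree walk over an archetype-like structure. The node layout and field naming are hypothetical, not ArchiMed's or openEHR's actual formats.

```python
# Sketch of archetype-driven form generation as described above: walk
# an archetype-like node tree, emit one form field per leaf, and reject
# the "multiple repeating subnodes" structure the legacy data model
# could not represent. The node structure here is hypothetical.

def generate_form(node, path=""):
    """Return a flat list of form-field paths, or raise when a node
    has more than one repeating child (unsupported structure)."""
    here = f"{path}/{node['name']}"
    children = node.get("children", [])
    if sum(1 for c in children if c.get("repeating")) > 1:
        raise ValueError(f"multiple repeating subnodes at {here}")
    if not children:
        return [here]
    fields = []
    for c in children:
        fields.extend(generate_form(c, here))
    return fields

bp = {"name": "blood_pressure", "children": [
    {"name": "systolic"}, {"name": "diastolic"}]}
assert generate_form(bp) == ["/blood_pressure/systolic",
                             "/blood_pressure/diastolic"]
```

Customizing archetypes before integration, as the authors recommend, amounts to dissolving the structures that trip the ValueError branch and pruning optional nodes before generation.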

  14. Training Requirements and Information Management System. Software user guide

    Energy Technology Data Exchange (ETDEWEB)

    Cillan, T.F.; Hodgson, M.A.

    1992-05-01

    This is the software user's guide for the Training Requirements and Information Management System. This guide defines and describes the software operating procedures as they apply to the end user of the software program. It is intended as a reference tool for a user who already has an in-depth knowledge of the Training Requirements and Information Management System functions and data reporting requirements.

  15. Advanced information processing system: Input/output network management software

    Science.gov (United States)

    Nagle, Gail; Alger, Linda; Kemp, Alexander

    1988-01-01

    The purpose of this document is to provide the software requirements and specifications for the Input/Output Network Management Services for the Advanced Information Processing System. This introduction and overview section briefly outlines the overall architecture and software requirements of the AIPS system before discussing the details of the design requirements and specifications of the AIPS I/O Network Management software. A brief overview of the AIPS architecture is followed by a more detailed description of the network architecture.

  16. Performance Optimization of Multi-Tenant Software Systems

    NARCIS (Netherlands)

    Bezemer, C.

    2014-01-01

    Multi-tenant software systems are Software-as-a-Service systems in which customers (or tenants) share the same resources. The key characteristics of multi-tenancy are hardware resource sharing, a high degree of configurability and a shared application and database instance. We can deduce from these...
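
The "shared application and database instance" characteristic is commonly realized by scoping every query with a tenant key; a minimal sketch follows (the schema and tenant names are hypothetical, and this is one common pattern, not necessarily the thesis's architecture).

```python
import sqlite3

# Minimal sketch of shared-instance multi-tenancy: one schema, one
# database instance, a tenant_id column scoping every query. The table
# and tenant names are hypothetical.

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (tenant_id TEXT, amount REAL)")
db.executemany("INSERT INTO invoices VALUES (?, ?)",
               [("acme", 10.0), ("acme", 20.0), ("globex", 99.0)])

def total_for(tenant_id):
    """All data access is scoped by the tenant key, so tenants share
    the instance without seeing each other's rows."""
    row = db.execute("SELECT COALESCE(SUM(amount), 0) FROM invoices "
                     "WHERE tenant_id = ?", (tenant_id,)).fetchone()
    return row[0]

assert total_for("acme") == 30.0
assert total_for("globex") == 99.0
```

The shared instance is also why one tenant's heavy queries can degrade the others' performance, which is the optimization problem the thesis title points at.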

  17. Software for MR imaging system VISTA-E50

    International Nuclear Information System (INIS)

    Nakatao, Shirou; Iino, Mitsutoshi; Fukuda, Kazuhiko

    1989-01-01

    VISTA-E50 has the advantages of high-quality imaging, fast scanning, high patient throughput and easy operation featuring AI (artificial intelligence) technologies, as well as the merits of a compact, light-weight, space- and energy-saving system. This paper presents the system software and clinical application software of VISTA-E50, especially each function and advantage. (author)

  18. Environmental Control System Software & Hardware Development

    Science.gov (United States)

    Vargas, Daniel Eduardo

    2017-01-01

    ECS hardware: (1) Provides controlled purge to the SLS rocket and Orion spacecraft. (2) Provides mission-focused engineering products and services. ECS software: (1) NASA requires Compact Unique Identifiers (CUIs), fixed-length identifiers used to identify information items. (2) The CUI structure is composed of nine semantic fields that aid the user in recognizing its purpose.

  19. New control system: ADA softwares organization

    International Nuclear Information System (INIS)

    David, L.

    1992-01-01

    On VAX/VMS, the ADA compiler is integrated into the ACS software engineering workshop, which allows coherent development through control of source and executable programs, separation of applications into various levels of visibility, and management of the links between the modules of a single application. (A.B.)

  20. Artificial intelligence and expert systems in-flight software testing

    Science.gov (United States)

    Demasie, M. P.; Muratore, J. F.

    1991-01-01

    The authors discuss the introduction of advanced information systems technologies such as artificial intelligence, expert systems, and advanced human-computer interfaces directly into Space Shuttle software engineering. The reconfiguration automation project (RAP) was initiated to coordinate this move towards 1990s software technology. The idea behind RAP is to automate several phases of the flight software testing procedure and to introduce AI and ES into space shuttle flight software testing. In the first phase of RAP, conventional tools to automate regression testing have already been developed or acquired. There are currently three tools in use.

  1. An Agent Based Software Approach towards Building Complex Systems

    Directory of Open Access Journals (Sweden)

    Latika Kharb

    2015-08-01

    Full Text Available Agent-oriented techniques represent an exciting new means of analyzing, designing and building complex software systems. They have the potential to significantly improve current practice in software engineering and to extend the range of applications that can feasibly be tackled. Yet, to date, there have been few serious attempts to cast agent systems as a software engineering paradigm. This paper seeks to rectify this omission. Specifically, the points argued include: first, that the conceptual apparatus of agent-oriented systems is well suited to building software solutions for complex systems; and second, that agent-oriented approaches represent a genuine advance over the current state of the art for engineering complex systems. Following on from this view, the major issues raised by adopting an agent-oriented approach to software engineering are highlighted and discussed in this paper.

  2. The software design of area γ radiation monitoring system

    International Nuclear Information System (INIS)

    Song Chenxin; Deng Changming; Cheng Chang; Ren Yi; Meng Dan; Liu Yun

    2008-01-01

    This paper mainly introduces the system structure, software architecture, and design ideas of the area γ radiation monitoring system, and describes in detail some programming techniques for computer communication with the local display unit. (authors)

  3. The software design of area γ radiation monitoring system

    International Nuclear Information System (INIS)

    Song Chenxin; Deng Changming; Cheng Chang; Ren Yi; Meng Dan; Liu Yun

    2007-01-01

    This paper mainly introduces the system structure, software architecture, and design ideas of the area γ radiation monitoring system, and describes in detail some programming techniques for computer communication with the local display unit. (authors)

  4. In-air thermal imaging comparison of Legacy AdvanTec, Millennium, and Sovereign WhiteStar phacoemulsification systems.

    Science.gov (United States)

    Olson, Michael D; Miller, Kevin M

    2005-08-01

    To compare the temperature profiles of 3 popular phacoemulsification units (Alcon Legacy AdvanTec, Bausch & Lomb Millennium, and AMO Sovereign WhiteStar) under similar operating conditions in air. Jules Stein Eye Institute and the Department of Ophthalmology, David Geffen School of Medicine at UCLA, Los Angeles, California, USA. Phacoemulsification probes from the 3 units were placed side by side in air and imaged in the infrared region using model P60 ThermaCAM (Flir Systems). The highest temperature produced by each probe was measured 10 seconds and 30 seconds after power application. Testing was performed under conditions that might produce a corneal burn during cataract surgery. Irrigation flow was set at the low rate of 1 cc/min to simulate a tight incision. Aspiration flow was set at 0 cc/min to simulate occlusion of the needle lumen. Wound compression was simulated in some tests by suspending 22.6 g weights by rubber bands from the silicone sleeves. Manufacturers' specific and identical silicone sleeves were used to evaluate possible variations in thermal conductivity. The AdvanTec Legacy and Millennium were operated in pulse mode at 15 Hertz; 50% duty cycle; and 10%, 30%, and 50% power. The Sovereign WhiteStar was operated in both C/F (56 Hz, 33% duty cycle) and C/L (33 Hz, 20% duty cycle) modes at the same console power settings. Temperature profiles were determined at a variety of power settings with each system operating in continuous and pulse mode. Under all experimental conditions (at 10%, 30%, and 50% powers; with and without external weights suspended from the phacoemulsification probes; with manufacturers' and identical silicone sleeves; and in continuous and pulse modes), the Millennium and the Sovereign WhiteStar generated higher temperatures than the Legacy AdvanTec. Under controlled operating conditions in air and under a variety of power, load, and duty-cycle settings, the Millennium and the Sovereign WhiteStar, operating in both pulse and

  5. Software configuration management plan for HANDI 2000 business management system

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, D.

    1998-08-25

    The Software Configuration Management Plan (SCMP) describes the configuration management and control environment for HANDI 2000 for the PP and PS software, as well as any custom-developed software. This plan establishes requirements and processes for uniform documentation control, system change control, and systematic evaluation and coordination of HANDI 2000. This SCMP becomes effective upon acceptance of this document and will provide guidance throughout implementation efforts.

  6. Statistical reliability assessment of software-based systems

    Energy Technology Data Exchange (ETDEWEB)

    Korhonen, J. [VTT Electronics, Espoo (Finland); Pulkkinen, U.; Haapanen, P. [VTT Automation, Espoo (Finland)

    1997-01-01

    Plant vendors nowadays propose software-based systems even for the most critical safety functions. The reliability estimation of safety critical software-based systems is difficult since the conventional modeling techniques do not necessarily apply to the analysis of these systems, and the quantification seems to be impossible. Due to lack of operational experience and due to the nature of software faults, the conventional reliability estimation methods can not be applied. New methods are therefore needed for the safety assessment of software-based systems. In the research project Programmable automation systems in nuclear power plants (OHA), financed together by the Finnish Centre for Radiation and Nuclear Safety (STUK), the Ministry of Trade and Industry and the Technical Research Centre of Finland (VTT), various safety assessment methods and tools for software based systems are developed and evaluated. This volume in the OHA-report series deals with the statistical reliability assessment of software based systems on the basis of dynamic test results and qualitative evidence from the system design process. Other reports to be published later on in OHA-report series will handle the diversity requirements in safety critical software-based systems, generation of test data from operational profiles and handling of programmable automation in plant PSA-studies. (orig.) (25 refs.).

  7. Statistical reliability assessment of software-based systems

    International Nuclear Information System (INIS)

    Korhonen, J.; Pulkkinen, U.; Haapanen, P.

    1997-01-01

    Plant vendors nowadays propose software-based systems even for the most critical safety functions. The reliability estimation of safety critical software-based systems is difficult since the conventional modeling techniques do not necessarily apply to the analysis of these systems, and the quantification seems to be impossible. Due to lack of operational experience and due to the nature of software faults, the conventional reliability estimation methods can not be applied. New methods are therefore needed for the safety assessment of software-based systems. In the research project Programmable automation systems in nuclear power plants (OHA), financed together by the Finnish Centre for Radiation and Nuclear Safety (STUK), the Ministry of Trade and Industry and the Technical Research Centre of Finland (VTT), various safety assessment methods and tools for software based systems are developed and evaluated. This volume in the OHA-report series deals with the statistical reliability assessment of software based systems on the basis of dynamic test results and qualitative evidence from the system design process. Other reports to be published later on in OHA-report series will handle the diversity requirements in safety critical software-based systems, generation of test data from operational profiles and handling of programmable automation in plant PSA-studies. (orig.) (25 refs.)

  8. Software reliability and safety in nuclear reactor protection systems

    International Nuclear Information System (INIS)

    Lawrence, J.D.

    1993-11-01

    Planning the development, use and regulation of computer systems in nuclear reactor protection systems in such a way as to enhance reliability and safety is a complex issue. This report is one of a series of reports from the Computer Safety and Reliability Group at Lawrence Livermore National Laboratory that investigates different aspects of computer software in reactor protection systems. There are two central themes in the report. First, software considerations cannot be fully understood in isolation from computer hardware and application considerations. Second, the process of engineering reliability and safety into a computer system requires activities to be carried out throughout the software life cycle. The report discusses the many activities that can be carried out during the software life cycle to improve the safety and reliability of the resulting product. The viewpoint is primarily that of the assessor, or auditor

  9. Software reliability and safety in nuclear reactor protection systems

    Energy Technology Data Exchange (ETDEWEB)

    Lawrence, J.D. [Lawrence Livermore National Lab., CA (United States)

    1993-11-01

    Planning the development, use and regulation of computer systems in nuclear reactor protection systems in such a way as to enhance reliability and safety is a complex issue. This report is one of a series of reports from the Computer Safety and Reliability Group at Lawrence Livermore National Laboratory that investigates different aspects of computer software in reactor protection systems. There are two central themes in the report. First, software considerations cannot be fully understood in isolation from computer hardware and application considerations. Second, the process of engineering reliability and safety into a computer system requires activities to be carried out throughout the software life cycle. The report discusses the many activities that can be carried out during the software life cycle to improve the safety and reliability of the resulting product. The viewpoint is primarily that of the assessor, or auditor.

  10. Software development for a switch-based data acquisition system

    Energy Technology Data Exchange (ETDEWEB)

    Booth, A. [Superconducting Super Collider Lab., Dallas, TX (United States); Black, D.; Walsh, D. [Fermi National Accelerator Lab., Batavia, IL (United States)

    1991-12-01

    We report on the software aspects of the development of a switch-based data acquisition system at Fermilab. This paper describes how, with the goal of providing an "integrated systems engineering" environment, several powerful software tools were put in place to facilitate extensive exploration of all aspects of the design. These tools include a simulation package, graphics package and an Expert System shell which have been integrated to provide an environment which encourages the close interaction of hardware and software engineers. This paper includes a description of the simulation, user interface, embedded software, remote procedure calls, and diagnostic software which together have enabled us to provide real-time control and monitoring of a working prototype switch-based data acquisition (DAQ) system.

  11. Bistatic radar system analysis and software development

    OpenAIRE

    Teo, Ching Leong

    2003-01-01

    Approved for public release, distribution is unlimited Bistatic radar has some properties that are distinctly different from monostatic radar. Recently bistatic radar has received attention for its potential to detect stealth targets due to enhanced target forward scatter. Furthermore, the feasibility of hitchhiker radar has been demonstrated, which allows passive radar receivers to detect and track targets. This thesis developed a software simulation package in Matlab that provides a conv...

  12. Testing digital safety system software with a testability measure based on a software fault tree

    International Nuclear Information System (INIS)

    Sohn, Se Do; Hyun Seong, Poong

    2006-01-01

    Using predeveloped software, a digital safety system is designed that meets the quality standards of a safety system. To demonstrate the quality, the design process and operating history of the product are reviewed, along with configuration management practices. The application software of the safety system is developed in accordance with the planned life cycle. Testing, a major phase that takes significant time in the overall life cycle, can be optimized if the testability of the software can be evaluated. The proposed testability measure of the software is based on the entropy of the importance of basic statements and the failure probability from a software fault tree. To calculate testability, a fault tree is used in the analysis of the source code. With a quantitative measure of testability, testing can be optimized. The proposed testability measure can also be used to demonstrate whether test cases based on uniform partitions, such as branch coverage criteria, result in homogeneous partitions, which are known to be more effective than random testing. In this paper, the testability measure is calculated for the modules of a nuclear power plant's safety software. Module testing with branch coverage criteria required fewer test cases if the module had higher testability. The result shows that the testability measure can be used to evaluate whether partitions have homogeneous characteristics
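
One plausible reading of the entropy ingredient in the measure above can be sketched as Shannon entropy over normalized statement importances. The importance values, and how entropy combines with the fault-tree failure probability, are assumptions for illustration, not the paper's published formula.

```python
import math

# Sketch of an entropy-based testability ingredient in the spirit of
# the abstract: normalize per-statement importance values (e.g., from
# a fault-tree analysis of the source code) and compute their Shannon
# entropy. The values, and the combination with failure probability,
# are assumptions here, not the paper's actual formula.

def entropy_of_importance(importances):
    """Shannon entropy (bits) of normalized statement importances.
    Uniform importance maximizes entropy; a few dominant statements
    concentrate it, suggesting where testing effort should focus."""
    total = sum(importances)
    probs = [i / total for i in importances if i > 0]
    return -sum(p * math.log2(p) for p in probs)

# A module whose statements matter equally vs. one dominated by a few.
uniform = entropy_of_importance([0.25, 0.25, 0.25, 0.25])
skewed = entropy_of_importance([0.85, 0.05, 0.05, 0.05])
assert abs(uniform - 2.0) < 1e-9
assert skewed < uniform
```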

  13. Simulation software support (S3) system a software testing and debugging tool

    International Nuclear Information System (INIS)

    Burgess, D.C.; Mahjouri, F.S.

    1990-01-01

    The largest percentage of technical effort in the software development process is accounted for by debugging and testing. It is not unusual for a software development organization to spend over 50% of the total project effort on testing. In the extreme, testing of human-rated software (e.g., nuclear reactor monitoring, training simulator) can cost three to five times as much as all other software engineering steps combined. The Simulation Software Support (S3) System, developed by the Link-Miles Simulation Corporation, is ideally suited for real-time simulation applications which involve a large database with models programmed in FORTRAN. This paper focuses on the testing elements of the S3 system. System support software utilities are provided which enable the loading and execution of modules in the development environment. These elements include the Linking/Loader (LLD) for dynamically linking program modules and loading them into memory and the Interactive Executive (IEXEC) for controlling the execution of the modules. Features of the Interactive Symbolic Debugger (SD) and the Real Time Executive (RTEXEC) to support unit and integrated testing will be explored

  14. Hardware control system using modular software under RSX-11D

    International Nuclear Information System (INIS)

    Kittell, R.S.; Helland, J.A.

    1978-01-01

    A modular software system used to control extensive hardware is described. The development, operation, and experience with this software are discussed. Included are the methods employed to implement this system while taking advantage of the Real-Time features of RSX-11D. Comparisons are made between this system and an earlier nonmodular system. The controlled hardware includes magnet power supplies, stepping motors, DVM's, and multiplexors, and is interfaced through CAMAC. 4 figures

  15. A Configurable, Object-Oriented, Transportation System Software Framework

    Energy Technology Data Exchange (ETDEWEB)

    KELLY,SUZANNE M.; MYRE,JOHN W.; PRICE,MARK H.; RUSSELL,ERIC D.; SCOTT,DAN W.

    2000-08-01

    The Transportation Surety Center, 6300, has been conducting continuing research into and development of information systems for the Configurable Transportation Security and Information Management System (CTSS) project, an Object-Oriented Framework approach that uses Component-Based Software Development to facilitate rapid deployment of new systems while improving software cost containment, development reliability, compatibility, and extensibility. The direction has been to develop a Fleet Management System (FMS) framework using object-oriented technology. The goal for the current development is to provide a software and hardware environment that will demonstrate and support object-oriented development in the FMS Central Command Center and Vehicle domains.

  16. The achievement and assessment of safety in systems containing software

    International Nuclear Information System (INIS)

    Ball, A.; Dale, C.J.; Butterfield, M.H.

    1986-01-01

    In order to establish confidence in the safe operation of a reactor protection system, there is a need to establish, as far as it is possible, that: (i) the algorithms used are correct; (ii) the system is a correct implementation of the algorithms; and (iii) the hardware is sufficiently reliable. This paper concentrates principally on the second of these, as it applies to the software aspect of the more accurate and complex trip functions to be performed by modern reactor protection systems. In order to engineer safety into software, there is a need to use a development strategy which will stand a high chance of achieving a correct implementation of the trip algorithms. This paper describes three broad methodologies by which it is possible to enhance the integrity of software: fault avoidance, fault tolerance and fault removal. Fault avoidance is concerned with making the software as fault free as possible by appropriate choice of specification, design and implementation methods. A fault tolerant strategy may be advisable in many safety critical applications, in order to guard against residual faults present in the software of the installed system. Fault detection and removal techniques are used to remove as many faults as possible of those introduced during software development. The paper also discusses safety and reliability assessment as it applies to software, outlining the various approaches available. Finally, there is an outline of a research project underway in the UKAEA which is intended to assess methods for developing and testing safety and protection systems involving software. (author)

  17. Systems and software variability management concepts, tools and experiences

    CERN Document Server

    Capilla, Rafael; Kang, Kyo-Chul

    2013-01-01

    The success of product line engineering techniques in the last 15 years has popularized the use of software variability as a key modeling approach for describing the commonality and variability of systems at all stages of the software lifecycle. Software product lines enable a family of products to share a common core platform while allowing product-specific functionality to be built on top of the platform. Many companies have exploited the concept of software product lines to increase the resources that focus on highly differentiating functionality and thus improve their competitiveness

  18. Testing methodology of embedded software in digital plant protection system

    International Nuclear Information System (INIS)

    Seong, Ah Young; Choi, Bong Joo; Lee, Na Young; Hwang, Il Soon

    2001-01-01

    It is necessary to assure the reliability of software in order to digitalize the RPS (Reactor Protection System). Since an RPS failure can cause fatal damage in accident cases, the system is classified as safety Class 1E. Therefore, we propose an effective testing methodology to assure the reliability of the embedded software in the DPPS (Digital Plant Protection System). To test the embedded software in the DPPS effectively, our methodology consists of two steps. The first is a re-engineering step that extracts classes from the structural source program, and the second is a level-of-testing step composed of unit testing, integration testing and system testing. At each testing step we test the embedded software with selected test cases after the test item identification step. Using this testing methodology, we can test the embedded software effectively while reducing cost and time

  19. Ignominy: tool for analysing software dependencies and for reducing complexity in large software systems

    Energy Technology Data Exchange (ETDEWEB)

    Tuura, L.A. E-mail: lassi.tuura@cern.ch

    2003-04-21

    LHC experiments such as CMS have large-scale software projects that are challenging to manage. We present Ignominy, a tool developed in CMS to help us deal better with complex software systems. Ignominy analyses the source code as well as binary products such as libraries and programs to deliver a comprehensive view of the package dependencies, including all the external products used by the project. We describe the analysis and the various charts, diagrams and metrics collected by the tool, including results from several large-scale HEP software projects. We also discuss the progress made in CMS to improve the software structure and the experience we have gained in physical packaging and distribution of our code.
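The abstract does not describe Ignominy's internals; as an illustration of the kind of dependency metrics such a tool reports, here is a minimal fan-in/fan-out computation over a made-up package graph (all package names are hypothetical):

```python
from collections import defaultdict

# Hypothetical package-dependency edges; Ignominy's real input comes from
# parsing source includes and link lines, and these names are made up.
deps = {
    "Analysis":  {"Framework", "Utilities"},
    "Framework": {"Utilities"},
    "Utilities": set(),
}

def fan_metrics(deps):
    """Return {package: (fan_out, fan_in)} for a dependency mapping.

    fan_out = number of packages a package depends on;
    fan_in = number of packages that depend on it.
    """
    fan_in = defaultdict(int)
    for targets in deps.values():
        for t in targets:
            fan_in[t] += 1
    return {pkg: (len(targets), fan_in[pkg]) for pkg, targets in deps.items()}

# "Utilities" has no outgoing dependencies but two dependents: (0, 2).
```

High fan-in with low fan-out marks stable base packages; high fan-out flags packages that are costly to build and release independently.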

  20. Ignominy: tool for analysing software dependencies and for reducing complexity in large software systems

    Science.gov (United States)

    Tuura, L. A.; CMS Collaboration

    2003-04-01

    LHC experiments such as CMS have large-scale software projects that are challenging to manage. We present Ignominy, a tool developed in CMS to help us deal better with complex software systems. Ignominy analyses the source code as well as binary products such as libraries and programs to deliver a comprehensive view of the package dependencies, including all the external products used by the project. We describe the analysis and the various charts, diagrams and metrics collected by the tool, including results from several large-scale HEP software projects. We also discuss the progress made in CMS to improve the software structure and the experience we have gained in physical packaging and distribution of our code.

  1. Ignominy Tool for analysing software dependencies and for reducing complexity in large software systems

    CERN Document Server

    Tuura, L A

    2003-01-01

    LHC experiments such as CMS have large-scale software projects that are challenging to manage. We present Ignominy, a tool developed in CMS to help us deal better with complex software systems. Ignominy analyses the source code as well as binary products such as libraries and programs to deliver a comprehensive view of the package dependencies, including all the external products used by the project. We describe the analysis and the various charts, diagrams and metrics collected by the tool, including results from several large-scale HEP software projects. We also discuss the progress made in CMS to improve the software structure and the experience we have gained in physical packaging and distribution of our code.

  2. Ignominy: tool for analysing software dependencies and for reducing complexity in large software systems

    International Nuclear Information System (INIS)

    Tuura, L.A.

    2003-01-01

    LHC experiments such as CMS have large-scale software projects that are challenging to manage. We present Ignominy, a tool developed in CMS to help us deal better with complex software systems. Ignominy analyses the source code as well as binary products such as libraries and programs to deliver a comprehensive view of the package dependencies, including all the external products used by the project. We describe the analysis and the various charts, diagrams and metrics collected by the tool, including results from several large-scale HEP software projects. We also discuss the progress made in CMS to improve the software structure and the experience we have gained in physical packaging and distribution of our code

  3. The software product assurance metrics study: JPL's software systems quality and productivity

    Science.gov (United States)

    Bush, Marilyn W.

    1989-01-01

    The findings are reported of the Jet Propulsion Laboratory (JPL)/Software Product Assurance (SPA) Metrics Study, conducted as part of a larger JPL effort to improve software quality and productivity. Until recently, no comprehensive data had been assembled on how JPL manages and develops software-intensive systems. The first objective was to collect data on software development from as many projects and for as many years as possible. Results from five projects are discussed. These results reflect 15 years of JPL software development, representing over 100 data points (systems and subsystems), over a third of a billion dollars, over four million lines of code and 28,000 person months. Analysis of this data provides a benchmark for gauging the effectiveness of past, present and future software development work. In addition, the study is meant to encourage projects to record existing metrics data and to gather future data. The SPA long term goal is to integrate the collection of historical data and ongoing project data with future project estimations.

  4. Integrated analysis software for bulk power system stability

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, T.; Nagao, T.; Takahashi, K. [Central Research Inst. of Electric Power Industry, Tokyo (Japan)

    1994-12-31

    This paper presents three software packages developed in-house by the Central Research Institute of Electric Power Industry (CRIEPI) for bulk power network analysis, together with a user support system that manages, easily and reliably, the large volumes of data these packages require. (author) 3 refs., 7 figs., 2 tabs.

  5. Software for ASS-500 based early warning system

    International Nuclear Information System (INIS)

    The article describes the software for the management of early warning system based on ASS-500 station. The software can communicate with the central computer using TCP/IP protocol. This allows remote control of the station through modem or local area network connection. The article describes Windows based user interface of the program

  6. Prototype Software for Automated Structural Analysis of Systems

    DEFF Research Database (Denmark)

    Jørgensen, A.; Izadi-Zamanabadi, Roozbeh; Kristensen, M.

    2004-01-01

    In this paper we present a prototype software tool that is developed to analyse the structural model of automated systems in order to identify redundant information that is then utilized for Fault Detection and Isolation (FDI) purposes. The dedicated algorithms in this software tool use a tri...

  7. Conceptual design for controller software of mechatronic systems

    NARCIS (Netherlands)

    Broenink, Johannes F.; Hilderink, G.H.; Bakkers, André; Bradshaw, Alan; Counsell, John

    1998-01-01

    The method and software tool presented here, aims at supporting the development of control software for mechatronic systems. Heterogeneous distributed embedded processors are considered as target hardware. Principles of the method are that the implementation process is a stepwise refinement from

  8. Automated transportation management system (ATMS) software project management plan (SPMP)

    Energy Technology Data Exchange (ETDEWEB)

    Weidert, R.S., Westinghouse Hanford

    1996-05-20

    The Automated Transportation Management System (ATMS) Software Project Management Plan (SPMP) is the lead planning document governing the life cycle of the ATMS and its integration into the Transportation Information Network (TIN). This SPMP defines the project tasks, deliverables, and high-level schedules involved in developing the client/server ATMS software.

  9. 77 FR 50722 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Science.gov (United States)

    2012-08-22

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software...) is issuing for public comment draft regulatory guide (DG), DG-1208, ``Software Unit Testing for Digital Computer Software used in Safety Systems of Nuclear Power Plants.'' The DG-1208 is proposed...

  10. Using Software Architectures for Designing Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    In this paper, we outline an on-going project of designing distributed embedded systems for closed-loop process control. The project is a joint effort between software architecture researchers and developers from two companies that produce commercial embedded process control systems. The project has a strong emphasis on software architectural issues and terminology in order to envision, design and analyze design alternatives. We present two results. First, we outline how focusing on software architecture, architectural issues and qualities is beneficial in designing distributed, embedded systems. Second, we present two different architectures for closed-loop process control and discuss their benefits and liabilities.

  11. Spaceport Command and Control System Software Development

    Science.gov (United States)

    Mahlin, Jonathan Nicholas

    2017-01-01

    There is an immense challenge in organizing personnel across a large agency such as NASA, or even over a subset of that, like a center's Engineering directorate. Workforce inefficiencies and challenges are bound to grow over time without oversight and management. It is also not always possible to hire new employees to fill workforce gaps, therefore available resources must be utilized more efficiently. The goal of this internship was to develop software that improves organizational efficiency by aiding managers, making employee information viewable and editable in an intuitive manner. This semester I created an application for managers that aids in optimizing allocation of employee resources for a single division with the possibility of scaling upwards. My duties this semester consisted of developing frontend and backend software to complete this task. The application provides user-friendly information displays and documentation of the workforce to allow NASA to diligently track the status and skills of its workforce. This tool should be able to show that current employees are being effectively utilized and whether new hires are necessary to fill skill gaps.

  12. Capturing security requirements for software systems

    Science.gov (United States)

    El-Hadary, Hassan; El-Kassas, Sherif

    2014-01-01

    Security is often an afterthought during software development. Realizing security early, especially in the requirement phase, is important so that security problems can be tackled early enough before going further in the process, avoiding rework. A more effective approach for security requirement engineering is needed to provide a more systematic way of eliciting adequate security requirements. This paper proposes a methodology for security requirement elicitation based on problem frames. The methodology aims at early integration of security with software development. The main goal of the methodology is to assist developers in eliciting adequate security requirements in a more systematic way during the requirement engineering process. A security catalog, based on the problem frames, is constructed to help identify security requirements with the aid of previous security knowledge. Abuse frames are used to model threats while security problem frames are used to model security requirements. We have made use of evaluation criteria to evaluate the resulting security requirements, concentrating on conflict identification among requirements. We have shown that more complete security requirements can be elicited by such a methodology, in addition to the assistance offered to developers in eliciting security requirements in a more systematic way. PMID:25685514

  13. Computer software design description for the integrated control and data acquisition system LDUA system

    International Nuclear Information System (INIS)

    Aftanas, B.L.

    1998-01-01

    This Computer Software Design Description (CSDD) document provides the overview of the software design for all the software that is part of the integrated control and data acquisition system of the Light Duty Utility Arm System (LDUA). It describes the major software components and how they interface. It also references the documents that contain the detailed design description of the components

  14. The art of software thermal management for embedded systems

    CERN Document Server

    Benson, Mark

    2014-01-01

    This book introduces Software Thermal Management (STM) as a means of reducing power consumption in a computing system, in order to manage heat, improve component reliability, and increase system safety.  Readers will benefit from this pragmatic guide to the field of STM for embedded systems and its catalog of software power management techniques.  Since thermal management is a key bottleneck in embedded systems design, this book focuses on power as the root cause of heat. Since software has an enormous impact on power consumption in an embedded system, this book guides readers to manage heat effectively by understanding, categorizing, and developing new ways to reduce dynamic power. Whereas most books on thermal management describe mechanisms to remove heat, this book focuses on ways to avoid generating heat in the first place.   • Explains fundamentals of software thermal management, application techniques and advanced optimization strategies; • Describes a novel method for managing dynamic power, e...

  15. Fabrication of a Sludge-Conditioning System for processing legacy wastes from the Gunite and Associated Tanks

    International Nuclear Information System (INIS)

    Randolph, J.D.; Lewis, B.E.; Farmer, J.R.; Johnson, M.A.

    2000-01-01

    The Sludge Conditioning System (SCS) for the Gunite and Associated Tanks (GAATs) is designed to receive, monitor, characterize and process legacy waste materials from the South Tank Farm tanks in preparation for final transfer of the wastes to the Melton Valley Storage Tanks (MVSTs), which are located at Oak Ridge National Laboratory. The SCS includes (1) a Primary Conditioning System (PCS) Enclosure for sampling and particle size classification, (2) a Solids Monitoring Test Loop (SMTL) for slurry characterization, (3) a Waste Transfer Pump to retrieve and transfer waste materials from GAAT consolidation tank W-9 to the MVSTs, (4) a PulsAir Mixing System to provide mixing of consolidated sludges for ease of retrieval, and (5) the interconnecting piping and valving. This report presents the design, fabrication, cost, and fabrication schedule information for the SCS

  16. Customizable software architectures in the accelerator control system environment

    CERN Document Server

    Mejuev, I; Kadokura, E

    2001-01-01

    Tailoring is the further evolution of an application after deployment in order to adapt it to requirements that were not accounted for in the original design. End-user customization has been extensively researched in applied computer science from HCI and software engineering perspectives. Customization allows coping with flexibility requirements, decreasing maintenance and development costs of software products. In general, dynamic or diverse software requirements constitute the need for implementing end-user customization in computer systems. In accelerator physics research the factor of dynamic requirements is especially important, due to frequent software and hardware modifications resulting in correspondingly high upgrade and maintenance costs. We introduce the results of a feasibility study on implementing end-user tailorability in the software for an accelerator control system, considering the design and implementation of a distributed monitoring application for the 12 GeV KEK Proton Synchrotron as an example. T...

  17. A study of software safety analysis system for safety-critical software

    International Nuclear Information System (INIS)

    Chang, H. S.; Shin, H. K.; Chang, Y. W.; Jung, J. C.; Kim, J. H.; Han, H. H.; Son, H. S.

    2004-01-01

    The core factors and requirements traced for safety-critical software, and the methodology adopted in each stage of the software life cycle, are presented. In the concept phase, a Failure Modes and Effects Analysis (FMEA) of the system has been performed. The feasibility evaluation of the selected safety parameter was performed and a Preliminary Hazards Analysis list was prepared using the HAZOP (Hazard and Operability) technique. A check list for management control has been produced via a walk-through technique. Based on the evaluation of the check list, activities to be performed in the requirement phase have been determined. In the design phase, hazard analysis has been performed to check the safety capability of the system with regard to the safety software algorithm using Fault Tree Analysis (FTA). In the test phase, the test items based on FMEA have been checked for fitness, guided by an accident scenario. The pressurizer low-pressure trip algorithm has been selected as a sample for applying the FTA method to software safety analysis. By applying a CASE tool, the requirements traceability of the safety-critical system has been enhanced throughout all software life cycle phases
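The abstract applies Fault Tree Analysis to a trip algorithm without reproducing the tree; as a generic sketch (not the study's actual model), the basic AND/OR gate arithmetic of a fault tree, assuming independent basic events with purely illustrative probabilities, looks like this:

```python
def ft_and(*probs):
    """AND gate: the output event occurs only if all independent
    basic events occur, so probabilities multiply."""
    p = 1.0
    for x in probs:
        p *= x
    return p

def ft_or(*probs):
    """OR gate: the output event occurs if at least one independent
    basic event occurs (complement of none occurring)."""
    p = 1.0
    for x in probs:
        p *= 1.0 - x
    return 1.0 - p

# Hypothetical top event with illustrative probabilities: the trip fails
# if (sensor fault AND voter fault) OR software fault.
top = ft_or(ft_and(1e-3, 1e-2), 1e-5)  # roughly 2e-5
```

Real FTA tools additionally extract minimal cut sets; this sketch only shows how gate probabilities combine.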

  18. Software control and system configuration management - A process that works

    Science.gov (United States)

    Petersen, K. L.; Flores, C., Jr.

    1983-01-01

    A comprehensive software control and system configuration management process for flight-crucial digital control systems of advanced aircraft has been developed and refined to insure efficient flight system development and safe flight operations. Because of the highly complex interactions among the hardware, software, and system elements of state-of-the-art digital flight control system designs, a systems-wide approach to configuration control and management has been used. Specific procedures are implemented to govern discrepancy reporting and reconciliation, software and hardware change control, systems verification and validation testing, and formal documentation requirements. An active and knowledgeable configuration control board reviews and approves all flight system configuration modifications and revalidation tests. This flexible process has proved effective during the development and flight testing of several research aircraft and remotely piloted research vehicles with digital flight control systems that ranged from relatively simple to highly complex, integrated mechanizations.

  19. In Forming Software: Systems, Structuralism, Demythification

    Directory of Open Access Journals (Sweden)

    Edward A. Shanken

    2014-05-01

    Full Text Available In the mid-1960s, Marshall McLuhan prophesied that electronic media were creating an increasingly interconnected global village. Such pronouncements popularized the idea that the era of machine-age technology was drawing to a close, ushering in a new era of information technology. This shift finds parallels in a wave of major art performances and exhibitions between 1966-1970, including nine evenings: theatre and engineering at the New York Armory, spearheaded by Robert Rauschenberg, Billy Klüver, and Robert Whitman in 1966; The Machine: As Seen at the End of the Mechanical Age, curated by Pontus Hultén at the Museum of Modern Art in New York (MOMA in 1968; Cybernetic Serendipity, curated by Jasia Reichardt at the Institute of Contemporary Art in London in 1968; and Software, Information Technology: Its New Meaning for Art, curated by Jack Burnham at the Jewish Museum in New York.

  20. MEMbrain. A software emergency management system

    International Nuclear Information System (INIS)

    Drager, K.H.; Brokke, I.

    1998-01-01

    MEMbrain is the name of the EUREKA project EU904. MEM is an abbreviation for Major Emergency Management and brain refers to computer technology. MEMbrain is a strategic European project - the consortium includes partners from six countries, covering the European continent from north to south (Finland, Norway, Denmark, France, Portugal and Greece). The strategy for the project has been to develop a dynamic decision support tool based on: information, prediction, communication, and on-line training. The project has resulted in a set of knowledge-based software tools supporting MEM activities, e.g. public protection management, man-to-man communication management, environment information management, and resource management, as well as an implementation of an architecture to integrate such tools. (R.P.)

  1. System Engineering Software Assessment Model for Exploration (SESAME) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Concept phase space-systems architecture evaluations typically use mass estimates as the primary means of ranking potential mission architectures. Software does not...

  2. Software Testing During Post-Deployment Support of Weapon Systems

    National Research Council Canada - National Science Library

    Gimble, Thomas

    1994-01-01

    We are providing this final audit report for your information and use. The report discusses policies, procedures, and methodologies for software testing during post-deployment support of weapon systems...

  3. Towards a lessons learned system for critical software

    International Nuclear Information System (INIS)

    Andrade, J.; Ares, J.; Garcia, R.; Pazos, J.; Rodriguez, S.; Rodriguez-Paton, A.; Silva, A.

    2007-01-01

    Failure can be a major driver for the advance of any engineering discipline and Software Engineering is no exception. But failures are useful only if lessons are learned from them. In this article we aim to make a strong defence of, and set the requirements for, lessons learned systems for safety-critical software. We also present a prototype lessons learned system that includes many of the features discussed here. We emphasize that, apart from individual organizations, lessons learned systems should target industrial sectors and even the Software Engineering community. We would like to encourage the Software Engineering community to use this kind of systems as another tool in the toolbox, which complements or enhances other approaches like, for example, standards and checklists

  4. Plexus (Phillips Laboratory Expert System-Assisted User Software)

    National Research Council Canada - National Science Library

    Myers, Thomas

    1997-01-01

    This report summarizes the results of the Phillips Lab PLEXUS project to design, build, and distribute a user friendly, expert system assisted, GUI enhanced software suite of sophisticated atmospheric...

  5. Specification for Visual Requirements of Work-Centered Software Systems

    National Research Council Canada - National Science Library

    Knapp, James R

    2006-01-01

    ... aspects of the user interface design. Without the ability to specify such original requirements, the probability of creating an accurate and effective work-centered software system is significantly reduced...

  6. The Utility of Open Source Software in Military Systems

    National Research Council Canada - National Science Library

    Esperon, Agustin I; Munoz, Jose P; Tanneau, Jean M

    2005-01-01

    The MILOS (Military Systems based on Open-source Software) project was a European research program in the Eurofinder framework, attached to the CEPA 6 and co-financed by the Ministry of Defence of France and Spain...

  7. Towards a lessons learned system for critical software

    Energy Technology Data Exchange (ETDEWEB)

    Andrade, J. [University of A Coruna. Campus de Elvina, s/n. 15071, A Coruna (Spain)]. E-mail: jag@udc.es; Ares, J. [University of A Coruna. Campus de Elvina, s/n. 15071, A Coruna (Spain)]. E-mail: juanar@udc.es; Garcia, R. [University of A Coruna. Campus de Elvina, s/n. 15071, A Coruna (Spain)]. E-mail: rafael@udc.es; Pazos, J. [Technical University of Madrid. Campus de Montegancedo, s/n. 28660, Boadilla del Monte, Madrid (Spain)]. E-mail: jpazos@fi.upm.es; Rodriguez, S. [University of A Coruna. Campus de Elvina, s/n. 15071, A Coruna (Spain)]. E-mail: santi@udc.es; Rodriguez-Paton, A. [Technical University of Madrid. Campus de Montegancedo, s/n. 28660, Boadilla del Monte, Madrid (Spain)]. E-mail: arpaton@fi.upm.es; Silva, A. [Technical University of Madrid. Campus de Montegancedo, s/n. 28660, Boadilla del Monte, Madrid (Spain)]. E-mail: asilva@fi.upm.es

    2007-07-15

    Failure can be a major driver for the advance of any engineering discipline and Software Engineering is no exception. But failures are useful only if lessons are learned from them. In this article we aim to make a strong defence of, and set the requirements for, lessons learned systems for safety-critical software. We also present a prototype lessons learned system that includes many of the features discussed here. We emphasize that, apart from individual organizations, lessons learned systems should target industrial sectors and even the Software Engineering community. We would like to encourage the Software Engineering community to use this kind of systems as another tool in the toolbox, which complements or enhances other approaches like, for example, standards and checklists.

  8. Fault Tolerant Software: a Multi Agent System Solution

    DEFF Research Database (Denmark)

    Caponetti, Fabio; Bergantino, Nicola; Longhi, Sauro

    2009-01-01

    Development of highly dependable systems remains a labour-intensive task. This paper explores recent advances in adapting the software agent architecture to control applications while addressing dependability issues. Multiple agent systems theory is reviewed, giving methods to supervise...... it. Software ageing is shown to be the most common problem, and rejuvenation its countermeasure. The paper shows how an agent population can be monitored, and how faulty agents can be isolated and reloaded in a healthy state, hence rejuvenated. The aim is to propose an architecture as a basis for the design of control...... software able to tolerate faults and residual bugs without the need for maintenance stops....

  9. Hardware-assisted software clock synchronization for homogeneous distributed systems

    Science.gov (United States)

    Ramanathan, P.; Kandlur, Dilip D.; Shin, Kang G.

    1990-01-01

    A clock synchronization scheme that strikes a balance between hardware and software solutions is proposed. The proposed scheme is a software algorithm that uses minimal additional hardware to achieve reasonably tight synchronization. Unlike other software solutions, the guaranteed worst-case skews can be made insensitive to the maximum variation of message transit delay in the system. The scheme is particularly suitable for large, partially connected distributed systems with topologies that support simple point-to-point broadcast algorithms. Examples of such topologies include the hypercube and the mesh interconnection structures.
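    The flavor of software clock synchronization can be illustrated with a generic trimmed-mean convergence round. This is a textbook interactive-convergence sketch under invented numbers, not the specific algorithm of the paper:

```python
def converge(clocks):
    """One round of trimmed-mean averaging: each node reads every peer's
    clock, discards the largest and smallest readings (suspected faulty
    or delayed), and adopts the mean of the rest.  With ideal, error-free
    readings every node computes the same value, so skew drops to zero;
    real systems see per-link read errors that bound the residual skew."""
    corrected = []
    for _ in clocks:                   # each node applies the same rule
        readings = sorted(clocks)
        trimmed = readings[1:-1]       # drop one outlier at each end
        corrected.append(sum(trimmed) / len(trimmed))
    return corrected

clocks = [100.0, 100.4, 99.7, 100.1, 103.0]   # last clock is faulty
after = converge(clocks)                      # all nodes agree afterwards
```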

  10. Stress testing of digital flight-control system software

    Science.gov (United States)

    Rajan, N.; Defeo, P. V.; Saito, J.

    1983-01-01

    A technique for dynamically testing digital flight-control system software on a module-by-module basis is described. Each test module is repetitively executed faster than real-time with an exhaustive input sequence. Outputs of the test module are compared with outputs generated by an alternate, simpler implementation for the same input data. Discrepancies between the two sets of output indicate the possible presence of a software error. The results of an implementation of this technique in the Digital Flight-Control System Software Verification Laboratory are discussed.
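    The module-versus-reference comparison described above can be sketched in a few lines. The saturation limiter here is a hypothetical stand-in for a flight-control module, not taken from the paper:

```python
def stress_test(module, reference, inputs):
    """Run the module under test and a simpler reference implementation
    on the same exhaustive input sequence; any disagreement indicates a
    possible software error in one of the two."""
    return [x for x in inputs if module(x) != reference(x)]

def limiter_fast(x):
    # variant under test: branch-based saturation to [-100, 100]
    return -100 if x < -100 else (100 if x > 100 else x)

def limiter_simple(x):
    # obviously-correct reference implementation
    return max(-100, min(100, x))

discrepancies = stress_test(limiter_fast, limiter_simple, range(-1000, 1001))
# an empty list means the test exposed no discrepancy on this input range
```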

  11. Development of design and analysis software for advanced nuclear system

    International Nuclear Information System (INIS)

    Wu Yican; Hu Liqin; Long Pengcheng; Luo Yuetong; Li Yazhou; Zeng Qin; Lu Lei; Zhang Junjun; Zou Jun; Xu Dezheng; Bai Yunqing; Zhou Tao; Chen Hongli; Peng Lei; Song Yong; Huang Qunying

    2010-01-01

    A series of professional codes, i.e. the software tools and data libraries necessary for advanced nuclear system design and analysis, was developed by the FDS Team, including codes for automatic modeling, physics and engineering calculation, virtual simulation and visualization, system engineering and safety analysis, and the related database management. The development of this software series was undertaken as an exercise in nuclear informatics. This paper introduces the main functions and key techniques of the software series, as well as some tests and practical applications. (authors)

  12. Qt based control system software for Low Energy Accelerator Facility

    International Nuclear Information System (INIS)

    Basu, A.; Singh, S.; Nagraju, S.B.V.; Gupta, S.; Singh, P.

    2012-01-01

    Qt based control system software for the Low Energy Accelerator Facility (LEAF) is operational at Bhabha Atomic Research Centre (BARC), Trombay, Mumbai. LEAF is a 50 keV negative-ion electrostatic accelerator based on a SNICS ion source. The control system software uses Nokia Trolltech's Qt 4.x API. NI 6008 USB-based multifunction cards have been used to control and read back field equipment such as power supplies, pumps, valves etc. The control system follows a client-server architecture. Qt was chosen for its excellent GUI capability and platform-independent nature. The paper will describe the control system. (author)

  13. Design and Acquisition of Software for Defense Systems

    Science.gov (United States)

    2018-02-14

    factory, training). However, based on the experience of the commercial sector, net costs can be expected to decrease after adopting iterative...the U.S. Naval Sea Systems Command (NAVSEA), and the AMC) need to develop workforce competency and a deep familiarity of current software development...techniques. To do so, they should acquire or access a small cadre of software systems architects with a deep understanding of iterative development

  14. Radioisotope thermoelectric generator transportation system subsystem 143 software development plan

    International Nuclear Information System (INIS)

    King, D.A.

    1994-01-01

    This plan describes the activities to be performed and the controls to be applied to the process of specifying, developing, and qualifying the data acquisition software for the Radioisotope Thermoelectric Generator (RTG) Transportation System Subsystem 143 Instrumentation and Data Acquisition System (IDAS). This plan will serve as a software quality assurance plan, a verification and validation (V and V) plan, and a configuration management plan.

  15. Software for Intelligent System Health Management

    Science.gov (United States)

    Trevino, Luis C.

    2004-01-01

    This viewgraph presentation describes the characteristics and advantages of autonomy and artificial intelligence in systems health monitoring. The presentation lists technologies relevant to Intelligent System Health Management (ISHM), and some potential applications.

  16. Software For Monitoring VAX Computer Systems

    Science.gov (United States)

    Farkas, Les; Don, Ken; Lavery, David; Baron, Amy

    1994-01-01

    VAX Continuous Monitoring System (VAXCMS) computer program developed at NASA Headquarters to aid system managers in monitoring performances of VAX computer systems through generation of graphic images summarizing trends in performance metrics over time. VAXCMS written in DCL and VAX FORTRAN for use with DEC VAX-series computers running VMS 5.1 or later.

  17. Software Coherence in Multiprocessor Memory Systems. Ph.D. Thesis

    Science.gov (United States)

    Bolosky, William Joseph

    1993-01-01

    Processors are becoming faster and multiprocessor memory interconnection systems are not keeping up. Therefore, it is necessary to keep threads and the memory they access as near one another as possible. Typically, this involves placing memory or caches with the processors, which gives rise to the problem of coherence: if one processor writes an address, any other processor reading that address must see the new value. Coherence can be maintained by the hardware or with software intervention. Systems of both types have been built in the past; the hardware-based systems tended to outperform the software ones. However, the ratio of processor to interconnect speed is now so high that the extra overhead of the software systems may no longer be significant. This issue is explored both by implementing a software-maintained system and by introducing and using the technique of offline optimal analysis of memory reference traces. The thesis finds that in properly built systems, software-maintained coherence can perform comparably to, or even better than, hardware-maintained coherence. The architectural features necessary for efficient software coherence to be profitable include a small page size, a fast trap mechanism, and the ability to execute instructions while remote memory references are outstanding.

  18. NIF Projects Controls and Information Systems Software Quality Assurance Plan

    Energy Technology Data Exchange (ETDEWEB)

    Fishler, B

    2011-03-18

    Quality achievement for the National Ignition Facility (NIF) and the National Ignition Campaign (NIC) is the responsibility of the NIF Projects line organization as described in the NIF and Photon Science Directorate Quality Assurance Plan (NIF QA Plan). This Software Quality Assurance Plan (SQAP) is subordinate to the NIF QA Plan and establishes quality assurance (QA) activities for the software subsystems within Controls and Information Systems (CIS). This SQAP implements an activity level software quality assurance plan for NIF Projects as required by the LLNL Institutional Software Quality Assurance Program (ISQAP). Planned QA activities help achieve, assess, and maintain appropriate quality of software developed and/or acquired for control systems, shot data systems, laser performance modeling systems, business applications, industrial control and safety systems, and information technology systems. The objective of this SQAP is to ensure that appropriate controls are developed and implemented for management planning, work execution, and quality assessment of the CIS organization's software activities. The CIS line organization places special QA emphasis on rigorous configuration control, change management, testing, and issue tracking to help achieve its quality goals.

  19. Software verification in on-line systems

    International Nuclear Information System (INIS)

    Ehrenberger, W.

    1980-01-01

    Operator assistance is more and more provided by computers. Computers contain programs, whose quality should be above a certain level, before they are allowed to be used in reactor control rooms. Several possibilities for gaining software reliability figures are discussed in this paper. By supervising the testing procedure of a program, one can estimate the number of remaining programming errors. Such an estimation, however, is not very accurate. With mathematical proving procedures one can gain some knowledge on program properties. Such proving procedures are important for the verification of general WHILE-loops, which tend to be error prone. The program analysis decomposes a program into its parts. First the program structure is made visible, which includes the data movements and the control flow. From this analysis test cases can be derived that lead to a complete test. Program analysis can be done by hand or automatically. A statistical program test normally requires a large number of test runs. This number is diminished if details concerning both the program to be tested or its use are known in advance. (orig.)

  20. Pharmacogenomics training using an instructional software system.

    Science.gov (United States)

    Springer, John A; Iannotti, Nicholas V; Kane, Michael D; Haynes, Kevin; Sprague, Jon E

    2011-03-10

    To implement an elective course in pharmacogenomics designed to teach pharmacy students about the fundamentals of pharmacogenomics and the anticipated changes it will bring to the profession. The 8 sessions of the course covered the basics of pharmacogenomics, genomic biotechnology, implementation of pharmacogenetics in pharmacy, information security and privacy, ethical issues related to the use of genomic data, pharmacoepidemiology, and use and promotion of GeneScription, a software program designed to mimic the professional pharmacy environment. Student grades were based on completion of a patient education pamphlet, a 2-page paper on pharmacogenomics, and precourse and postcourse survey instruments. In the postcourse survey, all students strongly agreed that genomic data could be used to determine the optimal dose of a drug and genomic data for metabolizing enzymes could be stored in a safe place. Students also were more willing to submit deoxyribonucleic acid (DNA) data for genetic profiling and better understood how DNA analysis is performed after completing the course. An elective course in pharmacogenomics equipped pharmacy students with the basic knowledge necessary to make clinical decisions based on pharmacogenomic data and to teach other healthcare professionals and patients about pharmacogenomics. For personalized medicine to become a reality, all pharmacists and pharmacy students must learn this knowledge and these skills.

  1. Cosimulation of embedded system using RTOS software simulator

    Science.gov (United States)

    Wang, Shihao; Duan, Zhigang; Liu, Mingye

    2003-09-01

    Embedded system design often employs co-simulation to verify the system's function; one efficient software verification tool is the Instruction Set Simulator (ISS). As a full functional model of the target CPU, an ISS interprets embedded software instruction by instruction, which is usually time-consuming since it simulates at a low level. Hence the ISS often becomes the bottleneck of co-simulation in a complicated system. In this paper, a new software verification tool, the RTOS software simulator (RSS), is presented, and the mechanism of its operation is described in full detail. In the RSS method, the RTOS API is extended and a hardware simulator driver is adopted to deal with data exchange and synchronization between the two simulators.
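    To illustrate why an ISS is slow, here is a toy interpreter that, like an ISS, pays host-side work for every modelled instruction. The three-opcode instruction set is invented for illustration only:

```python
def run_iss(program, max_steps=1000):
    """Interpret one instruction per step, as an ISS does: functionally
    accurate, but every target instruction costs a full dispatch cycle
    on the host, which is what makes ISS co-simulation the bottleneck."""
    regs = {"r0": 0, "r1": 0}
    pc = 0
    for _ in range(max_steps):
        if pc >= len(program):
            break
        op, *args = program[pc]
        if op == "mov":                        # mov reg, immediate
            regs[args[0]] = args[1]
        elif op == "add":                      # add reg, reg
            regs[args[0]] += regs[args[1]]
        elif op == "jnz" and regs[args[0]] != 0:
            pc = args[1]                       # jump if register non-zero
            continue
        pc += 1
    return regs

prog = [("mov", "r0", 3), ("mov", "r1", 5), ("add", "r0", "r1")]
```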

  2. Generation of Embedded Hardware/Software from SystemC

    Directory of Open Access Journals (Sweden)

    Dominique Houzet

    2006-08-01

    Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software code is recoded manually from the system level. This recoding step often introduces new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of system synthesis from high-level specifications. In this paper, we propose a design flow intended to reduce the SoC design cost. This design flow unifies hardware and software using a single high-level language. It integrates hardware/software (HW/SW) generation tools and automatic interface synthesis through a custom library of adapters. We have validated our interface synthesis approach on a hardware producer/consumer case study and on the design of a software radiocommunication application.

  3. Generation of Embedded Hardware/Software from SystemC

    Directory of Open Access Journals (Sweden)

    Ouadjaout Salim

    2006-01-01

    Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software code is recoded manually from the system level. This recoding step often introduces new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of system synthesis from high-level specifications. In this paper, we propose a design flow intended to reduce the SoC design cost. This design flow unifies hardware and software using a single high-level language. It integrates hardware/software (HW/SW) generation tools and automatic interface synthesis through a custom library of adapters. We have validated our interface synthesis approach on a hardware producer/consumer case study and on the design of a software radiocommunication application.

  4. Use of Commercially Available Software in an Attribute Measurement System

    International Nuclear Information System (INIS)

    MacArthur, Duncan W.; Bracken, David S.; Carrillo, Louis A.; Elmont, Timothy H.; Frame, Katherine C.; Hirsch, Karen L.

    2005-01-01

    A major issue in international safeguards of nuclear materials is the ability to verify that processes and materials in nuclear facilities are consistent with declarations without revealing sensitive information. An attribute measurement system (AMS) is a non-destructive assay (NDA) system that utilizes an information barrier to protect potentially sensitive information about the measurement item. A key component is the software utilized for operator interface, data collection, analysis, and attribute determination, as well as the operating system under which it is implemented. Historically, custom software has been used almost exclusively in transparency applications, and it is unavoidable that some amount of custom software is needed. The focus of this paper is to explore the extent to which commercially available software may be used, and its relative merits.

  5. Software Engineering Issues for Cyber-Physical Systems

    DEFF Research Database (Denmark)

    Al-Jaroodi, Jameela; Mohamed, Nader; Jawhar, Imad

    2016-01-01

    Cyber-Physical Systems (CPS) provide many smart features for enhancing physical processes. These systems are designed with a set of distributed hardware, software, and network components that are embedded in physical systems and environments or attached to humans. Together they function seamlessly...... to offer specific functionalities or features that help enhance human lives, operations or environments. While different CPS components play important roles in a successful CPS development, the software plays the most important role among them. Acquiring and using high quality CPS components is the first...... is not a trivial task. This paper provides an overview discussion of software engineering issues related to the analysis, design, development, verification and validation, and quality assurance of CPS software. Some of these issues are related to the nature/type of CPS while others are related to the complexity......

  6. Investigating Advances in the Acquisition of Secure Systems Based on Open Architecture, Open Source Software, and Software Product Lines

    Science.gov (United States)

    2012-01-27

    software system acquisition within the DoD, whether focusing on SPLs ( Bergey & Jones, 2010; Guertin & Clements, 2010), or on how to improve software system...Engineering (SEKE2011), Miami, FL. Bergey , J., & Jones, L. (2010). Exploring acquisition strategies for adopting a software product line approach. In

  7. Outsourcing the development of specific application software using the ESA software engineering standards the SPS software Interlock System

    CERN Document Server

    Denis, B

    1995-01-01

    CERN is considering outsourcing as a solution to the reduction of staff. The need to re-engineer the SPS Software Interlock System provided an opportunity to explore the applicability of outsourcing to our specific controls environment, and the ESA PSS-05 standards were selected for the requirements specification, the development, the control and monitoring, and the project management. The software produced by the contractor is now fully operational. After outlining the scope and complexity of the project, a discussion of the ESA PSS-05 standards is presented: the choice, the way these standards improve the outsourcing process, the quality they induce, but also the need to adapt them and their limitations in defining the customer-supplier relationship. The success factors and difficulties of development under contract are also discussed. The maintenance aspect and the impact on in-house developments are finally addressed.

  8. A General Water Resources Regulation Software System in China

    Science.gov (United States)

    LEI, X.

    2017-12-01

    To avoid repeated re-development of core modules for normal and emergency water resources regulation, and to improve the maintainability and upgradability of regulation models and business logic, a general water resources regulation software framework was developed based on the collection and analysis of common requirements for water resources regulation and emergency management. It provides a customizable, extensible software framework, open to secondary development, for the three-level platform "MWR-Basin-Province". Meanwhile, this general software system realizes business collaboration and information sharing of water resources regulation schemes among the three-level platforms, so as to improve national water resources regulation decision-making. Four main modules are involved in the general software system: 1) a complete set of general water resources regulation modules that allows secondary developers to custom-develop water resources regulation decision-making systems; 2) a complete set of model bases and model computing software released in the form of cloud services; 3) a complete set of tools to build the concept map and model system of basin water resources regulation, as well as a model management system to calibrate and configure model parameters; 4) a database that satisfies the business and functional requirements of general water resources regulation software and can finally provide technical support for building basin or regional water resources regulation models.

  9. Communicating embedded systems software and design

    CERN Document Server

    Jard, Claude

    2013-01-01

    The increased complexity of embedded systems coupled with quick design cycles to accommodate faster time-to-market requires increased system design productivity that involves both model-based design and tool-supported methodologies. Formal methods are mathematically-based techniques and provide a clean framework in which to express requirements and models of the systems, taking into account discrete, stochastic and continuous (timed or hybrid) parameters with increasingly efficient tools. This book deals with these formal methods applied to communicating embedded systems by presenting the

  10. Software system development of NPP plant DiD risk monitor. Basic design of software configuration

    International Nuclear Information System (INIS)

    Yoshikawa, Hidekazu; Nakagawa, Takashi

    2015-01-01

    A new risk monitor system is under development which can be applied not only to prevent severe accidents in daily operation but also to mitigate the radiological hazard just after a severe accident happens and to support long-term management of post-severe-accident consequences. The fundamental method for the new risk monitor system is first given: how to configure the Plant Defense-in-Depth (DiD) Risk Monitor as an object-oriented software system based on a functional modeling approach. In this paper, software for the plant DiD risk monitor is newly developed by object-oriented methods utilizing the Unified Modeling Language (UML). Usage of the developed DiD risk monitor is also introduced by showing examples for a LOCA case of the AP1000. (author)

  11. Hardware and software architecture for the integration of the new EC waves launcher in FTU control system

    Energy Technology Data Exchange (ETDEWEB)

    Boncagni, L. [Associazione EURATOM-ENEA sulla Fusione – ENEA, Via Enrico Fermi, 45 00045 Frascati (RM) (Italy); Centioli, C., E-mail: cristina.centioli@enea.it [Associazione EURATOM-ENEA sulla Fusione – ENEA, Via Enrico Fermi, 45 00045 Frascati (RM) (Italy); Galperti, C.; Alessi, E.; Granucci, G. [Associazione EURATOM-ENEA-CNR sulla Fusione – IFP-CNR, Via Roberto Cozzi, 53 20125 Milano (Italy); Grosso, L.A. [Associazione EURATOM-ENEA sulla Fusione – ENEA, Via Enrico Fermi, 45 00045 Frascati (RM) (Italy); Marchetto, C. [Associazione EURATOM-ENEA-CNR sulla Fusione – IFP-CNR, Via Roberto Cozzi, 53 20125 Milano (Italy); Napolitano, M. [Associazione EURATOM-ENEA sulla Fusione – ENEA, Via Enrico Fermi, 45 00045 Frascati (RM) (Italy); Nowak, S. [Associazione EURATOM-ENEA-CNR sulla Fusione – IFP-CNR, Via Roberto Cozzi, 53 20125 Milano (Italy); Panella, M. [Associazione EURATOM-ENEA sulla Fusione – ENEA, Via Enrico Fermi, 45 00045 Frascati (RM) (Italy); Sozzi, C. [Associazione EURATOM-ENEA-CNR sulla Fusione – IFP-CNR, Via Roberto Cozzi, 53 20125 Milano (Italy); Tilia, B.; Vitale, V. [Associazione EURATOM-ENEA sulla Fusione – ENEA, Via Enrico Fermi, 45 00045 Frascati (RM) (Italy)

    2013-10-15

    Highlights: ► The integration of a new ECRH launcher to FTU legacy control system is reported. ► Fast control has been developed with a three-node RT cluster within MARTe framework. ► Slow control was implemented with a Simatic S7 PLC and an EPICS IOC-CA application. ► The first results have assessed the feasibility of the launcher control architecture. -- Abstract: The role of high power electron cyclotron (EC) waves in controlling magnetohydrodynamic (MHD) instabilities in tokamaks has been assessed in several experiments, exploiting the physical effects induced by resonant heating and current drive. Recently a new EC launcher, whose main goal is controlling tearing modes and possibly preventing their onset, is being implemented on FTU. So far most of the components of the launcher control strategy have been realized and successfully tested on plasma experiments. Nevertheless the operations of the new launcher must be completely integrated into the existing one, and to FTU control system. This work deals with this final step, proposing a hardware and software architecture implementing up to date technologies, to achieve a modular and effective control strategy well integrated into a legacy system. The slow control system of the new EC launcher is based on a Siemens S7 Programmable Logic Controller (PLC), integrated into FTU control system supervisor through an EPICS input output controller (IOC) and an in-house developed Channel Access client application creating an abstraction layer that decouples the IOC and the PLC from the FTU Supervisor software. This architecture could enable a smooth migration to an EPICS-only supervisory control system. The real time component of the control system is based on the open source MARTe framework relying on a Linux real time cluster, devoted to the detection of MHD instabilities and the calculation of the injection angles and the time reference for the radiofrequency power enable commands for the EC launcher.

  12. Programming Guidelines for FBD Programs in Reactor Protection System Software

    International Nuclear Information System (INIS)

    Jung, Se Jin; Lee, Dong Ah; Kim, Eui Sub; Yoo, Jun Beom; Lee, Jang Su

    2014-01-01

    Properties of programming languages, such as reliability and traceability, play important roles in software development to improve safety. Several studies have proposed programming guidelines to increase the dependability of software developed for safety-critical systems. MISRA C is a widely accepted set of programming guidelines for the C language, especially in the vehicle industry. NUREG/CR-6463 helps engineers in the nuclear industry develop software for nuclear power plant systems more dependably. FBD (Function Block Diagram), one of the programming languages defined in the IEC 61131-3 standard, is often used for software development of PLCs (programmable logic controllers) in nuclear power plants. Software development for critical systems using FBD needs strict guidelines, because FBD is a general-purpose language and has easily misused elements. There are studies on guidelines for the IEC 61131-3 programming languages; they do not, however, specify details about how to use the languages. This paper proposes new guidelines for FBD based on NUREG/CR-6463. The paper introduces a CASE (Computer-Aided Software Engineering) tool to check FBD programs against the new guidelines and shows its applicability with a case study using an FBD program in a reactor protection system. The paper is organized as follows
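    A guideline check of the kind such a CASE tool might perform can be sketched as follows. The rule ("no unconnected block inputs"), the block names, and the data layout are all illustrative assumptions, not taken from NUREG/CR-6463 or the paper:

```python
def check_unconnected_inputs(blocks):
    """Flag every function-block input that has no wired source, a
    hypothetical example of a machine-checkable FBD guideline."""
    violations = []
    for name, block in blocks.items():
        for port in block["inputs"]:
            if block["wired"].get(port) is None:
                violations.append(f"{name}.{port} is unconnected")
    return violations

# A minimal FBD program fragment: one AND block with a dangling input.
fbd = {
    "AND_1": {"inputs": ["IN1", "IN2"],
              "wired": {"IN1": "Sensor_A", "IN2": None}},
}
```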

  13. Current position on software for the automatic data acquisition system

    International Nuclear Information System (INIS)

    1988-01-01

    This report describes the current concepts for software to control the operation of the Automatic Data Acquisition System (ADAS) proposed for the Deaf Smith County, Texas, Exploratory Shaft Facility (ESF). The purpose of this report is to provide conceptual details of how the ADAS software will execute the data acquisition function, and how the software will make collected information available to the test personnel, the Data Management Group (DMG), and other authorized users. It is not intended that this report describe all of the ADAS functions in exact detail, but the concepts included herein will form the basis for the formal ADAS functional requirements definition document. 5 refs., 14 figs

  14. Software layer for FPGA-based TESLA cavity control system

    Science.gov (United States)

    Koprek, Waldemar; Kaleta, Pawel; Szewinski, Jaroslaw; Pozniak, Krzysztof T.; Czarski, Tomasz; Romaniuk, Ryszard S.

    2005-02-01

    The paper describes the design and practical realization of software for laboratory purposes to control FPGA-based photonic and electronic equipment. A universal solution is presented for all relevant devices with FPGA chips and gigabit optical links. The paper describes the architecture of the software layers and program solutions for hardware communication based on the Internal Interface (II) technology. Such a solution was used for the superconducting Cavity Controller and Simulator (SIMCON) for the TESLA experiment at DESY (Hamburg). A number of practical examples of the software solutions for the SIMCON system are given in this paper.

  15. Software systems for energy control in the English industry

    International Nuclear Information System (INIS)

    Bouma, J.W.J.

    1993-01-01

    Monitoring and targeting software systems have proved to be valuable tools for energy control, making it possible to save five to ten percent of energy. The article reviews the systems presently available in England and illustrates how these systems are successfully used in practice in small (British Telecom) and medium-sized (Charles Wells Brewery) industrial applications. (A.S.)
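    A typical monitoring-and-targeting calculation, not specific to the systems reviewed, fits a baseline of consumption against degree-days and flags periods that exceed target. The figures below are invented for illustration:

```python
def fit_baseline(degree_days, consumption):
    """Least-squares fit of consumption = slope * degree_days + intercept
    over a baseline period; the fit becomes the energy target."""
    n = len(degree_days)
    mx = sum(degree_days) / n
    my = sum(consumption) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(degree_days, consumption))
             / sum((x - mx) ** 2 for x in degree_days))
    return slope, my - slope * mx          # (slope, intercept)

def exceptions(model, weeks, tolerance=0.1):
    """Indices of weeks whose use exceeds target by more than tolerance."""
    slope, intercept = model
    return [w for w, (dd, used) in enumerate(weeks)
            if used > (slope * dd + intercept) * (1 + tolerance)]

model = fit_baseline([10, 20, 30], [110, 210, 310])   # slope 10, intercept 10
flagged = exceptions(model, [(15, 160), (25, 300)])
# week 1 used 300 against a target of 260 (286 with the 10% tolerance)
```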

  16. Towards automated construction of dependable software/hardware systems

    Energy Technology Data Exchange (ETDEWEB)

    Yakhnis, A.; Yakhnis, V. [Pioneer Technologies & Rockwell Science Center, Albuquerque, NM (United States)

    1997-11-01

    This report contains viewgraphs on the automated construction of dependable computer architecture systems. The outline of this report is: examples of software/hardware systems; dependable systems; partial delivery of dependability; proposed approach; removing obstacles; advantages of the approach; criteria for success; current progress of the approach; and references.

  17. Reactive Software Agent Anesthesia Decision Support System

    Directory of Open Access Journals (Sweden)

    Grant H. Kruger

    2011-12-01

    Information overload of the anesthesiologist through technological advances has threatened the safety of patients under anesthesia in the operating room (OR). Traditional monitoring and alarm systems provide independent, spatially distributed indices of patient physiological state, creating the potential to distract caregivers from direct patient care tasks. To address this situation, a novel reactive-agent decision support system with a graphical human-machine interface was developed. The system integrates the disparate data sources available in the operating room and passes the data through a decision matrix comprising a deterministic physiologic rule base established through medical research. Patient care is improved by effecting change to the care environment: risk factors and alerts are displayed as an intuitive color-coded animation. The system presents a unified, contextually appropriate snapshot of the patient state, including current and potential risk factors, and alerts the operating room team to critical patient events without requiring any user intervention. To validate the efficacy of the system, a retrospective analysis focusing on the hypotension rules was performed. Results show that even with vigilant and highly trained clinicians, deviations from ideal patient care exist, and it is here that the proposed system may allow more standardized and improved patient care and potentially outcomes.
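    The deterministic rule-base idea can be sketched as below. The thresholds and rule names are illustrative placeholders, not the system's clinical rules and not medical guidance:

```python
# A decision matrix as a list of (alert-name, predicate) pairs evaluated
# against one snapshot of integrated patient vitals.
RULES = [
    ("hypotension",  lambda v: v["MAP"] < 65),    # mean arterial pressure
    ("tachycardia",  lambda v: v["HR"] > 100),    # heart rate
    ("desaturation", lambda v: v["SpO2"] < 90),   # oxygen saturation
]

def evaluate(vitals):
    """Return the alerts raised by one snapshot of patient state."""
    return [name for name, rule in RULES if rule(vitals)]

vitals = {"MAP": 58, "HR": 88, "SpO2": 97}
alerts = evaluate(vitals)       # only the hypotension rule fires here
```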

  18. Launch Control System Software Development System Automation Testing

    Science.gov (United States)

    Hwang, Andrew

    2017-01-01

    The Spaceport Command and Control System (SCCS) is the National Aeronautics and Space Administration's (NASA) launch control system for the Orion capsule and Space Launch System, the next generation manned rocket currently in development. This system requires high quality testing that will measure and test the capabilities of the system. For the past two years, the Exploration and Operations Division at Kennedy Space Center (KSC) has assigned a group including interns and full-time engineers to develop automated tests to save the project time and money. The team worked on automating the testing process for the SCCS GUI, which would use streamed simulated data from the testing servers to produce data, plots, statuses, etc. on the GUI. The software used to develop the automated tests included an automated testing framework and an automation library. The automation library provides functionality to automate anything that appears on a desired screen, using image recognition software to detect and control GUI components. The automated testing framework has a tabular-style syntax, meaning each line of code must have the appropriate number of tabs for the line to function as intended. The header section contains either paths to custom resources or the names of libraries being used. The data section contains any data values created strictly for the current test file. The body section holds the tests that are being run. The function section can include any number of functions that may be used by the current test file or by any other file that references it. The resources and body sections are required for all test files; the data and function sections can be left empty if the data values and functions being used come from a resourced library or another file.
To help equip the automation team with better tools, the Project Lead of the Automated Testing Team, Jason Kapusta, assigned the task to install and train an optical character recognition (OCR
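The tabular, keyword-driven structure described in this record resembles frameworks such as Robot Framework. The following is a hypothetical Python sketch of how a tab-separated test line might be parsed and dispatched to registered keywords; the keyword names and registry are illustrative, not the actual SCCS tooling.

```python
# Hypothetical sketch of a keyword-driven, tab-separated test runner:
# each body line is a keyword followed by tab-separated arguments.

KEYWORDS = {}

def keyword(fn):
    """Register a function under a human-readable keyword name."""
    KEYWORDS[fn.__name__.replace("_", " ").title()] = fn
    return fn

@keyword
def click_button(name):
    return f"clicked {name}"

@keyword
def verify_status(expected):
    return f"status is {expected}"

def run_line(line):
    """Split a tabular test line into keyword + args and dispatch it."""
    cells = [c for c in line.split("\t") if c]
    name, args = cells[0], cells[1:]
    return KEYWORDS[name](*args)

result = run_line("Click Button\tLaunch")
```

In a real framework the keyword implementations would drive the GUI (for example via image recognition) rather than return strings, but the parse-and-dispatch structure is the same.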

  19. The Java Legacy Interface

    DEFF Research Database (Denmark)

    Korsholm, Stephan

    2007-01-01

    The Java Legacy Interface is designed to use Java for encapsulating native legacy code on small embedded platforms. We discuss why existing technologies for encapsulating legacy code (JNI) are not sufficient for an important range of small embedded platforms, and we show how the Java Legacy Interface offers this previously missing functionality. We describe an implementation of the Java Legacy Interface for a particular virtual machine, and how we have used this virtual machine to integrate Java with an existing, commercial, soft real-time, C/C++ legacy platform.

  20. Human Factors in Software Development Processes: Measuring System Quality

    DEFF Research Database (Denmark)

    Abrahão, Silvia; Baldassarre, Maria Teresa; Caivano, Danilo

    2016-01-01

    Software Engineering and Human-Computer Interaction look at the development process from different perspectives. They apparently use very different approaches, are inspired by different principles and address different needs. But they definitely have the same goal: to develop high quality software in the most effective way. The second edition of the workshop puts particular attention on the efforts of the two communities in enhancing system quality. The research question discussed is: who, what, where, when, why, and how should we evaluate?

  1. 75 FR 8400 - In the Matter of Certain Wireless Communications System Server Software, Wireless Handheld...

    Science.gov (United States)

    2010-02-24

    ... Communications System Server Software, Wireless Handheld Devices and Battery Packs; Notice of Investigation... within the United States after importation of certain wireless communications system server software... certain wireless communications system server software, wireless handheld devices or battery packs that...

  2. eXascale PRogramming Environment and System Software (XPRESS)

    Energy Technology Data Exchange (ETDEWEB)

    Chapman, Barbara [Univ. of Houston, TX (United States); Gabriel, Edgar [Univ. of Houston, TX (United States)

    2015-11-30

    Exascale systems, with a thousand times the compute capacity of today’s leading edge petascale computers, are expected to emerge during the next decade. Their software systems will need to facilitate the exploitation of exceptional amounts of concurrency in applications, and ensure that jobs continue to run despite the occurrence of system failures and other kinds of hard and soft errors. Adapting computations at runtime to cope with changes in the execution environment, as well as to improve power and performance characteristics, is likely to become the norm. As a result, considerable innovation is required to develop system support to meet the needs of future computing platforms. The XPRESS project aims to develop and prototype a revolutionary software system for extreme-scale computing for both exascale and strong-scaled problems. The XPRESS collaborative research project will advance the state-of-the-art in high performance computing and enable exascale computing for current and future DOE mission-critical applications and supporting systems. The goals of the XPRESS research project are to: A. enable exascale performance capability for DOE applications, both current and future, B. develop and deliver a practical computing system software X-stack, OpenX, for future practical DOE exascale computing systems, and C. provide programming methods and environments for effective means of expressing application and system software for portable exascale system execution.

  3. Quantitative reliability assessment for safety critical system software

    International Nuclear Information System (INIS)

    Chung, Dae Won; Kwon, Soon Man

    2005-01-01

    An essential issue in the replacement of the old analogue I and C by computer-based digital systems in nuclear power plants is the quantitative software reliability assessment. Software reliability models have been successfully applied to many industrial applications, but have the unfortunate drawback of requiring data from which one can formulate a model. Software which is developed for safety critical applications is frequently unable to produce such data for at least two reasons. First, the software is frequently one-of-a-kind, and second, it rarely fails. Safety critical software is normally expected to pass every unit test, producing precious little failure data. The basic premise of the rare events approach is that well-tested software does not fail under normal routine and input signals, which means that failures must be triggered by unusual input data and computer states. The failure data found under the reasonable testing cases and testing time for these conditions should be considered for the quantitative reliability assessment. We present the quantitative reliability assessment methodology of safety critical software for rare failure cases in this paper.

  4. Safety Justification of Software Systems. Software Based Safety Systems. Regulatory Inspection Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Dahll, Gustav (OECD Halden Project, Halden (NO)); Liwaang, Bo (Swedish Nuclear Power Inspectorate, Stockholm (Sweden)); Wainwright, Norman (Wainwright Safety Advice (GB))

    2006-07-01

    The introduction of new software based technology in the safety systems of nuclear power plants also makes it necessary to develop new strategies for regulatory review and assessment of these new systems, strategies that are more focused on reviewing the processes at the different design phases during the system life cycle. It is a general requirement that the licensee shall perform different kinds of reviews. From a regulatory point of view it is more cost effective to assess that the design activities at the suppliers and the review activities within the development project are performed with good quality. But the change from more technical reviews to a development-process-oriented approach also causes problems. When reviewing development and quality aspects there are no 'hard facts' that can be judged against specified criteria; the issues are 'softer' and amount to building up a structure of arguments and evidence that the requirements are met. The regulatory review strategy must therefore change to follow the development process over the whole life cycle, from the concept phase until installation and operation. Even if we know which factors are of interest, we need some guidance on how to interpret and judge the information. For that purpose SKI started research activities in this area at the end of the 1990s. In the first phase, in co-operation with Gustav Dahll at the Halden project, a life cycle model was selected. For the different phases a qualitative influence net was constructed, of the type used in Bayesian Belief Networks, together with a discussion of the different issues involved. In the second phase of the research work, in co-operation with Norman Wainwright, a former NII inspector, information from a selection of the most important sources, such as guidelines and IAEA and EC reports, was mapped into the influence net structure (the full list of sources used is in the report). The result is presented in the form of

  5. Safety Justification of Software Systems. Software Based Safety Systems. Regulatory Inspection Handbook

    International Nuclear Information System (INIS)

    Dahll, Gustav; Liwang, Bo; Wainwright, Norman

    2006-01-01

    The introduction of new software based technology in the safety systems of nuclear power plants also makes it necessary to develop new strategies for regulatory review and assessment of these new systems, strategies that are more focused on reviewing the processes at the different design phases during the system life cycle. It is a general requirement that the licensee shall perform different kinds of reviews. From a regulatory point of view it is more cost effective to assess that the design activities at the suppliers and the review activities within the development project are performed with good quality. But the change from more technical reviews to a development-process-oriented approach also causes problems. When reviewing development and quality aspects there are no 'hard facts' that can be judged against specified criteria; the issues are 'softer' and amount to building up a structure of arguments and evidence that the requirements are met. The regulatory review strategy must therefore change to follow the development process over the whole life cycle, from the concept phase until installation and operation. Even if we know which factors are of interest, we need some guidance on how to interpret and judge the information. For that purpose SKI started research activities in this area at the end of the 1990s. In the first phase, in co-operation with Gustav Dahll at the Halden project, a life cycle model was selected. For the different phases a qualitative influence net was constructed, of the type used in Bayesian Belief Networks, together with a discussion of the different issues involved. In the second phase of the research work, in co-operation with Norman Wainwright, a former NII inspector, information from a selection of the most important sources, such as guidelines and IAEA and EC reports, was mapped into the influence net structure (the full list of sources used is in the report). The result is presented in the form of questions (Q) and a

  6. Automatic Visualization of Software Requirements: Reactive Systems

    International Nuclear Information System (INIS)

    Castello, R.; Mili, R.; Tollis, I.G.; Winter, V.

    1999-01-01

    In this paper we present an approach that facilitates the validation of high consequence system requirements. This approach consists of automatically generating a graphical representation from an informal document. Our choice of graphical notation is statecharts. We proceed in two steps: we first extract a hierarchical decomposition tree from a textual description, then we draw a graph that models the statechart in a hierarchical fashion. The resulting drawing is an effective requirements assessment tool that allows the end user to easily pinpoint inconsistencies and incompleteness.
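The first of the two steps, extracting a hierarchical decomposition tree, can be sketched in a few lines. The indentation-based input format and node representation below are assumptions for illustration, not the method of the actual tool.

```python
# Hypothetical sketch: extract a hierarchy from an indentation-structured
# textual description, the first step toward a statechart-style drawing.

def build_tree(text, indent=2):
    """Parse indentation-structured lines into nested (name, children) tuples."""
    root = ("ROOT", [])
    stack = [(-1, root)]  # (depth, node) pairs; root sits below depth 0
    for line in text.splitlines():
        if not line.strip():
            continue
        depth = (len(line) - len(line.lstrip())) // indent
        node = (line.strip(), [])
        # Pop back to the nearest ancestor shallower than this line.
        while stack[-1][0] >= depth:
            stack.pop()
        stack[-1][1][1].append(node)
        stack.append((depth, node))
    return root

tree = build_tree("Controller\n  Idle\n  Active\n    Heating\n    Cooling")
```

Each nested tuple then corresponds to a composite state, so a hierarchical statechart layout can be produced by recursively drawing children inside their parent's region.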

  7. Review of Bruce A reactor regulating system software

    International Nuclear Information System (INIS)

    1995-12-01

    Each of the four reactor units at the Ontario Hydro Bruce A Nuclear Generating Station is controlled by the Reactor Regulating System (RRS) software running on digital computers. This research report presents an assessment of the quality and reliability of the RRS software based on a review of the RRS design documentation, an analysis of certain significant Event Reports (SERs), and an examination of selected software changes. We found that the RRS software requirements (i.e., what the software should do) were never clearly documented, and that design documents, which should describe how the requirements are implemented, are incomplete and inaccurate. Some RRS-related SERs (i.e., reports on unexpected incidents relating to the reactor control) implied that there were faults in the RRS, or that RRS changes should be made to help prevent certain unexpected events. The follow-up investigations were generally poorly documented, and so it could not usually be determined that problems were properly resolved. The Bruce A software change control procedures require improvement. For the software changes examined, there was insufficient evidence provided by Ontario Hydro that the required procedures regarding change approval, independent review, documentation updates, and testing were followed. Ontario Hydro relies on the expertise of their technical staff to modify the RRS software correctly; they have confidence in the software code itself, even if the documentation is not up-to-date. Ontario Hydro did not produce the documentation required for an independent formal assessment of the reliability of the RRS. (author). 37 refs., 3 figs

  8. Within the triangle of healthcare legacies: comparing the performance of South-Eastern European health systems.

    Science.gov (United States)

    Jakovljevic, Mihajlo Michael; Arsenijevic, Jelena; Pavlova, Milena; Verhaeghe, Nick; Laaser, Ulrich; Groot, Wim

    2017-05-01

    Inter-regional comparison of health-reform outcomes in south-eastern Europe (SEE). Macro-indicators were obtained from the WHO Health for All Database. An inter-regional comparison among post-Semashko, former Yugoslavia, and prior-1989-free-market SEE economies was conducted. United Nations Development Program Human Development Index growth was strongest among prior-free-market SEE, followed by former Yugoslavia and post-Semashko. Policy cuts to hospital beds and nursing-staff capacities were highest in post-Semashko. Physician density increased the most in prior-free-market SEE. Length of hospital stay was reduced in most countries; the frequency of outpatient visits and inpatient discharges doubled in prior-free-market SEE. Fertility rates fell by one third in post-Semashko and prior-free-market SEE. Crude death rates slightly decreased in prior-free-market SEE and post-Semashko, while growing in the former Yugoslavia region. Life expectancy increased by 4 years on average in all regions, with prior-free-market SEE achieving the highest longevity. Childhood and maternal mortality rates decreased throughout SEE, with post-Semashko countries recording the most progress. Significant differences in healthcare resources and outcomes were observed among the three historical health-policy legacies in south-eastern Europe. These different routes towards common goals created a golden opportunity for these economies to learn from each other.

  9. Conjunctive programming: An interactive approach to software system synthesis

    Science.gov (United States)

    Tausworthe, Robert C.

    1992-01-01

    This report introduces a technique of software documentation called conjunctive programming and discusses its role in the development and maintenance of software systems. The report also describes the conjoin tool, an adjunct to assist practitioners. Aimed at supporting software reuse while conforming with conventional development practices, conjunctive programming is defined as the extraction, integration, and embellishment of pertinent information obtained directly from an existing database of software artifacts, such as specifications, source code, configuration data, link-edit scripts, utility files, and other relevant information, into a product that achieves desired levels of detail, content, and production quality. Conjunctive programs typically include automatically generated tables of contents, indexes, cross references, bibliographic citations, tables, and figures (including graphics and illustrations). This report presents an example of conjunctive programming by documenting the use and implementation of the conjoin program.

  10. Software Development and Test Methodology for a Distributed Ground System

    Science.gov (United States)

    Ritter, George; Guillebeau, Pat; McNair, Ann R. (Technical Monitor)

    2002-01-01

    The Marshall Space Flight Center's (MSFC) Payload Operations Center (POC) ground system has evolved over a period of about 10 years. During this time the software processes have migrated from more traditional to more contemporary development processes in an effort to minimize unnecessary overhead while maximizing process benefits. The software processes that have evolved still emphasize requirements capture, software configuration management, design documentation, and making sure the products that have been developed are accountable to the initial requirements. This paper gives an overview of how the software processes have evolved, highlighting the positives as well as the negatives. In addition, we mention the COTS tools that have been integrated into the processes and how these tools have provided value to the project.

  11. Subsystem software for TSTA [Tritium Systems Test Assembly]

    International Nuclear Information System (INIS)

    Mann, L.W.; Claborn, G.W.; Nielson, C.W.

    1987-01-01

    The Subsystem Control Software at the Tritium Systems Test Assembly (TSTA) must control sophisticated chemical processes through the physical operation of valves, motor controllers, gas sampling devices, thermocouples, pressure transducers, and similar devices. Such control software has to be capable of passing stringent quality assurance (QA) criteria to provide for the safe handling of significant amounts of tritium on a routine basis. Since many of the chemical processes and physical components are experimental, the control software has to be flexible enough to allow for a trial-and-error learning curve, yet still protect the environment and personnel from exposure to unsafe levels of radiation. The software at TSTA is implemented in several levels, as described in a preceding paper in these proceedings, on which this paper depends for understanding. The top level is the Subsystem Control level.

  12. Tailorable software architectures in the accelerator control system environment

    International Nuclear Information System (INIS)

    Mejuev, Igor; Kumagai, Akira; Kadokura, Eiichi

    2001-01-01

    Tailoring is the further evolution of an application after deployment in order to adapt it to requirements that were not accounted for in the original design. End-user tailorability has been extensively researched in applied computer science from HCI and software engineering perspectives. Tailorability allows coping with flexibility requirements, decreasing the maintenance and development costs of software products. In general, dynamic or diverse software requirements create the need for implementing end-user tailorability in computer systems. In accelerator physics research the factor of dynamic requirements is especially important, due to frequent software and hardware modifications resulting in correspondingly high upgrade and maintenance costs. In this work we introduce the results of a feasibility study on implementing end-user tailorability in the software for an accelerator control system, considering the design and implementation of a distributed monitoring application for the 12 GeV KEK Proton Synchrotron as an example. The software prototypes used in this work are based on a generic tailoring platform (VEDICI), which allows decoupling of tailoring interfaces and runtime components. While representing a reusable application-independent framework, VEDICI can potentially be applied for tailoring arbitrary compositional Web-based applications.

  13. Software for the occupational health and safety integrated management system

    International Nuclear Information System (INIS)

    Vătăsescu, Mihaela

    2015-01-01

    This paper presents the design and production of software for the Occupational Health and Safety Integrated Management System, with a view to the rapid drawing up of system documents in the field of occupational health and safety.

  14. 14 CFR 415.123 - Computing systems and software.

    Science.gov (United States)

    2010-01-01

    ...-critical computer system function for any operation performed during launch processing or flight that could... display; (3) Provide flow charts or diagrams that show all hardware data busses, hardware interfaces, software interfaces, data flow, and power systems, and all operations of each safety-critical computer...

  15. AN EVALUATION OF FIVE COMMERCIAL IMMUNOASSAY DATA ANALYSIS SOFTWARE SYSTEMS

    Science.gov (United States)

    An evaluation of five commercial software systems used for immunoassay data analysis revealed numerous deficiencies. Often, the utility of statistical output was compromised by poor documentation. Several data sets were run through each system using a four-parameter calibration f...

  16. QFD Application to a Software - Intensive System Development Project

    Science.gov (United States)

    Tran, T. L.

    1996-01-01

    This paper describes the use of Quality Function Deployment (QFD), adapted to requirements engineering for a software-intensive system development project, and synthesizes the lessons learned from the application of QFD to the Network Control System (NCS) pre-project of the Deep Space Network.

  17. A Reusable Software Architecture for Small Satellite AOCS Systems

    DEFF Research Database (Denmark)

    Alminde, Lars; Bendtsen, Jan Dimon; Laursen, Karl Kaas

    2006-01-01

    This paper concerns the software architecture called Sophy, which is an abbreviation for Simulation, Observation, and Planning in HYbrid systems. We present a framework that allows execution of hybrid dynamical systems in an on-line distributed computing environment, which includes interaction...

  18. A software architecture for knowledge-based systems

    NARCIS (Netherlands)

    Fensel, D; Groenboom, R

    The paper introduces a software architecture for the specification and verification of knowledge-based systems, combining conceptual and formal techniques. Our focus is component-based specification, enabling the reuse of components. We identify four elements of the specification of a knowledge-based system: a

  19. Software for the occupational health and safety integrated management system

    Energy Technology Data Exchange (ETDEWEB)

    Vătăsescu, Mihaela [University Politehnica Timisoara, Department of Engineering and Management, 5 Revolutiei street, 331128 Hunedoara (Romania)

    2015-03-10

    This paper presents the design and production of software for the Occupational Health and Safety Integrated Management System, with a view to the rapid drawing up of system documents in the field of occupational health and safety.

  20. A flexible software architecture for tokamak discharge control systems

    International Nuclear Information System (INIS)

    Ferron, J.R.; Penaflor, B.; Walker, M.L.; Moller, J.; Butner, D.

    1995-01-01

    The software structure of the plasma control system in use on the DIII-D tokamak experiment is described. This system implements control functions through software executing in real time on one or more digital computers. The software is organized into a hierarchy that allows new control functions needed to support the DIII-D experimental program to be added easily without affecting previously implemented functions. This also allows the software to be portable in order to create control systems for other applications. The tokamak operator uses an X-windows based interface to specify the time evolution of a tokamak discharge. The interface provides a high level view for the operator that reduces the need for detailed knowledge of the control system operation. There is provision for an asynchronous change to an alternate discharge time evolution in response to an event that is detected in real time. Quality control is enhanced through off-line testing that can make use of software-based tokamak simulators.

  1. Seamless Method- and Model-based Software and Systems Engineering

    Science.gov (United States)

    Broy, Manfred

    Today, engineering software intensive systems is still more or less handicraft, or at most at the level of manufacturing. Many steps are done ad hoc and not in a fully systematic way. Applied methods, if any, are not scientifically justified or supported by empirical data, and as a result carrying out large software projects is still an adventure. However, there is no reason why the development of software intensive systems cannot in the future be done with the same precision and scientific rigor as in established engineering disciplines. To do that, however, a number of scientific and engineering challenges have to be mastered. The first aims at a deep understanding of the essentials of carrying out such projects, which includes appropriate models and effective management methods. What is needed is a portfolio of models and methods, coming together with comprehensive tool support, as well as deep insights into the obstacles of developing software intensive systems, and a portfolio of established and proven techniques and methods with clear profiles and rules that indicate when each method is ready for application. In the following we argue that there is scientific evidence and enough research so far to be confident that solid engineering of software intensive systems can be achieved in the future. However, quite a number of scientific research problems still have to be solved.

  2. Software Sub-system in Loading Automatic Test System for the Measurement of Power Line Filters

    Directory of Open Access Journals (Sweden)

    Yu Bo

    2017-01-01

    Full Text Available A loading automatic test system for the measurement of power line filters is in urgent demand, so the software sub-system of the whole test system is proposed. Methods: the test system was structured on a virtual instrument framework, consisting of a lower and an upper computer, and a top-down design approach was adopted for the system and its modules, according to the measurement principle of the test system. Results: the software sub-system, including the human-machine interface, data analysis and processing software, an expert system, communication software, and control software in the lower computer, was designed and integrated into the entire test system. Conclusion: this sub-system provides a friendly software platform for the whole test system, with advantages such as strong functionality, high performance, and low cost. It not only raises the test efficiency of EMI filters, but also introduces some innovations.

  3. Evaluating software for safety systems in nuclear power plants

    International Nuclear Information System (INIS)

    Lawrence, J.D.; Persons, W.L.; Preckshot, G.G.; Gallagher, J.

    1994-01-01

    In 1991, LLNL was asked by the NRC to provide technical assistance in various aspects of computer technology that apply to computer-based reactor protection systems. This has involved the review of safety aspects of new reactor designs and the provision of technical advice on the use of computer technology in systems important to reactor safety. The latter includes determining and documenting state-of-the-art subjects that require regulatory involvement by the NRC because of their importance in the development and implementation of digital computer safety systems. These subjects include data communications, formal methods, testing, software hazards analysis, verification and validation, computer security, performance, software complexity, and others. One topic, software reliability and safety, is the subject of this paper.

  4. An approach to software quality assurance for robotic inspection systems

    International Nuclear Information System (INIS)

    Kiebel, G.R.

    1993-10-01

    Software quality assurance (SQA) for robotic systems used in nuclear waste applications is vital to ensure that the systems operate safely and reliably and pose a minimum risk to humans and the environment. This paper describes the SQA approach for the control and data acquisition system for a robotic system being developed for remote surveillance and inspection of underground storage tanks (UST) at the Hanford Site.

  5. A validatable legacy database migration using ORM

    NARCIS (Netherlands)

    Moes, T.H.; Wijbenga, J.P.; Balsters, H.; Huitema, G.B.

    2012-01-01

    This paper describes a method used in a real-life case of a legacy database migration. The difficulty of the case lies in the fact that the legacy application to be replaced has to remain fully available during the migration process while at the same time data from the old system is to be integrated

  6. SOFTM: a software maintenance expert system in Prolog

    DEFF Research Database (Denmark)

    Pau, L.; Negret, J. M.

    1988-01-01

    A description is given of a knowledge-based system called SOFTM, serving the following purposes: (1) assisting a software programmer or analyst in his application code maintenance tasks, (2) automatically generating and updating software correction documentation, (3) helping the end user register ...-output, or procedural errors normally detected by the syntactic analyzer, compiler, or operating system environment. SOFTM relies on a unique ATN network-based code description, on a diagnostic inference procedure based on context-based pattern classification, and on maintenance log report generators

  7. Advanced Transport Operating System (ATOPS) control display unit software description

    Science.gov (United States)

    Slominski, Christopher J.; Parks, Mark A.; Debure, Kelly R.; Heaphy, William J.

    1992-01-01

    The software created for the Control Display Units (CDUs), used for the Advanced Transport Operating Systems (ATOPS) project, on the Transport Systems Research Vehicle (TSRV) is described. Module descriptions are presented in a standardized format which contains module purpose, calling sequence, a detailed description, and global references. The global reference section includes subroutines, functions, and common variables referenced by a particular module. The CDUs, one for the pilot and one for the copilot, are used for flight management purposes. Operations performed with the CDU affect the aircraft's guidance, navigation, and display software.

  8. The Earth System Documentation (ES-DOC) Software Process

    Science.gov (United States)

    Greenslade, M. A.; Murphy, S.; Treshansky, A.; DeLuca, C.; Guilyardi, E.; Denvil, S.

    2013-12-01

    Earth System Documentation (ES-DOC) is an international project supplying high-quality tools & services in support of earth system documentation creation, analysis and dissemination. It is nurturing a sustainable standards-based documentation eco-system that aims to become an integral part of the next generation of exa-scale dataset archives. ES-DOC leverages open source software, and applies a software development methodology that places end-user narratives at the heart of all it does. ES-DOC has initially focused upon nurturing the Earth System Model (ESM) documentation eco-system and currently supports the following projects: * Coupled Model Inter-comparison Project Phase 5 (CMIP5); * Dynamical Core Model Inter-comparison Project (DCMIP); * National Climate Predictions and Projections Platforms Quantitative Evaluation of Downscaling Workshop. This talk will demonstrate that ES-DOC implements a relatively mature software development process. Taking a pragmatic Agile process as inspiration, ES-DOC: * Iteratively develops and releases working software; * Captures user requirements via a narrative-based approach; * Uses online collaboration tools (e.g. Earth System CoG) to manage progress; * Prototypes applications to validate their feasibility; * Leverages meta-programming techniques where appropriate; * Automates testing whenever sensibly feasible; * Streamlines complex deployments to a single command; * Extensively leverages GitHub and Pivotal Tracker; * Enforces strict separation of the UI from underlying APIs; * Conducts code reviews.

  9. Systems, methods and apparatus for developing and maintaining evolving systems with software product lines

    Science.gov (United States)

    Hinchey, Michael G. (Inventor); Rash, James L. (Inventor); Pena, Joaquin (Inventor)

    2011-01-01

    Systems, methods and apparatus are provided through which an evolutionary system is managed and viewed as a software product line. In some embodiments, the core architecture is a relatively unchanging part of the system, and each version of the system is viewed as a product from the product line. Each software product is generated from the core architecture with some agent-based additions. The result may be a multi-agent system software product line.

  10. ARCHITECTURE SOFTWARE SOLUTION TO SUPPORT AND DOCUMENT MANAGEMENT QUALITY SYSTEM

    Directory of Open Access Journals (Sweden)

    Milan Eric

    2010-12-01

    Full Text Available One of the foundations of the JUS ISO 9000 series of standards is quality system documentation. The architecture of the quality system documentation depends on the complexity of the business system. Establishing efficient management of quality system documentation is of great importance for the business system, both in the phase of introducing the quality system and in the further stages of its improvement. The study describes the architecture and capabilities of software solutions to support and manage the quality system documentation in accordance with the requirements of the standards ISO 9001:2001, ISO 14001:2005, HACCP, etc.

  11. System Quality Management in Software Testing Laboratory that Chooses Accreditation

    Directory of Open Access Journals (Sweden)

    Yanet Brito R.

    2013-12-01

    Full Text Available The evaluation of software products reaches full maturity when it is executed under a scheme that provides third-party certification. For the certification to be valid, the independent laboratory must be accredited for that function using internationally recognized standards. This poses a challenge for the Industrial Laboratory Testing Software (LIPS), responsible for testing the products developed in the Cuban software industry: to define strategies that will permit it to offer services with a high level of quality. It is therefore necessary to establish a quality management system according to NC-ISO/IEC 17025:2006 to continuously improve the operational capacity and technical competence of the laboratory, with a view to future accreditation of the tests performed. This article discusses the process defined in the LIPS for the implementation of a quality management system, based on current standards and trends, as a necessary step toward accreditation of the tests performed.

  12. Oxygen Generation System Laptop Bus Controller Flight Software

    Science.gov (United States)

    Rowe, Chad; Panter, Donna

    2009-01-01

    The Oxygen Generation System Laptop Bus Controller Flight Software was developed to allow the International Space Station (ISS) program to activate specific components of the Oxygen Generation System (OGS) to perform a checkout of key hardware operation in a microgravity environment, as well as to perform preventative maintenance operations of system valves during a long period of what would otherwise be hardware dormancy. The software provides direct connectivity to the OGS Firmware Controller with pre-programmed tasks operated by on-orbit astronauts to exercise OGS valves and motors. The software is used to manipulate the pump, separator, and valves to alleviate the concerns of hardware problems due to long-term inactivity and to allow for operational verification of microgravity-sensitive components early enough so that, if problems are found, they can be addressed before the hardware is required for operation on-orbit. The decision was made to use existing on-orbit IBM ThinkPad A31p laptops and MIL-STD-1553B interface cards as the hardware configuration. The software at the time of this reporting was developed and tested for use under the Windows 2000 Professional operating system to ensure compatibility with the existing on-orbit computer systems.

  13. Generation of embedded Hardware/Software from SystemC

    OpenAIRE

    Houzet , Dominique; Ouadjaout , Salim

    2006-01-01

    International audience; Designers increasingly rely on reusing intellectual property (IP) and on raising the level of abstraction to respect system-on-chip (SoC) market characteristics. However, most hardware and embedded software codes are recoded manually from system level. This recoding step often results in new coding errors that must be identified and debugged. Thus, shorter time-to-market requires automation of the system synthesis from high-level specifications. In this paper, we propo...

  14. Software Testbed for Developing and Evaluating Integrated Autonomous Systems

    Science.gov (United States)

    2015-03-01

    ...utilization of the system components and resources. An execution manager coordinates the activities of the other subsystems. The subsystems are integrated... state estimation subsystems. For example, diagnostic systems analyze sensor readings, commands, and other data to identify faulty components and their...

  15. NSTX-U Digital Coil Protection System Software Detailed Design

    Energy Technology Data Exchange (ETDEWEB)

    None

    2014-06-01

    The National Spherical Torus Experiment (NSTX) currently uses a collection of analog signal processing solutions for coil protection. Part of the NSTX Upgrade (NSTX-U) entails replacing these analog systems with a software solution running on a conventional computing platform. The new Digital Coil Protection System (DCPS) will replace the old systems entirely, while also providing an extensible framework that allows adding new functionality as desired.

  16. The graphics software of the Saclay linear accelerator control system

    International Nuclear Information System (INIS)

    Gournay, J.F.

    1987-06-01

    The control system of the Saclay Linear Accelerator is based upon modern hardware technology. In the graphics software, pictures are created in exactly the same manner for all the graphic devices supported by the system. The information used to draw a picture is stored in an array called a graphic segment. Three output primitives are used to add graphic material to a segment. Three coordinate systems are defined

  17. The analysis of software system in SOPHY SPECT

    International Nuclear Information System (INIS)

    Xu Chikang

    1993-01-01

    The FORTH software system of the Single Photon Emission Computed Tomography (SPECT) system made by the French SOPHA MEDICAL Corp. is analysed. On the basis of a brief introduction to the construction principles and programming methods of the FORTH language, the whole structure and layout of the Sophy system are described. With the help of some figures, the modular structure, the allocation of the hard disk and internal storage, and the running procedure of the system are introduced in detail.

  18. A Software Defined Radio Based Airplane Communication Navigation Simulation System

    Science.gov (United States)

    He, L.; Zhong, H. T.; Song, D.

    2018-01-01

    Radio communication and navigation systems play an important role in ensuring the safety of civil airplanes in flight. Function and performance should be tested before these systems are installed on board. Conventionally, a set of transmitter and receiver is needed for each system, so the equipment occupies a lot of space and is high in cost. In this paper, software defined radio technology is applied to design a common-hardware communication and navigation ground simulation system, which can host multiple airplane systems with different operating frequencies, such as HF, VHF, VOR, ILS, ADF, etc. We use a broadband analog front-end hardware platform, the universal software radio peripheral (USRP), to transmit/receive signals of different frequency bands. Software is compiled with LabVIEW on a computer, which interfaces with the USRP through Ethernet and is responsible for communication and navigation signal processing and system control. An integrated testing system is established to perform functional tests and performance verification of the simulation signal, demonstrating the feasibility of our design. The system is a low-cost, common hardware platform for multiple airplane systems, which provides a helpful reference for integrated avionics design.
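    The abstract above describes software generating aviation-band signals (HF, VHF, VOR, ILS, ADF) on a common SDR platform. As a minimal illustration of the idea, the sketch below builds a normalized complex-IQ baseband buffer for an AM-modulated audio tone, the kind of sample stream a host program might hand to a front end such as a USRP; all parameter values are invented, and this is not the paper's actual LabVIEW implementation.

    ```python
    import numpy as np

    def am_baseband(tone_hz, fs, duration_s, depth=0.8):
        """Generate a complex-IQ AM baseband burst of a single audio tone.
        Zero-IF convention: the RF carrier is applied later in hardware."""
        t = np.arange(int(fs * duration_s)) / fs
        audio = np.sin(2 * np.pi * tone_hz * t)
        envelope = 1.0 + depth * audio           # classic AM envelope
        iq = envelope.astype(np.complex64)
        return iq / np.max(np.abs(iq))           # normalize to avoid DAC clipping

    # 1020 Hz is a common ident tone frequency; fs and duration are arbitrary here.
    iq = am_baseband(tone_hz=1020.0, fs=48_000, duration_s=0.5)
    ```

    A real simulator would add per-system signal structure (e.g. VOR's 30 Hz reference/variable components) on top of this kind of buffer before streaming it out.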

  19. Design and development of virtual TXP control system software

    International Nuclear Information System (INIS)

    Wang Yunwei; Leng Shan; Liu Zhisheng; Wang Qiang; Shang Yanxia

    2008-01-01

    Taking the distributed control system (DCS) of the Siemens TELEPERM-XP (TXP) as the simulation object, a Virtual TXP (VTXP) control system based on a virtual DCS with high fidelity and reliability was designed and developed on the Windows platform. In the development process, object-oriented modeling and modular program design were adopted, and the C++ language and technologies such as multithreading, ActiveX controls, and Socket network communication were used to realize wide-range dynamic simulation and recreate the functions of the hardware and software of the real TXP. This paper puts emphasis on the design and realization of the control server and the communication server. The development of the Virtual TXP control system software is of great value for the construction of simulation systems and for the design, commissioning, verification and maintenance of control systems in large-scale power plants, nuclear power plants and combined cycle power plants. (authors)

  20. Spaceport Command and Control System Automated Verification Software Development

    Science.gov (United States)

    Backus, Michael W.

    2017-01-01

    For as long as we have walked the Earth, humans have always been explorers. We have visited our nearest celestial body and sent Voyager 1 beyond our solar system, out into interstellar space. Now it is finally time for us to step beyond our home and onto another planet. The Spaceport Command and Control System (SCCS) is being developed along with the Space Launch System (SLS) to take us on a journey further than ever attempted. Within SCCS are separate subsystems and system-level software, each of which has to be tested and verified. Testing is a long and tedious process, so automating it is much more efficient and also helps to remove the possibility of human error from mission operations. I was part of a team of interns and full-time engineers who automated tests for the requirements on SCCS, and with that was able to help verify that the software systems are performing as expected.

  1. An expert system as applied to bridges : software development phase.

    Science.gov (United States)

    1989-01-01

    This report describes the results of the third of a four-part study dealing with the use of a computerized expert system to assist bridge engineers in their structures management program. In this phase of the study, software (called DOBES) was writte...

  2. Osprey: Operating system for predictable clouds

    NARCIS (Netherlands)

    Sacha, Jan; Napper, Jeff; Mullender, Sape J.; McKie, Jim

    2012-01-01

    Cloud computing is currently based on hardware virtualization wherein a host operating system provides a virtual machine interface nearly identical to that of physical hardware to guest operating systems. Full transparency allows backward compatibility with legacy software but introduces

  3. The new ICSU World Data System: Building on the 50 Year Legacy of the World Data Centers

    Science.gov (United States)

    Clark, D. M.; Minster, J.

    2008-12-01

    The International Council for Science (ICSU) World Data Center (WDC) system was established in 1957 in response to the data needs of the International Geophysical Year (IGY). Its holdings included a wide range of solar, geophysical, environmental, and human dimensions data. The WDC system developed many innovative data management and data exchange procedures and techniques over the last 50 years, which effectively mitigated the impact of global politics on science. The beginning of the 21st century has seen new ICSU requirements for management of large and diverse scientific data from major international programs such as the Group on Earth Observations (GEO) Global Earth Observation System of Systems (GEOSS), the International Polar Year (IPY), the Millennium Ecosystem Assessment (MEA), and the Coordinated Energy and Water Cycle Observation Project (CEOP). As a consequence, a completely new ICSU data activity, the World Data System (WDS), is being created, which will incorporate the major ICSU data activities, including in particular the WDCs and the Federation of Astronomical and Geophysical Data-Analysis Services. Building on the legacy of the WDC system, the WDS will place an emphasis on new information technology as applied to modern data management techniques and international data exchange. The new World Data System will support ICSU's enduring mission and objectives, ensuring the long-term stewardship and provision of quality-assessed data and data services to the international science community and other stakeholders. It will have a broader disciplinary and geographic base than the current ICSU networks and be recognized as a world-wide "community of excellence" for data issues. It will use state-of-the-art systems interoperability, international very high bandwidth capabilities and a coordinated focus on topics such as virtual observatories. It will also encourage the establishment of new data centers and services, using modern paradigms for their establishment.

  4. Actuator prototype system by voice commands using free software

    Directory of Open Access Journals (Sweden)

    Jaime Andrango

    2016-06-01

    Full Text Available This prototype system is a software application that, through digital signal processing techniques, extracts information from the user's speech, which is then used to manage an on/off actuator on a computer peripheral when vowels are pronounced. The method applies spectral differences. The application uses the parallel port as the actuator, with the information written to the memory address 378H. This prototype was developed using free software tools for their versatility and dynamism, and to allow other researchers to build on it for further studies.
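    The abstract describes triggering an actuator from spectral differences in spoken vowels. As a minimal sketch of the underlying signal-processing step, the Python/NumPy snippet below finds the dominant spectral peak of a windowed signal, the kind of feature a vowel detector could threshold on; the 250 Hz test tone is an invented stand-in for a low first formant, not the paper's method.

    ```python
    import numpy as np

    def dominant_frequency(signal, fs):
        """Return the frequency (Hz) of the strongest spectral peak of a
        mono signal, using a Hann window to reduce leakage."""
        spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        return float(freqs[int(np.argmax(spectrum))])

    fs = 8000
    t = np.arange(fs) / fs                       # one second of audio
    vowel_like = np.sin(2 * np.pi * 250 * t)     # toy "formant" at 250 Hz
    ```

    A vowel classifier would compare several such peaks (formants) rather than a single one, but the FFT-peak idea is the same.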

  5. EPICS: A control system software co-development success story

    International Nuclear Information System (INIS)

    Knott, M.; Gurd, D.; Lewis, S.; Thuot, M.

    1993-01-01

    The Experimental Physics and Industrial Control System (EPICS) is the result of a software sharing and co-development effort of major importance now underway. The initial two participants, LANL and ANL, have now been joined by three other labs, and an earlier version of the software has been transferred to three commercial firms and is currently undergoing separate development. The reasons for EPICS's success are worth enumerating and explaining, and the desire and prospects for its continued development are certainly worth examining.

  6. Software safety analysis techniques for developing safety critical software in the digital protection system of the LMR

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jang Soo; Cheon, Se Woo; Kim, Chang Hoi; Sim, Yun Sub

    2001-02-01

    This report describes software safety analysis techniques and engineering guidelines for developing safety-critical software, to identify the state of the art in this field and to give the software safety engineer a trail map between the codes-and-standards layer and the design-methodology-and-documents layer. We have surveyed the management aspects of software safety activities during the software lifecycle in order to improve safety. After identifying the conventional safety analysis techniques for systems, we have surveyed in detail the software safety analysis techniques: software FMEA (Failure Mode and Effects Analysis), software HAZOP (Hazard and Operability Analysis), and software FTA (Fault Tree Analysis). We have also surveyed the state of the art in software reliability assessment techniques. The most important results from the reliability techniques are not the specific probability numbers generated, but the insights into the risk importance of software features. To defend against potential common-mode failures (CMFs), high quality, defense-in-depth, and diversity are considered to be key elements in digital I and C system design. To minimize the possibility of CMFs and thus increase plant reliability, we have provided defense-in-depth and diversity (D-in-D and D) analysis guidelines.

  7. Software safety analysis techniques for developing safety critical software in the digital protection system of the LMR

    International Nuclear Information System (INIS)

    Lee, Jang Soo; Cheon, Se Woo; Kim, Chang Hoi; Sim, Yun Sub

    2001-02-01

    This report describes software safety analysis techniques and engineering guidelines for developing safety-critical software, to identify the state of the art in this field and to give the software safety engineer a trail map between the codes-and-standards layer and the design-methodology-and-documents layer. We have surveyed the management aspects of software safety activities during the software lifecycle in order to improve safety. After identifying the conventional safety analysis techniques for systems, we have surveyed in detail the software safety analysis techniques: software FMEA (Failure Mode and Effects Analysis), software HAZOP (Hazard and Operability Analysis), and software FTA (Fault Tree Analysis). We have also surveyed the state of the art in software reliability assessment techniques. The most important results from the reliability techniques are not the specific probability numbers generated, but the insights into the risk importance of software features. To defend against potential common-mode failures (CMFs), high quality, defense-in-depth, and diversity are considered to be key elements in digital I and C system design. To minimize the possibility of CMFs and thus increase plant reliability, we have provided defense-in-depth and diversity (D-in-D and D) analysis guidelines.
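    Software FTA, one of the techniques the report surveys, propagates basic-event probabilities up through AND/OR gates. The sketch below evaluates such a tree under the standard independence assumption; the example tree (two redundant channels plus a shared voter) and all probability values are hypothetical, not taken from the report.

    ```python
    from dataclasses import dataclass
    from typing import Sequence, Union

    @dataclass
    class Basic:
        p: float                     # probability of the basic failure event

    @dataclass
    class Gate:
        kind: str                    # "AND" or "OR"
        children: Sequence["Node"]

    Node = Union[Basic, Gate]

    def probability(node: Node) -> float:
        """Evaluate the top-event probability, assuming independent events."""
        if isinstance(node, Basic):
            return node.p
        probs = [probability(c) for c in node.children]
        out = 1.0
        if node.kind == "AND":
            for p in probs:
                out *= p              # all children must fail
            return out
        for p in probs:
            out *= (1.0 - p)          # OR: 1 - product of complements
        return 1.0 - out

    # Hypothetical top event: trip fails if both redundant channels fail,
    # or the shared voter fails (a common-mode single point).
    tree = Gate("OR", [Gate("AND", [Basic(1e-3), Basic(1e-3)]), Basic(1e-5)])
    ```

    As the report notes, the insight here is structural (the voter dominates the result), not the specific number produced.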

  8. Design and implementation of embedded Bluetooth software system

    Science.gov (United States)

    Zhou, Zhijian; Zhou, Shujie; Xu, Huimin

    2001-10-01

    This thesis introduces the background and characteristics of Bluetooth technology, then summarizes the architecture and working principles of Bluetooth software. After a careful study of the characteristics of embedded operating systems and Bluetooth software, the thesis defines two sets of Bluetooth software modules. Corresponding to the characteristics of these modules, it describes the design and implementation of LAN Access and a Bluetooth headset. The headset part introduces a development method suited to the particularities of Bluetooth control software. Although these control programs are application entities, the control signaling exchanged between them follows previously defined rules, and they function through the interaction of data and control information. These data and control information constitute protocol data units (PDUs), and the prior definitions can in fact be seen as a protocol. Taking advanced development flows for communication protocols as a reference, this thesis uses a formal method, SDL (Specification and Description Language), for describing and validating the protocol and then manually coding it in C. This method not only preserves the efficiency of hand-written code but also ensures its quality. The introduction also draws on finite state machine theory while presenting this practical, SDL-aided method of protocol development.
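    The SDL approach the thesis describes models protocol entities as state machines driven by signaling events. A toy table-driven sketch of that style is shown below; the states and events (a hypothetical connect/disconnect exchange) are invented for illustration and do not reproduce the thesis's actual headset signaling.

    ```python
    # Transition table in the SDL spirit: (current state, event) -> next state.
    TRANSITIONS = {
        ("IDLE", "PAGE"): "CONNECTING",
        ("CONNECTING", "ACCEPT"): "CONNECTED",
        ("CONNECTING", "REJECT"): "IDLE",
        ("CONNECTED", "DETACH"): "IDLE",
    }

    def run(events, state="IDLE"):
        """Drive the state machine with a sequence of protocol events;
        events with no matching transition leave the state unchanged."""
        for ev in events:
            state = TRANSITIONS.get((state, ev), state)
        return state
    ```

    The benefit of this form, as in SDL, is that the table can be validated (e.g. checked for unreachable states) before it is translated by hand into C.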

  9. Comparison of Overridden Medication-related Clinical Decision Support in the Intensive Care Unit between a Commercial System and a Legacy System.

    Science.gov (United States)

    Wong, Adrian; Wright, Adam; Seger, Diane L; Amato, Mary G; Fiskio, Julie M; Bates, David

    2017-08-23

    Electronic health records (EHRs) with clinical decision support (CDS) have been shown to be effective at improving patient safety. Despite this, alerts delivered as part of CDS are overridden frequently, which is of concern in the critical care population, as this group may be at increased risk of harm. Our organization recently transitioned from an internally developed EHR to a commercial system. Data comparing various EHR systems, especially after transitions between EHRs, are needed to identify areas for improvement. The objective was to compare the two systems and identify areas for potential improvement with the new commercial system at a single institution. Overridden medication-related CDS alerts were included from October to December of the systems' respective years (legacy, 2011; commercial, 2015), restricted to three intensive care units. The two systems were compared with regard to CDS presentation and override rates for four types of CDS: drug-allergy, drug-drug interaction (DDI), geriatric and renal alerts. A post hoc analysis to evaluate for adverse drug events (ADEs) potentially resulting from overridden alerts was performed for 'contraindicated' DDIs via chart review. There was a significant increase in provider exposure to alerts and alert overrides in the commercial system (commercial: n=5,535; legacy: n=1,030). Rates of overrides were higher for the allergy and DDI alerts in the commercial system. Geriatric and renal alerts were significantly different in incidence and presentation between the two systems. No ADEs were identified in an analysis of 43 overridden contraindicated DDI alerts. The vendor system had much higher rates of both alerts and overrides, although we did not find evidence of harm in a review of DDIs which were overridden. We propose recommendations for improving our current system which may be helpful to other similar institutions; improving both alert presentation and the underlying knowledge base appear important.
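    The core metric in this comparison is the override rate per alert type. The sketch below computes it; the paper reports the override counts (legacy n=1,030; commercial n=5,535), but the per-type denominators used here are invented purely to show the calculation.

    ```python
    def override_rate(overridden: int, presented: int) -> float:
        """Fraction of presented CDS alerts that clinicians overrode."""
        return overridden / presented if presented else 0.0

    # Illustrative counts only; denominators are hypothetical.
    legacy = {"overridden": 800, "presented": 1000}
    commercial = {"overridden": 4800, "presented": 5200}

    rates = {
        "legacy": override_rate(**{"overridden": legacy["overridden"],
                                   "presented": legacy["presented"]}),
        "commercial": override_rate(commercial["overridden"],
                                    commercial["presented"]),
    }
    ```

    A real analysis would then test whether the two rates differ significantly (the paper uses per-alert-type comparisons), which a chi-square test on the underlying counts would support.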

  10. The Node Monitoring Component of a Scalable Systems Software Environment

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Samuel James [Iowa State Univ., Ames, IA (United States)

    2006-01-01

    This research describes Fountain, a suite of programs used to monitor the resources of a cluster. A cluster is a collection of individual computers that are connected via a high speed communication network. They are traditionally used by users who desire more resources, such as processing power and memory, than any single computer can provide. A common drawback to effectively utilizing such a large-scale system is the management infrastructure, which often does not scale well as the system grows. Large-scale parallel systems provide new research challenges in the area of systems software, the programs or tools that manage the system from boot-up to running a parallel job. The approach presented in this thesis utilizes a collection of separate components that communicate with each other to achieve a common goal. While systems software comprises a broad array of components, this thesis focuses on the design choices for a node monitoring component. We will describe Fountain, an implementation of the Scalable Systems Software (SSS) node monitor specification. It is targeted at aggregate node monitoring for clusters, focusing on both scalability and fault tolerance as its design goals. It leverages widely used technologies such as XML and HTTP to present an interface to other components in the SSS environment.
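    A node monitor of the kind described serializes per-node metrics as XML for other components to fetch over HTTP. The sketch below shows only the serialization half; the element names (`node`, `metric`) and metric keys are invented for illustration and are not the SSS specification's actual schema.

    ```python
    import xml.etree.ElementTree as ET

    def node_report(name: str, metrics: dict) -> str:
        """Serialize one node's metrics to an XML string, the kind of
        payload an aggregate monitor could serve over HTTP."""
        node = ET.Element("node", name=name)
        for key, value in metrics.items():
            ET.SubElement(node, "metric", name=key).text = str(value)
        return ET.tostring(node, encoding="unicode")

    # Hypothetical node and metrics.
    xml = node_report("compute-01", {"load1": 0.42, "mem_free_mb": 1024})
    ```

    On the aggregation side, a collector would parse these documents back with `ET.fromstring` and merge them into a cluster-wide view.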

  11. Systems Engineering Management and the Relationship of Systems Engineering to Project Management and Software Engineering (presentation)

    OpenAIRE

    Boehm, Barry; Conrow, Ed; Madachy, Ray; Nidiffer, Ken; Roedler, Garry

    2010-01-01

    Prepared for the 13th Annual NDIA Systems Engineering Conference October 28, 2010, “Achieving Acquisition Excellence Via Effective Systems Engineering”. Panel: Systems Engineering Management and the Relationship of Systems Engineering to Project Management and Software Engineering

  12. New Control System Software for the Hobby-Eberly Telescope

    Science.gov (United States)

    Rafferty, T.; Cornell, M. E.; Taylor, C., III; Moreira, W.

    2011-07-01

    The Hobby-Eberly Telescope at the McDonald Observatory is undergoing a major upgrade to support the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) and to facilitate large-field systematic emission-line surveys of the universe. An integral part of this upgrade will be the development of a new software control system. Designed using modern object-oriented programming techniques and tools, the new software system uses a component architecture that closely models the telescope hardware and instruments, and provides a high degree of configurability, automation and scalability. Here we cover the overall architecture of the new system and detail some of the key design patterns and technologies used. These include the utilization of an embedded Python scripting engine, the use of the factory method pattern and interfacing for easy run-time configuration, a flexible communication scheme, the design and use of a centralized logging system, and the distributed GUI architecture.
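    The factory-method-plus-configuration idea mentioned in the abstract can be sketched as a registry of component classes instantiated from run-time configuration. The class name (`Tracker`), registry key, and parameter below are invented for illustration; this is not HET's actual API.

    ```python
    # Component classes register under a name; the system builds them from config.
    REGISTRY = {}

    def component(name):
        """Class decorator: register a component class under a config key."""
        def register(cls):
            REGISTRY[name] = cls
            return cls
        return register

    @component("tracker")
    class Tracker:
        def __init__(self, axis_count=2):
            self.axis_count = axis_count

    def build(config):
        """Instantiate every component named in the configuration dict."""
        return {name: REGISTRY[name](**kwargs) for name, kwargs in config.items()}

    # Run-time configuration selects and parameterizes components.
    system = build({"tracker": {"axis_count": 3}})
    ```

    The payoff of this pattern is that adding a hardware component means adding one decorated class, with no changes to the code that assembles the system.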

  13. Social software: E-learning beyond learning management systems

    DEFF Research Database (Denmark)

    Dalsgaard, Christian

    2006-01-01

    to move e-learning beyond learning management systems. An approach to use of social software in support of a social constructivist approach to e-learning is presented, and it is argued that learning management systems do not support a social constructivist approach which emphasizes self-governed learning...... activities of students. The article suggests a limitation of the use of learning management systems to cover only administrative issues. Further, it is argued that students' self-governed learning processes are supported by providing students with personal tools and engaging them in different kinds of social......The article argues that it is necessary to move e-learning beyond learning management systems and engage students in an active use of the web as a resource for their self-governed, problem-based and collaborative activities. The purpose of the article is to discuss the potential of social software...

  14. Client Mobile Software Design Principles for Mobile Learning Systems

    Directory of Open Access Journals (Sweden)

    Qing Tan

    2009-01-01

    Full Text Available In a client-server mobile learning system, client mobile software must run on the mobile phone to acquire, package, and send the student's interaction data via the mobile communications network to the connected mobile application server. The server receives and processes the client data in order to offer appropriate content and learning activities. To develop mobile learning systems, a number of very important issues must be addressed. Mobile phones have scarce computing resources; they are heterogeneous devices running various mobile operating systems; they have limited user/device interaction capabilities and high data communications costs; and they must provide for device mobility and portability. In this paper we propose five principles for designing client mobile learning software. A location-based adaptive mobile learning system is presented as a proof of concept to demonstrate the applicability of these design principles.

  15. Aspects of system modelling in Hardware/Software partitioning

    DEFF Research Database (Denmark)

    Knudsen, Peter Voigt; Madsen, Jan

    1996-01-01

    This paper addresses fundamental aspects of system modelling and partitioning algorithms in the area of Hardware/Software Codesign. Three basic system models for partitioning are presented and the consequences of partitioning according to each of these are analyzed. The analysis shows the importance of making a clear distinction between the model used for partitioning and the model used for evaluation. It also illustrates the importance of having a realistic hardware model such that hardware sharing can be taken into account. Finally, the importance of integrating scheduling and allocation...
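    To make the partitioning problem the abstract discusses concrete, the toy sketch below greedily moves tasks to hardware, best speedup per unit area first, until an area budget is exhausted; the task names, timings, and the simple additive time model (no hardware sharing or scheduling, precisely what the paper argues a realistic model must add) are invented.

    ```python
    def partition(tasks, area_budget):
        """tasks: list of (name, sw_time, hw_time, hw_area) tuples.
        Returns (hardware_set, total_time) under an additive time model."""
        # Rank by speedup gained per unit of hardware area spent.
        ranked = sorted(tasks, key=lambda t: (t[1] - t[2]) / t[3], reverse=True)
        hw, used = set(), 0.0
        for name, sw_t, hw_t, area in ranked:
            if sw_t > hw_t and used + area <= area_budget:
                hw.add(name)
                used += area
        total = sum(h if n in hw else s for n, s, h, _ in tasks)
        return hw, total

    # Hypothetical tasks: (name, software time, hardware time, hardware area).
    tasks = [("fft", 10.0, 2.0, 4.0), ("ctrl", 3.0, 2.5, 5.0), ("crc", 6.0, 1.0, 2.0)]
    hw, total = partition(tasks, area_budget=6.0)
    ```

    With a budget of 6 area units, "crc" and "fft" go to hardware and "ctrl" stays in software; a partitioner using a richer evaluation model could reach a different answer, which is the paper's point.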

  16. Making embedded systems design patterns for great software

    CERN Document Server

    White, Elecia

    2011-01-01

    Interested in developing embedded systems? Since they don't tolerate inefficiency, these systems require a disciplined approach to programming. This easy-to-read guide helps you cultivate a host of good development practices, based on classic software design patterns and new patterns unique to embedded programming. Learn how to build system architecture for processors, not operating systems, and discover specific techniques for dealing with hardware difficulties and manufacturing requirements. Written by an expert who's created embedded systems ranging from urban surveillance to DNA scanners

  17. 242-A Control System device logic software documentation. Revision 2

    International Nuclear Information System (INIS)

    Berger, J.F.

    1995-01-01

    A Distributive Process Control system was purchased by Project B-534. This computer-based control system, called the Monitor and Control System (MCS), was installed in the 242-A Evaporator located in the 200 East Area. The purpose of the MCS is to monitor and control the Evaporator and to monitor a number of alarms and other signals from various Tank Farm facilities. Applications software for the MCS was developed by the Waste Treatment System Engineering Group of Westinghouse. This document describes the device logic for this system.

  18. Interaction between systems and software engineering in safety-critical systems

    International Nuclear Information System (INIS)

    Knight, J.

    1994-01-01

    There are three areas of concern: when is software to be considered safe; what, exactly, is the role of the software engineer; and how do systems (or sometimes applications) engineers and software engineers interact with each other. The author presents his perspective on these questions, which he feels differs from that of many in the field. He argues for a clear definition of safety in the software arena, so the engineer knows what he is engineering toward. Software must be viewed as part of the entire system, since it does not function on its own, in isolation. He argues for the establishment of clear specifications in this area.

  19. Software of diagnostic systems for nuclear power plants

    International Nuclear Information System (INIS)

    1989-01-01

    23 papers deal with the assessment of the standard of software in in-service diagnostic systems in the Dukovany and Bohunice nuclear power plants. Research projects, intentions and the scope of research are outlined for the diagnostic systems of the Mochovce and Temelin nuclear power plants. It is shown that the use of personal computers in this sphere is growing. (J.B.)

  20. SAGA: A project to automate the management of software production systems

    Science.gov (United States)

    Campbell, Roy H.; Laliberte, D.; Render, H.; Sum, R.; Smith, W.; Terwilliger, R.

    1987-01-01

    The Software Automation, Generation and Administration (SAGA) project is investigating the design and construction of practical software engineering environments for developing and maintaining aerospace systems and applications software. The research includes the practical organization of the software lifecycle, configuration management, software requirements specifications, executable specifications, design methodologies, programming, verification, validation and testing, version control, maintenance, the reuse of software, software libraries, documentation, and automated management.

  1. Software application for quality control protocol of mammography systems

    International Nuclear Information System (INIS)

    Kjosevski, Vladimir; Gershan, Vesna; Ginovska, Margarita; Spasevska, Hristina

    2010-01-01

    Considering the fact that quality control of a mammographic system involves testing a large number of parameters, there is a clear need for information technology to gather, process and store all of the parameters that result from this process. The main goal of this software application is to facilitate and automate the gathering, processing, storing and presentation of the data related to the qualification of the physical and technical parameters during quality control of the mammographic system. The software application, along with its user interface and database, has been built with Microsoft Access 2003, part of the Microsoft Office 2003 software package, which was chosen as the development platform because it is the office application most commonly used among computer users in the country. This is important because it provides the end users with a familiar working environment, without the need for additional training or upgrading of their computer skills. Most importantly, the software application is easy to use, calculates the needed parameters quickly, and is an excellent way to store and display the results. There is a possibility of scaling up this software solution so that it can be used by many different users at the same time over the Internet. It is highly recommended that this system be implemented as soon as possible in the quality control process of mammographic systems due to its many advantages. (Author)

  2. The Legacy of Nikola Tesla

    Indian Academy of Sciences (India)

    Srimath

    The Legacy of Nikola Tesla. 2. AC Power System and its Growth in India. D P Sen Gupta. Electrical power supply has grown enormously during this century. In 1950 the total capacity of generators producing...

  3. The Legacy of Nikola Tesla

    Indian Academy of Sciences (India)

    The Legacy of Nikola Tesla - AC Power System and its Growth in India. D P Sen Gupta. General Article, Resonance – Journal of Science Education, Volume 12, Issue 4, April 2007, pp. 69-79.

  4. The Legacy of Nikola Tesla

    Indian Academy of Sciences (India)

    The Legacy of Nikola Tesla - The AC System that he Helped to Usher in. D P Sen Gupta. General Article, Resonance – Journal of Science Education, Volume 12, Issue 3, March 2007, pp. 54-69.

  5. SQuAVisiT : A Software Quality Assessment and Visualisation Toolset

    NARCIS (Netherlands)

    Roubtsov, Serguei; Telea, Alexandru; Holten, Danny

    2007-01-01

    Software quality assessment of large COBOL industrial legacy systems, whether for maintenance or migration purposes, poses a serious challenge. We present the Software Quality Assessment and Visualisation Toolset (SQuAVisiT), which assists users in performing the above task. First, it allows a fully...

  6. Research on Sewage Treatment System by Configuration Software and PLC

    Directory of Open Access Journals (Sweden)

    Yu Guoqing

    2014-08-01

    Automation products have been applied in various industries, especially in the water treatment industry. This paper describes the design of the hardware and software of a monitoring system for sewage treatment based on the S7-300 PLC (Programmable Logic Controller) and Profibus bus technology. The PLC hardware includes the power supply, the CPU (Central Processing Unit) and analog-digital conversion modules. Through the configuration software MCGS (Monitor and Control Generated System), the system realizes the main functions, such as testing of multiple analog signals, control of the driving outputs, display of the collected digital information, parameter setting, manual debugging control, etc. In this way, the monitoring and management of the sewage treatment plant is accomplished.

  7. Solid Waste Information and Tracking System (SWITS) Software Requirements Specification

    Energy Technology Data Exchange (ETDEWEB)

    MAY, D.L.

    2000-03-22

    This document is the primary document establishing requirements for the Solid Waste Information and Tracking System (SWITS) as it is converted to a client-server architecture. The purpose is to provide the customer and the performing organizations with the requirements for the SWITS in the new environment. This Software Requirements Specification (SRS) describes the system requirements for the SWITS Project, and follows the PHMC Engineering Requirements, HNF-PRO-1819, and Computer Software Quality Assurance Requirements, HNF-PRO-309, policies. This SRS includes sections on general description, specific requirements, references, appendices, and index. The SWITS system defined in this document stores information about the solid waste inventory on the Hanford site. Waste is tracked as it is generated, analyzed, shipped, stored, and treated. In addition to inventory reports, a number of reports for regulatory agencies are produced.

  8. Solid Waste Information and Tracking System (SWITS) Software Requirements Specification

    International Nuclear Information System (INIS)

    MAY, D.L.

    2000-01-01

    This document is the primary document establishing requirements for the Solid Waste Information and Tracking System (SWITS) as it is converted to a client-server architecture. The purpose is to provide the customer and the performing organizations with the requirements for the SWITS in the new environment. This Software Requirements Specification (SRS) describes the system requirements for the SWITS Project, and follows the PHMC Engineering Requirements, HNF-PRO-1819, and Computer Software Quality Assurance Requirements, HNF-PRO-309, policies. This SRS includes sections on general description, specific requirements, references, appendices, and index. The SWITS system defined in this document stores information about the solid waste inventory on the Hanford site. Waste is tracked as it is generated, analyzed, shipped, stored, and treated. In addition to inventory reports, a number of reports for regulatory agencies are produced.

  9. Software management of the LHC Detector Control Systems

    CERN Multimedia

    Varela, F

    2007-01-01

    The control systems of each of the four Large Hadron Collider (LHC) experiments will contain of the order of 150 computers running the back-end applications. These applications will have to be maintained and eventually upgraded during the lifetime of the experiments, ~20 years. This paper presents the centralized software management strategy adopted by the Joint COntrols Project (JCOP) [1], which is based on a central database that holds the overall system configuration. The approach facilitates the integration of different parts of a control system and provides versioning of its various software components. The information stored in the configuration database can eventually be used to restore a computer in the event of failure.

  10. Usage models in reliability assessment of software-based systems

    International Nuclear Information System (INIS)

    Haapanen, P.; Pulkkinen, U.; Korhonen, J.

    1997-04-01

    This volume in the OHA-project report series deals with the statistical reliability assessment of software based systems on the basis of dynamic test results and qualitative evidence from the system design process. Other reports to be published later on in the OHA-project report series will handle the diversity requirements in safety critical software-based systems, generation of test data from operational profiles and handling of programmable automation in plant PSA-studies. In this report the issues related to the statistical testing and especially automated test case generation are considered. The goal is to find an efficient method for building usage models for the generation of statistically significant set of test cases and to gather practical experiences from this method by applying it in a case study. The scope of the study also includes the tool support for the method, as the models may grow quite large and complex. (32 refs., 30 figs.)
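
    The usage-model idea described above lends itself to a compact sketch: represent the operational profile as a Markov chain and draw random walks through it to obtain statistically representative test cases. The states and transition probabilities below are invented for illustration and are not taken from the OHA report:

```python
import random

# Hypothetical usage model for a small operator interface, expressed as a
# Markov chain: state -> list of (next_state, probability) transitions.
USAGE_MODEL = {
    "start":  [("login", 1.0)],
    "login":  [("browse", 0.7), ("exit", 0.3)],
    "browse": [("edit", 0.5), ("browse", 0.3), ("exit", 0.2)],
    "edit":   [("browse", 0.6), ("exit", 0.4)],
    "exit":   [],
}

def generate_test_case(model, rng, start="start", max_len=50):
    """Random walk through the usage model, yielding one statistically
    representative test case (a sequence of user-visible states)."""
    state, path = start, [start]
    while model[state] and len(path) < max_len:
        states, probs = zip(*model[state])
        state = rng.choices(states, weights=probs)[0]
        path.append(state)
    return path

rng = random.Random(42)  # fixed seed so the generated suite is reproducible
suite = [generate_test_case(USAGE_MODEL, rng) for _ in range(5)]
for case in suite:
    print(" -> ".join(case))
```

    Because the walks follow the usage probabilities, frequently exercised paths appear most often in the suite, which is what makes the resulting reliability estimate statistically meaningful.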

  11. Assessing software quality at each step of its life-cycle to enhance reliability of control systems

    International Nuclear Information System (INIS)

    Hardion, V.; Buteau, A.; Leclercq, N.; Abeille, G.; Pierre-Joseph, Z.; Le, S.

    2012-01-01

    A distributed software control system aims to enhance upgradability and reliability by sharing responsibility among several components. The disadvantage is that this makes it harder to detect problems across a significant number of modules. With Kaizen in mind, we have chosen to invest continuously in automation to obtain a complete overview of software quality despite the growth of legacy code. The development process has already been mastered by staging each life-cycle step thanks to a continuous integration server based on JENKINS and MAVEN. We enhanced this process, focusing on three objectives: Automatic Test, Static Code Analysis and Post-Mortem Supervision. Now, the build process automatically includes a test section to detect regressions, incorrect behaviour and integration incompatibility. The in-house TANGOUNIT project addresses the difficulties of testing distributed components such as Tango Devices. In the next step, the programming code has to pass a complete code quality check-up. The SONAR quality server has been integrated into the process to collect each static code analysis and display the hot topics on summary web pages. Finally, the integration of Google BREAKPAD in every TANGO Device gives us essential statistics from crash reports and enables us to replay crash scenarios at any time. We have already gained greater visibility on current developments. Some concrete results will be presented, including reliability enhancement, better management of subcontracted software development, quicker adoption of coding standards by new developers and understanding of impacts when moving to a new technology. (authors)

  12. Optimal structure of fault-tolerant software systems

    International Nuclear Information System (INIS)

    Levitin, Gregory

    2005-01-01

    This paper considers software systems consisting of fault-tolerant components. These components are built from functionally equivalent but independently developed versions characterized by different reliability and execution times. Because of hardware resource constraints, the number of versions that can run simultaneously is limited. The expected system execution time and its reliability (defined as the probability of obtaining the correct output within a specified time) depend strictly on the parameters of the software versions and the sequence of their execution. A system structure optimization problem is formulated in which one has to choose software versions for each component and find the sequence of their execution in order to achieve the greatest system reliability subject to cost constraints. The versions are to be chosen from a list of available products, each characterized by its reliability, execution time and cost. The suggested optimization procedure is based on an algorithm for determining the system execution time distribution, which uses the moment generating function approach, and on the genetic algorithm. Both N-version programming and the recovery block scheme are considered within a universal model. An illustrative example is presented.
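
    For small instances, the version-selection part of the problem described above can be illustrated without a genetic algorithm by brute force: choose a subset of versions for each component, score the reliability of the series system, and keep the best choice within budget. The component names, reliabilities and costs below are invented, and execution-time distributions are deliberately left out of this sketch:

```python
from itertools import product, combinations

# Hypothetical version catalogue: per component, (reliability, cost) of each
# independently developed version. All numbers are illustrative.
CATALOGUE = {
    "parser": [(0.90, 3), (0.85, 2), (0.80, 1)],
    "solver": [(0.95, 5), (0.88, 2)],
}
BUDGET = 8

def subset_reliability(versions):
    # A fault-tolerant component fails only if every selected version fails.
    fail = 1.0
    for r, _ in versions:
        fail *= (1.0 - r)
    return 1.0 - fail

def best_structure(catalogue, budget):
    per_component = []
    for name, versions in catalogue.items():
        # Every non-empty subset of this component's versions is a candidate.
        subsets = [list(c) for k in range(1, len(versions) + 1)
                   for c in combinations(versions, k)]
        per_component.append((name, subsets))
    best = (0.0, None, 0)
    for choice in product(*[s for _, s in per_component]):
        cost = sum(c for subset in choice for _, c in subset)
        if cost > budget:
            continue
        rel = 1.0
        for subset in choice:
            rel *= subset_reliability(subset)  # components run in series
        if rel > best[0]:
            best = (rel, choice, cost)
    return best

rel, choice, cost = best_structure(CATALOGUE, BUDGET)
print(f"best reliability {rel:.4f} at cost {cost}")
```

    Brute force is exponential in the number of versions, which is exactly why the paper resorts to a genetic algorithm for realistic problem sizes.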

  13. Distributed software framework and continuous integration in hydroinformatics systems

    Science.gov (United States)

    Zhou, Jianzhong; Zhang, Wei; Xie, Mengfei; Lu, Chengwei; Chen, Xiao

    2017-08-01

    When encountering multiple and complicated models, multisource structured and unstructured data, and complex requirements analysis, the platform design and integration of hydroinformatics systems become a challenge. To properly solve these problems, we describe a distributed software framework and its continuous integration process in hydroinformatics systems. This distributed framework mainly consists of a server cluster for models, a distributed database, GIS (Geographic Information System) servers, a master node and clients. Based on it, a GIS-based decision support system for jointly regulating the water quantity and water quality of a group of lakes in Wuhan, China is established.

  14. New Software Architecture Options for the TCL Data Acquisition System

    Energy Technology Data Exchange (ETDEWEB)

    Valenton, Emmanuel [Univ. of California, Berkeley, CA (United States)

    2014-09-01

    The Turbulent Combustion Laboratory (TCL) conducts research on combustion in turbulent flow environments. To conduct this research, the TCL utilizes several pulse lasers, a traversable wind tunnel, flow controllers, scientific-grade CCD cameras, and numerous other components. Responsible for managing these different data-acquiring instruments and data-processing components is the Data Acquisition (DAQ) software. However, the current system is constrained to running through VXI hardware (an instrument-computer interface) that is several years old, requiring the use of an outdated version of the visual programming language LabVIEW. A new acquisition system is being programmed which will borrow heavily from either a programming model known as the Current Value Table (CVT) System or another model known as the Server-Client System. The CVT System model is, in essence, a giant spreadsheet from which data or commands may be read and to which they may be written, while the Server-Client System is based on network connections between a server and a client, very much like the server-client model of the Internet. Currently, the bare elements of a CVT DAQ software have been implemented, consisting of client programs in addition to a server program that the CVT runs on. This system is being rigorously tested to evaluate the merits of pursuing the CVT System model and to uncover any potential flaws before further implementation. If the CVT System is chosen, which is likely, then future work will consist of building up the system until enough client programs have been created to run the individual components of the lab. The advantages of such a system will be flexibility, portability, and polymorphism. Additionally, the new DAQ software will allow the lab to replace the VXI with a newer instrument interface, the PXI, and take advantage of the capabilities of current and future versions of LabVIEW.
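
    The CVT model described above can be sketched in a few lines: a thread-safe table that instrument programs write the latest values into and client programs read at their own pace. The key names and values below are hypothetical, not the TCL's actual channels:

```python
import threading

class CurrentValueTable:
    """Minimal sketch of the 'giant spreadsheet' idea: writers post the
    latest value under a key; readers fetch it whenever they like."""

    def __init__(self):
        self._lock = threading.Lock()
        self._table = {}

    def write(self, key, value):
        with self._lock:
            self._table[key] = value

    def read(self, key, default=None):
        with self._lock:
            return self._table.get(key, default)

cvt = CurrentValueTable()
# An acquisition program posts the newest frame count under a made-up key...
cvt.write("camera/frames_acquired", 128)
# ...while a control client polls it independently.
print(cvt.read("camera/frames_acquired"))
```

    The appeal of this design, as the abstract notes, is its polymorphism: any number of clients can be added later without changing the table itself, since they only agree on key names.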

  15. 75 FR 11918 - Hewlett Packard Company, Business Critical Systems, Mission Critical Business Software Division, OpenVMS Operating System...

    Science.gov (United States)

    2010-03-12

    ... Packard Company, Business Critical Systems, Mission Critical Business Software Division, OpenVMS Operating... Colorado, Marlborough, Massachusetts; Hewlett Packard Company, Business Critical Systems, Mission Critical... Company, Business Critical Systems, Mission Critical Business Software Division, OpenVMS Operating System...

  16. An Architecture, System Engineering, and Acquisition Approach for Space System Software Resiliency

    Science.gov (United States)

    Phillips, Dewanne Marie

    Software-intensive space systems can harbor defects and vulnerabilities that may enable external adversaries or malicious insiders to disrupt or disable system functions, risking mission compromise or loss. Mitigating this risk demands a sustained focus on the security and resiliency of the system architecture, including software, hardware, and other components. Robust software engineering practices contribute to the foundation of a resilient system so that the system "can take a hit to a critical component and recover in a known, bounded, and generally acceptable period of time". Software resiliency must be a priority and addressed early in life-cycle development to contribute to a secure and dependable space system. Those who develop, implement, and operate software-intensive space systems must determine the factors and systems engineering practices to address when investing in software resiliency. This dissertation offers methodical approaches for improving space system resiliency through software architecture design, systems engineering, and increased software security, thereby reducing the risk of latent software defects and vulnerabilities. By giving greater attention to the early life-cycle phases of development, we can alter the engineering process to help detect, eliminate, and avoid vulnerabilities before space systems are delivered. To achieve this objective, this dissertation identifies knowledge, techniques, and tools that engineers and managers can utilize to help them recognize how vulnerabilities are produced and discovered, so that they can learn to circumvent them in future efforts. We conducted a systematic review of existing architectural practices, standards, security and coding practices, and various threats, defects, and vulnerabilities that impact space systems, drawing on hundreds of relevant publications and interviews of subject matter experts. We expanded on the system-level body of knowledge for resiliency and identified a new software...

  17. Development Of Data Acquisition Software For Centralized Radiation Monitoring System

    International Nuclear Information System (INIS)

    Nolida Yussup; Maslina Mohd Ibrahim; Mohd Fauzi Haris; Syirrazie Che Soh; Harzawardi Hasim; Azraf Azman; Mohd Ashhar Khalid

    2014-01-01

    Nowadays, with the growth of technology, many devices and pieces of equipment can be connected to the network and the Internet to enable online data acquisition. The centralized radiation monitoring system utilizes a Local Area Network (LAN) as the communication medium for acquiring the area radiation levels from radiation detectors in the Malaysian Nuclear Agency (Nuclear Malaysia). The development of the system involves device configuration, wiring, network and hardware installation, and software and web development. This paper describes the software development on the system server that is responsible for acquiring and recording the area radiation readings from the detectors. The recorded readings are then called from a web program to be displayed on a web site. The readings, with time stamps, are stored in the system database for querying. Besides acquiring the area radiation levels in Nuclear Malaysia centrally, additional features such as data conversion from mR to μSv and line chart display are developed in the software for effective observation and study of radiation level trends. (author)
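
    The mR-to-μSv conversion mentioned above can be sketched with the common rule of thumb that 1 mR of exposure corresponds to roughly 10 μSv of dose equivalent. The exact factor depends on photon energy and on the detector's calibration, so the constant below is an assumption for illustration, not the system's actual calibration:

```python
# Rule-of-thumb conversion factor: 1 mR ~= 10 uSv (assumption; energy- and
# detector-dependent in practice).
USV_PER_MR = 10.0

def mr_to_usv(milliroentgen):
    """Convert an exposure reading in mR to an approximate dose in uSv."""
    return milliroentgen * USV_PER_MR

reading_mr = 0.25  # a hypothetical area reading from one detector
print(f"{reading_mr} mR ~= {mr_to_usv(reading_mr)} uSv")
```

    In a monitoring server like the one described, such a conversion would typically be applied once at ingest so that trend charts and database queries all use a single unit.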

  18. Radian remote sampling system digital processor system. Software detail documentation: Pittsburgh Energy Research Center

    Energy Technology Data Exchange (ETDEWEB)

    1979-11-01

    Software documentation for the DART data acquisition system is provided. This system runs on a minicomputer. After an overview of the system and file structures, the various subprograms are discussed individually; flow charts are included. 37 figures. (RWR)

  19. Software that goes with the flow in systems biology

    Directory of Open Access Journals (Sweden)

    Le Novère Nicolas

    2010-11-01

    A recent article in BMC Bioinformatics describes new advances in workflow systems for computational modeling in systems biology. Such systems can accelerate, and improve the consistency of, modeling through automation not only at the simulation and results-production stages, but also at the model-generation stage. Their work is a harbinger of the next generation of more powerful software for systems biologists. See research article: http://www.biomedcentral.com/1471-2105/11/582/abstract/ Ever since the rise of systems biology at the end of the last century, mathematical representations of biological systems and their activities have flourished. They are being used to describe everything from biomolecular networks, such as gene regulation, metabolic processes and signaling pathways, at the lowest biological scales, to tissue growth and differentiation, drug effects, environmental interactions, and more. A very active area in the field has been the development of techniques that facilitate the construction, analysis and dissemination of computational models. The heterogeneous, distributed nature of most data resources today has increased not only the opportunities for, but also the difficulties of, developing software systems to support these tasks. The work by Li et al. [1] published in BMC Bioinformatics represents a promising evolutionary step forward in this area. They describe a workflow system: a visual software environment enabling a user to create a connected set of operations to be performed sequentially using separate tools and resources. Their system uses third-party data resources accessible over the Internet to elaborate and parametrize (that is, assign parameter values to) computational models in a semi-automated manner. In their work, the authors point towards a promising future for computational modeling and simultaneously highlight some of the difficulties that need to be overcome before we get there.

  20. Legacy model integration for enhancing hydrologic interdisciplinary research

    Science.gov (United States)

    Dozier, A.; Arabi, M.; David, O.

    2013-12-01

    Many challenges are introduced to interdisciplinary research in and around the hydrologic science community due to advances in computing technology and modeling capabilities in different programming languages, across different platforms and frameworks by researchers in a variety of fields with a variety of experience in computer programming. Many new hydrologic models as well as optimization, parameter estimation, and uncertainty characterization techniques are developed in scripting languages such as Matlab, R, Python, or in newer languages such as Java and the .Net languages, whereas many legacy models have been written in FORTRAN and C, which complicates inter-model communication for two-way feedbacks. However, most hydrologic researchers and industry personnel have little knowledge of the computing technologies that are available to address the model integration process. Therefore, the goal of this study is to address these new challenges by utilizing a novel approach based on a publish-subscribe-type system to enhance modeling capabilities of legacy socio-economic, hydrologic, and ecologic software. Enhancements include massive parallelization of executions and access to legacy model variables at any point during the simulation process by another program without having to compile all the models together into an inseparable 'super-model'. Thus, this study provides two-way feedback mechanisms between multiple different process models that can be written in various programming languages and can run on different machines and operating systems. Additionally, a level of abstraction is given to the model integration process that allows researchers and other technical personnel to perform more detailed and interactive modeling, visualization, optimization, calibration, and uncertainty analysis without requiring deep understanding of inter-process communication. To be compatible, a program must be written in a programming language with bindings to a common
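
    The publish-subscribe mechanism described above can be sketched as a minimal in-process message bus: one model publishes a named variable during its timestep, and another model, possibly written by a different team in a different language, reacts to it without either being compiled into a 'super-model'. The topic names and the demand rule below are invented for illustration:

```python
from collections import defaultdict

class ModelBus:
    """Toy publish-subscribe bus sketching the two-way feedback idea: models
    publish named variables and subscribe to variables owned by other
    models, without any compile-time coupling between them."""

    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, value):
        for cb in self._subs[topic]:
            cb(value)

bus = ModelBus()
demands = []

# A hypothetical "economic" model caps water demand at the streamflow
# published by a hypothetical "hydrologic" model each timestep.
bus.subscribe("hydro/streamflow", lambda q: demands.append(min(q, 40.0)))

for q in (55.0, 30.0):  # two simulated timesteps of streamflow
    bus.publish("hydro/streamflow", q)

print(demands)
```

    A production version, as the abstract suggests, would route these messages across processes and machines so that a FORTRAN legacy model and a Python optimizer can exchange variables mid-simulation.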

  1. Software System for Finding the Incipient Faults in Power Transformers

    Directory of Open Access Journals (Sweden)

    Nikolina Petkova

    2015-05-01

    In this paper a new software system for finding incipient faults is presented. An experiment was made with real measurements of partial discharge (PD) that appeared in a power transformer. The software system uses the acquired data to determine the real state of the transformer. One of the most important criteria for the power transformer's state is the presence of partial discharges. The wave propagation caused by a partial discharge depends on the winding scheme and the construction of the power equipment. In all cases, the PD source had a specific position, so the wave measured by the PD coupling device had a specific waveform. The waveform differs when the PD coupling device is placed at a different location. The waveform and the propagation time are criteria for localizing the source of incipient faults within the volume of the power transformer.

  2. Software design for the Tritium System Test Assembly

    International Nuclear Information System (INIS)

    Claborn, G.W.; Heaphy, R.T.; Lewis, P.S.; Mann, L.W.; Nielson, C.W.

    1983-01-01

    The control system for the Tritium Systems Test Assembly (TSTA) must execute complicated algorithms for the control of several sophisticated subsystems. It must implement this control while meeting requirements for easy modifiability and high availability, and must provide stringent protection for personnel and the environment. Software techniques used to deal with these requirements are described, including modularization based on the structure of the physical systems, a two-level hierarchy of concurrency, a dynamically modifiable man-machine interface, and a specification and documentation language based on a computerized form of structured flowcharts.

  3. FRAMES Software System: Linking to the Statistical Package R

    Energy Technology Data Exchange (ETDEWEB)

    Castleton, Karl J.; Whelan, Gene; Hoopes, Bonnie L.

    2006-12-11

    This document provides requirements, design, data-file specifications, test plan, and Quality Assurance/Quality Control protocol for the linkage between the statistical package R and the Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) Versions 1.x and 2.0. The requirements identify the attributes of the system. The design describes how the system will be structured to meet those requirements. The specification presents the specific modifications to FRAMES to meet the requirements and design. The test plan confirms that the basic functionality listed in the requirements (black box testing) actually functions as designed, and QA/QC confirms that the software meets the client’s needs.

  4. Specification and Verification of Secure Concurrent and Distributed Software Systems

    Science.gov (United States)

    1992-02-01

    [The abstract of this record is garbled in the source; only fragments of the report's reference list are recoverable, including: M. Broy and M. Wirsing, "Partial abstract types", Acta Informatica 18, 1982; a further citation to Acta Informatica 24, 1987; R. Dannenberg and G. Ernst, "Formal program verification using symbolic execution", IEEE Transactions on Software Engineering; and an article in ACM Transactions on Programming Languages and Systems 6(2):159-174, 1984.]

  5. DiPS: A Unifying Approach for developing System Software

    OpenAIRE

    Michiels, Sam; Matthijs, Frank; Walravens, Dirk; Verbaeten, Pierre

    2002-01-01

    In this paper we unify three essential features for flexible system software: a component-oriented approach, self-adaptation and separation of concerns. We propose DiPS (Distrinet Protocol Stack), a component framework which offers components, an anonymous interaction model and connectors to handle non-functional aspects such as concurrency. DiPS has effectively been used in industrial protocol stacks and device drivers.

  6. SIM_EXPLORE: Software for Directed Exploration of Complex Systems

    Science.gov (United States)

    Burl, Michael; Wang, Esther; Enke, Brian; Merline, William J.

    2013-01-01

    Physics-based numerical simulation codes are widely used in science and engineering to model complex systems that would be infeasible to study otherwise. While such codes may provide the highest-fidelity representation of system behavior, they are often so slow to run that insight into the system is limited. Trying to understand the effects of inputs on outputs by conducting an exhaustive grid-based sweep over the input parameter space is simply too time-consuming. An alternative approach called "directed exploration" has been developed to harvest information from numerical simulators more efficiently. The basic idea is to employ active learning and supervised machine learning to cleverly choose at each step which simulation trials to run next, based on the results of previous trials. SIM_EXPLORE is a new computer program that uses directed exploration to efficiently explore complex systems represented by numerical simulations. The software sequentially identifies and runs simulation trials that it believes will be most informative given the results of previous trials. The results of new trials are incorporated into the software's model of the system behavior, and the updated model is then used to pick the next round of new trials. This process, implemented as a closed-loop system wrapped around existing simulation code, provides a means to improve the speed and efficiency with which a set of simulations can yield scientifically useful results. The software focuses on the case in which the feedback from the simulation trials is binary-valued, i.e., the learner is only informed of the success or failure of the simulation trial to produce a desired output. The software offers a number of choices for the supervised learning algorithm (the method used to model the system behavior given the results so far) and a number of choices for the active learning strategy (the method used to choose which new simulation trials to run given the current behavior model). The software...

  7. Substantially Evolutionary Theorizing in Designing Software-Intensive Systems

    Directory of Open Access Journals (Sweden)

    Petr Sosnin

    2018-04-01

    Full Text Available Useful inheritances from scientific experience open perspective ways for increasing the degree of success in designing of systems with software. One such way is a search and build applied theory that takes into account the nature of design and the specificity of software engineering. This paper presents a substantially evolutionary approach to creating the project theories, the application of which leads to positive effects that are traditionally expected from theorizing. Any implementation of the approach is based on a reflection by designers of an operational space of designing onto a semantic memory of a question-answer type. One of the results of such reflection is a system of question-answer nets, the nodes of which register facts of interactions of designers with accessible experience. A set of such facts is used by designers for creating and using the theory that belongs to the new subclass of Grounded Theories. This sub-class is oriented on organizationally behavioral features of a project’s work based on design thinking, automated mental imagination, and thought experimenting that facilitate increasing the degree of controlled intellectualization in the design process and, correspondingly, increasing the degree of success in the development of software-intensive systems.

  8. A software system for oilfield facility investment minimization

    International Nuclear Information System (INIS)

    Ding, Z.X.; Startzman, R.A.

    1996-01-01

    Minimizing investment in oilfield development is an important subject that has attracted a considerable amount of industry attention. One method to reduce investment involves the optimal placement and selection of production facilities. Because of the large amount of capital used in this process, saving a small percent of the total investment may represent a large monetary value. The literature reports algorithms using mathematical programming techniques that were designed to solve the proposed problem in a global optimal manner. Owing to the high-computational complexity and the lack of user-friendly interfaces for data entry and results display, mathematical programming techniques have not been given enough attention in practice. This paper describes an interactive, graphical software system that provides a global optimal solution to the problem of placement and selection of production facilities in oil-field development processes. This software system can be used as an investment minimization tool and a scenario-study simulator. The developed software system consists of five basic modules: (1) an interactive data-input unit, (2) a cost function generator, (3) an optimization unit, (4) a graphic-output display, and (5) a sensitivity-analysis unit

  9. Software for the Local Control and Instrumentation System for MFTF

    International Nuclear Information System (INIS)

    Labiak, W.G.

    1979-01-01

    There are nine different systems requiring over fifty computers in the Local Control and Instrumentation System for the Mirror Fusion Test Facility. Each computer system consists of an LSI-11/2 processor with 32,000 words of memory and a serial driver that implements the CAMAC serial highway protocol. With this large number of systems, it is important that as much software as possible be common to all systems. A serial communications system has been developed for data transfers between the LSI-11/2's and the supervisory computers. This system is based on the RS-232-C interface with modem control lines. Six modem control lines are used for hardware handshaking, which allows totally independent full-duplex communication to occur. Odd parity on each byte and a 16-bit checksum are used to detect errors in transmission
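
The error-detection scheme in this record (odd parity on each byte plus a 16-bit checksum over the frame) can be sketched as follows; the framing helpers are invented for illustration, since the actual MFTF message format is not given here:

```python
# Odd parity per byte + 16-bit checksum, as described in the abstract.
# The encode/verify framing is a hypothetical illustration.

def odd_parity_bit(byte):
    """Parity bit chosen so data bits plus parity contain an odd number of 1s."""
    return 0 if bin(byte & 0xFF).count("1") % 2 == 1 else 1

def checksum16(frame):
    """Simple 16-bit arithmetic checksum over all bytes of the frame."""
    return sum(frame) & 0xFFFF

def encode(data):
    frame = bytes(data)
    return frame, [odd_parity_bit(b) for b in frame], checksum16(frame)

def verify(frame, parity_bits, csum):
    parity_ok = all((bin(b).count("1") + p) % 2 == 1
                    for b, p in zip(frame, parity_bits))
    return parity_ok and checksum16(frame) == csum

frame, parity, csum = encode(b"MFTF shot 1024")
print(verify(frame, parity, csum))              # clean frame passes
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
print(verify(corrupted, parity, csum))          # single-bit error is detected
```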

  10. A Non-Intrusive Approach to Enhance Legacy Embedded Control Systems with Cyber Protection Features

    Science.gov (United States)

    Ren, Shangping; Chen, Nianen; Yu, Yue; Poirot, Pierre; Kwiat, Kevin; Tsai, Jeffrey J. P.

    Trust is cast as a continuous re-evaluation: a system’s reliability and security are scrutinized not just prior to, but during, its deployment. This approach to maintaining trust is specifically applied to distributed and embedded control systems. Unlike general-purpose systems, distributed and embedded control systems, such as power grid control systems and water treatment systems, generally have a 24x7 availability requirement. Hence, upgrading these systems or adding new cyber protection features in order to sustain them when faults caused by cyber attacks occur is often difficult to achieve, and this inhibits the evolution of these systems into a cyber environment. In this chapter, we present a solution for extending the capabilities of existing systems while simultaneously maintaining the stability of the current systems. An externalized survivability management scheme based on the observe-reason-modify paradigm is applied, which decomposes the cyber attack protection process into three orthogonal subtasks: observation, evaluation, and protection. This architecture provides greater flexibility and has a resolvability attribute: it can utilize emerging techniques, yet requires minimal or even no modifications to the controlled infrastructures. The approach itself is general and can be applied to a broad class of observable systems.
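
The observe-reason-modify idea of wrapping a legacy controller externally, without touching its code, can be sketched roughly as below; the controller, limits, and recovery action are all hypothetical, not the chapter's actual design:

```python
# Externalized survivability sketch: observe the legacy output, evaluate it
# against safe bounds, and protect by substituting a safe value. Illustrative only.

class LegacyController:
    """Stands in for an unmodifiable embedded control loop."""
    def command(self, sensor_value):
        return sensor_value * 1.5        # possibly-compromised control law

class SurvivabilityWrapper:
    def __init__(self, controller, low, high):
        self.controller, self.low, self.high = controller, low, high
        self.log = []                    # documented evidence of interventions

    def command(self, sensor_value):
        out = self.controller.command(sensor_value)    # observe
        if not (self.low <= out <= self.high):         # evaluate
            self.log.append(("clamped", out))
            out = max(self.low, min(self.high, out))   # protect: clamp to safe range
        return out

plant = SurvivabilityWrapper(LegacyController(), low=0.0, high=100.0)
print(plant.command(40.0))   # within limits, passed through unchanged
print(plant.command(90.0))   # 135.0 would breach the limit, so it is clamped
```

The legacy controller is never modified; all protection logic lives in the wrapper, which is the non-intrusive property the title refers to.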

  11. Software Tools to Support the Assessment of System Health

    Science.gov (United States)

    Melcher, Kevin J.

    2013-01-01

    This presentation provides an overview of three software tools that were developed by the NASA Glenn Research Center to support the assessment of system health: the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), the Systematic Sensor Selection Strategy (S4), and the Extended Testability Analysis (ETA) tool. Originally developed to support specific NASA projects in aeronautics and space, these software tools are currently available to U.S. citizens through the NASA Glenn Software Catalog. The ProDiMES software tool was developed to support a uniform comparison of propulsion gas path diagnostic methods. Methods published in the open literature are typically applied to dissimilar platforms with different levels of complexity. They often address different diagnostic problems and use inconsistent metrics for evaluating performance. As a result, it is difficult to perform a one-to-one comparison of the various diagnostic methods. ProDiMES solves this problem by serving as a theme problem to aid in propulsion gas path diagnostic technology development and evaluation. The overall goal is to provide a tool that will serve as an industry standard, and will truly facilitate the development and evaluation of significant Engine Health Management (EHM) capabilities. ProDiMES has been developed under a collaborative project of The Technical Cooperation Program (TTCP) based on feedback provided by individuals within the aircraft engine health management community. The S4 software tool provides a framework that supports the optimal selection of sensors for health management assessments. S4 is structured to accommodate user-defined applications, diagnostic systems, search techniques, and system requirements/constraints. It identifies one or more sensor suites that maximize diagnostic performance while meeting other user-defined system requirements. S4 provides a systematic approach for evaluating combinations of sensors to determine the set or sets of
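
S4's actual search techniques are not described in this record; the following greedy set-cover sketch over invented fault-signature data only illustrates the kind of sensor-suite selection problem involved:

```python
# Hypothetical sensor-selection sketch: pick a sensor suite under a budget
# that covers as many fault signatures as possible. Data and names are invented.

# fault -> set of sensors able to detect it (illustrative)
signatures = {
    "compressor_stall":  {"P2", "T3", "N1"},
    "turbine_wear":      {"T5", "N2"},
    "sensor_bias":       {"P2", "T5"},
    "fuel_leak":         {"WF", "T3"},
}

def greedy_select(signatures, budget):
    """Greedily add the sensor that newly covers the most undetected faults,
    stopping at the budget or when every fault is covered."""
    chosen, uncovered = set(), set(signatures)
    all_sensors = set().union(*signatures.values())
    while uncovered and len(chosen) < budget:
        best = max(sorted(all_sensors - chosen),          # sorted for deterministic ties
                   key=lambda s: sum(s in signatures[f] for f in uncovered))
        chosen.add(best)
        uncovered = {f for f in uncovered if best not in signatures[f]}
    return chosen, uncovered

chosen, undetected = greedy_select(signatures, budget=2)
print(sorted(chosen), sorted(undetected))
```

A real tool like S4 would additionally weigh sensor cost, reliability, and other user-defined constraints; the greedy heuristic here stands in for its search step.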

  12. Using VME to leverage legacy CAMAC electronics into a high speed data acquisition system

    International Nuclear Information System (INIS)

    Anthony, P.L.

    1997-06-01

    The authors report on the first full scale implementation of a VME based Data Acquisition (DAQ) system at the Stanford Linear Accelerator Center (SLAC). This system was designed for use in the End Station A (ESA) fixed target program. It was designed to handle interrupts at rates up to 120 Hz and event sizes up to 10,000 bytes per interrupt. One of the driving considerations behind the design of this system was to make use of existing CAMAC based electronics and yet deliver a high performance DAQ system. This was achieved by basing the DAQ system in a VME backplane allowing parallel control and readout of CAMAC branches and VME DAQ modules. This system was successfully used in the Spin Physics research program at SLAC (E154 and E155)

  13. Qualification of safety-critical software for digital reactor safety system in nuclear power plants

    International Nuclear Information System (INIS)

    Kwon, Kee-Choon; Park, Gee-Yong; Kim, Jang-Yeol; Lee, Jang-Soo

    2013-01-01

    This paper describes the software qualification activities for the safety-critical software of the digital reactor safety system in nuclear power plants. The main activities of the software qualification processes are the preparation of software planning documentations, verification and validation (V and V) of the software requirements specifications (SRS), software design specifications (SDS) and codes, and the testing of the integrated software and integrated system. Moreover, the software safety analysis and software configuration management are involved in the software qualification processes. The V and V procedure for SRS and SDS contains a technical evaluation, licensing suitability evaluation, inspection and traceability analysis, formal verification, software safety analysis, and an evaluation of the software configuration management. The V and V processes for the code are a traceability analysis, source code inspection, test case and test procedure generation. Testing is the major V and V activity of the software integration and system integration phases. The software safety analysis employs a hazard operability method and software fault tree analysis. The software configuration management in each software life cycle is performed by the use of a nuclear software configuration management tool. Through these activities, we can achieve the functionality, performance, reliability, and safety that are the major V and V objectives of the safety-critical software in nuclear power plants. (author)
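
The traceability-analysis step mentioned in this record (requirements traced to design, design traced to code) can be illustrated with a small check over invented link tables; the identifiers are hypothetical:

```python
# Toy traceability analysis: report any SRS requirement with no design trace
# and any design element with no code trace. All artifact IDs are invented.

srs_to_sds = {          # requirement id -> design elements implementing it
    "SRS-001": ["SDS-10"],
    "SRS-002": ["SDS-11", "SDS-12"],
    "SRS-003": [],                      # not yet designed -> finding
}
sds_to_code = {
    "SDS-10": ["trip_logic.c"],
    "SDS-11": ["setpoint.c"],
    "SDS-12": [],                       # not yet coded -> finding
}

def trace_findings(srs_to_sds, sds_to_code):
    findings = []
    for req, elems in srs_to_sds.items():
        if not elems:
            findings.append(f"{req}: no design element traces to this requirement")
        for e in elems:
            if not sds_to_code.get(e):
                findings.append(f"{e}: design element (from {req}) has no code trace")
    return findings

for f in trace_findings(srs_to_sds, sds_to_code):
    print(f)
```

In an actual V and V programme this kind of check runs both forward (requirements to code) and backward (code to requirements); only the forward direction is shown here.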

  14. Monitoring extensions for component-based distributed software

    NARCIS (Netherlands)

    Diakov, N.K.; Papir, Z.; van Sinderen, Marten J.; Quartel, Dick

    2000-01-01

    This paper defines a generic class of monitoring extensions to component-based distributed enterprise software. Introducing a monitoring extension to a legacy application system can be very costly. In this paper, we identify the minimum support for application monitoring within the generic

  15. Licensing process for safety-critical software-based systems

    International Nuclear Information System (INIS)

    Haapanen, P.; Korhonen, J.; Pulkkinen, U.

    2000-12-01

    System vendors nowadays propose software-based technology even for the most critical safety functions in nuclear power plants. Due to the nature of software faults and the way they cause system failures, new methods are needed for the safety and reliability evaluation of these systems. In the research project 'Programmable automation systems in nuclear power plants (OHA)', financed jointly by the Radiation and Nuclear Safety Authority (STUK), the Ministry of Trade and Industry (KTM) and the Technical Research Centre of Finland (VTT), various safety assessment methods and tools for software-based systems are developed and evaluated. As a part of the OHA work, a reference model for the licensing process for software-based safety automation systems is defined. The licensing process is defined as the set of interrelated activities whose purpose is to produce and assess evidence concerning the safety and reliability of the system/application to be licensed, and to make the decision about granting the construction and operation permits based on this evidence. The parties to the licensing process are the authority, the licensee (the utility company), system vendors and their subcontractors, and possible external independent assessors. The responsibility for producing the evidence lies in the first place with the licensee, which in most cases relies heavily on vendor expertise. The evaluation and weighing of the evidence is carried out by the authority (possibly using external experts), who can also acquire additional evidence by using their own (independent) methods and tools. A central issue in the licensing process is to combine the quality evidence about the system development process with the information acquired through tests, analyses and operational experience. The purpose of the licensing process described in this report is to act as a reference model both for the authority and the licensee when planning the licensing of individual applications.
Many of the

  17. Lessons learned from development and quality assurance of software systems at the Halden Project

    International Nuclear Information System (INIS)

    Bjorlo, T.J.; Berg, O.; Pehrsen, M.; Dahll, G.; Sivertsen, T.

    1996-01-01

    The OECD Halden Reactor Project has developed a number of software systems within its research programmes. These programmes have comprised a wide range of topics, like studies of software for safety-critical applications, development of different operator support systems, and software systems for building and implementing graphical user interfaces. The systems have ranged from simple prototypes to installations in process plants. In the development of these software systems, Halden has gained much experience in quality assurance of different types of software. This paper summarises the accumulated experience at the Halden Project in quality assurance of software systems. The different software systems being developed at the Halden Project may be grouped into three categories: plant-specific software systems (one-of-a-kind deliveries), generic software products, and safety-critical software systems. This classification has been found convenient, as the categories place different requirements on the quality assurance process. In addition, the experience from the use of software development tools and proprietary software systems at Halden is addressed. The paper also focuses on the experience gained from the complete software life cycle, starting with the software planning phase and ending with software operation and maintenance

  18. Integrated software system for low level waste management

    International Nuclear Information System (INIS)

    Worku, G.

    1995-01-01

    In the continually changing and uncertain world of low level waste management, many generators in the US are faced with the prospect of having to store their waste on site for the indefinite future. This consequently increases the set of tasks performed by the generators in the areas of packaging, characterizing, classifying, screening (if a set of acceptance criteria applies), and managing the inventory for the duration of onsite storage. When disposal sites become available, it is expected that the work will require re-evaluating the waste packages, including possible re-processing, re-packaging, or re-classifying in preparation for shipment for disposal under the regulatory requirements of the time. In this day and age, when there is wide use of computers and computer literacy is at high levels, an important waste management tool would be an integrated software system that aids waste management personnel in conducting these tasks quickly and accurately. It has become evident that such an integrated radwaste management software system offers great benefits to radwaste generators both in the US and other countries. This paper discusses one such approach to integrated radwaste management utilizing some globally accepted radiological assessment software applications
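
The screening task described in this record, checking packages against a set of acceptance criteria, can be illustrated with a small sketch; the nuclide limits and package records below are invented, not actual regulatory values:

```python
# Hypothetical waste-package screening against acceptance criteria.
# Limits (Ci/m3) and package inventories are illustrative only.

limits = {"Cs-137": 4.6, "Sr-90": 7.0, "H-3": 40.0}

packages = [
    {"id": "PKG-001", "Cs-137": 1.2, "Sr-90": 0.4},
    {"id": "PKG-002", "Cs-137": 9.8, "H-3": 5.0},
]

def screen(package, limits):
    """Return the list of criteria this package fails (empty list = acceptable)."""
    return [f"{nuc} {conc} > limit {limits[nuc]}"
            for nuc, conc in package.items()
            if nuc != "id" and conc > limits.get(nuc, float("inf"))]

for pkg in packages:
    failures = screen(pkg, limits)
    status = "ACCEPT" if not failures else "HOLD: " + "; ".join(failures)
    print(pkg["id"], status)
```

An integrated system would layer classification, inventory tracking, and reporting on top of this kind of rule check.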

  19. KAERI software verification and validation guideline for developing safety-critical software in digital I and C system of NPP

    International Nuclear Information System (INIS)

    Kim, Jang Yeol; Lee, Jang Soo; Eom, Heung Seop.

    1997-07-01

    This technical report presents a V and V guideline development methodology for safety-critical software in NPP safety systems. It presents the planning-phase V and V guideline for the NPP safety system, in addition to critical safety items such as the independence philosophy, the software safety analysis concept, commercial off-the-shelf (COTS) software evaluation criteria, and inter-relationships with other safety assurance organizations, drawing on the concepts of the existing industrial standards IEEE Std 1012 and IEEE Std 1059. The report covers the scope of the V and V guideline; the guideline framework as part of the acceptance criteria; V and V activities with task entrance and exit criteria; review and audit; testing; QA records of V and V material; configuration management; production of the software verification and validation plan; and the safety-critical software V and V methodology. (author). 11 refs

  20. System software design for the CDF Silicon Vertex Detector

    International Nuclear Information System (INIS)

    Tkaczyk, S.; Bailey, M.

    1991-11-01

    An automated system for testing and performance evaluation of the CDF Silicon Vertex Detector (SVX) data acquisition electronics is described. The SVX data acquisition chain includes the Fastbus Sequencer and the Rabbit Crate Controller and Digitizers. The Sequencer is a programmable device for which we developed a high level assembly language. Diagnostic, calibration and data acquisition programs have been developed. A distributed software package was developed in order to operate the modules. The package includes programs written in assembly and Fortran languages that are executed concurrently on the SVX Sequencer modules and either a microvax or an SSP. Test software was included to assist technical personnel during the production and maintenance of the modules. Details of the design of different components of the package are reported
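
The record mentions a high-level assembly language developed for the programmable Sequencer. As a rough illustration of what such a tool involves, here is a toy assembler for an entirely made-up instruction set; the real SVX Sequencer language is not described in this record:

```python
# Toy two-pass-free assembler: 'MNEMONIC operand' lines -> 16-bit words
# (4-bit opcode in the high nibble, 12-bit operand). Instruction set is invented.

OPCODES = {"LOAD": 0x1, "STORE": 0x2, "LOOP": 0x3, "TRIG": 0x4, "HALT": 0xF}

def assemble(source):
    words = []
    for line in source.strip().splitlines():
        line = line.split("#")[0].strip()      # drop comments and blank lines
        if not line:
            continue
        parts = line.split()
        op = OPCODES[parts[0]]
        arg = int(parts[1], 0) if len(parts) > 1 else 0   # accepts 64 or 0x10
        words.append((op << 12) | (arg & 0xFFF))
    return words

program = """
    LOAD  0x10   # channel base address
    TRIG         # fire digitizers
    LOOP  64     # read 64 channels
    HALT
"""
print([hex(w) for w in assemble(program)])
```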

  1. Advancing Software Development for a Multiprocessor System-on-Chip

    Directory of Open Access Journals (Sweden)

    Stephen Bique

    2007-06-01

    Full Text Available A low-level language is the right tool to develop applications for some embedded systems. Notwithstanding, a high-level language provides a proper environment to develop the programming tools. The target device is a system-on-chip consisting of an array of processors with only local communication. Applications include typical streaming applications for digital signal processing. We describe the hardware model and stress the advantages of a flexible device. We introduce IDEA, a graphical integrated development environment for an array. A proper foundation for software development is UML together with standard programming abstractions in object-oriented languages.

  2. Physics Detector Simulation Facility Phase II system software description

    International Nuclear Information System (INIS)

    Scipioni, B.; Allen, J.; Chang, C.; Huang, J.; Liu, J.; Mestad, S.; Pan, J.; Marquez, M.; Estep, P.

    1993-05-01

    This paper presents the Physics Detector Simulation Facility (PDSF) Phase II system software. A key element in the design of a distributed computing environment for the PDSF has been the separation and distribution of the major functions. The facility has been designed to support batch and interactive processing, and to incorporate the file and tape storage systems. By distributing these functions, it is often possible to provide higher throughput and resource availability. Similarly, the design is intended to exploit event-level parallelism in an open distributed environment

  3. XPRESS: eXascale PRogramming Environment and System Software

    Energy Technology Data Exchange (ETDEWEB)

    Brightwell, Ron [Louisiana State Univ., Baton Rouge, LA (United States); Sterling, Thomas [Louisiana State Univ., Baton Rouge, LA (United States); Koniges, Alice [Louisiana State Univ., Baton Rouge, LA (United States); Kaiser, Hartmut [Louisiana State Univ., Baton Rouge, LA (United States); Gabriel, Edgar [Louisiana State Univ., Baton Rouge, LA (United States); Porterfield, Allan [Louisiana State Univ., Baton Rouge, LA (United States); Malony, Allen [Louisiana State Univ., Baton Rouge, LA (United States)

    2017-07-14

    The XPRESS Project is one of four major projects of the DOE Office of Science Advanced Scientific Computing Research X-stack Program initiated in September 2012. The purpose of XPRESS is to devise an innovative system software stack to enable practical and useful exascale computing around the end of the decade, with near-term contributions to the efficient and scalable operation of trans-petaflops performance systems for DOE mission-critical applications in the next two to three years. To this end, XPRESS directly addresses the critical computing challenges of efficiency, scalability, and programmability through introspective methods of dynamic adaptive resource management and task scheduling.

  4. Error detection and prevention in Embedded Systems Software

    DEFF Research Database (Denmark)

    Kamel, Hani Fouad

    1996-01-01

    Despite many efforts to structure the development and design processes of embedded systems, errors are discovered at the final stages of production and sometimes after the delivery of the products. The cost of such errors can be prohibitive. Different design techniques to detect such errors...... systems, a formal model for such systems is introduced. The main characteristics of embedded systems design and the interaction of these properties are described. A taxonomy for the structure of the software developed for such systems based on the amount of processes and processors involved is presented...... will be presented. Moreover, we will try to describe the causes of these errors and the countermeasures that can be taken to avoid them. The main theme is that prevention is better than cure. The presentation is structured in three parts. The first part deals with an introduction to the subject area of embedded...

  5. Overview of MFTF supervisory control and diagnostics system software

    International Nuclear Information System (INIS)

    Ng, W.C.

    1979-01-01

    The Mirror Fusion Test Facility (MFTF) at the Lawrence Livermore Laboratory (LLL) is currently the largest mirror fusion research project in the world. Its Control and Diagnostics System is handled by a distributed computer network consisting of nine Interdata minicomputer systems and about 65 microprocessors. One of the design requirements is tolerance of single-point failure. If one of the computer systems becomes inoperative, the experiment can still be carried out, although the system responsiveness to operator command may be degraded. In a normal experiment cycle, the researcher can examine the result of the previous experiment, change any control parameter, fire a shot, collect four million bytes of diagnostics data, perform intershot analysis, and have the result presented - all within five minutes. The software approach adopted for the Supervisory Control and Diagnostics System features chief programmer teams and structured programming. Pascal is the standard programming language in this project

  6. Aiming toward perfection with POBSYS, a new software system

    International Nuclear Information System (INIS)

    Osudar, J.; Parks, J.E.; Levitz, N.M.

    1985-01-01

    An integrated general-purpose software system, POBSYS, has been developed that provides the foundation and tools for building a highly interactive system for carrying out detailed operating procedures and performing conventional process control, data acquisition, and data management functions. Features of the present system that may be of particular interest for the problem of the man-machine interface include: (a) a multi-level safety system for fail-safe operation; (b) hierarchical operational control; (c) documented responsibility; (d) equipment status tracking; and (e) quality assurance checks on operations. The system runs on commercially available microprocessors and is presently in use in the destructive analysis of irradiated fuel rods from the Light Water Breeder Reactor
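
The multi-level, fail-safe permissive idea in feature (a) can be sketched as a chain of independent checks, any one of which (or any unknown state) denies the operation; the levels and conditions below are invented for illustration:

```python
# Hypothetical multi-level fail-safe permissive chain in the spirit of POBSYS
# feature (a). Each level is an independent check; failure or missing data
# always resolves to "not permitted".

def hardware_interlock(state):   return state["door_closed"]
def process_limits(state):       return state["temp_C"] < 150
def procedure_step_ok(state):    return state["step_signed_off"]

SAFETY_LEVELS = [hardware_interlock, process_limits, procedure_step_ok]

def permissive(state):
    """Fail-safe: the operation proceeds only if every level agrees."""
    for check in SAFETY_LEVELS:
        try:
            if not check(state):
                return False
        except Exception:
            return False       # unknown or missing state -> fail safe
    return True

state = {"door_closed": True, "temp_C": 120, "step_signed_off": True}
print(permissive(state))                        # all levels agree
print(permissive({**state, "temp_C": 200}))     # process limit exceeded
print(permissive({"door_closed": True}))        # missing data fails safe
```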

  7. The legacy of biosphere 2 for the study of biospherics and closed ecological systems

    Science.gov (United States)

    Allen, J. P.; Nelson, M.; Alling, A.

    The unprecedented challenges of creating Biosphere 2, the world's first laboratory for biospherics, the study of global ecology and long-term closed ecological system dynamics, led to breakthrough developments in many fields, and a deeper understanding of the opportunities and difficulties of material closure. This paper will review accomplishments and challenges, citing some of the key research findings and publications that have resulted from the experiments in Biosphere 2. Engineering accomplishments included development of a technique for variable volume to deal with pressure differences between the facility and outside environment, developing methods of atmospheric leak detection and sealing, while achieving new standards of closure, with an annual atmospheric leak rate of less than 10%, or less than 300 ppm per day. This degree of closure permitted detailed tracking of carbon dioxide, oxygen, and trace gases such as nitrous oxide and ethylene over the seasonal variability of two years. Full closure also necessitated developing new approaches and technologies for complete air, water, and wastewater recycle and reuse within the facility. The development of a soil-based highly productive agricultural system was a first in closed ecological systems, and much was learned about managing a wide variety of crops using non-chemical means of pest and disease control. Closed ecological systems have different temporal biogeochemical cycling and ranges of atmospheric components because of their smaller reservoirs of air, water and soil, and higher concentration of biomass, and Biosphere 2 provided detailed examination and modeling of these accelerated cycles over a period of closure which measured in years. Medical research inside Biosphere 2 included the effects on humans of lowered oxygen: the discovery that human productivity can be maintained with good health with lowered atmospheric oxygen levels could lead to major economies on the design of space stations and

  8. The Legacy of Biosphere 2 for Biospherics and Closed Ecological System Research

    Science.gov (United States)

    Allen, J.; Alling, A.; Nelson, M.

    The unprecedented challenges of creating Biosphere 2, the world's first laboratory for biospherics, the study of global ecology and long-term closed ecological system dynamics led to breakthrough developments in many fields, and a deeper understanding of the opportunities and difficulties of material closure. This paper will review these accomplishments and challenges, citing some of the key research accomplishments and publications which have resulted from the experiments in Biosphere 2. Engineering accomplishments included development of a technique for variable volume to deal with pressure differences between the facility and outside environment, developing methods of leak detection and sealing, and achieving new standards of closure, with an annual atmospheric leak rate of less than 10%, or less than 300 ppm per day. This degree of closure permitted detailed tracking of carbon dioxide, oxygen, and trace gases such as nitrous oxide and ethylene over the seasonal variability of two years. Full closure also necessitated developing new approaches and technologies for complete air, water, and wastewater recycle and reuse within the facility. The development of a soil-based highly productive agricultural system was a first in closed ecological systems, and much was learned about managing a wide variety of crops using non-chemical means of pest and disease control. Closed ecological systems have different temporal biogeochemical cycling and ranges of atmospheric components because of their smaller reservoirs of air, water and soil, and higher concentration of biomass, and Biosphere 2 provided detailed examination and modeling of these accelerated cycles over a period of closure which measured in years. Medical research inside Biosphere 2 included the effects on humans of lowered oxygen: the discovery that human productivity can be maintained down to 15% oxygen could lead to major economies on the design of space stations and planetary/lunar settlements. 
The improved

  9. In-Building Wireless Distribution in legacy Multimode Fiber with an improved RoMMF system

    DEFF Research Database (Denmark)

    Visani, Davide; Petersen, Martin Nordal; Sorci, Francesca

    2012-01-01

    ). Experimental and theoretical results are reported showing that this scheme outperforms a RoMMF system employing a distributed feed-back (DFB) laser diode (LD) and/or a mode scrambler to achieve overfilled launch (OFL). Long Term Evolution (LTE) signal transmission is achieved with high quality in terms...

  10. Enabling Support of Collaborative Cross-enterprise Business Processes for Legacy ERP Systems

    Directory of Open Access Journals (Sweden)

    Gundars Alksnis

    2015-04-01

    Full Text Available In order to create innovative business products, share knowledge between people and businesses, or increase the control and quality of services, enterprise business processes more and more often become involved in collaborations by delegating or providing some pieces of work to other enterprises. The necessity to cooperate in the cross-enterprise setting leads to Collaborative Business Processes (CBPs). The difference between CBPs and Business Processes (BPs) lies in the decentralized coordination, flexible backward recovery, notification of participants about the state, efficient adaptability to changes, presence of multiple information systems, and individual authorization settings. In the paper we consider a specific case of CBPs where multiple collaborating partners use an Enterprise Resource Planning (ERP) system of the same vendor. The vendor can see (e.g., monitor) the changes of data elements, but does not have explicit process awareness in the ERP system to support the flow of activities in the cross-enterprise setting. The paper also discusses different settings of cross-enterprise CBPs and shows simplified enterprise models behind the vendor's possibilities to positively impact collaborative processes. The restrictions on the vendor are the implicit information flows in the BP, the diversity of ERP integrations with third-party Information Systems (ISs), the lack of mechanisms for monitoring BP instances, backward recovery, and user notification about the current state and tasks, and the inability to make explicit changes in customers’ ISs.

  11. Nature and statistical properties of quasar associated absorption systems in the XQ-100 Legacy Survey

    DEFF Research Database (Denmark)

    Perrotta, Serena; D'Odorico, Valentina; Prochaska, J. Xavier

    2016-01-01

    We statistically study the physical properties of a sample of narrow absorption line (NAL) systems looking for empirical evidences to distinguish between intrinsic and intervening NALs without taking into account any a priori definition or velocity cut-off. We analyze the spectra of 100 quasars...

  12. An Interpretation of Part of Gilbert Gottlieb's Legacy: Developmental Systems Theory Contra Developmental Behavior Genetics

    Science.gov (United States)

    Molenaar, Peter C. M.

    2015-01-01

    The main theme of this paper concerns the persistent critique of Gilbert Gottlieb on developmental behavior genetics and my reactions to this critique, the latter changing from rejection to complete acceptation. Concise characterizations of developmental behavior genetics, developmental systems theory (to which Gottlieb made essential…

  13. Intelligent surgical laser system configuration and software implementation

    Science.gov (United States)

    Hsueh, Chi-Fu T.; Bille, Josef F.

    1992-06-01

    An intelligent surgical laser system, which can help the ophthalmologist achieve higher precision and control during procedures, has been developed by ISL as model CLS 4001. In addition to the laser and laser delivery system, the system is also equipped with a vision system (IPU), robotics motion control (MCU), and a closed-loop tracking system (ETS) that tracks the eye in three dimensions (X, Y and Z). The initial patient setup is computer controlled with guidance from the vision system. The tracking system is automatically engaged when the target is in position. A multi-level tracking system was developed by integrating the vision and tracking systems, which keeps the laser beam precisely on target. The capabilities of automatic eye setup and tracking in three dimensions provide improved accuracy and measurement repeatability. The system is operated through the Surgical Control Unit (SCU). The SCU communicates with the IPU and the MCU through both Ethernet and RS232. Various scanning patterns (i.e., line, curve, circle, spiral, etc.) can be selected with given parameters. When a warning is activated, a voice message is played that normally requires a panel touch acknowledgement. The reliability of the system is ensured at three levels: (1) hardware, (2) software real-time monitoring, and (3) user. The system is currently under clinical validation.

  14. Using software metrics and software reliability models to attain acceptable quality software for flight and ground support software for avionic systems

    Science.gov (United States)

    Lawrence, Stella

    1992-01-01

    This paper is concerned with methods of measuring and developing quality software. Reliable flight and ground support software is a highly important factor in the successful operation of the space shuttle program. Reliability is probably the most important of the characteristics inherent in the concept of 'software quality'. It is the probability of failure-free operation of a computer program for a specified time and environment.
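
    The definition quoted above, the probability of failure-free operation over a specified time, is commonly modelled with a constant failure rate. A minimal sketch of that exponential model (a standard textbook illustration, not the metric suite the paper evaluates):

    ```python
    import math

    def reliability(failure_rate: float, t: float) -> float:
        """Probability of failure-free operation for time t under a
        constant-failure-rate (exponential) model: R(t) = exp(-lambda * t)."""
        return math.exp(-failure_rate * t)

    # Example: 0.001 failures/hour over a 100-hour mission.
    r = reliability(0.001, 100.0)
    print(round(r, 4))  # -> 0.9048
    ```

    Under this model, halving the failure rate or halving the mission time improves reliability by the same factor, which is why both are levers in reliability planning.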

  15. Architecture of high reliable control systems using complex software

    International Nuclear Information System (INIS)

    Tallec, M.

    1990-01-01

    The problems involved in the use of complex software in control systems that must ensure a very high level of safety are examined. The first part gives a brief description of the prototype of the PROSPER system. PROSPER stands for protection system for nuclear reactors with high performance. It was installed on a French nuclear power plant at the beginning of 1987 and has been in continuous operation since that time. This prototype is realized on a multi-processor system. The processors communicate with each other using interrupts and protected shared memories. On each processor, one or more protection algorithms are implemented. Those algorithms use data coming directly from the plant and, possibly, data computed by the other protection algorithms. Each processor makes its own acquisitions from the process and sends warning messages if an operating anomaly is detected. All algorithms are activated concurrently and asynchronously. The results are presented and the safety-related problems are detailed. The second part concerns measurement validation. First, we describe how the sensors' measurements are used in a protection system. Then, a method based on artificial intelligence techniques (expert systems and neural networks) is proposed. The last part addresses the architecture of systems comprising hardware and software: it details the different types of redundancy used until now and proposes a multi-processor architecture whose operating system can manage several tasks implemented on different processors, verify the correct operation of each of those tasks and of the related processors, and allow the system to carry on operating, even in a degraded manner, when a failure has been detected [fr
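
    The redundancy schemes the abstract refers to can be illustrated by the simplest classic example, a 2-out-of-3 majority voter on trip signals (a generic sketch, not the PROSPER design itself):

    ```python
    def two_out_of_three(a: bool, b: bool, c: bool) -> bool:
        """2-out-of-3 majority voter: the protective action is taken when
        at least two of three redundant channels agree, so a single failed
        channel can neither cause a spurious trip nor block a real one."""
        return (a and b) or (a and c) or (b and c)

    # One channel disagrees; the majority still trips.
    print(two_out_of_three(True, True, False))  # -> True
    ```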

  16. Transition from Legacy to Connectivity Solution for Infrastructure Control of Smart Municipal Systems

    Science.gov (United States)

    Zabasta, A.; Kunicina, N.; Kondratjevs, K.

    2017-06-01

    Collaboration between heterogeneous systems and architectures is not an easy problem in the automation domain. Utilities and suppliers currently encounter real problems due to the underestimated costs of technical solutions, frustration in selecting technical solutions relevant to local needs, and incompatibilities between a plethora of protocols and their associated solutions. The paper presents research on the creation of an architecture for smart municipal systems in a local cloud of services that applies SOA and IoT approaches. The authors of the paper have developed a broker that applies orchestration services and resides on a gateway, which provides adapter and protocol-translation functions and applies a tool for wiring together hardware devices, APIs and online services.

  17. Transition from Legacy to Connectivity Solution for Infrastructure Control of Smart Municipal Systems

    Directory of Open Access Journals (Sweden)

    Zabasta A.

    2017-06-01

    Full Text Available Collaboration between heterogeneous systems and architectures is not an easy problem in the automation domain. Utilities and suppliers currently encounter real problems due to the underestimated costs of technical solutions, frustration in selecting technical solutions relevant to local needs, and incompatibilities between a plethora of protocols and their associated solutions. The paper presents research on the creation of an architecture for smart municipal systems in a local cloud of services that applies SOA and IoT approaches. The authors of the paper have developed a broker that applies orchestration services and resides on a gateway, which provides adapter and protocol-translation functions and applies a tool for wiring together hardware devices, APIs and online services.
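
    Protocol translation of the kind such a gateway broker performs can be sketched as a small adapter. Everything below (the legacy frame layout, field names) is hypothetical, chosen only to illustrate the idea:

    ```python
    import json

    class LegacyAdapter:
        """Hypothetical protocol adapter: translates a fixed-field legacy
        telemetry frame ("ID;VALUE;UNIT") into the JSON payload a
        service-oriented broker could forward to other services."""

        def translate(self, frame: str) -> str:
            sensor_id, value, unit = frame.split(";")
            return json.dumps({"sensor": sensor_id,
                               "value": float(value),
                               "unit": unit})

    broker = LegacyAdapter()
    print(broker.translate("FLOW-01;12.5;l/s"))
    # -> {"sensor": "FLOW-01", "value": 12.5, "unit": "l/s"}
    ```

    A real gateway would register one such adapter per legacy protocol, so the broker and downstream services only ever see the normalized form.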

  18. Guidelines for the verification and validation of expert system software and conventional software. Volume 7, User's manual: Final report

    International Nuclear Information System (INIS)

    Miller, L.A.; Hayes, J.E.; Mirsky, S.M.

    1995-05-01

    Reliable software is required for nuclear power industry applications. Verification and validation techniques applied during the software development process can help eliminate errors that could inhibit the proper operation of digital systems and cause availability and safety problems. Most of the techniques described in this report are valid for conventional software systems as well as for expert systems. The project resulted in a set of 16 V&V guideline packages and 11 sets of procedures based on the class, development phase, and system component being tested. These guideline packages and procedures help a utility define the level of V&V, which involves evaluating the complexity and type of software component along with the consequences of failure. In all, the project identified 153 V&V techniques for conventional software systems and demonstrated their application to all aspects of expert systems except for the knowledge base, which requires specially developed tools. Each of these conventional techniques covers from 2 to 52 types of conventional software defects, and each defect is covered by 21 to 50 V&V techniques. The project also identified automated tools to support V&V activities

  19. The health systems funding platform and World Bank legacy: the gap between rhetoric and reality

    Science.gov (United States)

    2013-01-01

    Global health partnerships created to encourage funding efficiencies need to be approached with some caution, with claims for innovation and responsiveness to development needs based on untested assumptions around the potential of some partners to adapt their application, funding and evaluation procedures within these new structures. We examine this in the case of the Health Systems Funding Platform, which despite being set up some three years earlier, has stalled at the point of implementation of its key elements of collaboration. While much of the attention has been centred on the suspension of the Global Fund’s Round 11, and what this might mean for health systems strengthening and the Platform more broadly, we argue that inadequate scrutiny has been made of the World Bank’s contribution to this partnership, which might have been reasonably anticipated based on an historical analysis of development perspectives. Given the tensions being created by the apparent vulnerability of the health systems strengthening agenda, and the increasing rhetoric around the need for greater harmonization in development assistance, an examination of the positioning of the World Bank in this context is vital. PMID:23497327

  20. Computer systems and software description for gas characterization system

    International Nuclear Information System (INIS)

    Vo, C.V.

    1997-01-01

    The Gas Characterization System Project was commissioned by TWRS management, with funding from TWRS Safety, on December 1, 1994. The project objective is to establish an instrumentation system to measure flammable gas concentrations in the vapor space of selected watch list tanks, starting with tanks AN-105 and AW-101. Data collected by this system are meant to support first tank characterization, then tank safety. The system design is premised upon characterization rather than mitigation; therefore, redundancy is not required

  1. Technical description of the burn-up software system MOP

    International Nuclear Information System (INIS)

    Schutte, C.K.

    1991-05-01

    The burn-up software system MOP is a research tool primarily intended to study the behaviour of fission products in any reactor composition. Input data are multi-group cross-sections and data concerning the nuclide chains. An option is available to calculate a fundamental mode neutron spectrum for the specified reactor composition. A separate program can test the consistency of the specified nuclide chains. Options are available to calculate time-dependent cross-sections of lumped fission products and to take account of the leakage of gaseous fission products from the reactor core. The system is written in FORTRAN77 for a CYBER computer, using the operating system NOS/BE. The report gives a detailed technical description of the applied algorithms and the flow and storage of data. Information is provided for adapting the system to other computer configurations. (author). 5 refs.; 11 figs
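
    Nuclide-chain calculations of the kind MOP performs reduce, in the simplest case, to the Bateman solution for a parent-daughter pair. A sketch under that simplification (pure decay, no neutron-induced terms; this is the textbook formula, not the MOP algorithm):

    ```python
    import math

    def daughter_population(n1_0: float, lam1: float, lam2: float, t: float) -> float:
        """Bateman solution for a two-member decay chain
        parent -> daughter -> (removed), starting from n1_0 parent atoms:
        N2(t) = N1(0) * lam1/(lam2 - lam1) * (exp(-lam1*t) - exp(-lam2*t)).
        Assumes lam1 != lam2."""
        return n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))

    # 1000 parent atoms, decay constants 0.1 and 0.5 per unit time, at t = 2.
    print(round(daughter_population(1000.0, 0.1, 0.5, 2.0), 2))
    ```

    Longer chains are handled by summing terms of this form, or numerically when cross-sections vary with time, as in a burn-up code.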

  2. The software for the CERN LEP beam orbit measurement system

    International Nuclear Information System (INIS)

    Morpurgo, G.

    1992-01-01

    The Beam Orbit Measurement (BOM) system of LEP consists of 504 pickups, distributed all around the accelerator, that are capable of measuring the positions of the two beams. Their activity has to be synchronized, and the data produced by them have to be collected together, for example to form a 'closed orbit measurement' or a 'trajectory measurement'. On the user side, several clients can access simultaneously the results from this instrument. An automatic acquisition mode, and an 'on request' one, can run in parallel. This results in a very flexible and powerful system. The functionality of the BOM system is fully described, as well as the structure of the software processes which constitute the system, and their interconnections. Problems solved during the implementation are emphasized. (author)

  3. The Impact of Autonomous Systems Technology on JPL Mission Software

    Science.gov (United States)

    Doyle, Richard J.

    2000-01-01

    This paper discusses the following topics: (1) Autonomy for Future Missions- Mars Outposts, Titan Aerobot, and Europa Cryobot / Hydrobot; (2) Emergence of Autonomy- Remote Agent Architecture, Closing Loops Onboard, and New Millennium Flight Experiment; and (3) Software Engineering Challenges- Influence of Remote Agent, Scalable Autonomy, Autonomy Software Validation, Analytic Verification Technology, and Autonomy and Software Engineering.

  4. A Methodological Framework for Software Safety in Safety Critical Computer Systems

    OpenAIRE

    P. V. Srinivas Acharyulu; P. Seetharamaiah

    2012-01-01

    Software safety must deal with the principles of safety management, safety engineering and software engineering for developing safety-critical computer systems, with the target of making the system safe, risk-free and fail-safe, in addition to providing a clear differentiation for assessing and evaluating the risk under the principles of software risk management. Problem statement: Prevailing software quality models and standards do not adequately address the software safety ...

  5. An architectural model for software testing lesson learned systems

    OpenAIRE

    Pazos Sierra, Juan; Andrade, Javier; Ares Casal, Juan M.; Martínez Rey, María Aurora; Rodríguez, Santiago; Romera, Julio; Suárez, Sonia

    2013-01-01

    Software testing is a key aspect of software reliability and quality assurance in a context where software development constantly has to overcome mammoth challenges in a continuously changing environment. One of the characteristics of software testing is that it has a large intellectual capital component and can thus benefit from the use of the experience gained from past projects. Software testing can, then, potentially benefit from solutions provided by the knowledge management discipline. ...

  6. Modernization of tank floor scanning system (TAFLOSS) Software

    International Nuclear Information System (INIS)

    Mohd Fitri Abd Rahman; Jaafar Abdullah; Zainul A Hassan

    2002-01-01

    The main objective of the project is to develop new user-friendly software that combines the second-generation software (developed in-house) and commercial software. This paper describes the development of computer codes for analysing the initial data and plotting an exponential curve fit. The method used for curve fitting is the least-squares technique. The software that has been developed is capable of giving results comparable to the commercial software. (Author)
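
    A least-squares exponential fit of the kind described can be sketched by linearising the model and solving the normal equations (a generic illustration, not the TAFLOSS code):

    ```python
    import math

    def fit_exponential(xs, ys):
        """Least-squares fit of y = A * exp(b * x) by linearising:
        ln(y) = ln(A) + b * x, then solving the 2x2 normal equations
        of ordinary linear regression on (x, ln y)."""
        n = len(xs)
        lys = [math.log(y) for y in ys]
        sx, sy = sum(xs), sum(lys)
        sxx = sum(x * x for x in xs)
        sxy = sum(x * ly for x, ly in zip(xs, lys))
        b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        ln_a = (sy - b * sx) / n
        return math.exp(ln_a), b

    # Recover A = 2, b = 0.5 from noise-free samples.
    xs = [0.0, 1.0, 2.0, 3.0]
    ys = [2.0 * math.exp(0.5 * x) for x in xs]
    a, b = fit_exponential(xs, ys)
    print(round(a, 3), round(b, 3))  # -> 2.0 0.5
    ```

    The log transform weights small y-values more heavily than a direct nonlinear fit would, which is acceptable for clean scanner data but worth noting for noisy measurements.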

  7. Advanced software tools for digital loose part monitoring systems

    International Nuclear Information System (INIS)

    Ding, Y.

    1996-01-01

    The paper describes two software modules as analysis tools for digital loose part monitoring systems. The first module is called the acoustic module, which utilizes the multi-media features of modern personal computers to replay the digitally stored short-time bursts with sufficient length and in good quality. This is possible due to the so-called puzzle technique developed at ISTec. The second module is called the classification module, which calculates advanced burst parameters and classifies the acoustic events into pre-defined classes with the help of an artificial multi-layer perceptron neural network trained with the back-propagation algorithm. (author). 7 refs, 7 figs
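
    The classifier family named above, a multi-layer perceptron, can be illustrated by its forward pass; back-propagation training is omitted, and the weights and feature layout below are purely illustrative:

    ```python
    import math

    def mlp_forward(x, w1, b1, w2, b2):
        """Forward pass of a small multi-layer perceptron: one hidden
        layer, sigmoid activations throughout. Each layer computes
        sigmoid(W @ x + b) row by row."""
        sig = lambda v: 1.0 / (1.0 + math.exp(-v))
        hidden = [sig(sum(wi * xi for wi, xi in zip(row, x)) + bi)
                  for row, bi in zip(w1, b1)]
        return [sig(sum(wi * hi for wi, hi in zip(row, hidden)) + bi)
                for row, bi in zip(w2, b2)]

    # Two burst features -> two hidden units -> one class score in (0, 1).
    out = mlp_forward([0.8, 0.2],
                      w1=[[1.0, -1.0], [0.5, 0.5]], b1=[0.0, 0.0],
                      w2=[[1.0, 1.0]], b2=[-1.0])
    print(0.0 < out[0] < 1.0)  # -> True
    ```

    In a monitoring system the inputs would be the computed burst parameters, and the trained output scores would be thresholded into the pre-defined event classes.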

  8. New Abstract Submission Software System for AGU Meetings

    Science.gov (United States)

    Ward, Joanna

    2009-07-01

    New software for submitting abstracts has been deployed by AGU for the 2009 Fall Meeting. “Abstract Central” is a simplified interface providing a secure, complete method for abstract submission with easy-to-follow steps and a fresh look. A major component of the system will be an itinerary planner, downloadable to mobile devices, to help meeting attendees schedule their time at AGU conferences. Increased access to customer service is a key element that abstract submitters will find especially helpful. A call center, as well as 24-hour Web-based and e-mail technical support, will be available to help members.

  9. Project W-211, initial tank retrieval systems, retrieval control system software configuration management plan

    International Nuclear Information System (INIS)

    RIECK, C.A.

    1999-01-01

    This Software Configuration Management Plan (SCMP) provides the instructions for change control of the W-211 Project, Retrieval Control System (RCS) software after initial approval/release but prior to the transfer of custody to the waste tank operations contractor. This plan applies to the W-211 system software developed by the project, consisting of the computer human-machine interface (HMI) and programmable logic controller (PLC) software source and executable code, for production use by the waste tank operations contractor. The plan encompasses that portion of the W-211 RCS software represented on project-specific AUTOCAD drawings that are released as part of the C1 definitive design package (these drawings are identified on the drawing list associated with each C-1 package), and the associated software code. Implementation of the plan is required for formal acceptance testing and production release. The software configuration management plan does not apply to reports and data generated by the software except where specifically identified. Control of information produced by the software once it has been transferred for operation is the responsibility of the receiving organization

  10. Integrated software system for improving medical equipment management.

    Science.gov (United States)

    Bliznakov, Z; Pappous, G; Bliznakova, K; Pallikarakis, N

    2003-01-01

    The evolution of biomedical technology has led to an extraordinary use of medical devices in health care delivery. During the last decade, clinical engineering departments (CEDs) turned toward computerization and application of specific software systems for medical equipment management in order to improve their services and monitor outcomes. Recently, much emphasis has been given to patient safety. Through its Medical Device Directives, the European Union has required all member nations to use a vigilance system to prevent the reoccurrence of adverse events that could lead to injuries or death of patients or personnel as a result of equipment malfunction or improper use. The World Health Organization also has made this issue a high priority and has prepared a number of actions and recommendations. In the present work, a new integrated, Windows-oriented system is proposed, addressing all tasks of CEDs but also offering a global approach to their management needs, including vigilance. The system architecture is based on a star model, consisting of a central core module and peripheral units. Its development has been based on the integration of 3 software modules, each one addressing specific predefined tasks. The main features of this system include equipment acquisition and replacement management, inventory archiving and monitoring, follow up on scheduled maintenance, corrective maintenance, user training, data analysis, and reports. It also incorporates vigilance monitoring and information exchange for adverse events, together with a specific application for quality-control procedures. The system offers clinical engineers the ability to monitor and evaluate the quality and cost-effectiveness of the service provided by means of quality and cost indicators. Particular emphasis has been placed on the use of harmonized standards with regard to medical device nomenclature and classification.
The system's practical applications have been demonstrated through a pilot

  11. Software requirements definition Shipping Cask Analysis System (SCANS)

    International Nuclear Information System (INIS)

    Johnson, G.L.; Serbin, R.

    1985-01-01

    The US Nuclear Regulatory Commission (NRC) staff reviews the technical adequacy of applications for certification of designs of shipping casks for spent nuclear fuel. In order to confirm an acceptable design, the NRC staff may perform independent calculations. The current NRC procedure for confirming cask design analyses is laborious and tedious. Most of the work is currently done by hand or through the use of a remote computer network. The time required to certify a cask can be long. The review process may vary somewhat with the engineer doing the reviewing. Similarly, the documentation on the results of the review can also vary with the reviewer. To increase the efficiency of this certification process, LLNL was requested to design and write an integrated set of user-oriented, interactive computer programs for a personal microcomputer. The system is known as the NRC Shipping Cask Analysis System (SCANS). The computer codes and the software system supporting these codes are being developed and maintained for the NRC by LLNL. The objective of this system is generally to lessen the time and effort needed to review an application. Additionally, an objective of the system is to assure standardized methods and documentation of the confirmatory analyses used in the review of these cask designs. A software system should be designed based on NRC-defined requirements contained in a requirements document. The requirements document is a statement of a project's wants and needs as the users and implementers jointly understand them. The requirements document states the desired end products (i.e. WHAT's) of the project, not HOW the project provides them. This document describes the wants and needs for the SCANS system. 1 fig., 3 tabs

  12. Performance evaluation of communication software systems for distributed computing

    Science.gov (United States)

    Fatoohi, R. A.

    1997-09-01

    In recent years there has been an increasing interest in object-oriented distributed computing since it is better equipped to deal with complex systems while providing extensibility, maintainability and reusability. At the same time, several new high-speed network technologies have emerged for local and wide area networks. However, the performance of networking software is not improving as fast as the networking hardware and the workstation microprocessors. This paper gives an overview and evaluates the performance of the Common Object Request Broker Architecture (CORBA) standard in a distributed computing environment at NASA Ames Research Center. The environment consists of two testbeds of SGI workstations connected by four networks: Ethernet, FDDI, HiPPI and ATM. The performance results for three communication software systems are presented, analysed and compared. These systems are: BSD socket programming interface, IONA's Orbix, an implementation of the CORBA specification and the PVM message passing library. The results show that high-level communication interfaces, such as CORBA and PVM, can achieve reasonable performance under certain conditions.
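
    Round-trip micro-benchmarks of the kind used in such comparisons can be sketched with a local socket pair (a generic measurement loop, not the paper's actual test harness; real comparisons would run the same loop over each messaging layer and network):

    ```python
    import socket
    import time

    def round_trip_us(payload: bytes, iterations: int = 1000) -> float:
        """Mean round-trip time in microseconds over a local socket pair:
        the kind of micro-benchmark used to compare messaging layers such
        as raw BSD sockets, a CORBA ORB, or a message-passing library."""
        a, b = socket.socketpair()
        start = time.perf_counter()
        for _ in range(iterations):
            a.sendall(payload)
            b.recv(len(payload))   # echo back
            b.sendall(payload)
            a.recv(len(payload))
        elapsed = time.perf_counter() - start
        a.close()
        b.close()
        return elapsed / iterations * 1e6

    print(round_trip_us(b"x" * 64) > 0.0)  # -> True
    ```

    Varying the payload size separates per-message overhead (dominant for small messages, where high-level layers pay most) from throughput limits (dominant for large messages).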

  13. Porting and redesign of Geotool software system to Qt

    Science.gov (United States)

    Miljanovic Tamarit, V.; Carneiro, L.; Henson, I. H.; Tomuta, E.

    2016-12-01

    Geotool is a software system that allows a user to interactively display and process seismoacoustic data from International Monitoring System (IMS) stations. Geotool can be used to perform a number of analysis and review tasks, including data I/O, waveform filtering, quality control, component rotation, amplitude and arrival measurement and review, array beamforming, correlation, Fourier analysis, FK analysis, event review and location, particle motion visualization, polarization analysis, instrument response convolution/deconvolution, real-time display, signal to noise measurement, spectrogram, and travel time model display. The Geotool program was originally written in C using the X11/Xt/Motif libraries for graphics. It was later ported to C++. Now the program is being ported to the Qt graphics system to be more compatible with the other software in the International Data Centre (IDC). Along with this port, a redesign of the architecture is underway to achieve a separation between user interface, control, and data model elements, in line with design patterns such as Model-View-Controller. Qt is a cross-platform application framework that will allow Geotool to easily run on Linux, Mac, and Windows. The Qt environment includes modern libraries and user interfaces for standard utilities such as file and database access, printing, and inter-process communications. The Qt Widgets for Technical Applications library (QWT) provides tools for displaying standard data analysis graphics.
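
    The Model-View-Controller separation the port aims for can be illustrated with a minimal observer-style model that notifies its views on change (names and data are illustrative, not the Geotool/Qt API):

    ```python
    class Model:
        """Minimal Model in a Model-View-Controller split: holds waveform
        samples and notifies every attached view when the data change,
        so views never reach into the data store directly."""

        def __init__(self):
            self._views = []
            self._samples = []

        def attach(self, view):
            self._views.append(view)

        def set_samples(self, samples):
            self._samples = list(samples)
            for view in self._views:
                view.refresh(self._samples)

    class TextView:
        """A trivial view: renders a one-line summary on each refresh."""

        def __init__(self):
            self.last = None

        def refresh(self, samples):
            self.last = f"{len(samples)} samples, max={max(samples)}"

    model, view = Model(), TextView()
    model.attach(view)
    model.set_samples([0.1, 0.7, 0.3])
    print(view.last)  # -> 3 samples, max=0.7
    ```

    The payoff of the split is that a Qt waveform widget, a console view, and a test double can all attach to the same model without the model knowing which toolkit is drawing.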

  14. Software releases management for TDAQ system in ATLAS experiment

    CERN Document Server

    Kazarov, A; The ATLAS collaboration; Hauser, R; Soloviev, I

    2010-01-01

    ATLAS is a general-purpose experiment in high-energy physics at the Large Hadron Collider at CERN. The ATLAS Trigger and Data Acquisition (TDAQ) system is a distributed computing system which is responsible for transferring and filtering the physics data from the experiment to mass storage. TDAQ software has been developed since 1998 by a team of a few dozen developers. It is used for the integration of all ATLAS subsystems participating in data-taking, providing the framework and API for building the s/w pieces of the TDAQ system. It is currently composed of more than 200 s/w packages which are available to ATLAS users in the form of regular software releases. The s/w is available for development on a shared filesystem and on test beds, and it is deployed to the ATLAS pit where it is used for data-taking. The paper describes the working model, the policies and the tools which are used by s/w developers and s/w librarians in order to develop, release, deploy and maintain the TDAQ s/w for the long period of development, commissioning and runnin...

  15. 75 FR 43206 - In the Matter of Certain Wireless Communications System Server Software, Wireless Handheld...

    Science.gov (United States)

    2010-07-23

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-706] In the Matter of Certain Wireless Communications System Server Software, Wireless Handheld Devices and Battery Packs: Notice of Commission... United States after importation of certain wireless communications system server software, wireless...

  16. Software Verification and Validation Test Report for the HEPA filter Differential Pressure Fan Interlock System

    International Nuclear Information System (INIS)

    ERMI, A.M.

    2000-01-01

    The HEPA Filter Differential Pressure Fan Interlock System PLC ladder logic software was tested using a Software Verification and Validation (V&V) Test Plan as required by the ''Computer Software Quality Assurance Requirements''. The purpose of this document is to report on the results of the software qualification

  17. Modeling Physical Systems Using Vensim PLE Systems Dynamics Software

    Science.gov (United States)

    Widmark, Stephen

    2012-01-01

    Many physical systems are described by time-dependent differential equations or systems of such equations. This makes it difficult for students in an introductory physics class to solve many real-world problems since these students typically have little or no experience with this kind of mathematics. In my high school physics classes, I address…
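
    Time-dependent differential equations of the kind these models contain are integrated numerically; fixed-step Euler integration, the scheme system-dynamics tools commonly default to for stock-and-flow models, can be sketched as:

    ```python
    def euler(f, y0, t0, t1, steps):
        """Fixed-step Euler integration of dy/dt = f(t, y):
        repeatedly advance y by h * f(t, y), where h is the step size.
        Simple and transparent, at the cost of first-order accuracy."""
        h = (t1 - t0) / steps
        t, y = t0, y0
        for _ in range(steps):
            y += h * f(t, y)
            t += h
        return y

    # Exponential decay dy/dt = -0.5 * y, y(0) = 1, integrated to t = 2;
    # the exact answer is exp(-1) ~ 0.3679.
    approx = euler(lambda t, y: -0.5 * y, 1.0, 0.0, 2.0, 10000)
    print(round(approx, 4))  # -> 0.3679
    ```

    This is the same computation a student sets up graphically in a systems-dynamics tool: the stock is y, the flow is f(t, y), and the time step is h.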

  18. Software Engineering of Component-Based Systems-of-Systems: A Reference Framework

    OpenAIRE

    Loiret, Frédéric; Rouvoy, Romain; Seinturier, Lionel; Merle, Philippe

    2011-01-01

    CORE A.; International audience; Systems-of-Systems (SoS) are complex infrastructures, which are characterized by a wide diversity of technologies and requirements imposed by the domain(s) they target. In this context, the software engineering community has been focusing on assisting the developers by providing them domain-specific languages, component-based software engineering frameworks and tools to leverage on the design and the development of such systems. However, the adoption of such a...

  19. A requirements specification for a software design support system

    Science.gov (United States)

    Noonan, Robert E.

    1988-01-01

    Most existing software design systems (SDSS) support the use of only a single design methodology. A good SDSS should support a wide variety of design methods and languages including structured design, object-oriented design, and finite state machines. It might seem that a multiparadigm SDSS would be expensive in both time and money to construct. However, it is proposed that instead an extensible SDSS that directly implements only minimal database and graphical facilities be constructed. In particular, it should not directly implement tools to facilitate language definition and analysis. It is believed that such a system could be rapidly developed and put into limited production use, with the experience gained used to refine and evolve the system over time.

  20. Bridging software engineering gaps towards system of systems development

    OpenAIRE

    Marcelo Augusto Ramos

    2014-01-01

    While there is a growing recognition of the importance of Systems of Systems (SoS), there is still little agreement on just what they are or by what principles they should be constructed. Actually, there are numerous SoS definitions in the literature. The difficulty in specifying what the constituent systems are, what they are supposed to do, and how they are going to do it frequently leads SoS initiatives to complete failure. Guided by a sample SoS that comprises all the distinguishing SoS...

  1. High Energy Physics Forum for Computational Excellence: Working Group Reports (I. Applications Software II. Software Libraries and Tools III. Systems)

    Energy Technology Data Exchange (ETDEWEB)

    Habib, Salman [Argonne National Lab. (ANL), Argonne, IL (United States); Roser, Robert [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); LeCompte, Tom [Argonne National Lab. (ANL), Argonne, IL (United States); Marshall, Zach [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Borgland, Anders [SLAC National Accelerator Lab., Menlo Park, CA (United States); Viren, Brett [Brookhaven National Lab. (BNL), Upton, NY (United States); Nugent, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Asai, Makato [SLAC National Accelerator Lab., Menlo Park, CA (United States); Bauerdick, Lothar [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Finkel, Hal [Argonne National Lab. (ANL), Argonne, IL (United States); Gottlieb, Steve [Indiana Univ., Bloomington, IN (United States); Hoeche, Stefan [SLAC National Accelerator Lab., Menlo Park, CA (United States); Sheldon, Paul [Vanderbilt Univ., Nashville, TN (United States); Vay, Jean-Luc [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Elmer, Peter [Princeton Univ., NJ (United States); Kirby, Michael [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Patton, Simon [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Potekhin, Maxim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yanny, Brian [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Calafiura, Paolo [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dart, Eli [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Gutsche, Oliver [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Izubuchi, Taku [Brookhaven National Lab. (BNL), Upton, NY (United States); Lyon, Adam [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Petravick, Don [Univ. of Illinois, Urbana-Champaign, IL (United States). 
National Center for Supercomputing Applications (NCSA)

    2015-10-29

    Computing plays an essential role in all aspects of high energy physics. As computational technology evolves rapidly in new directions, and data throughput and volume continue to follow a steep trend-line, it is important for the HEP community to develop an effective response to a series of expected challenges. In order to help shape the desired response, the HEP Forum for Computational Excellence (HEP-FCE) initiated a roadmap planning activity with two key overlapping drivers -- 1) software effectiveness, and 2) infrastructure and expertise advancement. The HEP-FCE formed three working groups, 1) Applications Software, 2) Software Libraries and Tools, and 3) Systems (including systems software), to provide an overview of the current status of HEP computing and to present findings and opportunities for the desired HEP computational roadmap. The final versions of the reports are combined in this document, and are presented along with introductory material.

  2. High Energy Physics Forum for Computational Excellence: Working Group Reports (I. Applications Software II. Software Libraries and Tools III. Systems)

    Energy Technology Data Exchange (ETDEWEB)

    Habib, Salman [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Roser, Robert [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)

    2015-10-28

    Computing plays an essential role in all aspects of high energy physics. As computational technology evolves rapidly in new directions, and data throughput and volume continue to follow a steep trend-line, it is important for the HEP community to develop an effective response to a series of expected challenges. In order to help shape the desired response, the HEP Forum for Computational Excellence (HEP-FCE) initiated a roadmap planning activity with two key overlapping drivers -- 1) software effectiveness, and 2) infrastructure and expertise advancement. The HEP-FCE formed three working groups, 1) Applications Software, 2) Software Libraries and Tools, and 3) Systems (including systems software), to provide an overview of the current status of HEP computing and to present findings and opportunities for the desired HEP computational roadmap. The final versions of the reports are combined in this document, and are presented along with introductory material.

  3. The simulation library of the Belle II software system

    Science.gov (United States)

    Kim, D. Y.; Ritter, M.; Bilka, T.; Bobrov, A.; Casarosa, G.; Chilikin, K.; Ferber, T.; Godang, R.; Jaegle, I.; Kandra, J.; Kodys, P.; Kuhr, T.; Kvasnicka, P.; Nakayama, H.; Piilonen, L.; Pulvermacher, C.; Santelj, L.; Schwenker, B.; Sibidanov, A.; Soloviev, Y.; Starič, M.; Uglov, T.

    2017-10-01

SuperKEKB, the next-generation B factory, has been constructed in Japan as an upgrade of KEKB. This brand new e+ e- collider is expected to deliver a very large data set for the Belle II experiment, 50 times larger than the previous Belle sample. Both the triggered physics event rate and the background event rate will increase by at least a factor of 10 over the previous ones, creating a challenging data-taking environment for the Belle II detector. The software system of the Belle II experiment is designed to meet this ambitious plan. A full detector simulation library, which is part of the Belle II software system, has been created based on Geant4 and tested thoroughly. Recently the library was upgraded to Geant4 version 10.1. The library behaves as expected and is actively used in producing Monte Carlo data sets for various studies. In this paper, we explain the structure of the simulation library and the various interfaces to other packages, including geometry and beam background simulation.

  4. Architecture for Payload Planning System (PPS) Software Distribution

    Science.gov (United States)

    Howell, Eric; Hagopian, Jeff

    1995-01-01

The complex and diverse nature of the payload operations to be performed on the Space Station requires a robust and flexible planning approach and the proper software tools to support that approach. To date, the planning software for most manned operations in space has been utilized in a centralized planning environment. Centralized planning is characterized by the following: performed by a small team of people, performed at a single location, and performed using single-user planning systems. This approach, while valid for short-duration flights, is not conducive to the long-duration and highly distributed payload operations environment of the Space Station. The Payload Planning System (PPS) is being designed specifically to support the planning needs of the large number of geographically distributed users of the Space Station. This paper provides a general description of the distributed planning architecture that PPS must support and describes the concepts proposed for making PPS available to the Space Station payload user community.

  5. VisualEyes: a modular software system for oculomotor experimentation.

    Science.gov (United States)

Guo, Yi; Kim, Eun H.; Alvarez, Tara L.

    2011-03-25

Eye movement studies have provided a strong foundation for understanding how the brain acquires visual information in both the normal and dysfunctional brain.(1) However, developing a platform to present stimuli and store eye movements can require substantial programming, time, and cost. Many systems do not offer the flexibility to program numerous stimuli for a variety of experimental needs. The VisualEyes System, however, has a flexible architecture, allowing the operator to choose any background and foreground stimulus, program one or two screens for tandem or opposing eye movements, and stimulate the left and right eye independently. This system can significantly reduce the programming development time needed to conduct an oculomotor study. The VisualEyes System will be discussed in three parts: 1) the oculomotor recording device that acquires eye movement responses, 2) the VisualEyes software, written in LabView, which generates an array of stimuli and stores responses as text files, and 3) offline data analysis. Eye movements can be recorded by several types of instrumentation, such as a limbus tracking system, a scleral search coil, or a video image system. Typical eye movement stimuli, such as saccadic steps, vergence ramps, and vergence steps, will be shown with the corresponding responses. In this video report, we demonstrate the flexibility of the system to create numerous visual stimuli and record eye movements, which can be utilized by basic scientists and clinicians to study healthy as well as clinical populations.
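
The offline analysis of step responses typically begins by locating movement onset. A minimal sketch of velocity-threshold onset detection follows; the trace layout, the 500 Hz sample rate, and the 30 deg/s threshold are illustrative assumptions, not VisualEyes specifics.

```python
# Hypothetical offline-analysis step: find movement onset in a recorded
# eye-position trace via a simple velocity threshold. The trace format and
# threshold are assumptions for illustration, not the VisualEyes file format.

def detect_onset(positions, sample_rate_hz, threshold_deg_per_s=30.0):
    """Return the index of the first sample whose instantaneous velocity
    exceeds the threshold, or None if the eye never moves that fast."""
    dt = 1.0 / sample_rate_hz
    for i in range(1, len(positions)):
        velocity = abs(positions[i] - positions[i - 1]) / dt
        if velocity > threshold_deg_per_s:
            return i
    return None

# A 4-degree step response sampled at 500 Hz: flat, then a rapid movement.
trace = [0.0] * 100 + [0.5, 1.5, 3.0, 3.8, 4.0] + [4.0] * 50
onset = detect_onset(trace, sample_rate_hz=500)
latency_ms = onset / 500 * 1000  # onset at sample 100 -> 200 ms latency
```

The same scan generalizes to saccade detection in the stored text-file responses, with the threshold tuned to the instrumentation's noise floor.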

  6. KAERI software safety guideline for developing safety-critical software in digital instrumentation and control system of nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jang Soo; Kim, Jang Yeol; Eum, Heung Seop

    1997-07-01

Recently, safety planning for safety-critical software systems has come to be recognized as the most important phase of the software life cycle, and new regulatory positions and standards are being developed by regulatory and standardization organizations. The requirements for software important to the safety of nuclear reactors are described in such positions and standards. Most of them state mandatory requirements, what shall be done, for safety-critical software. For the developers of such software, however, there have been many controversial issues between the licenser and the licensee on whether the work practices satisfy the regulatory requirements, and on how to justify the safety of a system developed under those practices. We believe this is caused by the gap between the mandatory requirements (what) and the work practices (how). We have developed a guideline to fill that gap, which can be useful for both licenser and licensee in justifying safety during the planning phase of developing software for nuclear reactor protection systems. (author). 67 refs., 13 tabs., 2 figs.

  7. KAERI software safety guideline for developing safety-critical software in digital instrumentation and control system of nuclear power plant

    International Nuclear Information System (INIS)

    Lee, Jang Soo; Kim, Jang Yeol; Eum, Heung Seop.

    1997-07-01

Recently, safety planning for safety-critical software systems has come to be recognized as the most important phase of the software life cycle, and new regulatory positions and standards are being developed by regulatory and standardization organizations. The requirements for software important to the safety of nuclear reactors are described in such positions and standards. Most of them state mandatory requirements, what shall be done, for safety-critical software. For the developers of such software, however, there have been many controversial issues between the licenser and the licensee on whether the work practices satisfy the regulatory requirements, and on how to justify the safety of a system developed under those practices. We believe this is caused by the gap between the mandatory requirements (what) and the work practices (how). We have developed a guideline to fill that gap, which can be useful for both licenser and licensee in justifying safety during the planning phase of developing software for nuclear reactor protection systems. (author). 67 refs., 13 tabs., 2 figs.

  8. Use of modern software - based instrumentation in safety critical systems

    International Nuclear Information System (INIS)

    Emmett, J.; Smith, B.

    2005-01-01

Many nuclear power plants are now ageing and in need of various degrees of refurbishment. Installed instrumentation usually uses out-of-date 'analogue' technology and is often no longer available in the marketplace. New-technology instrumentation is generally unqualified for nuclear use, and the new 'smart' technology in particular contains 'firmware' (effectively SOUP, Software of Uncertain Pedigree), which must be assessed in accordance with relevant safety standards before it may be used in a safety application. Particular standards are IEC 61508 [1] and the British Energy (BE) PES (Programmable Electronic Systems) guidelines EPD/GEN/REP/0277/97 [2]. This paper outlines a new instrument evaluation system, which has been developed in conjunction with the UK nuclear industry. The paper concludes with a discussion of on-line monitoring of smart instrumentation in safety-critical applications. (author)

  9. W-026 acceptance test plan plant control system software (submittal #216)

    Energy Technology Data Exchange (ETDEWEB)

    Watson, T.L., Fluor Daniel Hanford

    1997-02-14

Acceptance testing of the WRAP 1 Plant Control System software will be conducted throughout the construction of WRAP 1, with final testing of the glovebox software completed in December 1996. The software tests will be broken out into five sections: one for each of the four Local Control Units and one for the supervisory software modules. The acceptance test report will contain completed copies of the software tests along with the applicable test log and completed Exception Test Reports.

  10. Safety review on unit testing of safety system software of nuclear power plant

    International Nuclear Information System (INIS)

    Liu Le; Zhang Qi

    2013-01-01

    Software unit testing has an important place in the testing of safety system software of nuclear power plants, and in the wider scope of the verification and validation. It is a comprehensive, systematic process, and its documentation shall meet the related requirements. When reviewing software unit testing, attention should be paid to the coverage of software safety requirements, the coverage of software internal structure, and the independence of the work. (authors)
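
The coverage concerns above can be made concrete with a toy sketch; the trip function and the requirement IDs are invented for illustration and are not taken from any reviewed software. Each test case carries the safety requirement it verifies, so a reviewer can check both requirement coverage and that every branch of the unit is exercised.

```python
# Illustrative sketch (hypothetical unit and requirement IDs): tagging unit
# tests with the safety requirement they exercise, so a review can confirm
# that every requirement is covered and both branches of the unit are hit.

def trip_signal(pressure, limit):
    """Unit under test: trip when pressure exceeds the limit (hypothetical)."""
    if pressure > limit:
        return "TRIP"
    return "NORMAL"

# Each test case carries the requirement ID it verifies.
TEST_CASES = [
    ("SRS-017", dict(pressure=16.1, limit=15.5), "TRIP"),    # "must work" branch
    ("SRS-018", dict(pressure=15.0, limit=15.5), "NORMAL"),  # "must not work" branch
]

def run_suite():
    covered = set()
    for req_id, kwargs, expected in TEST_CASES:
        assert trip_signal(**kwargs) == expected
        covered.add(req_id)
    return covered

covered_requirements = run_suite()
```

A reviewer can then diff `covered_requirements` against the full requirement list to find untested safety requirements.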

  11. Software Acquisition Best Practices: Experiences From the Space Systems Domain

    National Research Council Canada - National Science Library

    Adams, R

    2004-01-01

    This report describes a comprehensive set of software acquisition best practices that the Software Acquisition MOlE research team has identified based on their experience with numerous space programs over many years...

  12. Integrated Power, Avionics, and Software (IPAS) Flexible Systems Integration

    Data.gov (United States)

    National Aeronautics and Space Administration — The Integrated Power, Avionics, and Software (IPAS) facility is a flexible, multi-mission hardware and software design environment. This project will develop a...

  13. Development of requirements tracking and verification system for the software design of distributed control system

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Chul Hwan; Kim, Jang Yeol; Kim, Jung Tack; Lee, Jang Soo; Ham, Chang Shik [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

In this paper, a prototype of a Requirements Tracking and Verification System (RTVS) for a distributed control system was implemented and tested. The RTVS is a software design and verification tool. The main functions required of the RTVS are the management, tracking, and verification of the software requirements listed in the documentation of the DCS. An analysis of DCS software design procedures and interfaces with documents was performed to define the users of the RTVS, and the design requirements for the RTVS were developed. 4 refs., 3 figs. (Author)
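
As a minimal sketch of the kind of traceability bookkeeping such an RTVS automates (the requirement and design identifiers here are invented, not from the paper), a mapping from requirements to design elements and verifying tests makes coverage gaps easy to query:

```python
# Toy traceability matrix: each requirement links to the design elements
# that implement it and the tests that verify it. IDs are hypothetical.

TRACE = {
    "SRS-001": {"design": ["SDD-1.2"], "tests": ["UT-07"]},
    "SRS-002": {"design": ["SDD-2.1", "SDD-2.3"], "tests": []},
}

def untraced(trace):
    """Requirements lacking either a design element or a verifying test."""
    return sorted(req for req, links in trace.items()
                  if not links["design"] or not links["tests"])

gaps = untraced(TRACE)  # SRS-002 has no verifying test yet
```

In a real tool the same query runs against the requirement entries extracted from the DCS documentation rather than a hand-written dictionary.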

  14. Comparison of Software Quality Metrics for Object-Oriented System

    OpenAIRE

    Amit Sharma; Sanjay Kumar Dubey

    2012-01-01

According to the IEEE standard glossary of software engineering, Object-Oriented design is becoming more important in the software development environment, and software metrics are essential in software engineering for measuring software complexity, estimating size, quality, and project efforts. There are various approaches through which we can find the software cost estimation and predicates on various kinds of deliverable items. The tools used for measuring the estimations are lines of codes, func...

  15. Space and Missile Systems Center Standard: Software Development

    Science.gov (United States)

    2015-01-16

Glossary: Defense Acquisition Acronyms and Terms, Eleventh Edition, September 2003. Dixon 2006. Dixon, J. M., C. M. Rink, and C. V. Sather, Digital ASIC ...Circuits (ASICs) and Field-Programmable Gate Arrays (FPGAs), see (Sather 2010) and (Dixon 2006). 4.1 Software Development Process The framework used...members performing software-related work on the contract. 2. Each software team member shall enforce the compliance of all subordinate software

  16. Software development for the PBX-M plasma control system

    International Nuclear Information System (INIS)

    Lagin, L.; Bell, R.; Chu, J.; Hatcher, R.; Hirsch, J.; Okabayashi, M.; Sichta, P.

    1995-01-01

This paper describes the software development effort for the PBX-M plasma control system. The algorithms being developed for the system will serve to test advanced control concepts for TPX and ITER. These include real-time algorithms for shaping control, vertical position control, current and density profile control, and MHD avoidance. The control system consists of an interactive Host Processor (SPARC-10) interfaced through VME with four real-time Compute Processors (i860), which run at a maximum computational speed of 320 MFLOPs. Plasma shaping programs are being tested to duplicate the present PBX-M analog control system. Advanced algorithms for vertical control and x-point control will then be developed. Interactive graphical user interface programs running on the Host Processor will allow operators to control and monitor shot parameters. A waveform edit program will be used to download pre-programmed waveforms into the Compute Processor memory. Post-shot display programs will be used to interactively display data after the shot. Automatic pre-shot arming and data acquisition programs will run on the Host Processor. Event system programs will process interrupts and activate programs on the Host and Compute Processors. These programs are being written in C and Fortran and use system service routines to communicate with the Compute Processors and their memory. IDL and IDL widgets are being used to build the graphical user interfaces.

  17. Software configuration management plan for HANDI 2000 business management system

    Energy Technology Data Exchange (ETDEWEB)

    BENNION, S.I.

    1999-02-10

    The Software Configuration Management Plan (SCMP) describes the configuration management and control environment for HANDI 2000 for the PP and PS software, as well as any custom developed software. This plan establishes requirements and processes for uniform documentation and coordination of HANDI 2000. This SCMP becomes effective as of this document's acceptance and will provide guidance through implementation efforts.

  18. Legacy sample disposition project. Volume 2: Final report

    International Nuclear Information System (INIS)

    Gurley, R.N.; Shifty, K.L.

    1998-02-01

This report describes the legacy sample disposition project at the Idaho Engineering and Environmental Laboratory (INEEL), which assessed Site-wide facilities/areas to locate legacy samples and owner organizations and then characterized and dispositioned these samples. This project resulted from an Idaho Department of Environmental Quality inspection of selected areas of the INEEL in January 1996, which identified some samples at the Test Reactor Area and Idaho Chemical Processing Plant that had not been characterized and dispositioned according to Resource Conservation and Recovery Act (RCRA) requirements. The objective of the project was to manage legacy samples in accordance with all applicable environmental and safety requirements. A systems engineering approach was used throughout the project, which included collecting the legacy sample information and developing a system for amending and retrieving the information. All legacy samples were dispositioned by the end of 1997. Closure of the legacy sample issue was achieved through these actions.

  19. Software development of the KSTAR Tokamak Monitoring System

    International Nuclear Information System (INIS)

    Kim, K.H.; Lee, T.G.; Baek, S.; Lee, S.I.; Chu, Y.; Kim, Y.O.; Kim, J.S.; Park, M.K.; Oh, Y.K.

    2008-01-01

The Korea Superconducting Tokamak Advanced Research (KSTAR) project, which is constructing a superconducting Tokamak, was launched in 1996. Much progress in instrumentation and control has been made since then and the construction phase will be finished in August 2007. The Tokamak Monitoring System (TMS) measures the temperatures of the superconducting magnets, bus-lines, and structures and hence monitors the superconducting conditions during the operation of the KSTAR Tokamak. The TMS also measures the strains and displacements on the structures in order to monitor the mechanical safety. There are around 400 temperature sensors, more than 240 strain gauges, 10 displacement gauges and 10 Hall sensors. The TMS utilizes Cernox sensors for low temperature measurement and each sensor has its own characteristic curve. In addition, the TMS needs to perform complex arithmetic operations to convert the measurements into temperatures for each Cernox sensor for this large number of monitoring channels. A special software development effort was required to reduce the temperature conversion time and multi-threading to achieve the higher performance needed to handle the large number of channels. We have developed the TMS with PXI hardware and with EPICS software. We will describe the details of the implementations in this paper.

  20. EON: software for long time simulations of atomic scale systems

    Science.gov (United States)

    Chill, Samuel T.; Welborn, Matthew; Terrell, Rye; Zhang, Liang; Berthet, Jean-Claude; Pedersen, Andreas; Jónsson, Hannes; Henkelman, Graeme

    2014-07-01

The EON software is designed for simulations of the state-to-state evolution of atomic scale systems over timescales greatly exceeding that of direct classical dynamics. States are defined as collections of atomic configurations from which a minimization of the potential energy gives the same inherent structure. The time evolution is assumed to be governed by rare events, where transitions between states are uncorrelated and infrequent compared with the timescale of atomic vibrations. Several methods for calculating the state-to-state evolution have been implemented in EON, including parallel replica dynamics, hyperdynamics and adaptive kinetic Monte Carlo. Global optimization methods, including simulated annealing, basin hopping and minima hopping are also implemented. The software has a client/server architecture where the computationally intensive evaluations of the interatomic interactions are calculated on the client-side and the state-to-state evolution is managed by the server. The client supports optimization for different computer architectures to maximize computational efficiency. The server is written in Python so that developers have access to the high-level functionality without delving into the computationally intensive components. Communication between the server and clients is abstracted so that calculations can be deployed on a single machine, clusters using a queuing system, large parallel computers using a message passing interface, or within a distributed computing environment. A generic interface to the evaluation of the interatomic interactions is defined so that empirical potentials, such as in LAMMPS, and density functional theory as implemented in VASP and GPAW can be used interchangeably. Examples are given to demonstrate the range of systems that can be modeled, including surface diffusion and island ripening of adsorbed atoms on metal surfaces, molecular diffusion on the surface of ice and global structural optimization of nanoparticles.
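
The rare-event assumption underlying such state-to-state methods is what makes kinetic Monte Carlo applicable: an escape event is chosen with probability proportional to its rate, and the clock advances by an exponentially distributed residence time. A minimal sketch of one such step follows (not EON's implementation; the rates are invented):

```python
import math
import random

def kmc_step(rates, rng):
    """One kinetic Monte Carlo step: choose escape event i with probability
    rates[i] / sum(rates), then draw the residence time dt = -ln(u) / sum(rates)."""
    total = sum(rates)
    pick = rng.random() * total
    acc = 0.0
    event = len(rates) - 1  # fallback for floating-point edge cases
    for i, r in enumerate(rates):
        acc += r
        if pick < acc:
            event = i
            break
    dt = -math.log(rng.random()) / total
    return event, dt

# Three hypothetical escape paths out of the current state (rates in 1/s).
rng = random.Random(42)
event, dt = kmc_step([1e3, 5e2, 1e1], rng)
```

In adaptive KMC the rate table itself is built on the fly from saddle-point searches, but the step above is the same.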

  1. Assessment of the integration capability of system architectures from a complex and distributed software systems perspective

    Science.gov (United States)

    Leuchter, S.; Reinert, F.; Müller, W.

    2014-06-01

Procurement and design of system architectures capable of network-centric operations demand an assessment scheme in order to compare different alternative realizations. In this contribution, an assessment method for system architectures targeted at the C4ISR domain is presented. The method addresses the integration capability of software systems from a complex and distributed software system perspective, focusing on communication, interfaces, and software. The aim is to evaluate the capability to integrate a system or its functions within a system-of-systems network. The method uses approaches from software architecture quality assessment and applies them at the system architecture level. It features a specific goal tree of several dimensions that are relevant for enterprise integration. These dimensions have to be weighed against each other and totalized using methods from normative decision theory in order to reflect the intention of the particular enterprise integration effort. The indicators and measurements for many of the considered quality features rely on a model-based view of systems, networks, and the enterprise. That means the method is applicable to system-of-systems specifications based on enterprise architectural frameworks relying on defined meta-models or domain ontologies for defining views and viewpoints. In the defense context we use the NATO Architecture Framework (NAF) to ground the respective system models. The proposed assessment method allows evaluating and comparing competing system designs with regard to their future integration potential. It is a contribution to the system-of-systems engineering methodology.
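
The weighing and totalizing step described above can be sketched as a simple weighted-sum utility over the goal-tree dimensions. The dimension names, weights, and scores below are invented for illustration and do not come from the paper:

```python
# Toy weighted-sum totalization over assessment dimensions (all values
# hypothetical). Each candidate architecture gets a score per dimension;
# weights encode the enterprise-integration priorities.

def utility(scores, weights):
    """Weighted average of dimension scores with normalized weights."""
    total_w = sum(weights.values())
    return sum(scores[dim] * w / total_w for dim, w in weights.items())

weights = {"interfaces": 3, "communication": 2, "software": 1}
candidate_a = {"interfaces": 0.8, "communication": 0.6, "software": 0.9}
candidate_b = {"interfaces": 0.5, "communication": 0.9, "software": 0.7}

scores = {"A": candidate_a, "B": candidate_b}
ranked = sorted(scores, key=lambda c: utility(scores[c], weights), reverse=True)
```

More elaborate totalizations from normative decision theory (e.g. multiplicative or lexicographic rules) slot into the same structure by replacing `utility`.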

  2. Software system for fuel management at Embalse nuclear power plant

    International Nuclear Information System (INIS)

    Grant, C.; Pomerantz, M.E.; Moreno, C.A.

    2002-01-01

For accurate tracking of flux and power distribution in a CANDU reactor, the needed information is evaluated from a neutronic code calculation adjusted with experimental values, making use of in-core vanadium detectors at 102 locations together with auxiliary programs. The basic data that feed these programs come from the geometric and neutronic features and the actual instantaneous operating parameters. The system that provides all this information should be designed to meet software quality assurance requirements. A software system was implemented at Embalse Nuclear Power Plant and has been in operation since 1998, after two years of testing. This PC version replaced the former system, introducing new concepts in its architecture. The neutronic code runs via procedures implemented in a language of macro instructions, so only new data are loaded for two consecutive instantaneous cases, avoiding unnecessary data repetition. After each step, all results of the neutronic calculation are stored in master files. Afterwards, other auxiliary programs retrieve basic data for further evaluation, and files are sorted into different thematic folders using a specific codification, allowing further calculations to be re-evaluated over any specific case. The whole system can be installed on any PC. The package is provided with its general and particular support documentation and procedures for each program. The main purpose of the system is to track fuel and power distribution calculated after a certain period in which fuelling operations were done. The main code, PUMA, evaluates in a 3-D, two-group scheme using finite-difference diffusion theory. After the neutronic calculation is performed, other programs allow assorted information valid for fuel strategy to be retrieved and the fuelling operation list to be built and sent to the operation shifts. This program also permits evaluating the accuracy of PUMA through comparisons with experimental values. Along with these features, some other system
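
The accuracy check mentioned above, comparing PUMA results against in-core detector measurements, reduces to computing a relative deviation per detector location. A toy sketch with invented readings (the real system handles 102 vanadium-detector locations):

```python
# Hypothetical calculated-vs-measured comparison per detector location.
# Detector IDs and readings are invented for illustration.

def relative_deviation(calculated, measured):
    """Per-location (calc - meas) / meas for every measured location."""
    return {loc: (calculated[loc] - measured[loc]) / measured[loc]
            for loc in measured}

calc = {"D01": 1.02, "D02": 0.97, "D03": 1.10}   # code prediction (normalized)
meas = {"D01": 1.00, "D02": 1.00, "D03": 1.00}   # detector reading (normalized)

dev = relative_deviation(calc, meas)
worst = max(dev, key=lambda loc: abs(dev[loc]))  # location with largest error
```

In practice such deviations would also drive the adjustment of the calculation to the experimental values mentioned in the abstract.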

  3. Technical Evaluation Report 24: Open Source Software: an alternative to costly Learning Management Systems

    OpenAIRE

    Jim Depow

    2003-01-01

This is the first in a series of two reports discussing the use of open source software (OSS) and free software (FS) in online education as an alternative to expensive proprietary software. It details the steps taken in a Canadian community college to download and install the Linux Operating System in order to support an OSS/FS learning management system (LMS).

  4. Technical Evaluation Report 24: Open Source Software: an alternative to costly Learning Management Systems

    Directory of Open Access Journals (Sweden)

    Jim Depow

    2003-10-01

This is the first in a series of two reports discussing the use of open source software (OSS) and free software (FS) in online education as an alternative to expensive proprietary software. It details the steps taken in a Canadian community college to download and install the Linux Operating System in order to support an OSS/FS learning management system (LMS).

  5. Spaceport Command and Control System - Support Software Development

    Science.gov (United States)

    Tremblay, Shayne

    2016-01-01

    The Information Architecture Support (IAS) Team, the component of the Spaceport Command and Control System (SCCS) that is in charge of all the pre-runtime data, was in need of some report features to be added to their internal web application, Information Architecture (IA). Development of these reports is crucial for the speed and productivity of the development team, as they are needed to quickly and efficiently make specific and complicated data requests against the massive IA database. These reports were being put on the back burner, as other development of IA was prioritized over them, but the need for them resulted in internships being created to fill this need. The creation of these reports required learning Ruby on Rails development, along with related web technologies, and they will continue to serve IAS and other support software teams and their IA data needs.

  6. Methods to model-check parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O. S.; McCune, W.; Lusk, E.

    2003-01-01

We report on an effort to develop methodologies for formal verification of parts of the Multi-Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of communicating processes. While the individual components of the collection execute simple algorithms, their interaction leads to unexpected errors that are difficult to uncover by conventional means. Two verification approaches are discussed here: the standard model checking approach using the software model checker SPIN, and the nonstandard use of a general-purpose first-order resolution-style theorem prover, OTTER, to conduct the traditional state space exploration. We compare modeling methodology and analyze performance and scalability of the two methods with respect to verification of MPD.
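
The state-space exploration that both SPIN and the OTTER-based approach perform can be illustrated with a toy explicit-state checker: enumerate all reachable states by breadth-first search and test an invariant in each. The two-process token model below is invented for illustration and is not MPD itself:

```python
from collections import deque

# Toy model: two processes that may each hold a shared token.
# Invariant to check: they never both hold it at once (mutual exclusion).
# State is a pair (a_holds, b_holds) of booleans.

def successors(state):
    a, b = state
    moves = []
    if not a and not b:
        moves += [(True, b), (a, True)]   # either process may take the token
    if a:
        moves.append((False, b))          # process A releases
    if b:
        moves.append((a, False))          # process B releases
    return moves

def check_invariant(initial, invariant):
    """BFS over all reachable states; return (True, None) if the invariant
    holds everywhere, else (False, counterexample_state)."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return False, state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, None

ok, counterexample = check_invariant(
    (False, False), lambda s: not (s[0] and s[1]))
```

SPIN adds counterexample traces, partial-order reduction, and temporal-logic properties on top of exactly this reachability core.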

  7. Quality factors in the life cycle of software oriented to safety systems in nuclear power plants

    International Nuclear Information System (INIS)

    Nunez McLeod, J.E.; Rivera, S.S.

    1997-01-01

The inclusion of software in safety-related systems for nuclear power plants makes it necessary to introduce the concept of software quality assurance. Software quality can be defined as the degree of conformance between the software and the specified requirements and user expectations. To guarantee a certain level of software quality, it is necessary to carry out a systematic and planned set of tasks that constitute a software quality assurance plan. The application of such a plan involves activities that should be performed all along the software life cycle and that can be evaluated through the so-called quality factors, because quality itself cannot be measured directly, only indirectly through some of its manifestations. In this work, a software life cycle model is proposed for nuclear power plant safety-related systems. A set of software quality factors is also proposed, with its corresponding classification according to the proposed model. (author)

  8. Rework of the ERA software system: ERA-8

    Science.gov (United States)

    Pavlov, D.; Skripnichenko, V.

    2015-08-01

The software system that has powered many products of the IAA for decades has undergone a major rework. ERA has capabilities for processing tables of observations of different kinds, fitting parameters to observations, and integrating the equations of motion of Solar system bodies. ERA comprises a domain-specific language called SLON, tailored for astronomical tasks. SLON provides a convenient syntax for reductions of observations, choosing which IAU standards to use, and applying rules for filtering observations or selecting parameters for fitting. ERA also includes a table editor and a graph plotter. ERA-8 has a number of improvements over previous versions, such as: integration of the Solar system and TT-TDB with an arbitrary number of asteroids; the option to use different ephemerides (including DE and INPOP); and an integrator with 80-bit floating point. The code of ERA-8 has been completely rewritten from Pascal to C (for numerical computations) and Racket (for running SLON programs and managing data). ERA-8 is portable across major operating systems. The format of tables in ERA-8 is based on SQLite. The SPICE format has been chosen as the main ephemeris format in ERA-8.
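
Since ERA-8 bases its table format on SQLite, observation tables can be created and filtered with ordinary SQL. The schema below is invented purely for illustration; the actual ERA-8 schema is not described above:

```python
import sqlite3

# Toy observation table in the spirit of an SQLite-backed system like ERA-8.
# Column names and contents are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE observations (
    body TEXT, jd REAL, ra REAL, dec REAL, used INTEGER)""")
conn.executemany("INSERT INTO observations VALUES (?, ?, ?, ?, ?)", [
    ("Mars",  2457000.5, 101.3,  22.5, 1),
    ("Mars",  2457001.5, 101.9,  22.4, 0),   # rejected by a filter rule
    ("Venus", 2457000.5, 245.1, -12.0, 1),
])

# A reduction step might count the observations that survived filtering.
mars_used = conn.execute(
    "SELECT COUNT(*) FROM observations WHERE body = 'Mars' AND used = 1"
).fetchone()[0]
```

Keeping tables in SQLite means any SQL-capable tool can inspect the data alongside ERA-8's own table editor.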

  9. CrossTalk: The Journal of Defense Software Engineering. Volume 19, Number 11

    Science.gov (United States)

    2006-11-01

    Maintenance. New York: McGraw Hill, 1994. 10. Wade, Stu and Andy Laws. Legacy System Management via the Triage Model, Software Triage. Liverpool, UK...shiminc.com Notes 1. Confucius, a Chinese philosopher and reformer (551-479 B.C.). 2. Bowl Championship Series, a system that selects the college football

  10. Visual software analytics for the build optimization of large-scale software systems

    NARCIS (Netherlands)

    Telea, Alexandru; Voinea, Lucian

    2011-01-01

    Visual analytics is the science of analytical reasoning facilitated by interactive visual interfaces. In this paper, we present an adaptation of the visual analytics framework to the context of software understanding for maintenance. We discuss the similarities and differences of the general visual

  11. 78 FR 47015 - Software Requirement Specifications for Digital Computer Software Used in Safety Systems of...

    Science.gov (United States)

    2013-08-02

    ..., the methods are consistent with the previously cited GDC and the criteria for quality assurance... related quality standards and quality assurance processes as well as the software elements of those... Development Branch, Division of Engineering, Office of Nuclear Regulatory Research. [FR Doc. 2013-18678 Filed...

  12. How Modeling Standards, Software, and Initiatives Support Reproducibility in Systems Biology and Systems Medicine.

    Science.gov (United States)

    Waltemath, Dagmar; Wolkenhauer, Olaf

    2016-10-01

    Only reproducible results are of significance to science. The lack of suitable standards and appropriate support of standards in software tools has led to numerous publications with irreproducible results. Our objectives are to identify the key challenges of reproducible research and to highlight existing solutions. In this paper, we summarize problems concerning reproducibility in systems biology and systems medicine. We focus on initiatives, standards, and software tools that aim to improve the reproducibility of simulation studies. The long-term success of systems biology and systems medicine depends on trustworthy models and simulations. This requires openness to ensure reusability and transparency to enable reproducibility of results in these fields.

  13. Software and system development using virtual platforms full-system simulation with wind river simics

    CERN Document Server

    Aarno, Daniel

    2014-01-01

    Virtual platforms are finding widespread use in both pre- and post-silicon computer software and system development. They reduce time to market, improve system quality, make development more efficient, and enable truly concurrent hardware/software design and bring-up. Virtual platforms increase productivity with unparalleled inspection, configuration, and injection capabilities. In combination with other types of simulators, they provide full-system simulations where computer systems can be tested together with the environment in which they operate. This book is not only about what simulat

  14. NUClear: A Loosely Coupled Software Architecture for Humanoid Robot Systems

    Directory of Open Access Journals (Sweden)

    Trent eHouliston

    2016-04-01

    Full Text Available This paper discusses the design and interface of NUClear, a new hybrid message-passing architecture for embodied humanoid robotics. NUClear is modular, low latency, and promotes functional and expandable software design. It greatly reduces the latency of messages passed between modules, as the message routes are established at compile time. It also reduces the number of functions that must be written, using a system called co-messages that aids in dealing with multiple simultaneous data. NUClear has primarily been evaluated on a humanoid robotic soccer platform and on a robotic boat platform, with evaluations showing that NUClear requires fewer callbacks and cache variables than existing message-passing architectures. NUClear does have limitations when applying these techniques on multi-processed systems. It performs best in lower-power systems where computational resources are limited. Future work will focus on applying the architecture to new platforms, including a larger-form humanoid platform and a virtual reality platform, and further evaluating the impact of the novel techniques introduced.
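
    The compile-time routing idea can be sketched in miniature: once the routes from message type to reactions are fixed, emitting a message is a direct lookup rather than a broker search. The class and message names below are illustrative, not NUClear's actual C++ API.

    ```python
    from collections import defaultdict

    # Illustrative sketch (not NUClear itself): routes from message type to
    # reactions are registered once at setup, so emit() is a direct
    # dictionary lookup with no runtime broker.
    class Reactor:
        def __init__(self):
            self._routes = defaultdict(list)

        def on(self, msg_type, callback):
            # Route established up front, analogous to compile-time binding.
            self._routes[msg_type].append(callback)

        def emit(self, message):
            for cb in self._routes[type(message)]:
                cb(message)

    class BallSeen:
        def __init__(self, x, y):
            self.x, self.y = x, y

    seen = []
    reactor = Reactor()
    reactor.on(BallSeen, lambda m: seen.append((m.x, m.y)))
    reactor.emit(BallSeen(0.4, 1.2))
    print(seen)  # [(0.4, 1.2)]
    ```

    In NUClear the equivalent binding happens in C++ templates at compile time, which removes even the dictionary lookup shown here.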

  15. 75 FR 25165 - Defense Federal Acquisition Regulation Supplement; Cost and Software Data Reporting System

    Science.gov (United States)

    2010-05-07

    ... Regulation Supplement; Cost and Software Data Reporting System AGENCY: Defense Acquisition Regulations System... Reporting system requirements for major defense acquisition programs and major automated information system... summarized as follows: The objective of the rule is to set forth Cost and Software Data Reporting System...

  16. Software design analysis technique for the development of PLC-based safety-critical systems

    Energy Technology Data Exchange (ETDEWEB)

    Koo, Seo Ryong; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Taejeon (Korea, Republic of)

    2005-11-15

    To develop and implement a safety-critical system, the requirements of the system must be analyzed thoroughly during the phases of the software development life cycle, because a single error in the requirements can generate serious software faults. In this study, a nuclear FBD-style design specification and analysis (NuFDS) approach was proposed for PLC-based safety-critical systems. The NuFDS approach is suggested in a straightforward manner for the effective and formal specification and analysis of software designs. Accordingly, the proposed NuFDS approach comprises one technique for specifying the software design and another for analyzing the software design.

  17. Software engineering for the EBR-II data acquisition system conversion

    International Nuclear Information System (INIS)

    Schorzman, W.

    1988-01-01

    The purpose of this paper is to outline how EBR-II engineering approached the data acquisition system (DAS) software conversion project within the constraints of operational transparency and six weeks for final implementation and testing. Software engineering is a relatively new discipline that provides a structured philosophy for software conversion. The software life cycle is structured into six basic steps: 1) initiation, 2) requirements definition, 3) design, 4) programming, 5) testing, and 6) operations. These steps are loosely defined and can be altered to fit specific software applications. DAS software comes from three sources: 1) custom software, 2) system software, and 3) in-house application software. A data flow structure is used to describe the DAS software. The categories are: 1) software used to bring signals into the central processor, 2) software that transforms the analog data to engineering units and then logs the data in the data store, and 3) software used to transport and display the data. The focus of this paper is to describe how the conversion team used a structured engineering approach and utilized the resources available to produce a quality system on time. Although successful, the conversion process presented some pitfalls and stumbling blocks. Working through these obstacles enhanced our understanding and surfaced in the form of LESSONS LEARNED, which are gracefully shared in this paper

  18. Predictors of disability in a childhood-onset systemic lupus erythematosus cohort: results from the CARRA Legacy Registry.

    Science.gov (United States)

    Hersh, A O; Case, S M; Son, M B

    2018-03-01

    Objective Few descriptions of physical disability in childhood-onset SLE (cSLE) exist. We sought to describe disability in a large North American cohort of patients with cSLE and identify predictors of disability. Methods Sociodemographic and clinical data were obtained from the Childhood Arthritis and Rheumatology Research Alliance (CARRA) Legacy Registry for patients with cSLE enrolled between May 2010 and October 2014. The Childhood Health Assessment Questionnaire (CHAQ) was used to assess disability and physical functioning. Chi-square tests were used for univariate analyses, and multivariate logistic regression was used to assess predictors of disability. Results We analyzed data for 939 patients with cSLE. The median and mean CHAQ scores were 0 and 0.25, respectively, and 41% of the cohort had at least mild disability. Arthritis and higher pain scores were significantly associated with disability, and low household income predicted disability at baseline. Conclusions Disability as measured by baseline CHAQ was fairly common in cSLE patients in the CARRA Legacy Registry, and was associated with low household income, arthritis, and higher pain scores. In addition to optimal disease control, ensuring psychosocial supports and addressing pain may reduce disability in cSLE. Further study is needed of disability in cSLE.

  19. A hybrid approach to quantify software reliability in nuclear safety systems

    International Nuclear Information System (INIS)

    Arun Babu, P.; Senthil Kumar, C.; Murali, N.

    2012-01-01

    Highlights: ► A novel method to quantify software reliability using software verification and mutation testing in nuclear safety systems. ► Contributing factors that influence the software reliability estimate. ► An approach to help regulators verify the reliability of safety-critical software systems during the software licensing process. -- Abstract: Technological advancements have led to the use of computer-based systems in safety-critical applications. As computer-based systems are being introduced in nuclear power plants, effective and efficient methods are needed to ensure dependability and compliance with the high reliability requirements of systems important to safety. Even after several years of research, quantification of software reliability remains a controversial and unresolved issue. Also, existing approaches have assumptions and limitations which are not acceptable for safety applications. This paper proposes a theoretical approach combining software verification and mutation testing to quantify software reliability in nuclear safety systems. The theoretical results obtained suggest that software reliability depends on three factors: the test adequacy, the amount of software verification carried out, and the reusability of verified code in the software. The proposed approach may help regulators in licensing computer-based safety systems in nuclear reactors.
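
    As a purely hypothetical illustration of how such factors might be combined into a single estimate (the paper derives its own expression, which is not reproduced here), one could weight the three contributing factors:

    ```python
    # Hypothetical illustration only: the weighted-geometric-mean combination
    # and the weights are assumptions for demonstration, not the paper's model.
    def reliability_estimate(test_adequacy, verified_fraction, reuse_fraction,
                             weights=(0.5, 0.3, 0.2)):
        """Each factor is in [0, 1]; higher means more evidence of reliability."""
        est = 1.0
        for factor, w in zip((test_adequacy, verified_fraction, reuse_fraction),
                             weights):
            est *= factor ** w
        return est

    r = reliability_estimate(0.95, 0.80, 0.60)
    print(round(r, 3))
    ```

    Any real licensing argument would of course need the paper's derived relationship rather than an ad hoc weighting like this one.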

  20. Software Development of High-Precision Ephemerides of Solar System

    Directory of Open Access Journals (Sweden)

    Jong-Seob Shin

    1995-06-01

    Full Text Available We solved the n-body problem for the 9 planets, the Moon, and 4 minor planets, with relativistic effects included in the basic equations of motion of the solar system. Perturbations including the figure potentials of the Earth and the Moon and the solid Earth tidal effect were added to this relativistic equation of motion. The orientations employed precession and nutation for the Earth, and Eckert's lunar libration model based on J2000.0 was used for the Moon. Finally, we computed the heliocentric ecliptic position and velocity of each planet using this software package, named SSEG (Solar System Ephemerides Generator), through long-term (more than 100 years) simulation on a CRAY-2S supercomputer, after testing each subroutine on a personal computer and short-term (within 800 days) runs on a SUN3/280 workstation. The input epoch JD2440400.5 was adopted in order to compare our results to data from JPL's DE200 by Standish and Newhall. The above equations of motion were integrated numerically with a 1-day step size over 40,000 days (about 110 years) of total computing interval. We obtained high-precision ephemerides of the planets with a maximum error of less than ~2 x 10^-8 AU (≈ ±3 km) compared with the DE200 data (except for Mars and the Moon).
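
    The kind of fixed-step numerical integration described above can be sketched with a deliberately simplified case: a single planet on a circular orbit around a unit-mass sun, integrated with a leapfrog scheme (the real package integrates the full relativistic n-body equations, which this sketch does not attempt):

    ```python
    import math

    # Toy fixed-step leapfrog (velocity Verlet) integrator in normalized
    # units with GM = 1; a stand-in for the much richer SSEG integration.
    def leapfrog(pos, vel, dt, steps):
        x, y = pos
        vx, vy = vel
        for _ in range(steps):
            r3 = (x * x + y * y) ** 1.5
            vx += 0.5 * dt * (-x / r3)      # half kick
            vy += 0.5 * dt * (-y / r3)
            x += dt * vx                     # drift
            y += dt * vy
            r3 = (x * x + y * y) ** 1.5
            vx += 0.5 * dt * (-x / r3)      # half kick
            vy += 0.5 * dt * (-y / r3)
        return (x, y), (vx, vy)

    # Circular orbit of radius 1: speed 1, period 2*pi.
    p, v = leapfrog((1.0, 0.0), (0.0, 1.0), dt=0.001, steps=6283)
    print(math.hypot(*p))  # stays close to 1
    ```

    Symplectic schemes like leapfrog keep the orbit radius bounded over long runs, which is why this family of integrators suits century-scale ephemeris work.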

  1. A Web-Based Learning System for Software Test Professionals

    Science.gov (United States)

    Wang, Minhong; Jia, Haiyang; Sugumaran, V.; Ran, Weijia; Liao, Jian

    2011-01-01

    Fierce competition, globalization, and technology innovation have forced software companies to search for new ways to improve competitive advantage. Web-based learning is increasingly being used by software companies as an emergent approach for enhancing the skills of knowledge workers. However, the current practice of Web-based learning is…

  2. Towards a benchmark for the maintainability evolution of industrial software systems

    NARCIS (Netherlands)

    Dohmen, Till; Bruntink, Magiel; Ceolin, Davide; Visser, Joost

    2017-01-01

    The maintainability of software is an important cost factor for organizations across all industries, as maintenance makes up approximately 40% to 70% of the total development costs of a software system. Organizations are often stuck in the situation where software maintenance costs dominate IT

  3. 78 FR 47014 - Configuration Management Plans for Digital Computer Software Used in Safety Systems of Nuclear...

    Science.gov (United States)

    2013-08-02

    .... ML12354A524. 3. Revision 1 of RG 1.170, ``Test Documentation for Digital Computer Software used in Safety... is in ADAMS at Accession No. ML12354A531. 4. Revision 1 of RG 1.171, ``Software Unit Testing for... Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory Commission. ACTION...

  4. 75 FR 71560 - Defense Federal Acquisition Regulation Supplement; Cost and Software Data Reporting System (DFARS...

    Science.gov (United States)

    2010-11-24

    ... also asked what allowance is provided for contractors with accounting software that does not... RIN 0750-AG46 Defense Federal Acquisition Regulation Supplement; Cost and Software Data Reporting... Regulation Supplement (DFARS) to address DoD Cost and Software Data Reporting system requirements for Major...

  5. Evaluating Business Intelligence/Business Analytics Software for Use in the Information Systems Curriculum

    Science.gov (United States)

    Davis, Gary Alan; Woratschek, Charles R.

    2015-01-01

    Business Intelligence (BI) and Business Analytics (BA) Software has been included in many Information Systems (IS) curricula. This study surveyed current and past undergraduate and graduate students to evaluate various BI/BA tools. Specifically, this study compared several software tools from two of the major software providers in the BI/BA field.…

  6. Configuration management and software measurement in the Ground Systems Development Environment (GSDE)

    Science.gov (United States)

    Church, Victor E.; Long, D.; Hartenstein, Ray; Perez-Davila, Alfredo

    1992-01-01

    A set of functional requirements for software configuration management (CM) and metrics reporting for Space Station Freedom ground systems software are described. This report is one of a series from a study of the interfaces among the Ground Systems Development Environment (GSDE), the development systems for the Space Station Training Facility (SSTF) and the Space Station Control Center (SSCC), and the target systems for SSCC and SSTF. The focus is on the CM of the software following delivery to NASA and on the software metrics that relate to the quality and maintainability of the delivered software. The CM and metrics requirements address specific problems that occur in large-scale software development. Mechanisms to assist in the continuing improvement of mission operations software development are described.

  7. Woods Hole Image Processing System Software implementation; using NetCDF as a software interface for image processing

    Science.gov (United States)

    Paskevich, Valerie F.

    1992-01-01

    The Branch of Atlantic Marine Geology has been involved in the collection, processing, and digital mosaicking of high-, medium-, and low-resolution side-scan sonar data during the past 6 years. In the past, processing and digital mosaicking were accomplished with a dedicated, shore-based computer system. With the need to process side-scan data in the field, and with the increased power and reduced cost of workstations, a need was identified for an image processing package on a UNIX-based computer system that could be used in the field and be more generally available to Branch personnel. This report describes the initial development of that package, referred to as the Woods Hole Image Processing System (WHIPS). The software was developed using the Unidata NetCDF software interface to allow data to be more readily portable between different computer operating systems.

  8. Assisted Emulation for Legacy Executables

    Directory of Open Access Journals (Sweden)

    Kam Woods

    2010-07-01

    Full Text Available Emulation is frequently discussed as a failsafe preservation strategy for born-digital documents that depend on contemporaneous software for access (Rothenberg, 2000). Yet little has been written about the contextual knowledge required to successfully use such software. The approach we advocate is to preserve the necessary contextual information through scripts designed to control the legacy environment, created during the preservation workflow. We describe software designed to minimize dependence on this knowledge by offering automated configuration and execution of emulated environments. We demonstrate that even simple scripts can reduce impediments to casual use of the digital objects being preserved. We describe tools to automate the remote use of preserved objects on local emulation environments. This can help eliminate a dependence on physical reference workstations at preservation institutions, and provide users accessing materials over the web with simplified, easy-to-use environments. Our implementation is applied to examples from an existing collection of over 4,000 virtual CD-ROM images containing thousands of custom binary executables.
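
    The idea of capturing contextual knowledge as machine-actionable launch scripts can be sketched as follows; the emulator name, flags, and image path are hypothetical, not the authors' actual tooling:

    ```python
    # Hypothetical sketch: the contextual knowledge needed to run a preserved
    # CD-ROM image is recorded as a metadata record, and a launch command is
    # generated from it so users need not know the emulator's options.
    def build_launch_command(record):
        cmd = ["qemu-system-i386",             # assumed emulator choice
               "-m", str(record["ram_mb"]),    # RAM the legacy software expects
               "-cdrom", record["iso_path"]]
        if record.get("boot_from_cd", True):
            cmd += ["-boot", "d"]              # boot from the CD image
        return cmd

    record = {"ram_mb": 64, "iso_path": "collection/disc0042.iso"}
    cmd = build_launch_command(record)
    print(" ".join(cmd))
    ```

    The command list could then be handed to a process launcher on a local or remote emulation host, so the user never configures the environment by hand.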

  9. SWEPP Assay System Version 2.0 software design description

    Energy Technology Data Exchange (ETDEWEB)

    East, L.V.; Marwil, E.S.

    1996-08-01

    The Idaho National Engineering Laboratory (INEL) Stored Waste Examination Pilot Plant (SWEPP) operations staff use nondestructive analysis methods to characterize the radiological contents of contact-handled radioactive waste containers. Containers of waste from Rocky Flats Environmental Technology Site and other Department of Energy (DOE) sites are currently stored at SWEPP. Before these containers can be shipped to the Waste Isolation Pilot Plant (WIPP), SWEPP must verify compliance with storage, shipping, and disposal requirements. This program has been in operation since 1985 at the INEL Radioactive Waste Management Complex (RWMC). One part of the SWEPP program measures neutron emissions from the containers and estimates the mass of plutonium and other transuranic (TRU) isotopes present. A Passive/Active Neutron (PAN) assay system developed at the Los Alamos National Laboratory is used to perform these measurements. A computer program named NEUT2 was originally used to perform the data acquisition and reduction functions for the neutron measurements. This program was originally developed at Los Alamos and extensively modified by a commercial vendor of PAN systems and by personnel at the INEL. NEUT2 uses the analysis methodology outlined, but no formal documentation exists on the program itself. The SWEPP Assay System (SAS) computer program replaced the NEUT2 program in early 1994. The SAS software was developed using an `object model` approach and is documented in accordance with American National Standards Institute (ANSI) and Institute of Electrical and Electronic Engineers (IEEE) standards. The new program incorporates the basic analysis algorithms found in NEUT2. Additional functionality and improvements include a graphical user interface, the ability to change analysis parameters without program code modification, an `object model` design approach and other features for improved flexibility and maintainability.

  10. Section 508 Electronic Information Accessibility Requirements for Software Development

    Science.gov (United States)

    Ellis, Rebecca

    2014-01-01

    Section 508 Subpart B 1194.21 outlines requirements for operating system and software development in order to create a product that is accessible to users with various disabilities. This portion of Section 508 contains a variety of standards to enable those using assistive technology and with visual, hearing, cognitive and motor difficulties to access all information provided in software. The focus on requirements was limited to the Microsoft Windows® operating system as it is the predominant operating system used at this center. Compliance with this portion of the requirements can be obtained by integrating the requirements into the software development cycle early and by remediating issues in legacy software if possible. There are certain circumstances with software that may arise necessitating an exemption from these requirements, such as design or engineering software using dynamically changing graphics or numbers to convey information. These exceptions can be discussed with the Section 508 Coordinator and another method of accommodation used.

  11. Methodologic model to scheduling on service systems: a software engineering approach

    Directory of Open Access Journals (Sweden)

    Eduyn Ramiro Lopez-Santana

    2016-06-01

    Full Text Available This paper presents a software engineering approach to a research proposal for building an expert system for scheduling in service systems, using software development methodologies and processes. We use adaptive software development as the methodology for the software architecture, based on its description as a software metaprocess that characterizes the research process. We make UML (Unified Modeling Language) diagrams to provide a visual model that describes the research methodology, in order to identify the actors, elements, and interactions in the research process.

  12. Enhancing requirements engineering for patient registry software systems with evidence-based components.

    Science.gov (United States)

    Lindoerfer, Doris; Mansmann, Ulrich

    2017-07-01

    Patient registries are instrumental for medical research. Often their structures are complex and their implementations use composite software systems to meet the wide spectrum of challenges. Commercial and open-source systems are available for registry implementation, but many research groups develop their own systems. Methodological approaches in the selection of software as well as the construction of proprietary systems are needed. We propose an evidence-based checklist, summarizing essential items for patient registry software systems (CIPROS), to accelerate the requirements engineering process. Requirements engineering activities for software systems follow traditional software requirements elicitation methods, general software requirements specification (SRS) templates, and standards. We performed a multistep procedure to develop a specific evidence-based CIPROS checklist: (1) A systematic literature review to build a comprehensive collection of technical concepts, (2) a qualitative content analysis to define a catalogue of relevant criteria, and (3) a checklist to construct a minimal appraisal standard. CIPROS is based on 64 publications and covers twelve sections with a total of 72 items. CIPROS also defines software requirements. Comparing CIPROS with traditional software requirements elicitation methods, SRS templates and standards show a broad consensus but differences in issues regarding registry-specific aspects. Using an evidence-based approach to requirements engineering for registry software adds aspects to the traditional methods and accelerates the software engineering process for registry software. The method we used to construct CIPROS serves as a potential template for creating evidence-based checklists in other fields. The CIPROS list supports developers in assessing requirements for existing systems and formulating requirements for their own systems, while strengthening the reporting of patient registry software system descriptions. 

  13. Embedded and real time system development a software engineering perspective concepts, methods and principles

    CERN Document Server

    Saeed, Saqib; Darwish, Ashraf; Abraham, Ajith

    2014-01-01

    Nowadays embedded and real-time systems contain complex software. The complexity of embedded systems is increasing, and the amount and variety of software in embedded products are growing. This creates a big challenge for embedded and real-time software development processes, and there is a need to develop separate metrics and benchmarks. “Embedded and Real Time System Development: A Software Engineering Perspective: Concepts, Methods and Principles” presents practical as well as conceptual knowledge of the latest tools, techniques and methodologies of embedded software engineering and real-time systems. Each chapter includes an in-depth investigation regarding the actual or potential role of software engineering tools in the context of the embedded system and real-time system. The book presents state-of-the-art and future perspectives with industry experts, researchers, and academicians sharing ideas and experiences including surrounding frontier technologies, breakthroughs, innovative solutions and...

  14. Developing infrared array controller with software real time operating system

    Science.gov (United States)

    Sako, Shigeyuki; Miyata, Takashi; Nakamura, Tomohiko; Motohara, Kentaro; Uchimoto, Yuka Katsuno; Onaka, Takashi; Kataza, Hirokazu

    2008-07-01

    Real-time capabilities are required for the controller of a large-format array to reduce the dead time caused by readout and data transfer. Real-time processing has traditionally been achieved with dedicated processors, including DSP, CPLD, and FPGA devices. However, dedicated processors have problems with memory resources, inflexibility, and high cost. Meanwhile, a recent PC has sufficient CPU and memory resources to control an infrared array and to process a large amount of frame data in real time. In this study, we have developed an infrared array controller with a software real-time operating system (RTOS) instead of dedicated processors. A Linux PC equipped with the RTAI extension and a dual-core CPU is used as the main computer, and one of the CPU cores is allocated to real-time processing. A digital I/O board with DMA functions is used as the I/O interface. The signal-processing cores are integrated into the OS kernel as a real-time driver module, which is composed of two virtual devices: the clock processor and frame processor tasks. The array controller with the RTOS realizes complicated operations easily, flexibly, and at low cost.

  15. CERN Technical Training 2006: Software and System Technologies Curriculum - Scheduled

    CERN Multimedia

    2006-01-01

    Course Sessions (October 2006-March 2007) The Software and System Technologies Curriculum of the CERN Technical Training Programme offers comprehensive training in C++, Java, Perl, Python, XML, OO programming, JCOP/PVSS, database design and Oracle. In the PERL, C++, OO and Java course series there are some places available on the following course sessions, currently scheduled until March 2007: Object-Oriented Analysis and Design using UML: 17-19 October 2006 (3 days) JAVA 2 Enterprise Edition - Part 1: Web Applications: 19-20 October 2006 (2 days) JAVA - Level 1: 30 October -1 November 2006 (3 days) PERL 5 - Advanced Aspects: 2 November 2006 (1 day) C++ Programming Part 1 - Introduction to Object-Oriented Design and Programming: 14-16 November 2006 (3 days) JAVA - Level 2: 4-7 December 2006 (4 days) C++ Programming Part 2 - Advanced C++ and its Traps and Pitfalls: 12-15 December 2006 (4 days) JAVA 2 Enterprise Edition - Part 2: Enterprise JavaBeans: 18-20 December 2006 (3 days) C++ for Particle Physicists:...

  16. Advanced Transport Operating System (ATOPS) color displays software description microprocessor system

    Science.gov (United States)

    Slominski, Christopher J.; Plyler, Valerie E.; Dickson, Richard W.

    1992-01-01

    This document describes the software created for the Sperry Microprocessor Color Display System used for the Advanced Transport Operating Systems (ATOPS) project on the Transport Systems Research Vehicle (TSRV). The software delivery known as the 'baseline display system', is the one described in this document. Throughout this publication, module descriptions are presented in a standardized format which contains module purpose, calling sequence, detailed description, and global references. The global reference section includes procedures and common variables referenced by a particular module. The system described supports the Research Flight Deck (RFD) of the TSRV. The RFD contains eight cathode ray tubes (CRTs) which depict a Primary Flight Display, Navigation Display, System Warning Display, Takeoff Performance Monitoring System Display, and Engine Display.

  17. Guidelines for the verification and validation of expert system software and conventional software: Project summary. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Mirsky, S.M.; Hayes, J.E.; Miller, L.A. [Science Applications International Corp., McLean, VA (United States)

    1995-03-01

    This eight-volume report presents guidelines for performing verification and validation (V&V) on Artificial Intelligence (AI) systems with nuclear applications. The guidelines have much broader application than just expert systems; they are also applicable to object-oriented programming systems, rule-based systems, frame-based systems, model-based systems, neural nets, genetic algorithms, and conventional software systems. This is because many of the components of AI systems are implemented in conventional procedural programming languages, so there is no real distinction. The report examines the state of the art in verifying and validating expert systems. V&V methods traditionally applied to conventional software systems are evaluated for their applicability to expert systems. One hundred fifty-three conventional techniques are identified and evaluated. These methods are found to be useful for at least some of the components of expert systems, frame-based systems, and object-oriented systems. A taxonomy of 52 defect types and their detectability by the 153 methods is presented. With specific regard to expert systems, conventional V&V methods were found to apply well to all the components of the expert system with the exception of the knowledge base. The knowledge base requires extension of the existing methods. Several innovative static verification and validation methods for expert systems have been identified and are described here, including a method for checking the knowledge base "semantics" and a method for generating validation scenarios. Evaluation of some of these methods was performed both analytically and experimentally. A V&V methodology for expert systems is presented based on three factors: (1) a system's judged need for V&V (based in turn on its complexity and degree of required integrity); (2) the life-cycle phase; and (3) the system component being tested.

  18. A dependability modeling of software under hardware faults digitized system in nuclear power plants

    International Nuclear Information System (INIS)

    Choi, Jong Gyun

    1996-02-01

    An analytic approach to the dependability evaluation of software in the operational phase is suggested in this work, with special attention to the effects of physical faults on software dependability. The physical faults considered are memory faults, and the dependability measure in question is reliability. The model is based on simple reliability theory and graph theory with the path decomposition micro model. The model represents an application software with a graph consisting of nodes and arcs that probabilistically determine the flow from node to node. Through proper transformation of nodes and arcs, the graph can be reduced to a simple two-node graph, and the software failure probability is derived from this graph. This model can be extended without modification to a software system that consists of several complete modules. The derived model is validated by computer simulation, where the software is transformed into a probabilistic control flow graph. Simulation also shows a different viewpoint of software failure behavior. Using this model, we predict the reliability of an application software and a software system in a digitized system (ILS system) in the nuclear power plant, and show the sensitivity of the software reliability to the major physical parameters that affect software failure in the normal operation phase. 
    This modeling method is particularly attractive for medium-size programs such as software used in digitized systems.
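
    The path-decomposition idea can be illustrated on a toy control-flow graph: each node has a per-visit success probability, arcs carry branch probabilities, and overall reliability is the branch-weighted product of node successes along each path. All numbers below are hypothetical:

    ```python
    # Toy illustration of path decomposition over a probabilistic
    # control-flow graph; node and branch probabilities are invented.
    def graph_reliability(paths):
        """paths: list of (path_probability, [per-node success probabilities])."""
        total = 0.0
        for p_path, node_probs in paths:
            r = p_path
            for s in node_probs:
                r *= s          # all nodes on the path must succeed
            total += r
        return total

    paths = [(0.7, [0.999, 0.998]),   # main branch
             (0.3, [0.999, 0.995])]   # error-handling branch
    print(round(graph_reliability(paths), 6))
    ```

    Reducing a larger graph to an equivalent two-node graph, as the abstract describes, amounts to folding this sum into a single aggregate success probability.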

  19. Development of the disable software reporting system on the basis of the neural network

    Science.gov (United States)

    Gavrylenko, S.; Babenko, O.; Ignatova, E.

    2018-04-01

    The PE structure of malicious and benign software is analyzed, features are highlighted, and binary feature vectors are obtained and used as inputs for training the neural network. A software model for detecting malware based on the ART-1 neural network was developed, optimal similarity coefficients were found, and testing was performed. The research results showed the feasibility of using the developed system to identify malicious software within computer system protection.
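
    The ART-1 matching step on binary feature vectors can be sketched in miniature; the feature vectors and vigilance threshold below are illustrative, not the trained network from the paper:

    ```python
    # Minimal sketch of ART-1's category matching: a binary input vector is
    # compared against a stored prototype and accepted when the overlap
    # ratio clears the vigilance threshold. Values here are invented.
    def art1_match(input_vec, prototype, vigilance=0.7):
        overlap = sum(i & p for i, p in zip(input_vec, prototype))
        active = sum(input_vec)
        return active > 0 and overlap / active >= vigilance

    malware_prototype = [1, 0, 1, 1, 0, 1]   # learned PE-feature pattern
    sample            = [1, 0, 1, 1, 0, 0]   # features of a new binary
    print(art1_match(sample, malware_prototype))
    ```

    In a full ART-1 network this match test gates whether the prototype is updated or a new category is created, which is how it handles previously unseen binaries.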

  20. Improving a data-acquisition software system with abstract data type components

    Science.gov (United States)

    Howard, S. D.

    1990-01-01

    Abstract data types and object-oriented design are active research areas in computer science and software engineering. Much of the interest is aimed at new software development. Abstract data type packages developed for a discontinued software project were used to improve a real-time data-acquisition system under maintenance. The result saved effort and contributed to a significant improvement in the performance, maintainability, and reliability of the Goldstone Solar System Radar Data Acquisition System.
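
    A minimal example of the kind of abstract data type that can be retrofitted into a legacy data-acquisition system: clients see only put/get, so the internal representation can change without touching client code (this is a generic sketch, not the Goldstone packages themselves):

    ```python
    # Generic abstract data type sketch: a fixed-capacity ring buffer for
    # acquired samples. Callers depend only on put/get, not on the array
    # representation, which is the maintainability payoff described above.
    class RingBuffer:
        def __init__(self, capacity):
            self._buf = [None] * capacity
            self._head = 0
            self._count = 0
            self._capacity = capacity

        def put(self, item):
            if self._count == self._capacity:
                raise OverflowError("buffer full")
            self._buf[(self._head + self._count) % self._capacity] = item
            self._count += 1

        def get(self):
            if self._count == 0:
                raise IndexError("buffer empty")
            item = self._buf[self._head]
            self._head = (self._head + 1) % self._capacity
            self._count -= 1
            return item

    rb = RingBuffer(3)
    for sample in (10, 20, 30):
        rb.put(sample)
    print(rb.get(), rb.get())  # 10 20
    ```

    Swapping the list for a preallocated array or shared-memory segment later would leave every caller of put/get unchanged, which is the reuse-under-maintenance benefit the abstract reports.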