WorldWideScience

Sample records for distributed software framework

  1. Managing Risks in Distributed Software Projects: An Integrative Framework

    DEFF Research Database (Denmark)

    Persson, John Stouby; Mathiassen, Lars; Boeg, Jesper

    2009-01-01

    Software projects are increasingly geographically distributed with limited face-to-face interaction between participants. These projects face particular challenges that need careful managerial attention. While risk management has been adopted with success to address other challenges within software development, there are currently no frameworks available for managing risks related to geographical distribution. On this background, we systematically review the literature on geographically distributed software projects. Based on the review, we synthesize what we know about risks and risk resolution techniques into an integrative framework for managing risks in distributed contexts. Subsequent implementation of a Web-based tool helped us refine the framework based on empirical evaluation of its practical usefulness. We conclude by discussing implications for both research and practice.

  2. A Software Rejuvenation Framework for Distributed Computing

    Science.gov (United States)

    Chau, Savio

    2009-01-01

    A performability-oriented conceptual framework for software rejuvenation has been constructed as a means of increasing levels of reliability and performance in distributed stateful computing. As used here, performability-oriented signifies that the construction of the framework is guided by the concept of analyzing the ability of a given computing system to deliver services with gracefully degradable performance. The framework is especially intended to support applications that involve stateful replicas of server computers.

  3. Distributed software framework and continuous integration in hydroinformatics systems

    Science.gov (United States)

    Zhou, Jianzhong; Zhang, Wei; Xie, Mengfei; Lu, Chengwei; Chen, Xiao

    2017-08-01

    When encountering multiple and complicated models, multisource structured and unstructured data, and complex requirements analysis, the platform design and integration of hydroinformatics systems become a challenge. To properly solve these problems, we describe a distributed software framework and its continuous integration process in hydroinformatics systems. This distributed framework mainly consists of a server cluster for models, a distributed database, GIS (Geographic Information System) servers, a master node and clients. Based on it, a GIS-based decision support system for the joint regulation of water quantity and water quality of a group of lakes in Wuhan, China is established.

  4. Distributed inter process communication framework of BES III DAQ online software

    International Nuclear Information System (INIS)

    Li Fei; Liu Yingjie; Ren Zhenyu; Wang Liang; Chinese Academy of Sciences, Beijing; Chen Mali; Zhu Kejun; Zhao Jingwei

    2006-01-01

    The DAQ (Data Acquisition) system is an important part of BES III, the large-scale high-energy physics detector at BEPC. The inter-process communication (IPC) of online software in distributed environments is pivotal for the design and implementation of a DAQ system. This article introduces a distributed inter-process communication framework, which is based on CORBA and used in the BES III DAQ online software. The article mainly presents the design and implementation of the IPC framework and applications based on it. (authors)
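
    As a rough illustration of the location-transparent calls such an IPC layer provides, the sketch below uses Python's standard xmlrpc library standing in for CORBA; the service and method names are invented, not the BES III API.

```python
# Sketch of location-transparent inter-process calls, with Python's standard
# xmlrpc library standing in for CORBA. Service and method names are invented.
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

def serve_run_control(host="localhost", port=8000):
    server = SimpleXMLRPCServer((host, port), logRequests=False, allow_none=True)

    def start_run(run_number):
        return f"run {run_number} started"    # a real DAQ would act here

    server.register_function(start_run)
    threading.Thread(target=server.serve_forever, daemon=True).start()

if __name__ == "__main__":
    serve_run_control()
    # A client process addresses the service by location, not implementation.
    proxy = ServerProxy("http://localhost:8000", allow_none=True)
    print(proxy.start_run(42))
```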

  5. A conceptual framework to study the role of communication through social software for coordination in globally-distributed software teams

    DEFF Research Database (Denmark)

    Giuffrida, Rosalba; Dittrich, Yvonne

    2015-01-01

    Background: In Global Software Development (GSD) the lack of face-to-face communication is a major challenge and effective computer-mediated practices are necessary to mitigate the effect of physical distance. Communication through Social Software (SoSo) supports team coordination, helping to deal with geographical distance; however, in Software Engineering literature, there is a lack of suitable theoretical concepts to analyze and describe everyday practices of globally-distributed software development teams and to study the role of communication through SoSo. Objective: The paper proposes a theoretical framework for analyzing how communicative and coordinative practices are constituted and maintained in globally-distributed teams. Method: The framework is based on the concepts of communicative genres and coordination mechanisms; it is motivated and explicated through examples from two qualitative empirical...

  6. Managing Distributed Software Projects

    DEFF Research Database (Denmark)

    Persson, John Stouby

    Increasingly, software projects are becoming geographically distributed, with limited face-to-face interaction between participants. These projects face particular challenges that need careful managerial attention. This PhD study reports on how we can understand and support the management of distributed software projects, based on a literature study and a case study. The main emphasis of the literature study was on how to support the management of distributed software projects, but it also contributed to an understanding of these projects. The main emphasis of the case study was on how to understand the management of distributed software projects, but it also contributed to supporting the management of these projects. The literature study integrates what we know about risks and risk-resolution techniques into a framework for managing risks in distributed contexts. This framework was developed iteratively...

  7. Agile distributed software development

    DEFF Research Database (Denmark)

    Persson, John Stouby; Mathiassen, Lars; Aaen, Ivan

    2012-01-01

    While face-to-face interaction is fundamental in agile software development, distributed environments must rely extensively on mediated interactions. Practicing agile principles in distributed environments therefore poses particular control challenges related to balancing fixed vs. evolving quality requirements and people- vs. process-based collaboration. To investigate these challenges, we conducted an in-depth case study of a successful agile distributed software project with participants from a Russian firm and a Danish firm. Applying Kirsch's elements of control framework, we offer an analysis of how...

  8. A QDWH-Based SVD Software Framework on Distributed-Memory Manycore Systems

    KAUST Repository

    Sukkari, Dalal

    2017-01-01

    This paper presents a high performance software framework for computing a dense SVD on distributed-memory manycore systems. Originally introduced by Nakatsukasa et al. (Nakatsukasa et al. 2010; Nakatsukasa and Higham 2013), the SVD solver relies on the polar decomposition using the QR Dynamically-Weighted Halley algorithm (QDWH). Although the QDWH-based SVD algorithm performs a significant amount of extra floating-point operations compared to the traditional SVD with the one-stage bidiagonal reduction, the inherent high level of concurrency associated with Level 3 BLAS compute-bound kernels ultimately compensates for the arithmetic complexity overhead. Using the ScaLAPACK two-dimensional block cyclic data distribution with a rectangular processor topology, the resulting QDWH-SVD further reduces excessive communications during the panel factorization, while increasing the degree of parallelism during the update of the trailing submatrix, as opposed to relying on the default square processor grid. After detailing the algorithmic complexity and the memory footprint of the algorithm, we conduct a thorough performance analysis and study the impact of the grid topology on the performance by looking at the communication and computation profiling trade-offs. We report performance results against state-of-the-art existing QDWH software implementations (e.g., Elemental) and their SVD extensions on large-scale distributed-memory manycore systems based on commodity Intel x86 Haswell processors and the Knights Landing (KNL) architecture. The QDWH-SVD framework achieves up to 3-fold and 8-fold speedups on the Haswell- and KNL-based platforms, respectively, against ScaLAPACK PDGESVD and turns out to be a competitive alternative for well- and ill-conditioned matrices. We finally propose a performance model based on these empirical results. Our QDWH-based polar decomposition and its SVD extension are freely available at https://github.com/ecrc/qdwh.git and https
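
    The polar-decomposition route to the SVD that QDWH-SVD takes can be sketched in a few serial lines: compute A = U_p H, then an eigendecomposition of the symmetric factor H yields the singular values and vectors. The sketch below uses SciPy's serial polar routine as a stand-in for the distributed QDWH solver; it is an illustration, not the paper's implementation.

```python
# Serial sketch of the polar-decomposition route to the SVD (the idea behind
# QDWH-SVD). scipy.linalg.polar stands in for the distributed QDWH solver.
import numpy as np
from scipy.linalg import polar

def polar_svd(A):
    Up, H = polar(A, side="right")   # A = Up @ H, H symmetric positive semidefinite
    s, V = np.linalg.eigh(H)         # H = V @ diag(s) @ V.T, eigenvalues ascending
    s, V = s[::-1], V[:, ::-1]       # reorder to descending singular values
    return Up @ V, s, V              # A = (Up @ V) @ diag(s) @ V.T

A = np.random.default_rng(0).standard_normal((6, 4))
U, s, V = polar_svd(A)
print(np.allclose(A, U @ np.diag(s) @ V.T))  # True
```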

  9. Software Engineering Support of the Third Round of Scientific Grand Challenge Investigations: Earth System Modeling Software Framework Survey

    Science.gov (United States)

    Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn; Zukor, Dorothy (Technical Monitor)

    2002-01-01

    One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document surveys numerous software frameworks for potential use in Earth science modeling. Several frameworks are evaluated in depth, including Parallel Object-Oriented Methods and Applications (POOMA), Cactus (from the relativistic physics community), Overture, Goddard Earth Modeling System (GEMS), the National Center for Atmospheric Research Flux Coupler, and UCLA/UCB Distributed Data Broker (DDB). Frameworks evaluated in less detail include ROOT, Parallel Application Workspace (PAWS), and Advanced Large-Scale Integrated Computational Environment (ALICE). A host of other frameworks and related tools are referenced in this context. The frameworks are evaluated individually and also compared with each other.

  10. Analyser Framework to Verify Software Components

    Directory of Open Access Journals (Sweden)

    Rolf Andreas Rasenack

    2009-01-01

    Today, it is important for software companies to build software systems in a short time interval, to reduce costs and to maintain a good market position. Therefore, well organized and systematic development approaches are required. Reusing software components that are well tested can be a good way to develop software applications effectively. The reuse of software components is less expensive and less time consuming than development from scratch. But it is dangerous to think that software components can be combined without any problems. Software components themselves are well tested, of course, but when they are composed together, problems can occur. Most problems arise from interaction and communication. To avoid such errors, a framework has to be developed for analysing software components. That framework determines the compatibility of corresponding software components. The promising approach discussed here presents a novel technique for analysing software components by applying an Abstract Syntax Language Tree (ASLT). A supportive environment will be designed that checks the compatibility of black-box software components. This article is concerned with the question of how coupled software components can be verified using an analyser framework, and it determines the usage of the ASLT. Black-box software components and the Abstract Syntax Language Tree are the basis for developing the proposed framework and are discussed here to provide the background knowledge. The practical implementation of this framework is discussed and results are shown using a test environment.
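
    The paper's ASLT is not publicly specified; as a loose analogue, the following sketch uses Python's standard ast module to check that one black-box component's calls are compatible with another component's declared interface. All component code here is invented for illustration.

```python
# Illustration of static interface checking between two components using
# Python's ast module (a loose analogue of the paper's language-neutral ASLT).
import ast

provider_src = """
def encode(data, level):
    return data
"""
consumer_src = """
def run(payload):
    return encode(payload)   # calls encode with 1 arg, but 2 are declared
"""

def declared_arities(src):
    tree = ast.parse(src)
    return {n.name: len(n.args.args) for n in ast.walk(tree)
            if isinstance(n, ast.FunctionDef)}

def check_calls(src, interface):
    problems = []
    for node in ast.walk(ast.parse(src)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            name = node.func.id
            if name in interface and len(node.args) != interface[name]:
                problems.append(f"{name}: expected {interface[name]} args, "
                                f"got {len(node.args)}")
    return problems

print(check_calls(consumer_src, declared_arities(provider_src)))
# ['encode: expected 2 args, got 1']
```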

  11. Software Engineering Support of the Third Round of Scientific Grand Challenge Investigations: An Earth Modeling System Software Framework Strawman Design that Integrates Cactus and UCLA/UCB Distributed Data Broker

    Science.gov (United States)

    Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn

    2002-01-01

    One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document proposes a strawman framework design for the climate community based on the integration of Cactus, from the relativistic physics community, and UCLA/UCB Distributed Data Broker (DDB) from the climate community. This design is the result of an extensive survey of climate models and frameworks in the climate community as well as frameworks from many other scientific communities. The design addresses fundamental development and runtime needs using Cactus, a framework with interfaces for FORTRAN and C-based languages, and high-performance model communication needs using DDB. This document also specifically explores object-oriented design issues in the context of climate modeling as well as climate modeling issues in terms of object-oriented design.

  12. Designing the Distributed Model Integration Framework – DMIF

    NARCIS (Netherlands)

    Belete, Getachew F.; Voinov, Alexey; Morales, Javier

    2017-01-01

    We describe and discuss the design and prototype of the Distributed Model Integration Framework (DMIF) that links models deployed on different hardware and software platforms. We used distributed computing and service-oriented development approaches to address the different aspects of

  13. Framework Programmable Platform for the Advanced Software Development Workstation: Preliminary system design document

    Science.gov (United States)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, John W., IV; Henderson, Richard; Futrell, Michael T.

    1991-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated. The focus here is on the design of components that make up the FPP. These components serve as supporting systems for the Integration Mechanism and the Framework Processor and provide the 'glue' that ties the FPP together. Also discussed are the components that allow the platform to operate in a distributed, heterogeneous environment and to manage the development and evolution of software system artifacts.

  14. Uniframe: A Unified Framework for Developing Service-Oriented, Component-Based Distributed Software Systems

    National Research Council Canada - National Science Library

    Raje, Rajeev R; Olson, Andrew M; Bryant, Barrett R; Burt, Carol C; Auguston, Makhail

    2005-01-01

    .... It describes how this approach employs a unifying framework for specifying such systems to unite the concepts of service-oriented architectures, a component-based software engineering methodology...

  15. MAPI: a software framework for distributed biomedical applications

    Directory of Open Access Journals (Sweden)

    Karlsson Johan

    2013-01-01

    Background: The amount of web-based resources (databases, tools, etc.) in biomedicine has increased, but the integrated usage of those resources is complex due to differences in access protocols and data formats. However, distributed data processing is becoming inevitable in several domains, in particular in biomedicine, where researchers face rapidly increasing data sizes. This big data is difficult to process locally because of the large processing, memory and storage capacity required. Results: This manuscript describes a framework, called MAPI, which provides a uniform representation of resources available over the Internet, in particular for Web Services. The framework enhances their interoperability and collaborative use by enabling uniform and remote access. The framework functionality is organized in modules that can be combined and configured in different ways to fulfil concrete development requirements. Conclusions: The framework has been tested in the biomedical application domain, where it has served as a base for developing several clients that are able to integrate different web resources. The MAPI binaries and documentation are freely available at http://www.bitlab-es.com/mapi under the Creative Commons Attribution-No Derivative Works 2.5 Spain License. The MAPI source code is available by request (GPL v3 license).

  16. The SCEC Unified Community Velocity Model (UCVM) Software Framework for Distributing and Querying Seismic Velocity Models

    Science.gov (United States)

    Maechling, P. J.; Taborda, R.; Callaghan, S.; Shaw, J. H.; Plesch, A.; Olsen, K. B.; Jordan, T. H.; Goulet, C. A.

    2017-12-01

    Crustal seismic velocity models and datasets play a key role in regional three-dimensional numerical earthquake ground-motion simulation, full waveform tomography, modern physics-based probabilistic earthquake hazard analysis, as well as in other related fields including geophysics, seismology, and earthquake engineering. The standard material properties provided by a seismic velocity model are P- and S-wave velocities and density for any arbitrary point within the geographic volume for which the model is defined. Many seismic velocity models and datasets are constructed by synthesizing information from multiple sources and the resulting models are delivered to users in multiple file formats, such as text files, binary files, HDF-5 files, structured and unstructured grids, and through computer applications that allow for interactive querying of material properties. The Southern California Earthquake Center (SCEC) has developed the Unified Community Velocity Model (UCVM) software framework to facilitate the registration and distribution of existing and future seismic velocity models to the SCEC community. The UCVM software framework is designed to provide a standard query interface to multiple, alternative velocity models, even if the underlying velocity models are defined in different formats or use different geographic projections. The UCVM framework provides a comprehensive set of open-source tools for querying seismic velocity model properties, combining regional 3D models and 1D background models, visualizing 3D models, and generating computational models in the form of regular grids or unstructured meshes that can be used as inputs for ground-motion simulations. The UCVM framework helps researchers compare seismic velocity models and build equivalent simulation meshes from alternative velocity models. These capabilities enable researchers to evaluate the impact of alternative velocity models in ground-motion simulations and seismic hazard analysis applications
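
    A minimal sketch of the central idea, a single query interface shared by all registered velocity models, might look like the following; the class and function names are hypothetical and are not UCVM's actual API.

```python
# Sketch of a single standard query interface over alternative seismic
# velocity models, in the spirit of UCVM. Names are hypothetical.
from abc import ABC, abstractmethod

class VelocityModel(ABC):
    @abstractmethod
    def query(self, lon, lat, depth_m):
        """Return (vp, vs, density) in SI units at one point."""

class Background1D(VelocityModel):
    def query(self, lon, lat, depth_m):
        vs = 1000.0 + 0.5 * depth_m          # toy 1D gradient model
        return 1.8 * vs, vs, 2650.0

REGISTRY = {"bkg1d": Background1D()}         # models register under a name

def query_points(model_name, points):
    model = REGISTRY[model_name]             # same interface for every model
    return [model.query(*p) for p in points]

print(query_points("bkg1d", [(-118.2, 34.1, 500.0)]))
```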

  17. HCI^2 Framework: A software framework for multimodal human-computer interaction systems

    NARCIS (Netherlands)

    Shen, Jie; Pantic, Maja

    2013-01-01

    This paper presents a novel software framework for the development and research in the area of multimodal human-computer interface (MHCI) systems. The proposed software framework, which is called the HCI^2 Framework, is built upon publish/subscribe (P/S) architecture. It implements a

  18. BioContainers: an open-source and community-driven framework for software standardization

    Science.gov (United States)

    da Veiga Leprevost, Felipe; Grüning, Björn A.; Alves Aflitos, Saulo; Röst, Hannes L.; Uszkoreit, Julian; Barsnes, Harald; Vaudel, Marc; Moreno, Pablo; Gatto, Laurent; Weber, Jonas; Bai, Mingze; Jimenez, Rafael C.; Sachsenberg, Timo; Pfeuffer, Julianus; Vera Alvarez, Roberto; Griss, Johannes; Nesvizhskii, Alexey I.; Perez-Riverol, Yasset

    2017-01-01

    Motivation: BioContainers (biocontainers.pro) is an open-source and community-driven framework which provides platform-independent executable environments for bioinformatics software. BioContainers allows labs of all sizes to easily install bioinformatics software, maintain multiple versions of the same software and combine tools into powerful analysis pipelines. BioContainers is based on the popular open-source Docker and rkt frameworks, which allow software to be installed and executed under an isolated and controlled environment. It also provides infrastructure and basic guidelines to create, manage and distribute bioinformatics containers with a special focus on omics technologies. These containers can be integrated into more comprehensive bioinformatics pipelines and different architectures (local desktop, cloud environments or HPC clusters). Availability and Implementation: The software is freely available at github.com/BioContainers/. Contact: yperez@ebi.ac.uk PMID:28379341

  19. Software development processes and analysis software: a mismatch and a novel framework

    International Nuclear Information System (INIS)

    Kelly, D.; Harauz, J.

    2011-01-01

    This paper discusses the salient characteristics of analysis software and the impact of those characteristics on its development. From this discussion, it can be seen that mainstream software development processes, usually characterized as Plan Driven or Agile, are built upon assumptions that are mismatched to the development and maintenance of analysis software. We propose a novel software development framework that would match the process normally observed in the development of analysis software. In the discussion of this framework, we suggest areas of research and directions for future work. (author)

  1. Surgical model-view-controller simulation software framework for local and collaborative applications.

    Science.gov (United States)

    Maciel, Anderson; Sankaranarayanan, Ganesh; Halic, Tansel; Arikatla, Venkata Sreekanth; Lu, Zhonghua; De, Suvranu

    2011-07-01

    Surgical simulations require haptic interactions and collaboration in a shared virtual environment. A software framework for decoupled surgical simulation based on a multi-controller and multi-viewer model-view-controller (MVC) pattern was developed and tested. A software framework for multimodal virtual environments was designed, supporting both visual interactions and haptic feedback while providing developers with an integration tool for heterogeneous architectures maintaining high performance, simplicity of implementation, and straightforward extension. The framework uses decoupled simulation with updates of over 1,000 Hz for haptics and accommodates networked simulation with delays of over 1,000 ms without performance penalty. The simulation software framework was implemented and was used to support the design of virtual reality-based surgery simulation systems. The framework supports the high level of complexity of such applications and the fast response required for interaction with haptics. The efficacy of the framework was tested by implementation of a minimally invasive surgery simulator. A decoupled simulation approach can be implemented as a framework to handle simultaneous processes of the system at the various frame rates each process requires. The framework was successfully used to develop collaborative virtual environments (VEs) involving geographically distributed users connected through a network, with the results comparable to VEs for local users.
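
    The decoupling idea, each process running at the frame rate it requires while sharing state, can be sketched with two threads; the rates and the toy physics update below are illustrative only, not the paper's framework.

```python
# Sketch of decoupled simulation loops sharing state: a fast "haptics" loop
# and a slower "render" loop, each at its own rate. Rates are illustrative;
# the paper's framework targets update rates above 1,000 Hz for haptics.
import threading
import time

state = {"force": 0.0}
lock = threading.Lock()
stop = threading.Event()

def run_loop(hz, update):
    while not stop.is_set():
        with lock:
            update(state)
        time.sleep(1.0 / hz)        # a real framework uses precise scheduling

def haptics_step(s):
    s["force"] = 0.99 * s["force"] + 0.01       # toy physics relaxation

def render_step(s):
    print(f"frame: force={s['force']:.3f}")     # slower view of shared state

threading.Thread(target=run_loop, args=(1000, haptics_step), daemon=True).start()
threading.Thread(target=run_loop, args=(30, render_step), daemon=True).start()
time.sleep(0.2)
stop.set()
```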

  2. Professional Ethics of Software Engineers: An Ethical Framework.

    Science.gov (United States)

    Lurie, Yotam; Mark, Shlomo

    2016-04-01

    The purpose of this article is to propose an ethical framework for software engineers that connects software developers' ethical responsibilities directly to their professional standards. The implementation of such an ethical framework can overcome the traditional dichotomy between professional skills and ethical skills, which plagues the engineering professions, by proposing an approach to the fundamental tasks of the practitioner, i.e., software development, in which the professional standards are intrinsically connected to the ethical responsibilities. In so doing, the ethical framework improves the practitioner's professionalism and ethics. We call this approach Ethical-Driven Software Development (EDSD), as an approach to software development. EDSD manifests the advantages of an ethical framework as an alternative to the all too familiar approach in professional ethics that advocates "stand-alone codes of ethics". We believe that one outcome of this synergy between professional and ethical skills is simply better engineers. Moreover, since there are often different software solutions, which the engineer can provide to an issue at stake, the ethical framework provides a guiding principle, within the process of software development, that helps the engineer evaluate the advantages and disadvantages of different software solutions. It does not and cannot affect the end-product in and of itself. However, it can and should make the software engineer more conscious and aware of the ethical ramifications of certain engineering decisions within the process.

  3. Framework Programmable Platform for the advanced software development workstation: Framework processor design document

    Science.gov (United States)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, Wes; Sanders, Les

    1991-01-01

    The design of the Framework Processor (FP) component of the Framework Programmable Software Development Platform (FPP) is described. The FPP is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by the model, this Framework Processor will take advantage of an integrated operating environment to provide automated support for the management and control of the software development process so that costly mistakes during the development phase can be eliminated.

  4. Design and Implement a MapReduce Framework for Executing Standalone Software Packages in Hadoop-based Distributed Environments

    Directory of Open Access Journals (Sweden)

    Chao-Chun Chen

    2013-12-01

    Hadoop MapReduce is a programming model for designing automatically scalable distributed computing applications. It provides developers with an effective environment for attaining automatic parallelization. However, most existing manufacturing systems are arduous and restrictive to migrate to a MapReduce private cloud, due to platform incompatibility and the tremendous complexity of system reconstruction. To increase the efficiency of manufacturing systems with minimum modification of existing systems, we design a framework in this thesis, called MC-Framework: Multi-uses-based Cloudizing-Application Framework. It provides a simple interface that lets users fairly execute requested tasks that work with traditional standalone software packages in MapReduce-based private cloud environments. Moreover, this thesis focuses on multiuser workloads, for which the default Hadoop scheduling scheme, i.e., FIFO, would increase delay. Hence, we also propose a new scheduling mechanism, called Job-Sharing Scheduling, to fairly share jobs among machines in the MapReduce-based private cloud. Then, we prototype an experimental virtual-metrology module of a manufacturing system as a case study to verify and analyze the proposed MC-Framework. The results of our experiments indicate that our proposed framework enormously improves time performance compared with the original package.
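
    The core pattern, running an unmodified standalone executable over many input shards in parallel, can be sketched with Python's multiprocessing standing in for Hadoop; the legacy command line and shard paths below are hypothetical.

```python
# Sketch of the "cloudizing" idea: fan an unmodified standalone package out
# over input shards, MapReduce-style. multiprocessing stands in for Hadoop,
# and ./legacy_tool is a hypothetical executable.
import subprocess
from multiprocessing import Pool

def map_task(shard_path):
    # Each map task feeds one shard to the legacy executable unchanged.
    out = subprocess.run(["./legacy_tool", "--input", shard_path],
                         capture_output=True, text=True, check=True)
    return shard_path, out.stdout

if __name__ == "__main__":
    shards = [f"data/shard_{i}.dat" for i in range(8)]
    with Pool(processes=4) as pool:
        results = dict(pool.map(map_task, shards))   # "reduce" step
    print(results)
```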

  5. Frameworks for Performing on Cloud Automated Software Testing Using Swarm Intelligence Algorithm: Brief Survey

    Directory of Open Access Journals (Sweden)

    Mohammad Hossain

    2018-04-01

    This paper surveys cloud-based automated software testing tools that are able to perform black-box testing, white-box testing, as well as unit and integration testing as a whole. In this paper, we discuss a few of the available automated software testing frameworks on the cloud. These frameworks are found to be more efficient and cost effective because they execute test suites over a distributed cloud infrastructure. One framework's effectiveness was attributed to having a module that accepts manual test cases from users and prioritizes them accordingly. Software testing, in general, accounts for as much as 50% of the total effort of a software development project. To lessen this effort, one of the frameworks discussed in this paper uses swarm intelligence algorithms: the Ant Colony Algorithm for complete path coverage to minimize time, and Bee Colony Optimization (BCO) for regression testing to ensure backward compatibility.

  6. A Software Data Transport Framework for Trigger Applications on Clusters

    CERN Document Server

    Steinbeck, T M; Tilsner, H; Steinbeck, Timm M.; Lindenstruth, Volker; Tilsner, Heinz

    2003-01-01

    In the future ALICE heavy-ion experiment at CERN's Large Hadron Collider, input data rates of up to 25 GB/s have to be handled by the High Level Trigger (HLT) system, which has to scale them down to at most 1.25 GB/s before they are written to permanent storage. The HLT system that is being designed to cope with these data rates consists of a large PC cluster, up to the order of 1000 nodes, connected by a fast network. For the software that will run on these nodes, a flexible data transport and distribution software framework has been developed. This framework consists of a set of separate components that can be connected via a common interface, allowing different configurations for the HLT to be constructed, which are even changeable at runtime. To ensure fault-tolerant operation of the HLT, the framework includes a basic fail-over mechanism that will be further expanded in the future, utilizing the runtime reconnection feature of the framework's component interface. First performance tests show very promising res...

  7. The NOvA software testing framework

    International Nuclear Information System (INIS)

    Tamsett, M; Group, C

    2015-01-01

    The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study νe appearance in a νμ beam. NOvA has already produced more than one million Monte Carlo and detector-generated files amounting to more than 1 PB in size. This data is divided between a number of parallel streams such as far and near detector beam spills, cosmic ray backgrounds, a number of data-driven triggers and over 20 different Monte Carlo configurations. Each of these data streams must be processed through the appropriate steps of the rapidly evolving, multi-tiered, interdependent NOvA software framework. In total there are more than 12 individual software tiers, each of which performs a different function and can be configured differently depending on the input stream. In order to regularly test and validate that all of these software stages are working correctly, NOvA has designed a powerful, modular testing framework that enables detailed validation and benchmarking to be performed in a fast, efficient and accessible way with minimal expert knowledge. The core of this system is a novel series of python modules which wrap, monitor and handle the underlying C++ software framework and then report the results to a slick front-end web-based interface. This interface utilises modern, cross-platform visualisation libraries to render the test results in a meaningful way. They are fast and flexible, allowing for the easy addition of new tests and datasets. In total, upwards of 14 individual streams are regularly tested, amounting to over 70 individual software processes and producing over 25 GB of output files. The rigour enforced through this flexible testing framework enables NOvA to rapidly verify configurations, results and software and thus ensure that data is available for physics analysis in a timely and robust manner. (paper)
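
    The wrap-monitor-report pattern described here can be sketched as follows; the tier names and commands are invented stand-ins for the real NOvA tiers.

```python
# Sketch of the wrap-and-monitor pattern: each software tier runs as a
# subprocess, is timed and checked, and the results feed a front end.
# Tier names and commands are invented stand-ins for the real NOvA tiers.
import subprocess
import time

TIERS = [
    ("reconstruction", ["echo", "reco ok"]),
    ("calibration", ["echo", "calib ok"]),
]

def run_tier(name, cmd):
    t0 = time.perf_counter()
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return {
        "tier": name,
        "passed": proc.returncode == 0,
        "seconds": round(time.perf_counter() - t0, 3),
        "output": proc.stdout.strip(),
    }

for result in (run_tier(n, c) for n, c in TIERS):
    print(result)   # a web front end would render these records instead
```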

  8. A Conceptual Framework for Lean Regulated Software Development

    DEFF Research Database (Denmark)

    Cawley, Oisin; Richardson, Ita; Wang, Xiaofeng

    2015-01-01

    ...for software development within a regulated environment? This poster presents the results of our empirical research into lean and regulated software development. Built from a combination of data sources, we have developed a conceptual framework comprising five primary components. In addition, the relationships they have with both the central focus of the framework (the situated software development practices) and with each other are indicated.

  9. COMDES-II: A Component-Based Framework for Generative Development of Distributed Real-Time Control Systems

    DEFF Research Database (Denmark)

    Ke, Xu; Sierszecki, Krzysztof; Angelov, Christo K.

    2007-01-01

    The paper presents a generative development methodology and component models of COMDES-II, a component-based software framework for distributed embedded control systems with real-time constraints. The adopted methodology allows for rapid modeling and validation of control software at a higher level of abstraction. The paper introduces the development methodology for COMDES-II from a general perspective, describes the component models in detail and demonstrates their application through a DC-motor control system case study.

  10. Software Engineering Frameworks: Textbooks vs. Student Perceptions

    Science.gov (United States)

    McMaster, Kirby; Hadfield, Steven; Wolthuis, Stuart; Sambasivam, Samuel

    2012-01-01

    This research examines the frameworks used by Computer Science and Information Systems students at the conclusion of their first semester of study of Software Engineering. A questionnaire listing 64 Software Engineering concepts was given to students upon completion of their first Software Engineering course. This survey was given to samples of…

  11. The user's manual of 'Manyo Library' data reduction software framework at MLF, J-PARC

    International Nuclear Information System (INIS)

    Inamura, Yasuhiro; Nakatani, Takeshi; Ito, Takayoshi; Suzuki, Jiro

    2016-06-01

    Manyo Library is a software framework for developing analysis software for neutron scattering data produced at MLF, J-PARC. This software framework is required to work on many instruments in MLF and to include base functions applied to various scientific purposes at beam lines. The framework mainly consists of data containers, which can store 1-, 2- and 3-dimensional axis data for neutron scattering. Data containers provide many functions: they carry out the four arithmetic operations with error propagation between containers, store the meta-data about measurements, and read or write text files. Analysis codes are constructed using the various analysis operators defined in Manyo Library, which execute functions on given data containers and output the results. On the other hand, the main interface for instrument scientists and users must be easy and interactive for treating data containers and functions and for developing new analysis codes; therefore we chose Python as the user interface. Since Manyo Library is built in C++, we introduced into the framework the technology for calling C++ functions from the Python environment. As a result, we have already developed a lot of software for data reduction, analysis and visualization, which is utilized widely at MLF beam lines. This document is a manual for beginners starting out with this framework. (author)
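
    A one-dimensional sketch of such a container, assuming independent Gaussian bin errors and standard propagation rules, is shown below; this is an illustration, not Manyo Library's actual API.

```python
# Sketch of a Manyo-style 1D data container that propagates statistical
# errors through arithmetic, assuming independent bin-wise Gaussian errors.
import numpy as np

class ElementContainer:
    def __init__(self, x, y, e):
        self.x, self.y, self.e = map(np.asarray, (x, y, e))

    def __add__(self, other):        # sigma = sqrt(e1^2 + e2^2)
        return ElementContainer(self.x, self.y + other.y,
                                np.hypot(self.e, other.e))

    def __truediv__(self, other):    # relative errors add in quadrature
        y = self.y / other.y
        e = np.abs(y) * np.hypot(self.e / self.y, other.e / other.y)
        return ElementContainer(self.x, y, e)

a = ElementContainer([1, 2], [10.0, 20.0], [1.0, 2.0])
b = ElementContainer([1, 2], [5.0, 4.0], [0.5, 0.4])
print((a / b).e)   # propagated errors of the bin-wise ratio
```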

  12. Hierarchy Software Development Framework (h-dp-fwk) project

    International Nuclear Information System (INIS)

    Zaytsev, A

    2010-01-01

    Hierarchy Software Development Framework provides a lightweight tool for building portable modular applications for performing automated data analysis tasks in batch mode. The design and development activities devoted to the project began in March 2005, and from the very beginning the project targeted the case of building experimental data processing applications for the CMD-3 experiment, which is being commissioned at the Budker Institute of Nuclear Physics (BINP, Novosibirsk, Russia). Its design addresses the generic case of a modular data processing application operating within a well defined distributed computing environment. The main features of the framework are modularity, built-in message and data exchange mechanisms, XInclude and XML schema enabled XML configuration management tools, dedicated log management tools, internal debugging tools, support for both dynamic and static module chains, internal DSO version and consistency checking, and a well defined API for developing specialized frameworks. It is supported on Scientific Linux 4 and 5 and is planned to be ported to other platforms as well. The project is provided with a comprehensive set of technical documentation and users' guides. The licensing schema for the source code, binaries and documentation implies that the product is free for non-commercial use. Although the development phase is not over and many features are yet to be implemented, the project is considered ready for public use and for creating applications in various fields, including the development of event reconstruction software for small and moderate scale HEP experiments.
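
    A static module chain driven by an XML configuration, in the spirit described above, might be sketched as follows; the tag and module names are hypothetical, not h-dp-fwk's actual schema.

```python
# Sketch of building a static module chain from an XML configuration, in the
# spirit of h-dp-fwk. Tag and module names are hypothetical.
import xml.etree.ElementTree as ET

CONFIG = """
<chain>
  <module name="unpack"/>
  <module name="calibrate"/>
</chain>
"""

MODULES = {
    "unpack":    lambda data: data + ["unpacked"],
    "calibrate": lambda data: data + ["calibrated"],
}

def build_chain(xml_text):
    root = ET.fromstring(xml_text)
    return [MODULES[m.get("name")] for m in root.findall("module")]

data = []
for step in build_chain(CONFIG):
    data = step(data)      # each module consumes and extends the event data
print(data)                # ['unpacked', 'calibrated']
```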

  13. Hierarchy Software Development Framework (h-dp-fwk) project

    Energy Technology Data Exchange (ETDEWEB)

    Zaytsev, A, E-mail: Alexander.S.Zaytsev@gmail.co [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation)

    2010-04-01

    Hierarchy Software Development Framework provides a lightweight tool for building portable modular applications for performing automated data analysis tasks in batch mode. The design and development activities devoted to the project began in March 2005, and from the very beginning the project targeted the case of building experimental data processing applications for the CMD-3 experiment, which is being commissioned at the Budker Institute of Nuclear Physics (BINP, Novosibirsk, Russia). Its design addresses the generic case of a modular data processing application operating within a well defined distributed computing environment. The main features of the framework are modularity, built-in message and data exchange mechanisms, XInclude and XML schema enabled XML configuration management tools, dedicated log management tools, internal debugging tools, support for both dynamic and static module chains, internal DSO version and consistency checking, and a well defined API for developing specialized frameworks. It is supported on Scientific Linux 4 and 5 and is planned to be ported to other platforms as well. The project is provided with a comprehensive set of technical documentation and users' guides. The licensing schema for the source code, binaries and documentation implies that the product is free for non-commercial use. Although the development phase is not over and many features are yet to be implemented, the project is considered ready for public use and for creating applications in various fields, including the development of event reconstruction software for small and moderate scale HEP experiments.

  14. Development of a software framework for data assimilation and its applications for streamflow forecasting in Japan

    Science.gov (United States)

    Noh, S. J.; Tachikawa, Y.; Shiiba, M.; Yorozu, K.; Kim, S.

    2012-04-01

    Data assimilation methods have received increased attention for accomplishing uncertainty assessment and enhancing forecasting capability in various areas. Despite their potential, software frameworks applicable to probabilistic approaches and data assimilation are still limited because most hydrologic modeling software is based on a deterministic approach. In this study, we developed a hydrological modeling framework for sequential data assimilation, called MPI-OHyMoS. MPI-OHyMoS allows users to develop their own element models and to easily build a total simulation system model for hydrological simulations. Unlike a process-based modeling framework, this software framework benefits from its object-oriented design to flexibly represent hydrological processes without any change to the main library. Sequential data assimilation based on particle filters is available for any hydrologic model built on MPI-OHyMoS, considering various sources of uncertainty originating from input forcing, parameters and observations. The particle filters are a Bayesian learning process in which the propagation of all uncertainties is carried out by a suitable selection of randomly generated particles without any assumptions about the nature of the distributions. In MPI-OHyMoS, ensemble simulations are parallelized, which can take advantage of high performance computing (HPC) systems. We applied this software framework to short-term streamflow forecasting for several catchments in Japan using a distributed hydrologic model. Uncertainty in model parameters and in remotely-sensed rainfall data such as X-band or C-band radar is estimated and mitigated in the sequential data assimilation.
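
    The assimilation step described here, weighting particles by the observation likelihood and then resampling, can be sketched in a few lines; the state model and numbers are purely illustrative.

```python
# Minimal particle-filter step (predict, weight, resample) of the kind used
# for sequential data assimilation; the model and numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
particles = rng.normal(10.0, 2.0, n)        # prior ensemble of states

def assimilate(particles, obs, obs_sigma=1.0):
    # Predict: propagate each particle through a toy model with noise.
    particles = particles * 0.95 + rng.normal(0.0, 0.5, n)
    # Weight: likelihood of the observation given each particle.
    w = np.exp(-0.5 * ((obs - particles) / obs_sigma) ** 2)
    w /= w.sum()
    # Resample: draw particles in proportion to their weights.
    return rng.choice(particles, size=n, p=w)

for obs in [9.1, 8.7, 8.2]:                 # assimilate a stream of observations
    particles = assimilate(particles, obs)
print(particles.mean())                     # posterior state estimate
```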

  15. Teamwork in Distributed Agile Software Development

    OpenAIRE

    Gurram, Chaitanya; Bandi, Srinivas Goud

    2013-01-01

    Context: Distributed software development has become a much desired way of developing software. The application of agile development methodologies in distributed environments has become a new trend in developing software due to its benefits of improved communication and collaboration. Teamwork is an important concept that agile methodologies facilitate, and it is one of the potential determinants of team performance that has not been a focus of study in distributed agile software development. Objectives: This res...

  16. A Configurable, Object-Oriented, Transportation System Software Framework

    Energy Technology Data Exchange (ETDEWEB)

    KELLY,SUZANNE M.; MYRE,JOHN W.; PRICE,MARK H.; RUSSELL,ERIC D.; SCOTT,DAN W.

    2000-08-01

    The Transportation Surety Center, 6300, has been conducting continuing research into and development of information systems for the Configurable Transportation Security and Information Management System (CTSS) project, an Object-Oriented Framework approach that uses Component-Based Software Development to facilitate rapid deployment of new systems while improving software cost containment, development reliability, compatibility, and extensibility. The direction has been to develop a Fleet Management System (FMS) framework using object-oriented technology. The goal for the current development is to provide a software and hardware environment that will demonstrate and support object-oriented development commonly in the FMS Central Command Center and Vehicle domains.

  17. An Interoperability Framework and Capability Profiling for Manufacturing Software

    Science.gov (United States)

    Matsuda, M.; Arai, E.; Nakano, N.; Wakai, H.; Takeda, H.; Takata, M.; Sasaki, H.

    ISO/TC184/SC5/WG4 is working on ISO16100: Manufacturing software capability profiling for interoperability. This paper reports on a manufacturing software interoperability framework and a capability profiling methodology which were proposed and developed through this international standardization activity. Within the context of a manufacturing application, a manufacturing software unit is considered to be capable of performing a specific set of functions defined by a manufacturing software system architecture. A manufacturing software interoperability framework consists of a set of elements and rules for describing the capability of software units to support the requirements of a manufacturing application. The capability profiling methodology makes use of the domain-specific attributes and methods associated with each specific software unit to describe capability profiles in terms of unit name, manufacturing functions, and other needed class properties. In this methodology, manufacturing software requirements are expressed in terms of software unit capability profiles.

  18. FUZZY LOGIC BASED SOFTWARE PROCESS IMPROVIZATION FRAMEWORK FOR INDIAN SMALL SCALE SOFTWARE ORGANIZATIONS

    OpenAIRE

    A.M.Kalpana; Dr.A.Ebenezer Jeyakumar

    2010-01-01

    In this paper, the authors elaborate the results obtained after analyzing and assessing the software process activities in five small to medium sized Indian software companies. This work demonstrates a cost effective framework for software process appraisal, specifically targeted at Indian software Small-to-Medium-sized Enterprises (SMEs). Improvisation deals with the unforeseen. It involves continual experimentation with new possibilities to create innovative and improved solutions outside cu...

  19. ALFA: The new ALICE-FAIR software framework

    Science.gov (United States)

    Al-Turany, M.; Buncic, P.; Hristov, P.; Kollegger, T.; Kouzinopoulos, C.; Lebedev, A.; Lindenstruth, V.; Manafov, A.; Richter, M.; Rybalchenko, A.; Vande Vyvre, P.; Winckler, N.

    2015-12-01

    The commonalities between the ALICE and FAIR experiments and their computing requirements led to the development of large parts of a common software framework in an experiment-independent way. The FairRoot project has already shown the feasibility of such an approach for the FAIR experiments and of extending it beyond FAIR to experiments at other facilities [1, 2]. The ALFA framework is a joint development between the ALICE Online-Offline (O2) and FairRoot teams. ALFA is designed as a flexible, elastic system, which balances reliability and ease of development with performance using multi-processing and multi-threading. A message-based approach has been adopted; such an approach will support the use of the software on different hardware platforms, including heterogeneous systems. Each process in ALFA assumes limited communication and reliance on other processes. Such a design will add horizontal scaling (multiple processes) to the vertical scaling provided by multiple threads to meet computing and throughput demands. ALFA does not dictate any application protocols. Potentially, any content-based processor or any source can change the application protocol. The framework supports different serialization standards for data exchange between different hardware and software languages.
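
    The message-based design, independent processes coupled only by the messages they exchange, can be sketched with Python's multiprocessing standing in for ALFA's actual transport layer.

```python
# Sketch of ALFA-style message passing: independent processes linked only by
# messages, so horizontal scaling means starting more of them.
# multiprocessing stands in for the framework's real transport layer.
from multiprocessing import Process, Queue

def producer(out_q):
    for i in range(5):
        out_q.put({"event": i})     # processes share no state, only messages
    out_q.put(None)                 # end-of-stream marker

def worker(in_q):
    while (msg := in_q.get()) is not None:
        print("processed", msg["event"])

if __name__ == "__main__":
    q = Queue()
    procs = [Process(target=producer, args=(q,)),
             Process(target=worker, args=(q,))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```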

  20. Software engineering frameworks for the cloud computing paradigm

    CERN Document Server

    Mahmood, Zaigham

    2013-01-01

    This book presents the latest research on Software Engineering Frameworks for the Cloud Computing Paradigm, drawn from an international selection of researchers and practitioners. The book offers both a discussion of relevant software engineering approaches and practical guidance on enterprise-wide software deployment in the cloud environment, together with real-world case studies. Features: presents the state of the art in software engineering approaches for developing cloud-suitable applications; discusses the impact of the cloud computing paradigm on software engineering; offers guidance an

  1. Evolution of the ATLAS Software Framework towards Concurrency

    CERN Document Server

    Jones, Roger; The ATLAS collaboration; Leggett, Charles; Wynne, Benjamin

    2015-01-01

    The ATLAS experiment has successfully used its Gaudi/Athena software framework for data taking and analysis during the first LHC run, with billions of events successfully processed. However, the design of Gaudi/Athena dates from the early 2000s, and the software and the physics code have been written using a single-threaded, serial design. This programming model has increasing difficulty in exploiting the potential of current CPUs, which offer their best performance only through taking full advantage of multiple cores and wide vector registers. Future CPU evolution will intensify this trend, with core counts increasing and memory per core falling. Maximising performance per watt will be a key metric, so all of these cores must be used as efficiently as possible. In order to address the deficiencies of the current framework, ATLAS has embarked upon two projects: first, a practical demonstration of the use of multi-threading in our reconstruction software, using the GaudiHive framework; second, an exercise to gather r...

  2. A framework to integrate software behavior into dynamic probabilistic risk assessment

    International Nuclear Information System (INIS)

    Zhu Dongfeng; Mosleh, Ali; Smidts, Carol

    2007-01-01

    Software plays an increasingly important role in modern safety-critical systems. Although research has been done to integrate software into the classical probabilistic risk assessment (PRA) framework, current PRA practice overwhelmingly neglects the contribution of software to system risk. Dynamic probabilistic risk assessment (DPRA) is considered to be the next generation of PRA techniques. DPRA is a set of methods and techniques in which simulation models that represent the behavior of the elements of a system are exercised in order to identify risks and vulnerabilities of the system. The fact remains, however, that modeling software for use in the DPRA framework is also quite complex, and very little has been done to address the question directly and comprehensively. This paper develops a methodology to integrate software contributions into the DPRA environment. The framework includes a software representation and an approach to incorporate the software representation into the DPRA environment SimPRA. The software representation is based on multi-level objects, and the paper also proposes a framework to simulate the multi-level objects in the simulation-based DPRA environment. This is a new methodology to address the state explosion problem in the DPRA environment. This study is the first systematic effort to integrate software risk contributions into DPRA environments.

  3. Understanding flexible and distributed software development processes

    OpenAIRE

    Agerfalk, Par J.; Fitzgerald, Brian

    2006-01-01

    The minitrack on Flexible and Distributed Software Development Processes addresses two important and partially intertwined current themes in software development: process flexibility and globally distributed software development.

  4. Requisite Information Collaboration and Distributed Knowledge Management in Software Development

    DEFF Research Database (Denmark)

    Petersen, Mogens K.; Bjørn, Pernille; Frank, L.

    ...distributed knowledge management product state models. The paper draws upon a series of discussions with Scandinavian IT Group (SIG). With an interest in how performance in their new organization develops, SIG invited the research group to study measures of organizational performance and the use and effect of knowledge management tools in software development. The paper does not represent the viewpoint of SIG but outlines our framework and major research questions.

  5. Software agent Technology: A Framework for Minimizing Fraud in ...

    African Journals Online (AJOL)

    Software agent Technology: A Framework for Minimizing Fraud in Postpaid Billing Systems. ... Journal of Research in National Development ... to the traditional Object-oriented Software engineering methodology was used to come up with this ...

  6. Holistic Framework For Establishing Interoperability of Heterogeneous Software Development Tools

    National Research Council Canada - National Science Library

    Puett, Joseph

    2003-01-01

    This dissertation presents a Holistic Framework for Software Engineering (HFSE) that establishes collaborative mechanisms by which existing heterogeneous software development tools and models will interoperate...

  7. A framework for evaluating distributed control systems in nuclear power plants

    International Nuclear Information System (INIS)

    O'Donell, C.; Jiang, J.

    2004-01-01

    A framework for evaluating the use of distributed control systems (DCS) in nuclear power plants (NPP) is proposed in this paper. The framework consists of advanced communication, control, hardware and software technology. This paper presents the results of an experiment using the framework test-bench and elaborates on a variety of other research possibilities. Using a hardware-in-the-loop (HIL) system, a DeltaV M3 controller from Emerson Process is connected to a desktop NPP simulator. The industry-standard communication protocol Modbus has been selected for this study. A simplified boiler pressure control (BPC) module is created on the NPP simulator. The test-bench provides an interface between the controller and the simulator. Through software monitoring, the performance of the DCS can be evaluated. Controller access and response times over the Modbus network are observed and compared with theoretical values. The controller accomplishes its task within the specifications set out for the BPC. This novel framework allows a performance metric to be applied to different industrial controllers. (author)

  8. A novel optimal distribution system planning framework implementing distributed generation in a deregulated electricity market

    International Nuclear Information System (INIS)

    Porkar, S.; Poure, P.; Abbaspour-Tehrani-fard, A.; Saadate, S.

    2010-01-01

    This paper introduces a new framework, including a mathematical model and a new software package interfacing two powerful software tools (MATLAB and GAMS), for obtaining the optimal distributed generation (DG) capacity sizing and siting investments, with the capability to simulate large distribution system planning. The proposed optimization model allows minimizing total system planning costs for DG investment, DG operation and maintenance, purchase of power by the distribution companies (DISCOs) from transmission companies (TRANSCOs), and system power losses. The proposed model provides not only the DG size and site but also the new market price as well. Three different cases depending on system conditions, and three different scenarios depending on different planning alternatives and electrical market structures, have been considered. They have allowed validating the economical and electrical benefits of introducing DG by solving the distribution system planning problem and by improving the power quality of the distribution system. DG installation increases the feeders' lifetime by reducing their loading and adds the benefit of using the existing distribution system for further load growth without the need for feeder upgrades. Moreover, by investing in DG, the DISCO can minimize its total planning cost and reduce its customers' bills. (author)

  9. A novel optimal distribution system planning framework implementing distributed generation in a deregulated electricity market

    Energy Technology Data Exchange (ETDEWEB)

    Porkar, S. [Department of Electrical Engineering, Sharif University of Technology, Tehran (Iran); Groupe de Recherches en Electrotechnique et Electronique de Nancy, GREEN-UHP, Universite Henri Poincare de Nancy I, BP 239, 54506 Vandoeuvre les Nancy Cedex (France); Poure, P. [Laboratoire d' Instrumentation Electronique de Nancy, LIEN, EA 3440, Universite Henri Poincare de Nancy I, BP 239, 54506 Vandoeuvre les Nancy Cedex (France); Abbaspour-Tehrani-fard, A. [Department of Electrical Engineering, Sharif University of Technology, Tehran (Iran); Saadate, S. [Groupe de Recherches en Electrotechnique et Electronique de Nancy, GREEN-UHP, Universite Henri Poincare de Nancy I, BP 239, 54506 Vandoeuvre les Nancy Cedex (France)

    2010-07-15

    This paper introduces a new framework, comprising a mathematical model and a new software package interfacing two powerful software tools (MATLAB and GAMS), for obtaining the optimal distributed generation (DG) capacity sizing and siting investments, with the capability to simulate large distribution system planning. The proposed optimization model allows minimizing total system planning costs for DG investment, DG operation and maintenance, purchase of power by the distribution companies (DISCOs) from transmission companies (TRANSCOs), and system power losses. The proposed model provides not only the DG size and site but also the new market price. Three different cases depending on system conditions, and three different scenarios depending on different planning alternatives and electricity market structures, have been considered. They have allowed validating the economic and electrical benefits of introducing DG by solving the distribution system planning problem and by improving the power quality of the distribution system. DG installation increases the feeders' lifetime by reducing their loading and adds the benefit of using the existing distribution system for further load growth without the need for feeder upgrades. Moreover, by investing in DG, the DISCO can minimize its total planning cost and reduce its customers' bills. (author)

  10. The proposal of a novel software testing framework

    OpenAIRE

    Ahmad, Munib; Bajaber, Fuad; Qureshi, M. Rizwan Jameel

    2014-01-01

    Software testing is normally used to check the validity of a program. A test oracle performs an important role in software testing. The focus of this research is to perform class-level testing by introducing a testing framework. A technique is developed to generate a test oracle for specification-based software testing using the Vienna Development Method (VDM++) formal language. A three-stage translation process, of VDM++ specifications of container classes to C++ test oracle classes, is described in th...

  11. The control software framework of the web base

    International Nuclear Information System (INIS)

    Nakatani, Takeshi; Inamura, Yasuhiro; Ito, Takayoshi; Otomo, Toshiya

    2015-01-01

    Web browsers are one of the most platform-independent user interfaces. In particular, web pages created using responsive web design (RWD) are available for use on desktop and laptop computers, as well as tablet terminals and smart phones. We developed a common software framework, IROHA, for the instrument control system in the Materials and Life Science Experimental Facility at the Japan Proton Accelerator Research Complex to build a flexible and scalable system by adopting XML/HTTP. However, its user interface was platform-dependent, and we wanted it to be more user-friendly. In 2013, we developed the prototype of a new software framework, IROHA2, comprising several device control servers and an instrument management server, retaining the flexibility and scalability of IROHA. We also adopted the Bootstrap framework to create an RWD user interface for these servers. (author)

  12. A Survey of Software Infrastructures and Frameworks for Ubiquitous Computing

    Directory of Open Access Journals (Sweden)

    Christoph Endres

    2005-01-01

    In this survey, we discuss 29 software infrastructures and frameworks which support the construction of distributed interactive systems. They range from small projects with one implemented prototype to large-scale research efforts, and they come from the fields of Augmented Reality (AR), Intelligent Environments, and Distributed Mobile Systems. In their own way, they can all be used to implement various aspects of the ubiquitous computing vision as described by Mark Weiser [60]. This survey is meant as a starting point for new projects, in order to choose an existing infrastructure for reuse, or to get an overview before designing a new one. It tries to provide a systematic, relatively broad (and necessarily not very deep) overview, while pointing to relevant literature for in-depth study of the systems discussed.

  13. A framework for distributed mixed-language scientific applications

    International Nuclear Information System (INIS)

    Quarrie, D.R.

    1996-01-01

    The Object Management Group has defined an architecture (CORBA) for distributed object applications based on an Object Broker and an Interface Definition Language. This project builds upon this architecture to establish a framework for the creation of mixed-language scientific applications. A prototype compiler has been written that generates FORTRAN 90 or Eiffel stubs and skeletons and the required C++ glue code from an input IDL file that specifies object interfaces. This generated code can be used directly for non-distributed mixed-language applications or in conjunction with the C++ code generated from a commercial IDL compiler for distributed applications. A feasibility study is presently under way to see whether a fully integrated software development environment for distributed, mixed-language applications can be created by modifying the back-end code generator of a commercial CASE tool to emit IDL. (author)

  14. Problem Solving Frameworks for Mathematics and Software Development

    Science.gov (United States)

    McMaster, Kirby; Sambasivam, Samuel; Blake, Ashley

    2012-01-01

    In this research, we examine how problem solving frameworks differ between Mathematics and Software Development. Our methodology is based on the assumption that the words used frequently in a book indicate the mental framework of the author. We compared word frequencies in a sample of 139 books that discuss problem solving. The books were grouped…

  15. Knowledge coordination in distributed software management

    DEFF Research Database (Denmark)

    Persson, John Stouby; Mathiassen, Lars

    2012-01-01

    Software organizations are increasingly relying on cross-organizational and cross-border collaboration, requiring effective coordination of distributed knowledge. However, such coordination is challenging due to spatial separation, diverging communities-of-practice, and unevenly distributed...... communication breakdowns on recordings of their combined teleconferencing and real-time collaborative modeling. As a result, we offer theoretical propositions that explain how distributed software managers can deal with communication breakdowns and effectively coordinate knowledge through multimodal virtual...

  16. A Software Framework for Multimodal Human-Computer Interaction Systems

    NARCIS (Netherlands)

    Shen, Jie; Pantic, Maja

    2009-01-01

    This paper describes a software framework we designed and implemented for the development and research in the area of multimodal human-computer interface. The proposed framework is based on publish / subscribe architecture, which allows developers and researchers to conveniently configure, test and

  17. A software framework for real-time multi-modal detection of microsleeps.

    Science.gov (United States)

    Knopp, Simon J; Bones, Philip J; Weddell, Stephen J; Jones, Richard D

    2017-09-01

    A software framework is described which was designed to process EEG, video of one eye, and head movement in real time, towards achieving early detection of microsleeps for prevention of fatal accidents, particularly in transport sectors. The framework is based around a pipeline structure with user-replaceable signal processing modules. This structure can encapsulate a wide variety of feature extraction and classification techniques and can be applied to detecting a variety of aspects of cognitive state. Users of the framework can implement signal processing plugins in C++ or Python. The framework also provides a graphical user interface and the ability to save and load data to and from arbitrary file formats. Two small studies are reported which demonstrate the capabilities of the framework in typical applications: monitoring eye closure and detecting simulated microsleeps. While specifically designed for microsleep detection/prediction, the software framework can be just as appropriately applied to (i) other measures of cognitive state and (ii) development of biomedical instruments for multi-modal real-time physiological monitoring and event detection in intensive care, anaesthesiology, cardiology, neurosurgery, etc. The software framework has been made freely available for researchers to use and modify under an open source licence.
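
    Because the framework accepts signal-processing plugins written in Python, a minimal sketch of the pipeline-with-replaceable-stages idea may help; the stage names and interfaces below are invented for illustration and are not the framework's actual API.

      # Illustrative sketch only: stage names and interfaces are invented,
      # not the actual plugin API of the framework described above.
      from abc import ABC, abstractmethod

      class Stage(ABC):
          """A user-replaceable signal-processing module in the pipeline."""
          @abstractmethod
          def process(self, sample: dict) -> dict: ...

      class BandpassFilter(Stage):
          def process(self, sample):
              # stand-in for a real EEG filter
              sample["eeg"] = [0.9 * x for x in sample["eeg"]]
              return sample

      class EyeClosureDetector(Stage):
          def process(self, sample):
              sample["eyes_closed"] = sample["eye_aperture"] < 0.2
              return sample

      class Pipeline:
          def __init__(self, stages):
              self.stages = stages

          def run(self, sample):
              for stage in self.stages:  # each stage can be swapped independently
                  sample = stage.process(sample)
              return sample

      pipeline = Pipeline([BandpassFilter(), EyeClosureDetector()])
      print(pipeline.run({"eeg": [1.0, 2.0], "eye_aperture": 0.1}))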

  18. Distribution and communication in software engineering environments. Application to the HELIOS Software Bus.

    OpenAIRE

    Jean, F. C.; Jaulent, M. C.; Coignard, J.; Degoulet, P.

    1991-01-01

    Modularity, distribution and integration are current trends in Software Engineering. To reach these goals, HELIOS, a distributed Software Engineering Environment dedicated to the medical field, has been conceived and a prototype implemented. This environment is made up of several well-encapsulated, collaborating Software Components. This paper presents the architecture retained to allow communication between the different components and focuses on the implementation details of the Software ...

  19. The Five 'R's' for Developing Trusted Software Frameworks to increase confidence in, and maximise reuse of, Open Source Software.

    Science.gov (United States)

    Fraser, Ryan; Gross, Lutz; Wyborn, Lesley; Evans, Ben; Klump, Jens

    2015-04-01

    Recent investments in HPC, cloud and petascale data stores have dramatically increased the scale and resolution at which earth science challenges can now be tackled. These new infrastructures are highly parallelised, and to fully utilise them and access the large volumes of earth science data now available, a new approach to software stack engineering needs to be developed. The size, complexity and cost of the new infrastructures mean any software deployed has to be reliable, trusted and reusable. Increasingly software is available via open source repositories, but these usually only enable code to be discovered and downloaded. As a user it is hard for a scientist to judge the suitability and quality of individual codes: rarely is there information on how and where codes can be run, what the critical dependencies are, and in particular, on the version requirements and licensing of the underlying software stack. A trusted software framework is proposed to enable reliable software to be discovered, accessed and then deployed on multiple hardware environments. More specifically, this framework will enable those who generate the software, and those who fund the development of software, to gain credit for the effort, IP, time and dollars spent, and facilitate quantification of the impact of individual codes. For scientific users, the framework delivers reviewed and benchmarked scientific software with mechanisms to reproduce results. The trusted framework will have five separate, but connected components: Register, Review, Reference, Run, and Repeat. 1) The Register component will facilitate discovery of relevant software from multiple open source code repositories. The registration process should include information about licensing and the hardware environments the code can run on, define appropriate validation (testing) procedures, and list the critical dependencies. 2) The Review component targets verification of the software, typically against a set of

  20. Framework Programmable Platform for the Advanced Software Development Workstation (FPP/ASDW). Demonstration framework document. Volume 1: Concepts and activity descriptions

    Science.gov (United States)

    Mayer, Richard J.; Blinn, Thomas M.; Dewitte, Paul S.; Crump, John W.; Ackley, Keith A.

    1992-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at effectively combining tool and data integration mechanisms with a model of the software development process to provide an intelligent integrated software development environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to effectively automate the management of the software development process so that costly mistakes during the development phase can be eliminated. The Advanced Software Development Workstation (ASDW) program is conducting research into development of advanced technologies for Computer Aided Software Engineering (CASE).

  1. Framework programmable platform for the advanced software development workstation. Integration mechanism design document

    Science.gov (United States)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Reddy, Uday; Ackley, Keith; Futrell, Mike

    1991-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by this model, this system development framework will take advantage of an integrated operating environment to effectively automate the management of the software development process so that costly mistakes during the development phase can be eliminated.

  2. Command and Data Handling Flight Software test framework: A Radiation Belt Storm Probes practice

    Science.gov (United States)

    Hill, T. A.; Reid, W. M.; Wortman, K. A.

    During the Radiation Belt Storm Probes (RBSP) mission, a test framework was developed by the Embedded Applications Group in the Space Department at the Johns Hopkins Applied Physics Laboratory (APL). The test framework is implemented for verification of the Command and Data Handling (C&DH) Flight Software. The RBSP C&DH Flight Software consists of applications developed for use with Goddard Space Flight Center's core Flight Executive (cFE) architecture. The test framework's initial concept originated with tests developed for verification of the Autonomy rules that execute with the Autonomy Engine application of the RBSP C&DH Flight Software. The test framework was adopted and expanded for system and requirements verification of the RBSP C&DH Flight Software. During the evolution of the RBSP C&DH Flight Software test framework design, a set of script conventions and a script library were developed. The script conventions and library eased integration of system and requirements verification tests into a comprehensive automated test suite. The comprehensive test suite is currently being used to verify releases of the RBSP C&DH Flight Software. In addition to providing the details and benefits of the test framework, the discussion will include several lessons learned throughout the verification process of RBSP C&DH Flight Software. Our next mission, Solar Probe Plus (SPP), will use the cFE architecture for the C&DH Flight Software. SPP also plans to use the same ground system as RBSP. Many of the RBSP C&DH Flight Software applications are reusable on the SPP mission, therefore there is potential for test design and test framework reuse for system and requirements verification.

  3. An integrated framework for software vulnerability detection ...

    Indian Academy of Sciences (India)

    Manoj Kumar

    2017-07-15

    ... concern and intelligent framework and provides more secured ... In the present scenario, the software systems are being ... human. In human body, the autonomic nervous system ... such as artificial neural networks, genetic algorithm, grey ...

  4. Software framework for automatic learning of telescope operation

    Science.gov (United States)

    Rodríguez, Jose A.; Molgó, Jordi; Guerra, Dailos

    2016-07-01

    The "Gran Telescopio de Canarias" (GTC) is an optical-infrared 10-meter segmented mirror telescope at the ORM observatory in Canary Islands (Spain). The GTC Control System (GCS) is a distributed object and component oriented system based on RT-CORBA and it is responsible for the operation of the telescope, including its instrumentation. The current development state of GCS is mature and fully operational. On the one hand telescope users as PI's implement the sequences of observing modes of future scientific instruments that will be installed in the telescope and operators, in turn, design their own sequences for maintenance. On the other hand engineers develop new components that provide new functionality required by the system. This great work effort is possible to minimize so that costs are reduced, especially if one considers that software maintenance is the most expensive phase of the software life cycle. Could we design a system that allows the progressive assimilation of sequences of operation and maintenance of the telescope, through an automatic self-programming system, so that it can evolve from one Component oriented organization to a Service oriented organization? One possible way to achieve this is to use mechanisms of learning and knowledge consolidation to reduce to the minimum expression the effort to transform the specifications of the different telescope users to the operational deployments. This article proposes a framework for solving this problem based on the combination of the following tools: data mining, self-Adaptive software, code generation, refactoring based on metrics, Hierarchical Agglomerative Clustering and Service Oriented Architectures.

  5. A Framework for Teaching Software Development Methods

    Science.gov (United States)

    Dubinsky, Yael; Hazzan, Orit

    2005-01-01

    This article presents a study that aims at constructing a teaching framework for software development methods in higher education. The research field is a capstone project-based course, offered by the Technion's Department of Computer Science, in which Extreme Programming is introduced. The research paradigm is an Action Research that involves…

  6. Software metrics: Software quality metrics for distributed systems. [reliability engineering

    Science.gov (United States)

    Post, J. V.

    1981-01-01

    Software quality metrics were extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.

  7. PALNS - A software framework for parallel large neighborhood search

    DEFF Research Database (Denmark)

    Røpke, Stefan

    2009-01-01

    This paper proposes a simple, parallel, portable software framework for the metaheuristic named large neighborhood search (LNS). The aim is to provide a framework where the user has to set up a few data structures and implement a few functions, and then the framework provides a metaheuristic where ... parallelization "comes for free". We apply the parallel LNS heuristic to two different problems: the traveling salesman problem with pickup and delivery (TSPPD) and the capacitated vehicle routing problem (CVRP).

  8. Software reliability growth models with normal failure time distributions

    International Nuclear Information System (INIS)

    Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji

    2013-01-01

    This paper proposes software reliability growth models (SRGM) in which the software failure time follows a normal distribution. The proposed model is mathematically tractable and fits software failure data well. In particular, we consider the parameter estimation algorithm for the SRGM with normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is quite simple to implement as a software application. A numerical experiment investigates the fitting ability of the SRGMs with normal distribution using 16 sets of failure time data collected in real software projects
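
    As background, a common way to write such a model (our notation, not necessarily the paper's) is as an NHPP whose mean value function scales a normal distribution function:

      \Lambda(t) = \omega \, \Phi\!\left( \frac{t - \mu}{\sigma} \right),
      \qquad
      \Phi(z) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{z} e^{-u^{2}/2} \, du

    where \omega is the expected total number of faults and (\mu, \sigma) are the normal parameters; the EM algorithm then estimates (\omega, \mu, \sigma) from the observed failure times.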

  9. A penalized framework for distributed lag non-linear models.

    Science.gov (United States)

    Gasparrini, Antonio; Scheipl, Fabian; Armstrong, Ben; Kenward, Michael G

    2017-09-01

    Distributed lag non-linear models (DLNMs) are a modelling tool for describing potentially non-linear and delayed dependencies. Here, we illustrate an extension of the DLNM framework through the use of penalized splines within generalized additive models (GAM). This extension offers built-in model selection procedures and the possibility of accommodating assumptions on the shape of the lag structure through specific penalties. In addition, this framework includes, as special cases, simpler models previously proposed for linear relationships (DLMs). Alternative versions of penalized DLNMs are compared with each other and with the standard unpenalized version in a simulation study. Results show that this penalized extension to the DLNM class provides greater flexibility and improved inferential properties. The framework exploits recent theoretical developments of GAMs and is implemented using efficient routines within freely available software. Real-data applications are illustrated through two reproducible examples in time series and survival analysis. © 2017 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
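
    For orientation, the generic DLNM structure and its penalized fit can be written as follows (standard notation from this literature, not copied from the paper):

      g(\mu_t) = \alpha + \sum_{l=0}^{L} f(x_{t-l},\, l) + \text{confounders},
      \qquad
      f(x, l) = \sum_{j} \sum_{k} r_j(x) \, c_k(l) \, \beta_{jk}

      \hat{\beta} = \arg\min_{\beta} \left\{ -\ell(\beta)
          + \lambda_x \, \beta^{\top} S_x \beta
          + \lambda_l \, \beta^{\top} S_l \beta \right\}

    where r_j and c_k are spline bases over the exposure and lag dimensions, and S_x, S_l are penalty matrices with smoothing parameters \lambda_x, \lambda_l controlling the smoothness of the exposure-lag surface.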

  10. Compiling software for a hierarchical distributed processing system

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-12-31

    Compiling software for a hierarchical distributed processing system including: providing to one or more compiling nodes software to be compiled, wherein at least a portion of the software to be compiled is to be executed by one or more nodes; compiling, by the compiling node, the software; maintaining, by the compiling node, any compiled software to be executed on the compiling node; selecting, by the compiling node, one or more nodes in a next tier of the hierarchy of the distributed processing system in dependence upon whether any compiled software is for the selected node or the selected node's descendants; and sending to the selected node only the compiled software to be executed by the selected node or the selected node's descendants.
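
    A toy sketch of the selective forwarding the abstract describes may help; the tree layout, names and "binaries" below are hypothetical: each node keeps the binaries compiled for itself and forwards to a child only the software destined for that child's subtree.

      # Hypothetical sketch of the selective forwarding described above;
      # node names and "binaries" are placeholders.
      class Node:
          def __init__(self, name, children=()):
              self.name, self.children, self.binaries = name, list(children), []

          def members(self):
              yield self.name
              for child in self.children:
                  yield from child.members()

          def distribute(self, compiled):  # compiled: {target node name: binary}
              for target, binary in compiled.items():
                  if target == self.name:
                      self.binaries.append(binary)  # keep our own software
              for child in self.children:  # forward only what each subtree needs
                  subtree = set(child.members())
                  subset = {t: b for t, b in compiled.items() if t in subtree}
                  if subset:
                      child.distribute(subset)

      leaf1, leaf2 = Node("leaf1"), Node("leaf2")
      root = Node("root", [Node("mid", [leaf1, leaf2])])
      root.distribute({"root": "a.out", "leaf1": "b.out", "leaf2": "c.out"})
      print(leaf1.binaries, leaf2.binaries)  # ['b.out'] ['c.out']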

  11. STATIC CODE ANALYSIS FOR SOFTWARE QUALITY IMPROVEMENT: A CASE STUDY IN BCI FRAMEWORK DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Indar Sugiarto

    2008-01-01

    This paper shows how a systematic approach to software testing using the static code analysis method can be used to improve the software quality of a BCI framework. The method is best performed during the development phase of framework programs. In the proposed approach, we evaluate several software metrics which are based on the principles of object-oriented design. Since such a method depends on the underlying programming language, we describe the method in terms of the C++ language, with the Qt platform also currently being used. One of the most important metrics is the so-called software complexity. Applying the software complexity calculation, using both the McCabe and Halstead methods, to the BCI framework, which consists of two important types of BCI, namely SSVEP and P300, we found that there are two classes in the framework which are very complex and prone to violating the cohesion principle of OOP. The other metrics fit the criteria of the proposed framework aspects, such as: MPC is less than 20; average complexity is around a value of 5; and the maximum depth is below 10 blocks. Such variables are considered very important when further developing the BCI framework in the future.

  12. A Framework for Software Product Evaluation

    OpenAIRE

    Angeleri, Paula; Titiosky, Rolando; Ceballos, Jorge

    2016-01-01

    The objective of this article is to present the current situation and the progress made in the research project MyFEPS (Methodologies and Framework for Software Product Evaluation), developed at the School of Engineering and Information Technology of the Universidad de Belgrano. In this context, the transfer activities and adjustments to the Framework, products of the last phase of the project, are described.

  13. Active resources concept of computation for enterprise software

    Directory of Open Access Journals (Sweden)

    Koryl Maciej

    2017-06-01

    Traditional computational models for enterprise software are still to a great extent centralized. However, the rapid growth of modern computation techniques and frameworks means that contemporary software is becoming more and more distributed. Towards the development of a new complete and coherent solution for distributed enterprise software construction, a synthesis of three well-grounded concepts is proposed: the Domain-Driven Design technique of software engineering, the REST architectural style, and the actor model of computation. As a result a new resources-based framework arises, which after the first cases of use appears to be useful and worthy of further research.
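
    To make the synthesis concrete, here is a minimal, illustrative Python sketch (names and interfaces are our own assumptions, not the paper's framework): a domain entity whose state is owned by a single actor, exposed through a REST-like uniform interface, so that all mutations are serialized through the actor's mailbox.

      # Minimal illustrative sketch (our assumptions, not the paper's API):
      # a Domain-Driven Design entity owned by one actor and driven through
      # a REST-like uniform interface; the mailbox serializes all mutations.
      import queue, threading

      class OrderResource:
          def __init__(self):
              self.items = []  # domain state, touched only by the actor thread
              self.mailbox = queue.Queue()
              threading.Thread(target=self._run, daemon=True).start()

          def _run(self):
              while True:  # actor model: handle one message at a time
                  verb, payload, reply = self.mailbox.get()
                  if verb == "POST":
                      self.items.append(payload)
                      reply.put({"status": 201})
                  elif verb == "GET":  # REST-style uniform interface
                      reply.put({"status": 200, "body": list(self.items)})

          def request(self, verb, payload=None):
              reply = queue.Queue()
              self.mailbox.put((verb, payload, reply))
              return reply.get()

      order = OrderResource()
      order.request("POST", "widget")
      print(order.request("GET"))  # {'status': 200, 'body': ['widget']}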

  14. Software Distribution Statement and Disclaimer | OSTI, US Dept of Energy

    Science.gov (United States)

    Software Distribution Statement and Disclaimer. Rights-in-technical-data clauses for many ... The following distribution statement and disclaimer meet those requirements for software and should be affixed to all distributed DOE-sponsored software. Contractors may have specific requirements and required

  15. Fostering Multirepresentational Levels of Chemical Concepts: A Framework to Develop Educational Software

    Science.gov (United States)

    Marson, Guilherme A.; Torres, Bayardo B.

    2011-01-01

    This work presents a convenient framework for developing interactive chemical education software to facilitate the integration of macroscopic, microscopic, and symbolic dimensions of chemical concepts--specifically, via the development of software for gel permeation chromatography. The instructional role of the software was evaluated in a study…

  16. NHPP-Based Software Reliability Models Using Equilibrium Distribution

    Science.gov (United States)

    Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi

    Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
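
    The equilibrium distribution invoked above is a standard construct from renewal theory: for a fault-detection time distribution F with finite mean \mu,

      F_e(t) = \frac{1}{\mu} \int_{0}^{t} \{ 1 - F(s) \} \, ds,
      \qquad
      \mu = \int_{0}^{\infty} \{ 1 - F(s) \} \, ds

    and, as we read the abstract, the proposal is to use F_e in place of F inside the usual NHPP mean value function \Lambda(t) = \omega F(t), where \omega is the expected total number of faults.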

  17. Global Software and IT A Guide to Distributed Development, Projects, and Outsourcing

    CERN Document Server

    Ebert, Christof

    2011-01-01

    Global software engineering, implying both internal and outsourced development, is a fast-growing scenario within industry; the growth rates in some sectors are more than 20% per year. However, half of all offshoring activities are cancelled within the first 2 years, at tremendous unanticipated cost to the organization.   This book will provide a more balanced framework for planning global development, covering topics such as managing people in distributed sites, managing a project across locations, mitigating the risk of offshoring, processes for global development, practical outsourcin

  18. Craniux: a LabVIEW-based modular software framework for brain-machine interface research.

    Science.gov (United States)

    Degenhart, Alan D; Kelly, John W; Ashmore, Robin C; Collinger, Jennifer L; Tyler-Kabara, Elizabeth C; Weber, Douglas J; Wang, Wei

    2011-01-01

    This paper presents "Craniux," an open-access, open-source software framework for brain-machine interface (BMI) research. Developed in LabVIEW, a high-level graphical programming environment, Craniux offers both out-of-the-box functionality and a modular BMI software framework that is easily extendable. Specifically, it allows researchers to take advantage of multiple features inherent to the LabVIEW environment for on-the-fly data visualization, parallel processing, multithreading, and data saving. This paper introduces the basic features and system architecture of Craniux and describes the validation of the system under real-time BMI operation using simulated and real electrocorticographic (ECoG) signals. Our results indicate that Craniux is able to operate consistently in real time, enabling a seamless work flow to achieve brain control of cursor movement. The Craniux software framework is made available to the scientific research community to provide a LabVIEW-based BMI software platform for future BMI research and development.

  19. Craniux: A LabVIEW-Based Modular Software Framework for Brain-Machine Interface Research

    Directory of Open Access Journals (Sweden)

    Alan D. Degenhart

    2011-01-01

    This paper presents “Craniux,” an open-access, open-source software framework for brain-machine interface (BMI) research. Developed in LabVIEW, a high-level graphical programming environment, Craniux offers both out-of-the-box functionality and a modular BMI software framework that is easily extendable. Specifically, it allows researchers to take advantage of multiple features inherent to the LabVIEW environment for on-the-fly data visualization, parallel processing, multithreading, and data saving. This paper introduces the basic features and system architecture of Craniux and describes the validation of the system under real-time BMI operation using simulated and real electrocorticographic (ECoG) signals. Our results indicate that Craniux is able to operate consistently in real time, enabling a seamless work flow to achieve brain control of cursor movement. The Craniux software framework is made available to the scientific research community to provide a LabVIEW-based BMI software platform for future BMI research and development.

  20. PyPWA: A partial-wave/amplitude analysis software framework

    Science.gov (United States)

    Salgado, Carlos

    2016-05-01

    The PyPWA project aims to develop a software framework for partial-wave and amplitude analysis of data, providing the user with software tools to identify resonances from multi-particle final states in photoproduction. Most of the code is written in Python. The software is divided into two main branches: one general shell where amplitude parameters (or any parametric model) are to be estimated from the data. This branch also includes software to produce simulated data sets using the fitted amplitudes. A second branch contains a specific realization of the isobar model (with room to include Deck-type and other isobar-model extensions) to perform PWA with an interface into the computing resources at Jefferson Lab. We are currently implementing parallelism and vectorization using Intel's Xeon Phi family of coprocessors.
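
    Setting PyPWA's actual interfaces aside, the numerical core of such a framework is an unbinned maximum-likelihood fit of a parametric intensity to event data, with the normalization integral estimated from accepted Monte Carlo. The toy model below is purely illustrative; the intensity function and variable names are invented.

      # Purely illustrative toy (not PyPWA's API): fit amplitude parameters by
      # minimizing an unbinned negative log-likelihood, normalized with
      # accepted phase-space Monte Carlo.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      data = rng.uniform(0.0, np.pi, 1000)   # stand-in for measured event angles
      mc = rng.uniform(0.0, np.pi, 10000)    # accepted Monte Carlo events

      def intensity(theta, a, b):
          # toy two-amplitude model; real analyses use isobar-model amplitudes
          return (a * np.sin(theta)) ** 2 + (b * np.cos(theta)) ** 2

      def nll(params):
          a, b = params
          norm = np.mean(intensity(mc, a, b))  # Monte Carlo estimate of the integral
          return -np.sum(np.log(intensity(data, a, b) / norm))

      fit = minimize(nll, x0=[1.0, 0.5], method="Nelder-Mead")
      print(fit.x)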

  1. A Framework for Effective Software Monitoring in Project Management

    African Journals Online (AJOL)

    A Framework for Effective Software Monitoring in Project Management. ... is shown to provide meaningful interpretation of collected metric data by embedding certain quality functions. Key words: Project Management, Feedback, project control, metrics, process model, quantitative validity ...

  2. Reviewing the health of software ecosystems – a conceptual framework proposal

    DEFF Research Database (Denmark)

    Manikas, Konstantinos; Hansen, Klaus Marius

    2013-01-01

    The health of a software ecosystem is an indication of how well the ecosystem is functioning. The measurement of health can point to issues that need to be addressed in the ecosystem and areas for the ecosystem to improve. However, the software ecosystem field lacks an applicable way to measure a...... influenced by theories from natural ecosystems and open source, (ii) identify two areas where software ecosystems differ from business and natural ecosystems, and (iii) propose a conceptual framework for defining and measuring the health of software ecosystems....

  3. Organization of the STAR experiment software framework at JINR. Results and experience from the first two years of work

    International Nuclear Information System (INIS)

    Arkhipkin, D.A.; Zul'karneeva, Yu.R.

    2004-01-01

    The organization of the STAR experiment software framework at JINR is described. The approach, based on the distributed file system ASF, was implemented on the NEOSTAR minicluster at LPP, JINR. The operating principle of the cluster, a description of its operation and samples of the performed analysis are also given. The results of the NEOSTAR minicluster performance have demonstrated the broad capabilities of the distributed computing concept for experimental data analysis and high-energy physics modeling

  4. Internet-based hardware/software co-design framework for embedded 3D graphics applications

    Directory of Open Access Journals (Sweden)

    Wong Weng-Fai

    2011-01-01

    Advances in technology are making it possible to run three-dimensional (3D) graphics applications on embedded and handheld devices. In this article, we propose a hardware/software co-design environment for 3D graphics application development that includes the 3D graphics software, OpenGL ES application programming interface (API), device driver, and 3D graphics hardware simulators. We developed a 3D graphics system-on-a-chip (SoC) accelerator using transaction-level modeling (TLM). This gives software designers early access to the hardware even before it is ready. On the other hand, hardware designers also stand to gain from the more complex test benches made available in the software for verification. A unique aspect of our framework is that it allows hardware and software designers from geographically dispersed areas to cooperate and work on the same framework. Designs can be entered and executed from anywhere in the world without full access to the entire framework, which may include proprietary components. This results in controlled and secure transparency and reproducibility, granting leveled access to users of various roles.

  5. A Process Framework for Designing Software Reference Architectures for Providing Tools as a Service

    DEFF Research Database (Denmark)

    Chauhan, Muhammad Aufeef; Babar, Muhammad Ali; Probst, Christian W.

    2016-01-01

    Software Reference Architecture (SRA), which is a generic architecture solution for a specific type of software systems, provides foundation for the design of concrete architectures in terms of architecture design guidelines and architecture elements. The complexity and size of certain types of software systems need customized and systematic SRA design and evaluation methods. In this paper, we present a software Reference Architecture Design process Framework (RADeF) that can be used for analysis, design and evaluation of the SRA for provisioning of Tools as a Service as part of a cloud-enabled workSPACE (TSPACE). The framework is based on the state of the art results from literature and our experiences with designing software architectures for cloud-based systems. We have applied RADeF to the SRA design of two types of TSPACE: software architecting TSPACE and software implementation TSPACE...

  6. Flexible test automation a software framework for easily developing measurement applications

    CERN Document Server

    Arpaia, Pasquale; De Matteis, Ernesto

    2014-01-01

    In laboratory management of an industrial test division, a test laboratory, or a research center, one of the main activities is producing suitable software for automatic benches satisfying a given set of requirements. This activity is particularly costly and burdensome when test requirements are variable over time. If the batches of objects have small size and frequent occurrence, the activity of measurement automation becomes predominant with respect to the test execution. Flexible Test Automation shows the development of a software framework as a useful solution to satisfy this exigency. The framework supports the user in producing measurement applications for a wide range of requirements with low effort and development time.

  7. GNU polyxmass: a software framework for mass spectrometric simulations of linear (bio-polymeric analytes

    Directory of Open Access Journals (Sweden)

    Rusconi Filippo

    2006-04-01

    Background: Nowadays, a variety of (bio-)polymers can be analyzed by mass spectrometry. The detailed interpretation of the spectra requires a huge number of "hypothesis cycles", comprising the following three actions: 1) put forth a structural hypothesis, 2) test it, 3) (in)validate it. This time-consuming and painstaking data scrutiny is alleviated by using specialized software tools. However, all the software tools available to date are polymer chemistry-specific. This imposes a heavy overhead on researchers who do mass spectrometry on a variety of (bio-)polymers, as each polymer type will require a different software tool to perform data simulations and analyses. We developed a software to address the lack of an integrated software framework able to deal with different polymer chemistries. Results: The GNU polyxmass software framework performs common (bio-)chemical simulations, along with simultaneous mass spectrometric calculations, for any kind of linear bio-polymeric analyte (DNA, RNA, saccharides or proteins). The framework is organized into three modules, all accessible from one single binary program. The modules let the user 1) define brand new polymer chemistries, 2) perform quick mass calculations using a desktop calculator paradigm, 3) graphically edit polymer sequences and perform (bio-)chemical/mass spectrometric simulations. Any aspect of the mass calculations, polymer chemistry reactions or graphical polymer sequence editing is configurable. Conclusion: The scientist who uses mass spectrometry to characterize (bio-)polymeric analytes of different chemistries is provided with a single software framework for his data prediction/analysis needs, whatever the polymer chemistry being involved.

  8. Using Software Architectures for Designing Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    In this paper, we outline an on-going project of designing distributed embedded systems for closed-loop process control. The project is a joint effort between software architecture researchers and developers from two companies that produce commercial embedded process control systems. The project has a strong emphasis on software architectural issues and terminology in order to envision, design and analyze design alternatives. We present two results. First, we outline how focusing on software architecture, architectural issues and qualities is beneficial in designing distributed, embedded systems. Second, we present two different architectures for closed-loop process control and discuss their benefits and liabilities.

  9. ProjectQ: An Open Source Software Framework for Quantum Computing

    OpenAIRE

    Steiger, Damian S.; Häner, Thomas; Troyer, Matthias

    2016-01-01

    We introduce ProjectQ, an open source software effort for quantum computing. The first release features a compiler framework capable of targeting various types of hardware, a high-performance simulator with emulation capabilities, and compiler plug-ins for circuit drawing and resource estimation. We introduce our Python-embedded domain-specific language, present the features, and provide example implementations for quantum algorithms. The framework allows testing of quantum algorithms through...
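
    For a flavor of the Python-embedded DSL, the sketch below follows the project's documented basic usage, building and measuring a Bell state on the built-in simulator; exact import paths may differ between releases.

      from projectq import MainEngine
      from projectq.ops import CNOT, H, Measure

      eng = MainEngine()            # default backend: the built-in simulator
      q1 = eng.allocate_qubit()
      q2 = eng.allocate_qubit()
      H | q1                        # put the first qubit into superposition
      CNOT | (q1, q2)               # entangle the pair into a Bell state
      Measure | q1
      Measure | q2
      eng.flush()                   # send the compiled circuit to the backend
      print(int(q1), int(q2))       # the two outcomes always agree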

  10. iAssist: a software framework for intelligent patient monitoring.

    Science.gov (United States)

    Brouse, Christopher; Dumont, Guy; Yang, Ping; Lim, Joanne; Ansermino, J Mark

    2007-01-01

    A software framework (iAssist) has been developed for intelligent patient monitoring, and forms the foundation of a clinical monitoring expert system. The framework is extensible, flexible, and interoperable. It supports plugins to perform data acquisition, signal processing, graphical display, data storage, and output to external devices. iAssist currently incorporates two plugins to detect change point events in physiological trends. In 38 surgical cases, iAssist detected 868 events, of which clinicians rated more than 50% as clinically significant and less than 7% as artifacts. Clinicians found iAssist intuitive and easy to use.

  11. Towards a comprehensive framework for reuse: A reuse-enabling software evolution environment

    Science.gov (United States)

    Basili, V. R.; Rombach, H. D.

    1988-01-01

    Reuse of products, processes and knowledge will be the key to enabling the software industry to achieve the dramatic improvement in productivity and quality required to satisfy the anticipated growing demand. Although experience shows that certain kinds of reuse can be successful, general success has been elusive. A software life-cycle technology which allows broad and extensive reuse could provide the means to achieving the desired order-of-magnitude improvements. The scope of a comprehensive framework for understanding, planning, evaluating and motivating reuse practices and the necessary research activities is outlined. As a first step towards such a framework, a reuse-enabling software evolution environment model is introduced which provides a basis for the effective recording of experience, the generalization and tailoring of experience, the formalization of experience, and the (re-)use of experience.

  12. Software Image J to study soil pore distribution

    Directory of Open Access Journals (Sweden)

    Sabrina Passoni

    2014-04-01

    In soil science, a direct method that allows the study of soil pore distribution is bi-dimensional (2D) digital image analysis. Such a technique provides quantitative results on soil pore shape, number and size. The use of specific software for the treatment and processing of images allows a fast and efficient method to quantify the soil porous system. However, due to the high cost of commercial software, public-domain alternatives can be an interesting option for soil structure analysis. The objective of this work was to evaluate the quality of data provided by the ImageJ software (public domain) used to characterize the voids of two soils, characterized as Geric Ferralsol and Rhodic Ferralsol, from the southeast region of Brazil. The pore distribution analysis technique from impregnated soil blocks was utilized for this purpose. The 2D image acquisition was carried out by using a CCD camera coupled to a conventional optical microscope. After acquisition and treatment of images, they were processed and analyzed by the software Noesis Visilog 5.4® (chosen as the reference program) and ImageJ. The parameters chosen to characterize the soil voids were: shape, number and pore size distribution. For both soils, the results obtained for the image total porosity (%), the total number of pores and the pore size distribution showed that ImageJ is a suitable software to be applied in the characterization of the voids of soil samples impregnated with resin.
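
    The measurements named above (total porosity, pore count, pore sizes) are straightforward to reproduce on a thresholded image; the sketch below uses scipy on a synthetic binary image rather than either of the cited programs, purely for illustration.

      # Illustrative only: the same kind of 2D pore measurements computed with
      # scipy on a synthetic binary image (not with ImageJ or Visilog).
      import numpy as np
      from scipy import ndimage

      image = np.zeros((64, 64), dtype=bool)  # stand-in for a thresholded image
      image[5:15, 5:15] = True                # one large synthetic pore
      image[40:44, 40:44] = True              # one small synthetic pore

      labels, n_pores = ndimage.label(image)  # connected components = pores
      sizes = ndimage.sum(image, labels, index=range(1, n_pores + 1))
      porosity = 100.0 * image.sum() / image.size

      print("total porosity: %.1f%%" % porosity)
      print("number of pores:", n_pores)
      print("pore areas (pixels):", sizes)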

  13. A distributed cloud-based cyberinfrastructure framework for integrated bridge monitoring

    Science.gov (United States)

    Jeong, Seongwoon; Hou, Rui; Lynch, Jerome P.; Sohn, Hoon; Law, Kincho H.

    2017-04-01

    This paper describes a cloud-based cyberinfrastructure framework for the management of the diverse data involved in bridge monitoring. Bridge monitoring involves various hardware systems, software tools and laborious activities that include, for example, structural health monitoring (SHM) sensor networks, engineering analysis programs and visual inspection. Very often, these monitoring systems, tools and activities are not coordinated, and the collected information is not shared. A well-designed integrated data management framework can support the effective use of the data and, thereby, enhance bridge management and maintenance operations. The cloud-based cyberinfrastructure framework presented herein is designed to manage not only sensor measurement data acquired from the SHM system, but also other relevant information, such as the bridge engineering model and traffic videos, in an integrated manner. For scalability and flexibility, cloud computing services and distributed database systems are employed. The information stored can be accessed through standard web interfaces. For demonstration, the cyberinfrastructure system is implemented for the monitoring of the bridges located along the I-275 Corridor in the state of Michigan.

  14. BOA: Framework for Automated Builds

    CERN Document Server

    Ratnikova, N

    2003-01-01

    Managing large-scale software products is a complex software engineering task. The automation of the software development, release and distribution process is most beneficial in large collaborations, where the large number of developers, multiple platforms and distributed environment are typical factors. This paper describes the Build and Output Analyzer framework and its components, which have been developed in CMS to facilitate software maintenance and improve software quality. The system allows users to generate, control and analyze various types of automated software builds and tests, such as regular rebuilds of the development code, software integration for releases and installation of the existing versions.

  15. BOA: Framework for automated builds

    International Nuclear Information System (INIS)

    Ratnikova, N.

    2003-01-01

    Managing large-scale software products is a complex software engineering task. The automation of the software development, release and distribution process is most beneficial in large collaborations, where the large number of developers, multiple platforms and distributed environment are typical factors. This paper describes the Build and Output Analyzer framework and its components, which have been developed in CMS to facilitate software maintenance and improve software quality. The system allows users to generate, control and analyze various types of automated software builds and tests, such as regular rebuilds of the development code, software integration for releases and installation of the existing versions

  16. Software framework developed for the slice test of the ATLAS endcap muon trigger system

    CERN Document Server

    Komatsu, S; Ishida, Y; Tanaka, K; Hasuko, K; Kano, H; Matsumoto, Y; Yakamura, Y; Sakamoto, H; Ikeno, M; Nakayoshi, K; Sasaki, O; Yasu, Y; Hasegawa, Y; Totsuka, M; Tsuji, S; Maeno, T; Ichimiya, R; Kurashige, H

    2002-01-01

    A sliced system test of the ATLAS end cap muon level-1 trigger system was done separately in 2001 and 2002. We developed our own software framework for property and run control for the slice test in 2001. The system is written in C++ throughout. The multi-PC control system is accomplished using the CORBA system. We then restructured the software system on top of the ATLAS online software framework and used it for the slice test in 2002. In this report we discuss the two systems in detail, with emphasis on module property configuration and run control. (8 refs).

  17. A Reusable Software Architecture for Small Satellite AOCS Systems

    DEFF Research Database (Denmark)

    Alminde, Lars; Bendtsen, Jan Dimon; Laursen, Karl Kaas

    2006-01-01

    This paper concerns the software architecture called Sophy, which is an abbreviation for Simulation, Observation, and Planning in HYbrid systems. We present a framework that allows execution of hybrid dynamical systems in an on-line distributed computing environment, which includes interaction...... with both hardware and on-board software. Some of the key issues addressed by the framework are automatic translation of mathematical specifications of hybrid systems into executable software entities, management of execution of coupled models in a parallel distributed environment, as well as interaction...... with external components, hardware and/or software, through generic interfaces. Sophy is primarily intended as a tool for development of model based reusable software for the control and autonomous functions of satellites and/or satellite clusters....

  18. A Framework for Performing Verification and Validation in Reuse Based Software Engineering

    Science.gov (United States)

    Addy, Edward A.

    1997-01-01

    Verification and Validation (V&V) is currently performed during application development for many systems, especially safety-critical and mission- critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. The system application provides the context under which the software artifacts are validated. This paper describes a framework that extends V&V from an individual application system to a product line of systems that are developed within an architecture-based software engineering environment. This framework includes the activities of traditional application-level V&V, and extends these activities into domain engineering and into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.

  19. HistFitter software framework for statistical data analysis

    CERN Document Server

    Baak, M.; Côte, D.; Koutsman, A.; Lorenz, J.; Short, D.

    2015-01-01

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fitted to data and interpreted with statistical tests. A key innovation of HistFitter is its design, which is rooted in core analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its very fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with mu...

  20. Composable Framework Support for Software-FMEA Through Model Execution

    Science.gov (United States)

    Kocsis, Imre; Patricia, Andras; Brancati, Francesco; Rossi, Francesco

    2016-08-01

    Performing Failure Modes and Effect Analysis (FMEA) during software architecture design is becoming a basic requirement in an increasing number of domains; however, due to the lack of standardized early design phase model execution, classic SW-FMEA approaches carry significant risks and are human effort-intensive even in processes that use Model-Driven Engineering. Recently, modelling languages with standardized executable semantics have emerged. Building on earlier results, this paper describes framework support for generating executable error propagation models from such models during software architecture design. The approach carries the promise of increased precision, decreased risk and more automated execution for SW-FMEA during dependability-critical system development.

  1. The role of original equipment manufacturers in software distribution

    Directory of Open Access Journals (Sweden)

    Herţanu, A.

    2012-01-01

    The software distribution channels have a significant impact on the marketing mix, not only for big companies in this domain but also for the small companies that operate in it. The Original Equipment Manufacturer (OEM) distribution channel has a significant impact on the marketing strategy of different companies. While the traditional distribution channels are still in use, OEM channels are used more and more to distribute software products or services, not only to the customer segment formed by companies but also to the segment formed by individual users.

  2. A multi-GPU real-time dose simulation software framework for lung radiotherapy.

    Science.gov (United States)

    Santhanam, A P; Min, Y; Neelakkantan, H; Papp, N; Meeks, S L; Kupelian, P A

    2012-09-01

    Medical simulation frameworks facilitate both the preoperative and postoperative analysis of the patient's pathophysical condition. Of particular importance is the simulation of radiation dose delivery for real-time radiotherapy monitoring and retrospective analyses of the patient's treatment. In this paper, a software framework tailored for the development of simulation-based real-time radiation dose monitoring medical applications is discussed. A multi-GPU-based computational framework coupled with inter-process communication methods is introduced for simulating the radiation dose delivery on a deformable 3D volumetric lung model and its real-time visualization. The model deformation and the corresponding dose calculation are allocated among the GPUs in a task-specific manner and performed in a pipelined manner. Radiation dose calculations are computed on two different GPU hardware architectures. The integration of this computational framework with a front-end software layer and back-end patient database repository is also discussed. Real-time simulation of the dose delivered is achieved once every 120 ms using the proposed framework. With a linear increase in the number of GPU cores, the computational time of the simulation decreased linearly. The inter-process communication time also improved with an increase in the hardware memory. Variations in the delivered dose and computational speedup for variations in the data dimensions are investigated using D70 and D90 as well as gEUD as metrics for a set of 14 patients. Computational speed-up increased with an increase in the beam dimensions when compared with a CPU-based commercial software, while the error in the dose calculation was ... Lung model-based radiotherapy is an effective tool for performing both real-time and retrospective analyses.

  3. A Component-based Software Development and Execution Framework for CAx Applications

    Directory of Open Access Journals (Sweden)

    N. Matsuki

    2004-01-01

    Digitalization of the manufacturing process and technologies is regarded as the key to increased competitive ability. The MZ-Platform infrastructure is a component-based software development framework designed to support enterprises in enhancing digitalized technologies using software tools and CAx components in a self-innovative way. In the paper we show the algorithm, system architecture, and a CAx application example on MZ-Platform. We also propose a new parametric data structure based on MZ-Platform.

  4. Adjustments to the MyFEPS software product evaluation framework

    OpenAIRE

    Angeleri, Paula; Titiosky, Rolando; Sorgen, Amos; Wuille Bille, Jaquelina; Oliveros, Alejandro

    2014-01-01

    The objective of this article is to present the current status of the research project MyFEPS (Methodologies and Framework for the Evaluation of Software Products, based on international standards), under development at the School of Engineering and Information Technology of the Universidad de Belgrano. Its purpose is to design and implement a framework that supports the complete software evaluation process: from the determination of the evaluation objectives, through its planning, execu...

  5. Applying a Framework to Evaluate Assignment Marking Software: A Case Study on Lightwork

    Science.gov (United States)

    Heinrich, Eva; Milne, John

    2012-01-01

    This article presents the findings of a qualitative evaluation on the effect of a specialised software tool on the efficiency and quality of assignment marking. The software, Lightwork, combines with the Moodle learning management system and provides support through marking rubrics and marker allocations. To enable the evaluation a framework has…

  6. Towards a New Paradigm of Software Development: an Ambassador Driven Process in Distributed Software Companies

    Science.gov (United States)

    Kumlander, Deniss

    The globalization of companies' operations and the competition between software vendors demand improved quality of delivered software at decreased overall cost. At the same time, these factors introduce many problems into the software development process, as they produce distributed organizations that break the co-location rule of modern software development methodologies. Here we propose a reformulation of the ambassador position, increasing its productivity in order to bridge the communication and workflow gap by managing the entire communication process rather than concentrating purely on the communication result.

  7. A software framework for the portable parallelization of particle-mesh simulations

    DEFF Research Database (Denmark)

    Sbalzarini, I.F.; Walther, Jens Honore; Polasek, B.

    2006-01-01

    Abstract: We present a software framework for the transparent and portable parallelization of simulations using particle-mesh methods. Particles are used to transport physical properties and a mesh is required in order to reinitialize the distorted particle locations, ensuring the convergence...

  8. Multi-threaded software framework development for the ATLAS experiment

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00226135; Baines, John; Bold, Tomasz; Calafiura, Paolo; Dotti, Andrea; Farrell, Steven; Leggett, Charles; Malon, David; Ritsch, Elmar; Snyder, Scott; Tsulaia, Vakhtang; van Gemmeren, Peter; Wynne, Benjamin

    2016-01-01

    ATLAS's current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognised for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. ATLAS examined the requirements on an updated multi-threaded framework and laid out plans for a new framework, including better support for high level trigger (HLT) use cases, in 2014. In this paper we report on our progress in developing the new multi-threaded task parallel extension of Athena, AthenaMT. Implementing AthenaMT has required many significant code changes. Progress has been made in updating key concepts of the framework, to allow the incorporation of different levels of thread safety in algorithmic code (from un-migrated thread-unsafe code, to thread safe copyable code to reentrant co...
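
    The different thread-safety levels mentioned above can be sketched as follows; this is not ATLAS code, just a minimal illustration in which reentrant algorithms are shared across worker threads while thread-safe copyable ones are cloned per event.

      import copy
      from concurrent.futures import ThreadPoolExecutor

      class ReentrantAlg:
          reentrant = True                  # stateless: safe to share
          def execute(self, event):
              return f"reentrant alg on event {event}"

      class CopyableAlg:
          reentrant = False                 # mutable state: clone before use
          def __init__(self):
              self.calls = 0
          def execute(self, event):
              self.calls += 1
              return f"cloned alg (call {self.calls}) on event {event}"

      ALGS = [ReentrantAlg(), CopyableAlg()]

      def process(event):
          results = []
          for alg in ALGS:
              inst = alg if alg.reentrant else copy.deepcopy(alg)
              results.append(inst.execute(event))
          return results

      with ThreadPoolExecutor(max_workers=4) as pool:
          for out in pool.map(process, range(4)):
              print(out)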

  9. Multi-threaded Software Framework Development for the ATLAS Experiment

    CERN Document Server

    Stewart, Graeme; The ATLAS collaboration; Baines, John; Calafiura, Paolo; Dotti, Andrea; Farrell, Steven; Leggett, Charles; Malon, David; Ritsch, Elmar; Snyder, Scott; Tsulaia, Vakhtang; van Gemmeren, Peter; Wynne, Benjamin

    2016-01-01

    ATLAS's current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognised for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. ATLAS examined the requirements on an updated multi-threaded framework and laid out plans for a new framework, including better support for high level trigger (HLT) use cases, in 2014. In this paper we report on our progress in developing the new multi-threaded task parallel extension of Athena, AthenaMT. Implementing AthenaMT has required many significant code changes. Progress has been made in updating key concepts of the framework, to allow the incorporation of different levels of thread safety in algorithmic code (from un-migrated thread-unsafe code, to thread safe copyable code to reentrant c...

  10. Software Framework for Development of Web-GIS Systems for Analysis of Georeferenced Geophysical Data

    Science.gov (United States)

    Okladnikov, I.; Gordov, E. P.; Titov, A. G.

    2011-12-01

    Georeferenced datasets (meteorological databases, modeling and reanalysis results, remote sensing products, etc.) are currently actively used in numerous applications, including modeling, interpretation and forecast of climatic and ecosystem changes on various spatial and temporal scales. Due to the inherent heterogeneity of environmental datasets, as well as their size, which may reach tens of terabytes for a single dataset, present-day studies of climate and environmental change require dedicated software support. A software framework for the rapid development of information-computational systems providing such support, based on Web-GIS technologies, has been created. The software framework consists of 3 basic parts: a computational kernel developed using the ITTVIS Interactive Data Language (IDL), a set of PHP controllers run within a specialized web portal, and a JavaScript class library for the development of typical components of a web mapping application graphical user interface (GUI) based on AJAX technology. The computational kernel comprises a number of modules for dataset access, mathematical and statistical data analysis, and visualization of results. The specialized web portal consists of the Apache web server, the OGC-standards-compliant GeoServer software, which is used as a base for presenting cartographic information over the Web, and a set of PHP controllers implementing the web-mapping application logic and governing the computational kernel. The JavaScript library aimed at graphical user interface development is based on the GeoExt library, combining the ExtJS Framework and OpenLayers software. Based on this software framework, an information-computational system for complex analysis of large georeferenced data archives was developed. Structured environmental datasets available for processing now include two editions of NCEP/NCAR Reanalysis, JMA/CRIEPI JRA-25 Reanalysis, ECMWF ERA-40 Reanalysis, ECMWF ERA Interim Reanalysis, MRI/JMA APHRODITE's Water Resources Project Reanalysis

  11. Upgrade Software and Computing

    CERN Document Server

    The LHCb Collaboration, CERN

    2018-01-01

    This document reports the Research and Development activities that are carried out in the software and computing domains in view of the upgrade of the LHCb experiment. The implementation of a full software trigger implies major changes in the core software framework, in the event data model, and in the reconstruction algorithms. The increase of the data volumes for both real and simulated datasets requires a corresponding scaling of the distributed computing infrastructure. An implementation plan in both domains is presented, together with a risk assessment analysis.

  12. Towards a Fraud-Prevention Framework for Software Defined Radio Mobile Devices

    Directory of Open Access Journals (Sweden)

    Brawerman Alessandro

    2005-01-01

    Full Text Available The superior reconfigurability of software defined radio mobile devices has made them the most promising technology for wireless networks and the communication industry. Despite several advantages, there is still much to discuss regarding security, for instance, the radio configuration data download, storage and installation, users' privacy, and cloning. The objective of this paper is to present a fraud-prevention framework for software defined radio mobile devices that enhances overall security through the use of new pieces of hardware, modules, and protocols. The framework offers security monitoring against malicious attacks and viruses, protects sensitive information, creates and protects an identity for the system, employs a secure protocol for radio configuration download, and finally establishes an anticloning scheme which, besides guaranteeing that no units can be cloned over the air, also raises the level of difficulty of cloning units if the attacker has physical access to the mobile device. Even if cloned units exist, the anticloning scheme is able to identify and deny services to those units. Preliminary experiments and proofs that analyze the correctness of the fraud-prevention framework are also presented.

  13. Assessment of the integration capability of system architectures from a complex and distributed software systems perspective

    Science.gov (United States)

    Leuchter, S.; Reinert, F.; Müller, W.

    2014-06-01

    Procurement and design of system architectures capable of network centric operations demand an assessment scheme in order to compare different alternative realizations. In this contribution an assessment method for system architectures targeted at the C4ISR domain is presented. The method addresses the integration capability of software systems from a complex and distributed software system perspective, focusing on communication, interfaces and software. The aim is to evaluate the capability to integrate a system or its functions within a system-of-systems network. This method uses approaches from software architecture quality assessment and applies them at the system architecture level. It features a specific goal tree of several dimensions that are relevant for enterprise integration. These dimensions have to be weighed against each other and totalized using methods from normative decision theory in order to reflect the intention of the particular enterprise integration effort. The indicators and measurements for many of the considered quality features rely on a model-based view of systems, networks, and the enterprise. That means it is applicable to system-of-systems specifications based on enterprise architectural frameworks relying on defined meta-models or domain ontologies for defining views and viewpoints. In the defense context we use the NATO Architecture Framework (NAF) to ground the respective system models. The proposed assessment method allows evaluating and comparing competing system designs regarding their future integration potential. It is a contribution to the system-of-systems engineering methodology.
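
    A minimal sketch of the weighting-and-totalizing step described above, using a simple weighted-sum model; the dimensions, leaf criteria, weights, and scores are invented for illustration.

      # Goal tree: dimension -> (dimension weight, {criterion: (weight, score)})
      GOAL_TREE = {
          "communication": (0.40, {"protocol maturity": (0.5, 0.8),
                                   "bandwidth fit":     (0.5, 0.6)}),
          "interfaces":    (0.35, {"standard APIs":     (1.0, 0.9)}),
          "software":      (0.25, {"modularity":        (0.6, 0.7),
                                   "portability":       (0.4, 0.5)}),
      }

      def totalize(tree):
          """Weighted sum over dimensions; leaf scores lie in [0, 1]."""
          return sum(dim_weight * sum(w * s for w, s in leaves.values())
                     for dim_weight, leaves in tree.values())

      print(f"integration capability score: {totalize(GOAL_TREE):.3f}")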

  14. The ATLAS online High Level Trigger framework: Experience reusing offline software components in the ATLAS trigger

    International Nuclear Information System (INIS)

    Wiedenmann, Werner

    2010-01-01

    Event selection in the ATLAS High Level Trigger is accomplished to a large extent by reusing software components and event selection algorithms developed and tested in an offline environment. Many of these offline software modules are not specifically designed to run in a heavily multi-threaded online data flow environment. The ATLAS High Level Trigger (HLT) framework based on the GAUDI and ATLAS ATHENA frameworks, forms the interface layer, which allows the execution of the HLT selection and monitoring code within the online run control and data flow software. While such an approach provides a unified environment for trigger event selection across all of ATLAS, it also poses strict requirements on the reused software components in terms of performance, memory usage and stability. Experience of running the HLT selection software in the different environments and especially on large multi-node trigger farms has been gained in several commissioning periods using preloaded Monte Carlo events, in data taking periods with cosmic events and in a short period with proton beams from LHC. The contribution discusses the architectural aspects of the HLT framework, its performance and its software environment within the ATLAS computing, trigger and data flow projects. Emphasis is also put on the architectural implications for the software by the use of multi-core processors in the computing farms and the experiences gained with multi-threading and multi-process technologies.

  15. Distributed controller clustering in software defined networks.

    Directory of Open Access Journals (Sweden)

    Ahmed Abdelaziz

    Full Text Available Software Defined Networking (SDN) is an emerging and promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance and interoperability. In this paper, we propose a novel clustered distributed controller architecture in a real SDN setting. The distributed cluster implementation comprises multiple popular SDN controllers. The proposed mechanism is evaluated using a real-world network topology running on top of an emulated SDN environment. The results show that the proposed distributed controller clustering mechanism is able to significantly reduce the average latency from 8.1% to 1.6% and the packet loss from 5.22% to 4.15%, compared to a distributed controller without clustering running on HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers, respectively. Moreover, the proposed method shows reasonable CPU utilization results. Furthermore, the proposed mechanism makes it possible to handle unexpected load fluctuations while maintaining continuous network operation, even when there is a controller failure. The paper is a potential contribution stepping towards addressing the issues of reliability, scalability, fault tolerance, and interoperability.
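
    A minimal sketch of the failover behaviour described above (continued operation despite a controller failure); production clusters such as ONOS use consensus protocols, so this heartbeat-based toy scheme is purely illustrative.

      import time

      HEARTBEAT_TIMEOUT = 3.0    # seconds of silence before a controller
                                 # is considered failed

      class ControllerCluster:
          def __init__(self, controllers):
              now = time.monotonic()
              self.last_seen = {c: now for c in controllers}

          def heartbeat(self, controller):
              self.last_seen[controller] = time.monotonic()

          def alive(self):
              now = time.monotonic()
              return sorted(c for c, t in self.last_seen.items()
                            if now - t < HEARTBEAT_TIMEOUT)

          def master_for(self, switch):
              """Consistently map a switch onto a live controller."""
              live = self.alive()
              return live[hash(switch) % len(live)] if live else None

      cluster = ControllerCluster(["ctrl-1", "ctrl-2", "ctrl-3"])
      print(cluster.master_for("switch-7"))
      cluster.last_seen["ctrl-1"] -= 10.0   # simulate ctrl-1 going silent
      print(cluster.alive())                # switches remap to survivors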

  16. Distributed controller clustering in software defined networks.

    Science.gov (United States)

    Abdelaziz, Ahmed; Fong, Ang Tan; Gani, Abdullah; Garba, Usman; Khan, Suleman; Akhunzada, Adnan; Talebian, Hamid; Choo, Kim-Kwang Raymond

    2017-01-01

    Software Defined Networking (SDN) is an emerging and promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance and interoperability. In this paper, we propose a novel clustered distributed controller architecture in a real SDN setting. The distributed cluster implementation comprises multiple popular SDN controllers. The proposed mechanism is evaluated using a real-world network topology running on top of an emulated SDN environment. The results show that the proposed distributed controller clustering mechanism is able to significantly reduce the average latency from 8.1% to 1.6% and the packet loss from 5.22% to 4.15%, compared to a distributed controller without clustering running on HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers, respectively. Moreover, the proposed method shows reasonable CPU utilization results. Furthermore, the proposed mechanism makes it possible to handle unexpected load fluctuations while maintaining continuous network operation, even when there is a controller failure. The paper is a potential contribution stepping towards addressing the issues of reliability, scalability, fault tolerance, and interoperability.

  17. Quantification frameworks and their application for evaluating the software quality factor using quality characteristic value

    International Nuclear Information System (INIS)

    Kim, C.; Chung, C.H.; Won-Ahn, K.

    2004-01-01

    Many safety-related problems frequently occur because digital instrumentation and control systems are widely used in Nuclear Power Plants and are expanding their ranges to many applications. However, there is no generally accepted way to estimate appropriate software quality. Thus, the Quality Characteristic Value, a software quality factor spanning each phase of the software life cycle, is suggested in this paper. The Quality Characteristic Value is obtained by the following procedure: 1) scoring the quality characteristic factors (especially correctness, traceability, completeness, and understandability) against Software Verification and Validation results, 2) deriving diamond-shaped graphs by plotting the factor values on the axes and connecting the points, and 3) measuring the area of the graph as the Quality Characteristic Value. In this paper, this methodology is applied to a plant control system. In addition, the series of quantification frameworks exhibits good characteristics from the viewpoint of the software quality factor. More than anything else, it is believed that the introduced framework may be applicable to regulatory guides and software approval procedures, due to its soundness and simplicity. (authors)
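
    A minimal sketch of the area computation described above, assuming the four factor scores are plotted on equally spaced axes and joined into a diamond; the scores below are invented.

      import math

      def quality_characteristic_value(scores):
          """Area of the polygon spanned by axis scores at equal angles."""
          n = len(scores)
          theta = 2 * math.pi / n          # 90 degrees for four axes
          return 0.5 * math.sin(theta) * sum(
              scores[i] * scores[(i + 1) % n] for i in range(n))

      factors = {"correctness": 0.9, "traceability": 0.7,
                 "completeness": 0.8, "understandability": 0.6}
      print(f"QCV = {quality_characteristic_value(list(factors.values())):.3f}")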

  18. A Modular GIS-Based Software Architecture for Model Parameter Estimation using the Method of Anchored Distributions (MAD)

    Science.gov (United States)

    Ames, D. P.; Osorio-Murillo, C.; Over, M. W.; Rubin, Y.

    2012-12-01

    The Method of Anchored Distributions (MAD) is an inverse modeling technique that is well-suited for the estimation of spatially varying parameter fields using limited observations and Bayesian methods. This presentation will discuss the design, development, and testing of a free software implementation of the MAD technique using the open source DotSpatial geographic information system (GIS) framework, the R statistical software, and the MODFLOW groundwater model. This new tool, dubbed MAD-GIS, is built on a modular architecture that supports the integration of external analytical tools and models for key computational processes, including a forward model (e.g. MODFLOW, HYDRUS) and geostatistical analysis (e.g. R, GSLIB). The GIS-based graphical user interface provides a relatively simple way for new users of the technique to prepare the spatial domain, to identify observation and anchor points, to perform the MAD analysis using a selected forward model, and to view results. MAD-GIS uses the Managed Extensibility Framework (MEF) provided by the Microsoft .NET programming platform to support the integration of different modeling and analytical tools at run-time through a custom "driver." Each driver establishes a connection with external programs through a programming interface, which provides the elements for communicating with the core MAD software. This presentation gives an example of adapting MODFLOW to serve as the external forward model in MAD-GIS for inferring the distribution functions of key MODFLOW parameters. Additional drivers for other models are being developed, and it is expected that the open source nature of the project will engender the development of additional model drivers by third-party scientists.
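
    The driver concept can be sketched in a few lines; MAD-GIS itself is built on .NET/MEF, so the Python registry below, the ForwardModelDriver interface, and the fake MODFLOW response are illustrative stand-ins only.

      from abc import ABC, abstractmethod

      class ForwardModelDriver(ABC):
          @abstractmethod
          def run(self, parameter_field):
              """Run the external model; return simulated observations."""

      DRIVERS = {}

      def register(name):
          def wrap(cls):
              DRIVERS[name] = cls
              return cls
          return wrap

      @register("modflow")
      class ModflowDriver(ForwardModelDriver):
          def run(self, parameter_field):
              # A real driver would write input files, invoke MODFLOW, and
              # parse heads back out; here we fake a single observation.
              return [sum(parameter_field) / len(parameter_field)]

      print(DRIVERS["modflow"]().run([1.2, 0.8, 1.0]))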

  19. Intercultural Competence in International Software R&D Cooperation. Toward a Conceptual Framework

    DEFF Research Database (Denmark)

    Skaates, Maria Anne

    2001-01-01

    As part of a research project on cooperation between software development subcontractors from small countries and foreign customers, the dynamics of intercultural competence are being examined. This paper builds a conceptual bridge by developing a definition of organizational intercultural....... It is envisioned that the presented novel framework could be helpful to software developing subcontractors from small national states who already use the competence terminology in discussions of their firms' capabilities and strategies....

  20. Architectural notes: a framework for distributed systems development

    NARCIS (Netherlands)

    Pires, L.F.; Ferreira Pires, Luis

    1994-01-01

    This thesis develops a framework of methods and techniques for distributed systems development. This framework consists of two related domains in which design concepts for distributed systems are defined: the entity domain and the behaviour domain. In the entity domain we consider structures of

  1. Software-Based Challenges of Developing the Future Distribution Grid

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Emma; Kiliccote, Sila; McParland, Charles

    2014-06-01

    The software that the utility industry currently uses may be insufficient to analyze the distribution grid as it rapidly modernizes to include active resources such as distributed generation, switch and voltage control, automation, and increasingly complex loads. Although planners and operators have traditionally viewed the distribution grid as a passive load, utilities and consultants increasingly need enhanced analysis that incorporates active distribution grid loads in order to ensure grid reliability. Numerous commercial and open-source tools are available for analyzing distribution grid systems. These tools vary in complexity from providing basic load-flow and capacity analysis under steady-state conditions to time-series analysis and even geographical representations of dynamic and transient events. The need for each type of analysis is not well understood in the industry, nor are the reasons that distribution analysis requires different techniques and tools both from those now available and from those used for transmission analysis. In addition, there is limited understanding of the basic capability of the tools and how they should be practically applied to the evolving distribution system. The study reviews the features and state-of-the-art capabilities of current tools, including usability and visualization, basic analysis functionality, advanced analysis including inverters, and renewable generation and load modeling. We also discuss the need for each type of distribution grid system analysis. In addition to reviewing the basic functionality of current models, we discuss dynamic and transient simulation in detail and draw conclusions about existing software's ability to address the needs of the future distribution grid, as well as the barriers to modernization of the distribution grid that are posed by the current state of software and model development. Among our conclusions are that accuracy, data transfer, and data processing abilities are key to future

  2. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition...... and constraint evaluation is designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...... of error detection methods includes a high-level software specification. This has the purpose of illustrating that the design can be used in practice....
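
    A minimal sketch of the monitoring approach, covering two of the error types listed above: semantic checks on data communicated between tasks and timing checks on distributed events. The constraints and task names are invented.

      import time

      RANGES = {"temperature": (0.0, 120.0)}   # valid ranges per data item
      MAX_PERIOD = 0.5                          # max seconds between events

      class Monitor:
          def __init__(self):
              self.last_event = {}

          def check_message(self, name, value):
              lo, hi = RANGES[name]
              if not lo <= value <= hi:
                  print(f"semantic error: {name}={value} not in [{lo}, {hi}]")

          def check_event(self, source):
              now = time.monotonic()
              prev = self.last_event.get(source)
              if prev is not None and now - prev > MAX_PERIOD:
                  print(f"timing error: {source} silent {now - prev:.2f}s")
              self.last_event[source] = now

      m = Monitor()
      m.check_message("temperature", 135.0)   # flagged: out of range
      m.check_event("task-A")
      time.sleep(0.6)
      m.check_event("task-A")                 # flagged: period exceeded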

  3. Developer’s time spent in a software project part using the SGD framework

    OpenAIRE

    Ciesluk, Simon

    2016-01-01

    Resource management is important for software projects to be successful. Time is one of the resources that needs to be managed, and to manage it you need to know how time resources are spent. Currently, almost no published material exists on time resources spent in a software project. In this thesis, research was conducted on how time resources are spent by an individual developer in a software project. The Self-Governance Developer framework was the tool used to gather these resou...

  4. A Software Framework for Remote Patient Monitoring by Using Multi-Agent Systems Support.

    Science.gov (United States)

    Fernandes, Chrystinne Oliveira; Lucena, Carlos José Pereira De

    2017-03-27

    Although there have been significant advances in network, hardware, and software technologies, the health care environment has not taken advantage of these developments to solve many of its inherent problems. Research activities in these 3 areas make it possible to apply advanced technologies to address many of these issues such as real-time monitoring of a large number of patients, particularly where a timely response is critical. The objective of this research was to design and develop innovative technological solutions to offer a more proactive and reliable medical care environment. The short-term and primary goal was to construct IoT4Health, a flexible software framework to generate a range of Internet of things (IoT) applications, containing components such as multi-agent systems that are designed to perform Remote Patient Monitoring (RPM) activities autonomously. An investigation into its full potential to conduct such patient monitoring activities in a more proactive way is an expected future step. A framework methodology was selected to evaluate whether the RPM domain had the potential to generate customized applications that could achieve the stated goal of being responsive and flexible within the RPM domain. As a proof of concept of the software framework's flexibility, 3 applications were developed with different implementations for each framework hot spot to demonstrate potential. Agents4Health was selected to illustrate the instantiation process and IoT4Health's operation. To develop more concrete indicators of the responsiveness of the simulated care environment, an experiment was conducted while Agents4Health was operating, to measure the number of delays incurred in monitoring the tasks performed by agents. IoT4Health's construction can be highlighted as our contribution to the development of eHealth solutions. As a software framework, IoT4Health offers extensibility points for the generation of applications. Applications can extend the framework in

  5. A unified software framework for deriving, visualizing, and exploring abstraction networks for ontologies

    Science.gov (United States)

    Ochs, Christopher; Geller, James; Perl, Yehoshua; Musen, Mark A.

    2016-01-01

    Software tools play a critical role in the development and maintenance of biomedical ontologies. One important task that is difficult without software tools is ontology quality assurance. In previous work, we have introduced different kinds of abstraction networks to provide a theoretical foundation for ontology quality assurance tools. Abstraction networks summarize the structure and content of ontologies. One kind of abstraction network that we have used repeatedly to support ontology quality assurance is the partial-area taxonomy. It summarizes structurally and semantically similar concepts within an ontology. However, the use of partial-area taxonomies was ad hoc and not generalizable. In this paper, we describe the Ontology Abstraction Framework (OAF), a unified framework and software system for deriving, visualizing, and exploring partial-area taxonomy abstraction networks. The OAF includes support for various ontology representations (e.g., OWL and SNOMED CT's relational format). A Protégé plugin for deriving “live partial-area taxonomies” is demonstrated. PMID:27345947

  6. CONRAD—A software framework for cone-beam imaging in radiology

    International Nuclear Information System (INIS)

    Maier, Andreas; Choi, Jang-Hwan; Riess, Christian; Keil, Andreas; Fahrig, Rebecca; Hofmann, Hannes G.; Berger, Martin; Fischer, Peter; Schwemmer, Chris; Wu, Haibo; Müller, Kerstin; Hornegger, Joachim

    2013-01-01

    Purpose: In the community of x-ray imaging, there is a multitude of tools and applications that are used in scientific practice. Many of these tools are proprietary and can only be used within a certain lab. Often the same algorithm is implemented multiple times by different groups in order to enable comparison. In an effort to tackle this problem, the authors created CONRAD, a software framework that provides many of the tools that are required to simulate basic processes in x-ray imaging and perform image reconstruction with consideration of nonlinear physical effects. Methods: CONRAD is a Java-based state-of-the-art software platform with extensive documentation. It is based on platform-independent technologies. Special libraries offer access to hardware acceleration such as OpenCL. There is an easy-to-use interface for parallel processing. The software package includes different simulation tools that are able to generate up to 4D projection and volume data and respective vector motion fields. Well-known reconstruction algorithms such as FBP, DBP, and ART are included. All algorithms in the package are referenced to a scientific source. Results: A total of 13 different phantoms and 30 processing steps have already been integrated into the platform at the time of writing. The platform comprises 74,000 nonblank lines of code, of which 19% are used for documentation. The software package is available for download at http://conrad.stanford.edu. To demonstrate the use of the package, the authors reconstructed images from two different scanners, a table-top system and a clinical C-arm system. Runtimes were evaluated using the RabbitCT platform and demonstrate state-of-the-art runtimes with 2.5 s for the 256 problem size and 12.4 s for the 512 problem size. Conclusions: As a common software framework, CONRAD enables the medical physics community to share algorithms and develop new ideas. In particular this offers new opportunities for scientific collaboration and
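
    CONRAD itself is Java, but the FBP algorithm it references can be sketched in a few lines of NumPy; the parallel-beam geometry, toy sinogram, and nearest-neighbour interpolation below are simplifications chosen for brevity.

      import numpy as np

      def fbp(sinogram, angles_deg):
          """Filtered backprojection for parallel-beam projections."""
          n_angles, n_det = sinogram.shape
          ramp = np.abs(np.fft.fftfreq(n_det))          # ramp filter
          filtered = np.real(np.fft.ifft(
              np.fft.fft(sinogram, axis=1) * ramp, axis=1))
          grid = np.arange(n_det) - n_det / 2
          x, y = np.meshgrid(grid, grid)
          recon = np.zeros((n_det, n_det))
          for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
              t = x * np.cos(theta) + y * np.sin(theta) + n_det / 2
              recon += proj[np.clip(t.astype(int), 0, n_det - 1)]
          return recon * np.pi / n_angles

      sino = np.ones((180, 64))                 # toy constant sinogram
      print(fbp(sino, np.arange(180)).shape)    # -> (64, 64)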

  7. HeteroGenius: A Framework for Hybrid Analysis of Heterogeneous Software Specifications

    Directory of Open Access Journals (Sweden)

    Manuel Giménez

    2014-01-01

    Full Text Available Nowadays, software artifacts are ubiquitous in our lives, being an essential part of home appliances, cars, cell phones, and even more critical activities like aeronautics and health sciences. In this context software failures may produce enormous losses, either economic or, in the worst case, in human lives. Software analysis is an area in software engineering concerned with the application of diverse techniques in order to prove the absence of errors in software pieces. In many cases different analysis techniques are applied by following specific methodological combinations that ensure better results. These interactions between tools are usually carried out at the user level and are not supported by the tools themselves. In this work we present HeteroGenius, a framework conceived for developing tools that allow users to perform hybrid analysis of heterogeneous software specifications. HeteroGenius was designed prioritising the possibility of adding new specification languages and analysis tools, and enabling a synergistic relation between the techniques under a graphical interface that satisfies several well-known usability criteria. As a case study we implemented the functionality of Dynamite on top of HeteroGenius.

  8. Revisioning Theoretical Framework of Electronic Performance Support Systems (EPSS within the Software Application Examples

    Directory of Open Access Journals (Sweden)

    Dr. Servet BAYRAM

    2004-04-01

    Full Text Available EPSS provides electronic support to learners in achieving a performance objective; a feature which makes it universally and consistently available on demand any time, any place, regardless of situation, without unnecessary intermediaries involved in the process. The aim of this review is to develop a set of theoretical constructs that provide descriptive power for the explanation of EPSS and its roots and features within software application examples (i.e., Microsoft SharePoint Server v2.0 Beta 2, IBM Lotus Notes 6 & Domino 6, Oracle 9i Collaboration Suite, and Mac OS X v10.2). From the educational and training point of view, the paper visualizes a pentagon model for the interrelated domains of the theoretical framework of EPSS. These domains are: learning theories, information processing theories, developmental theories, instructional theories, and acceptance theories. This descriptive framework explains which outcomes occur under given theoretical conditions for a given EPSS model within the software examples. It summarizes some of the theoretical concepts supporting the EPSS-related features and explains how such concepts share features with the example software programs in education and job training.

  9. A Framework for Performing V&V within Reuse-Based Software Engineering

    Science.gov (United States)

    Addy, Edward A.

    1996-01-01

    Verification and validation (V&V) is performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. Early discovery is important in order to minimize the cost and other impacts of correcting these errors. In order to provide early detection of errors, V&V is conducted in parallel with system development, often beginning with the concept phase. In reuse-based software engineering, however, decisions on the requirements, design and even implementation of domain assets can be made prior to beginning development of a specific system. In this case, V&V must be performed during domain engineering in order to have an impact on system development. This paper describes a framework for performing V&V within architecture-centric, reuse-based software engineering. This framework includes the activities of traditional application-level V&V, and extends these activities into domain engineering and into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.

  10. Software Validation in ATLAS

    International Nuclear Information System (INIS)

    Hodgkinson, Mark; Seuster, Rolf; Simmons, Brinick; Sherwood, Peter; Rousseau, David

    2012-01-01

    The ATLAS collaboration operates an extensive set of protocols to validate the quality of the offline software in a timely manner. This is essential in order to process the large amounts of data being collected by the ATLAS detector in 2011 without complications on the offline software side. We will discuss a number of different strategies used to validate the ATLAS offline software; running the ATLAS framework software, Athena, in a variety of configurations daily on each nightly build via the ATLAS Nightly System (ATN) and Run Time Tester (RTT) systems; the monitoring of these tests and checking the compilation of the software via distributed teams of rotating shifters; monitoring of and follow up on bug reports by the shifter teams and periodic software cleaning weeks to improve the quality of the offline software further.

  11. Monitoring extensions for component-based distributed software

    NARCIS (Netherlands)

    Diakov, N.K.; Papir, Z.; van Sinderen, Marten J.; Quartel, Dick

    2000-01-01

    This paper defines a generic class of monitoring extensions to component-based distributed enterprise software. Introducing a monitoring extension to a legacy application system can be very costly. In this paper, we identify the minimum support for application monitoring within the generic

  12. Robotic Software Integration Using MARIE

    Directory of Open Access Journals (Sweden)

    Carle Côté

    2006-03-01

    Full Text Available This paper presents MARIE, a middleware framework oriented towards developing and integrating new and existing software for robotic systems. By using a generic communication framework, MARIE aims to create a flexible distributed component system that allows robotics developers to share software programs and algorithms, and design prototypes rapidly based on their own integration needs. The use of MARIE is illustrated with the design of a socially interactive autonomous mobile robot platform capable of map building, localization, navigation, task scheduling, sound source localization, tracking and separation, speech recognition and generation, visual tracking, message reading and graphical interaction using a touch screen interface.

  13. PAPIRUS, a parallel computing framework for sensitivity analysis, uncertainty propagation, and estimation of parameter distribution

    International Nuclear Information System (INIS)

    Heo, Jaeseok; Kim, Kyung Doo

    2015-01-01

    Highlights: • We developed an interface between an engineering simulation code and statistical analysis software. • Multiple packages of sensitivity analysis, uncertainty quantification, and parameter estimation algorithms are implemented in the framework. • Parallel computing algorithms are also implemented in the framework to solve multiple computational problems simultaneously. - Abstract: This paper introduces a statistical data analysis toolkit, PAPIRUS, designed to perform model calibration, uncertainty propagation, Chi-square linearity testing, and sensitivity analysis for both linear and nonlinear problems. PAPIRUS was developed by implementing multiple packages of methodologies and building an interface between an engineering simulation code and the statistical analysis algorithms. A parallel computing framework is implemented in PAPIRUS with multiple computing resources and proper communication between the server and the clients of each processor. It was shown that even though a large amount of data is considered for the engineering calculation, the distributions of the model parameters and the calculation results can be quantified accurately with significant reductions in computational effort. A general description of PAPIRUS with a graphical user interface is presented in Section 2. Sections 2.1–2.5 present the methodologies of data assimilation, uncertainty propagation, Chi-square linearity testing, and sensitivity analysis implemented in the toolkit, with some results obtained by each module of the software. Parallel computing algorithms adopted in the framework to solve multiple computational problems simultaneously are also summarized in the paper.
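
    The parallel uncertainty-propagation pattern can be sketched as below; the "simulation" is a stand-in for an engineering code run and the parameter distributions are invented, so this illustrates only the farm-out-and-collect structure.

      import random
      from multiprocessing import Pool

      def simulation(params):
          k, q = params                 # stand-in for an engineering code run
          return k * q ** 0.5

      def sample():
          return (random.gauss(1.0, 0.1), random.uniform(0.8, 1.2))

      if __name__ == "__main__":
          samples = [sample() for _ in range(1000)]
          with Pool(processes=4) as pool:   # server farms runs out to clients
              results = pool.map(simulation, samples)
          mean = sum(results) / len(results)
          var = sum((r - mean) ** 2 for r in results) / (len(results) - 1)
          print(f"output mean={mean:.3f}, std={var ** 0.5:.3f}")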

  14. PAPIRUS, a parallel computing framework for sensitivity analysis, uncertainty propagation, and estimation of parameter distribution

    Energy Technology Data Exchange (ETDEWEB)

    Heo, Jaeseok, E-mail: jheo@kaeri.re.kr; Kim, Kyung Doo, E-mail: kdkim@kaeri.re.kr

    2015-10-15

    Highlights: • We developed an interface between an engineering simulation code and statistical analysis software. • Multiple packages of sensitivity analysis, uncertainty quantification, and parameter estimation algorithms are implemented in the framework. • Parallel computing algorithms are also implemented in the framework to solve multiple computational problems simultaneously. - Abstract: This paper introduces a statistical data analysis toolkit, PAPIRUS, designed to perform model calibration, uncertainty propagation, Chi-square linearity testing, and sensitivity analysis for both linear and nonlinear problems. PAPIRUS was developed by implementing multiple packages of methodologies and building an interface between an engineering simulation code and the statistical analysis algorithms. A parallel computing framework is implemented in PAPIRUS with multiple computing resources and proper communication between the server and the clients of each processor. It was shown that even though a large amount of data is considered for the engineering calculation, the distributions of the model parameters and the calculation results can be quantified accurately with significant reductions in computational effort. A general description of PAPIRUS with a graphical user interface is presented in Section 2. Sections 2.1–2.5 present the methodologies of data assimilation, uncertainty propagation, Chi-square linearity testing, and sensitivity analysis implemented in the toolkit, with some results obtained by each module of the software. Parallel computing algorithms adopted in the framework to solve multiple computational problems simultaneously are also summarized in the paper.

  15. Coordinating Management Activities in Distributed Software Development Projects

    OpenAIRE

    Bendeck, Fawsy; Goldmann, Sigrid; Holz, Harald; Kötting, Boris

    1999-01-01

    Coordinating distributed processes, especially engineering and software design processes, has been a research topic for some time now. Several approaches have been published that aim at coordinating large projects in general, and large software development processes in particular. However, most of these approaches focus on the technical part of the design process and omit management activities like planning and scheduling the project, or monitoring it during execution. In this paper, we focus o...

  16. The ATLAS online High Level Trigger framework experience reusing offline software components in the ATLAS trigger

    CERN Document Server

    Wiedenmann, W

    2009-01-01

    Event selection in the Atlas High Level Trigger is accomplished to a large extent by reusing software components and event selection algorithms developed and tested in an offline environment. Many of these offline software modules are not specifically designed to run in a heavily multi-threaded online data flow environment. The Atlas High Level Trigger (HLT) framework based on the Gaudi and Atlas Athena frameworks, forms the interface layer, which allows the execution of the HLT selection and monitoring code within the online run control and data flow software. While such an approach provides a unified environment for trigger event selection across all of Atlas, it also poses strict requirements on the reused software components in terms of performance, memory usage and stability. Experience of running the HLT selection software in the different environments and especially on large multi-node trigger farms has been gained in several commissioning periods using preloaded Monte Carlo events, in data taking peri...

  17. Toward a User Driven Innovation for Distributed Software Teams

    Science.gov (United States)

    Hossain, Liaquat; Zhou, David

    The software industry has come to include some of the most revolutionary distributed work groups; however, not all such groups achieve their set goals, and some even fail miserably. The distributed nature of open source software project teams provides an intriguing context for the study of distributed coordination. OSS team structures have traditionally been geographically dispersed and, therefore, the coordination of post-release activities such as testing is made difficult by the fact that the only means of communication are electronic forms such as e-mail, message boards, and forums. Nevertheless, large-scale, complex, and innovative software packages have been the fruits of labor for some OSS teams set in such coordination-unfriendly environments, while others end in flames. Why are some distributed work groups more effective than others? In our current communication-enriched environment, best practices for coordination are adopted by all software projects, yet some still fall by the wayside. Does the team structure have a bearing on the success of the project? How does the communication between the team and external parties affect the project's ultimate success or failure? In this study, we seek to answer these questions by applying existing theories from social networks and their analytical methods to the coordination of defect management activities found in OSS projects. We propose a social network based theoretical model for exploring distributed coordination structures and apply it to the case of the OSS defect management process, exploring the structural properties which induce the greatest coordination performance. The outcome suggests that there is a correlation between certain network measures, such as density, centrality, and betweenness, and coordination performance measures of defect management systems, such as quality and timeliness.
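
    The network measures named above are straightforward to compute with the networkx library; the toy reply graph below is invented.

      import networkx as nx

      # Edges: who replied to whom on a hypothetical defect tracker.
      g = nx.Graph([("alice", "bob"), ("alice", "carol"),
                    ("bob", "carol"), ("carol", "dave")])

      print("density:", nx.density(g))
      print("degree centrality:", nx.degree_centrality(g))
      print("betweenness:", nx.betweenness_centrality(g))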

  18. AFECS. multi-agent framework for experiment control systems

    Energy Technology Data Exchange (ETDEWEB)

    Gyurjyan, V; Abbott, D; Heyes, G; Jastrzembski, E; Timmer, C; Wolin, E [Jefferson Lab, 12000 Jefferson Ave. MS-12B3, Newport News, VA 23606 (United States)], E-mail: gurjyan@jlab.org

    2008-07-01

    AFECS is a pure Java based software framework for designing and implementing distributed control systems. AFECS creates a control system environment as a collection of software agents behaving as finite state machines. These agents can represent real entities, such as hardware devices, software tasks, or control subsystems. A special control oriented ontology language (COOL), based on RDFS (Resource Description Framework Schema), is provided for control system description as well as for agent communication. AFECS agents can be distributed over a variety of platforms. Agents communicate with their associated physical components using a range of communication protocols, including tcl-DP, cMsg (a publish-subscribe communication system developed at Jefferson Lab), SNMP (simple network management protocol), the EPICS channel access protocol, and JDBC.
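
    AFECS is Java-based, but the agent-as-finite-state-machine abstraction can be sketched in Python; the states, commands, and device name below are invented rather than taken from COOL.

      TRANSITIONS = {
          ("booted",     "configure"): "configured",
          ("configured", "start"):     "active",
          ("active",     "stop"):      "configured",
          ("configured", "reset"):     "booted",
      }

      class Agent:
          def __init__(self, name, state="booted"):
              self.name, self.state = name, state

          def handle(self, command):
              nxt = TRANSITIONS.get((self.state, command))
              if nxt is None:
                  print(f"{self.name}: '{command}' illegal in '{self.state}'")
              else:
                  print(f"{self.name}: {self.state} -> {nxt}")
                  self.state = nxt

      hv = Agent("hv-supply")          # agent fronting a hardware device
      for cmd in ["configure", "start", "reset", "stop"]:
          hv.handle(cmd)               # 'reset' is rejected while active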

  19. AFECS. Multi-Agent Framework for Experiment Control Systems

    Energy Technology Data Exchange (ETDEWEB)

    Vardan Gyurjyan; David Abbott; William Heyes; Edward Jastrzembski; Carl Timmer; Elliott Wolin

    2008-01-23

    AFECS is a pure Java based software framework for designing and implementing distributed control systems. AFECS creates a control system environment as a collection of software agents behaving as finite state machines. These agents can represent real entities, such as hardware devices, software tasks, or control subsystems. A special control oriented ontology language (COOL), based on RDFS (Resource Description Framework Schema), is provided for control system description as well as for agent communication. AFECS agents can be distributed over a variety of platforms. Agents communicate with their associated physical components using a range of communication protocols, including tcl-DP, cMsg (a publish-subscribe communication system developed at Jefferson Lab), SNMP (simple network management protocol), the EPICS channel access protocol, and JDBC.

  20. AFECS. multi-agent framework for experiment control systems

    International Nuclear Information System (INIS)

    Gyurjyan, V; Abbott, D; Heyes, G; Jastrzembski, E; Timmer, C; Wolin, E

    2008-01-01

    AFECS is a pure Java based software framework for designing and implementing distributed control systems. AFECS creates a control system environment as a collection of software agents behaving as finite state machines. These agents can represent real entities, such as hardware devices, software tasks, or control subsystems. A special control oriented ontology language (COOL), based on RDFS (Resource Description Framework Schema), is provided for control system description as well as for agent communication. AFECS agents can be distributed over a variety of platforms. Agents communicate with their associated physical components using a range of communication protocols, including tcl-DP, cMsg (a publish-subscribe communication system developed at Jefferson Lab), SNMP (simple network management protocol), the EPICS channel access protocol, and JDBC

  1. Software defined networking applications in distributed datacenters

    CERN Document Server

    Qi, Heng

    2016-01-01

    This SpringerBrief provides essential insights into SDN application design and deployment in distributed datacenters. Three key problems are discussed: SDN application design, SDN deployment, and SDN management. The book demonstrates how to design SDN-based request allocation applications in distributed datacenters. It also presents solutions for SDN controller placement to deploy SDN in distributed datacenters. Finally, an SDN management system is proposed to guarantee the performance of datacenter networks that are covered and controlled by many heterogeneous controllers. Researchers and practitioners alike will find this book a valuable resource for further study on Software Defined Networking.

  2. HistFitter software framework for statistical data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Baak, M. [CERN, Geneva (Switzerland); Besjes, G.J. [Radboud University Nijmegen, Nijmegen (Netherlands); Nikhef, Amsterdam (Netherlands); Cote, D. [University of Texas, Arlington (United States); Koutsman, A. [TRIUMF, Vancouver (Canada); Lorenz, J. [Ludwig-Maximilians-Universitaet Muenchen, Munich (Germany); Excellence Cluster Universe, Garching (Germany); Short, D. [University of Oxford, Oxford (United Kingdom)

    2015-04-15

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fit to data and interpreted with statistical tests. Internally HistFitter uses the statistics packages RooStats and HistFactory. A key innovation of HistFitter is its design, which is rooted in analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with multiple models at once that describe the data, HistFitter introduces an additional level of abstraction that allows for easy bookkeeping, manipulation and testing of large collections of signal hypotheses. Finally, HistFitter provides a collection of tools to present results with publication quality style through a simple command-line interface. (orig.)
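
    The control/signal-region idea can be illustrated with a heavily simplified two-region Poisson counting fit; HistFitter builds such models through RooStats/HistFactory, so the grid scan and all numbers below are invented for transparency.

      import math

      def nll(mu_b, mu_s, obs, exp_b, exp_s):
          """Negative log-likelihood over (control, signal) regions."""
          total = 0.0
          for o, b, s in zip(obs, exp_b, exp_s):
              e = mu_b * b + mu_s * s
              total += e - o * math.log(e)
          return total

      obs = (95, 18)                  # observed counts in (CR, SR)
      exp_b = (100.0, 12.0)           # nominal background expectations
      exp_s = (0.0, 5.0)              # signal contributes only to the SR

      # Crude grid scan instead of a real minimizer, for transparency.
      best = min((nll(mb / 50, ms / 50, obs, exp_b, exp_s), mb / 50, ms / 50)
                 for mb in range(25, 101) for ms in range(0, 151))
      print(f"background norm={best[1]:.2f}, signal strength={best[2]:.2f}")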

  3. HistFitter software framework for statistical data analysis

    International Nuclear Information System (INIS)

    Baak, M.; Besjes, G.J.; Cote, D.; Koutsman, A.; Lorenz, J.; Short, D.

    2015-01-01

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fit to data and interpreted with statistical tests. Internally HistFitter uses the statistics packages RooStats and HistFactory. A key innovation of HistFitter is its design, which is rooted in analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with multiple models at once that describe the data, HistFitter introduces an additional level of abstraction that allows for easy bookkeeping, manipulation and testing of large collections of signal hypotheses. Finally, HistFitter provides a collection of tools to present results with publication quality style through a simple command-line interface. (orig.)

  4. Agent-oriented software engineering reflections on architectures, methodologies, languages, and frameworks

    CERN Document Server

    Shehory, Onn

    2014-01-01

    With this book, Onn Shehory and Arnon Sturm, together with further contributors, introduce the reader to various facets of agent-oriented software engineering (AOSE). They provide a selected collection of state-of-the-art findings, which combines research from information systems, artificial intelligence, distributed systems and software engineering and covers essential development aspects of agent-based systems. The book chapters are organized into five parts. The first part introduces the AOSE domain in general, including introduction to agents and the peculiarities of software engineerin

  5. Software Comparison for Renewable Energy Deployment in a Distribution Network

    Energy Technology Data Exchange (ETDEWEB)

    Gao, David Wenzhong [Alternative Power Innovations, LLC, Sharonville, OH (United States); Muljadi, Eduard [National Renewable Energy Lab. (NREL), Golden, CO (United States); Tian, Tian [National Renewable Energy Lab. (NREL), Golden, CO (United States); Miller, Mackay [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-02-22

    The main objective of this report is to evaluate different software options for performing robust distributed generation (DG) power system modeling. The features and capabilities of four simulation tools, OpenDSS, GridLAB-D, CYMDIST, and PowerWorld Simulator, are compared to analyze their effectiveness in analyzing distribution networks with DG. OpenDSS and GridLAB-D, two open-source packages, have the capability to simulate networks with fluctuating data values; they allow a simulation to be run at each time instant by iterating only the main script file. CYMDIST, a commercial package, allows for time-series simulation to study variations in network controls. PowerWorld Simulator, another commercial tool, has a batch-mode simulation function through the 'Time Step Simulation' tool, which obtains solutions for a list of specified time points. PowerWorld Simulator is intended for analysis of transmission-level systems, while the other three are designed for distribution systems. CYMDIST and PowerWorld Simulator feature easy-to-use graphical user interfaces (GUIs). OpenDSS and GridLAB-D, on the other hand, are based on command-line programs, which increases the time necessary to become familiar with the software packages.

  6. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    of generic run-time error types, design of methods of observing application software behavior during execution and design of methods of evaluating run-time constraints. In the definition of error types it is attempted to cover all relevant aspects of the application software behavior. Methods of observation......In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition...... and constraint evaluation is designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...

  7. Management of Globally Distributed Component-Based Software Development Projects

    NARCIS (Netherlands)

    J. Kotlarsky (Julia)

    2005-01-01

    textabstractGlobally Distributed Component-Based Development (GD CBD) is expected to become a promising area, as increasing numbers of companies are setting up software development in a globally distributed environment and at the same time are adopting CBD methodologies. Being an emerging area, the

  8. Some software issues in mapping of power distribution feeders

    International Nuclear Information System (INIS)

    Mufti, I.A.

    1994-01-01

    This paper is about the in-house developed software for the distribution feeders mapping project. It first gives a bird's-eye view of the project and highlights the technical complexity in management and logistics introduced by the sheer size of the project. It then gives an overview of the software developed and moves on to describe circuit tracing, the circuit model, leaf isolation (for the tree-structured network) and backtracking in more detail; describing all of the many parts of the software is not possible because of space limitations. (author)
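
    For a tree-structured network of this kind, circuit tracing with backtracking amounts to a depth-first traversal of the feeder tree; a toy sketch (invented node names, not the in-house software) is:

        # Radial feeder modeled as an adjacency map: parent -> downstream branches.
        feeder = {
            "substation": ["pole1", "pole2"],
            "pole1": ["pole3", "pole4"],
            "pole2": [],
            "pole3": [],
            "pole4": [],
        }

        def trace(node, path=()):
            """Depth-first circuit tracing; backtracks at every leaf."""
            path = path + (node,)
            branches = feeder.get(node, [])
            if not branches:                    # leaf: one complete circuit traced
                print(" -> ".join(path))
            for child in branches:              # recursion returning = backtracking
                trace(child, path)

        trace("substation")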

  9. Software product evaluation projects with a new quality framework

    OpenAIRE

    Titiosky, Rolando; Angeleri, Paula; Sorgen, Amos; Wuille Bille, Jaquelina

    2013-01-01

    The objective of this article is to present the current status of the MyFEPS [1] research project (Methodologies and Framework for the Evaluation of Software Products, based on international standards), under development at the Facultad de Ingeniería y Tecnología Informática of the Universidad de Belgrano. Its purpose is to design and implement a framework to help technicians, engineers and managers throughout the software evaluation process, from the determination of the objectives of the...

  10. An integrated software testing framework for FPGA-based controllers in nuclear power plants

    International Nuclear Information System (INIS)

    Kim, Jae Yeob; Kim, Eun Sub; Yoo, Jun Beom; Lee, Young Jun; Choi, Jong Gyun

    2016-01-01

    Field-programmable gate arrays (FPGAs) have received much attention from the nuclear industry as an alternative platform to programmable logic controllers for digital instrumentation and control. The software aspect of FPGA development consists of several steps of synthesis and refinement, and also requires verification activities, such as simulations that are performed individually at each step. This study proposed an integrated software-testing framework for simulating all artifacts of the FPGA software development simultaneously and evaluating whether all artifacts work correctly using common oracle programs. The method also generates a massive number of meaningful simulation scenarios that reflect reactor shutdown logics. The experiment, which was performed on two FPGA software implementations, showed that the framework can dramatically save both time and costs.

  11. Managing distributed software development in the Virtual Astronomical Observatory

    Science.gov (United States)

    Evans, Janet D.; Plante, Raymond L.; Boneventura, Nina; Busko, Ivo; Cresitello-Dittmar, Mark; D'Abrusco, Raffaele; Doe, Stephen; Ebert, Rick; Laurino, Omar; Pevunova, Olga; Refsdal, Brian; Thomas, Brian

    2012-09-01

    The U.S. Virtual Astronomical Observatory (VAO) is a product-driven organization that provides new scientific research capabilities to the astronomical community. Software development for the VAO follows a lightweight framework that guides development of science applications and infrastructure. Challenges to be overcome include distributed development teams, part-time efforts, and highly constrained schedules. We describe the process we followed to conquer these challenges while developing Iris, the VAO application for analysis of 1-D astronomical spectral energy distributions (SEDs). Iris was successfully built and released in less than a year with a team distributed across four institutions. The project followed existing International Virtual Observatory Alliance inter-operability standards for spectral data and contributed a SED library as a by-product of the project. We emphasize lessons learned that will be folded into future development efforts. In our experience, a well-defined process that provides guidelines to ensure the project is cohesive and stays on track is key to success. Internal product deliveries with a planned test and feedback loop are critical. Release candidates are measured against use cases established early in the process, and provide the opportunity to assess priorities and make course corrections during development. Also key is the participation of a stakeholder such as a lead scientist who manages the technical questions, advises on priorities, and is actively involved as a lead tester. Finally, frequent scheduled communications (for example a bi-weekly tele-conference) assure issues are resolved quickly and the team is working toward a common vision.

  12. Towards a Framework for the Evaluation Design of Enterprise Social Software

    DEFF Research Database (Denmark)

    Herzog, Christian; Richter, Alexander; Steinhüser, Melanie

    2015-01-01

    While the use of Enterprise Social Software (ESS) increases, reports from science and practice show that evaluating its impact remains a major challenge. Various interests and points of view make each ESS evaluation an individual matter and lead to diverse requirements. In this paper, we propose a design theory that highlights the various design options and ensures completeness and consistency. Based on a comprehensive literature analysis, as well as an interview study with 31 ESS experts from 29 companies, we suggest a conceptual framework intended as decision support for the ESS evaluation design for different stakeholders. Beyond providing an orientation, the framework also reveals six evaluation classes that represent typical application instantiations and can be understood as principles of implementation. A first validation in five organizations confirms that the framework can lead...

  13. A reference model and technical framework for mobile social software for learning

    NARCIS (Netherlands)

    De Jong, Tim; Specht, Marcus; Koper, Rob

    2008-01-01

    De Jong,T., Specht, M., & Koper, R. (2008). A reference model and technical framework for mobile social software for learning. In I. A. Sánchez & P. Isaías (Eds.), Proceedings of the IADIS Mobile Learning Conference 2008 (pp. 206-210). April, 11-13, 2008, Carvoeiro, Portugal.

  14. Hardware-assisted software clock synchronization for homogeneous distributed systems

    Science.gov (United States)

    Ramanathan, P.; Kandlur, Dilip D.; Shin, Kang G.

    1990-01-01

    A clock synchronization scheme that strikes a balance between hardware and software solutions is proposed. The proposed scheme is a software algorithm that uses minimal additional hardware to achieve reasonably tight synchronization. Unlike other software solutions, the guaranteed worst-case skews can be made insensitive to the maximum variation of message transit delay in the system. The scheme is particularly suitable for large partially connected distributed systems with topologies that support simple point-to-point broadcast algorithms. Examples of such topologies include the hypercube and the mesh interconnection structures.
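
    The hardware assistance is what bounds the worst-case skew, but the software half of such a scheme rests on round-trip clock-offset estimation between nodes; a generic Cristian-style sketch (with a simulated remote node, not the paper's algorithm) is:

        import random
        import time

        def estimate_offset(read_remote_clock):
            """Estimate the remote clock's offset from one round-trip exchange."""
            t0 = time.monotonic()
            remote = read_remote_clock()              # message exchange with peer node
            t1 = time.monotonic()
            # Assume the reply was generated halfway through the round trip.
            return remote - (t0 + t1) / 2.0

        def fake_remote_clock():
            """Simulated peer whose clock runs 0.25 s ahead, over a jittery link."""
            time.sleep(random.uniform(0.001, 0.005))  # message transit delay
            return time.monotonic() + 0.25

        print(f"estimated offset: {estimate_offset(fake_remote_clock):+.3f} s")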

  15. FACET: A simulation software framework for modeling complex societal processes and interactions

    Energy Technology Data Exchange (ETDEWEB)

    Christiansen, J. H.

    2000-06-02

    FACET, the Framework for Addressing Cooperative Extended Transactions, was developed at Argonne National Laboratory to address the need for a simulation software architecture in the style of an agent-based approach, but with sufficient robustness, expressiveness, and flexibility to be able to deal with the levels of complexity seen in real-world social situations. FACET is an object-oriented software framework for building models of complex, cooperative behaviors of agents. It can be used to implement simulation models of societal processes such as the complex interplay of participating individuals and organizations engaged in multiple concurrent transactions in pursuit of their various goals. These transactions can be patterned on, for example, clinical guidelines and procedures, business practices, government and corporate policies, etc. FACET can also address other complex behaviors such as biological life cycles or manufacturing processes. To date, for example, FACET has been applied to such areas as land management, health care delivery, avian social behavior, and interactions between natural and social processes in ancient Mesopotamia.

  16. Virtual reality devices integration in scientific visualization software in the VtkVRPN framework

    International Nuclear Information System (INIS)

    Journe, G.; Guilbaud, C.

    2005-01-01

    High-quality scientific visualization software relies on ergonomic navigation and exploration, which are essential for efficient data analysis. To help solve this issue, management of virtual reality devices has been developed inside the CEA 'VtkVRPN' framework. This framework is based on VTK, a 3D graphics library, and VRPN, a virtual reality device management library. This document describes the developments done during a post-graduate training course. (authors)

  17. ProjectQ: an open source software framework for quantum computing

    Directory of Open Access Journals (Sweden)

    Damian S. Steiger

    2018-01-01

    We introduce ProjectQ, an open source software effort for quantum computing. The first release features a compiler framework capable of targeting various types of hardware, a high-performance simulator with emulation capabilities, and compiler plug-ins for circuit drawing and resource estimation. We introduce our Python-embedded domain-specific language, present the features, and provide example implementations for quantum algorithms. The framework allows testing of quantum algorithms through simulation and enables running them on actual quantum hardware using a back-end connecting to the IBM Quantum Experience cloud service. Through extension mechanisms, users can provide back-ends to further quantum hardware, and scientists working on quantum compilation can provide plug-ins for additional compilation, optimization, gate synthesis, and layout strategies.
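
    As a flavour of the Python-embedded domain-specific language, a "quantum coin flip" in the style of ProjectQ's documented examples runs on the default simulator back-end:

        from projectq import MainEngine
        from projectq.ops import H, Measure

        eng = MainEngine()              # compiler engine with the built-in simulator
        qubit = eng.allocate_qubit()    # allocate one logical qubit
        H | qubit                       # put it into an equal superposition
        Measure | qubit                 # measure: 0 or 1 with probability 1/2
        eng.flush()                     # send the compiled circuit to the back-end
        print(int(qubit))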

  18. A Framework for Software-as-a-Service Selection and Provisioning

    OpenAIRE

    Badidi, Elarbi

    2013-01-01

    As cloud computing is increasingly transforming the information technology landscape, organizations and businesses are exhibiting strong interest in Software-as-a-Service (SaaS) offerings that can help them increase business agility and reduce their operational costs. They increasingly demand services that can meet their functional and non-functional requirements. Given the plethora and the variety of SaaS offerings, we propose, in this paper, a framework for SaaS provisioning, which relies o...

  19. Modeling of ultrasonic processes utilizing a generic software framework

    Science.gov (United States)

    Bruns, P.; Twiefel, J.; Wallaschek, J.

    2017-06-01

    Modeling of ultrasonic processes is typically characterized by a high degree of complexity. Different domains and size scales must be regarded, so it is rather difficult to build up a single detailed overall model. Developing partial models is a common approach to overcome this difficulty. In this paper a generic but simple software framework is presented which allows coupling arbitrary partial models through slave modules with well-defined interfaces and a master module for coordination. Two examples are given to present the developed framework. The first one is the parameterization of a load model for ultrasonically-induced cavitation. The piezoelectric oscillator, its mounting, and the process load are described individually by partial models. These partial models are then coupled using the framework. The load model is composed of spring-damper elements which are parameterized by experimental results. In the second example, the ideal mounting position for an oscillator utilized in ultrasonic-assisted machining of stone is determined. Partial models for the ultrasonic oscillator, its mounting, the simplified contact process, and the workpiece's material characteristics are presented. For both applications, input and output variables are defined to meet the requirements of the framework's interface.
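
    The coupling scheme can be sketched in a few lines (hypothetical module names and toy models, not the authors' framework): each partial model sits behind a slave module with a fixed interface, and a master module iterates the coupled system:

        class SlaveModule:
            """Wraps one partial model behind a well-defined interface."""
            def __init__(self, name, model):
                self.name, self.model = name, model
            def step(self, signal):
                return self.model(signal)

        class Master:
            """Coordinates the slaves by passing each output to the next module."""
            def __init__(self, slaves):
                self.slaves = slaves
            def run(self, signal, steps=3):
                for _ in range(steps):
                    for slave in self.slaves:
                        signal = slave.step(signal)
                return signal

        oscillator = SlaveModule("oscillator", lambda u: 1.2 * u)    # toy gain
        load = SlaveModule("load", lambda u: u - 0.1 * u ** 2)       # toy nonlinearity
        print(Master([oscillator, load]).run(1.0))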

  20. Heartbeat-based error diagnosis framework for distributed embedded systems

    Science.gov (United States)

    Mishra, Swagat; Khilar, Pabitra Mohan

    2012-01-01

    Distributed embedded systems have significant applications in the automobile industry as steer-by-wire, fly-by-wire and brake-by-wire systems. In this paper, we provide a general framework for fault detection in a distributed embedded real-time system. We use heartbeat monitoring, checkpointing and model-based redundancy to design a scalable framework that takes care of task scheduling, temperature control and diagnosis of faulty nodes in a distributed embedded system. This helps in diagnosing and shutting down faulty actuators before the system becomes unsafe. The framework is designed and tested using a new simulation model consisting of virtual nodes working on a message-passing system.
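
    A minimal illustration of the heartbeat part (invented node names; the paper's framework additionally covers scheduling and temperature control) is:

        import time

        class HeartbeatDiagnoser:
            """Flags nodes whose heartbeats stop arriving within the deadline."""
            def __init__(self, deadline_s):
                self.deadline_s = deadline_s
                self.last_beat = {}
            def beat(self, node):
                self.last_beat[node] = time.monotonic()
            def faulty_nodes(self):
                now = time.monotonic()
                return [n for n, t in self.last_beat.items()
                        if now - t > self.deadline_s]

        diag = HeartbeatDiagnoser(deadline_s=0.2)
        diag.beat("brake_actuator")
        diag.beat("steer_actuator")
        time.sleep(0.3)
        diag.beat("steer_actuator")     # only the steering node is still alive
        print(diag.faulty_nodes())      # ['brake_actuator'] -> shut it down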

  1. Software framework and jet energy scale calibration in the ATLAS experiment

    International Nuclear Information System (INIS)

    Binet, Sebastien

    2006-01-01

    This thesis presents the work achieved to instrument the ATLAS software framework, ATHENA, with a library of tools and utensils for the physics analysis as well as the extraction of the jet energy scale using physics events (in-situ calibration). The software part presents the various components of the ATHENA framework which handles the simulated and reconstructed data flow as well as the different stages of this process, before and during the data taking. The building of a library of tools easing the reconstruction of physics objects, their association with Monte-Carlo particles, and their API is then explained. The need for a common language and collaboration-wide utensils is emphasised, as it allows sharing the workload of validating these tools and getting reproducible physics results. The analysis part deals with the implementation of a light-jet energy scale calibration algorithm within the C++ framework. This calibration algorithm makes use of W bosons decaying into light jets within semileptonic t t-bar events. From the processing of fast and full simulation data with this algorithm, it seems possible to reach a percent-level knowledge of the light-jet energy scale. Finally, the feasibility study of the b-jet energy scale calibration using γZ0 → γb b-bar events is presented. It is shown that a purely sequential approach is not sufficient to extract the signal nor to collect a sufficient number of Z0 bosons to calibrate the b-jet energy scale. (author)

  2. Real-time Control Mediation in Agile Distributed Software Development

    DEFF Research Database (Denmark)

    Persson, John Stouby; Aaen, Ivan; Mathiassen, Lars

    2008-01-01

    Agile distributed environments pose particular challenges related to control of quality and collaboration in software development. Moreover, while face-to-face interaction is fundamental in agile development, distributed environments must rely extensively on mediated interactions. On this backdrop, we investigate how control was mediated over distance by technology through real-time exchanges. Contrary to previous research, the analysis suggests that both formal and informal elements of real-time mediated control were used; that evolving goals and adjustment of expectations were two of the main issues in real-time mediated control exchanges; and that the actors, despite distances in space and culture, developed a clan-like pattern mediated by technology to help control quality and collaboration in software development.

  3. CONFU: Configuration Fuzzing Testing Framework for Software Vulnerability Detection.

    Science.gov (United States)

    Dai, Huning; Murphy, Christian; Kaiser, Gail

    2010-01-01

    Many software security vulnerabilities only reveal themselves under certain conditions, i.e., particular configurations and inputs together with a certain runtime environment. One approach to detecting these vulnerabilities is fuzz testing. However, typical fuzz testing makes no guarantees regarding the syntactic and semantic validity of the input, or of how much of the input space will be explored. To address these problems, we present a new testing methodology called Configuration Fuzzing. Configuration Fuzzing is a technique whereby the configuration of the running application is mutated at certain execution points, in order to check for vulnerabilities that only arise in certain conditions. As the application runs in the deployment environment, this testing technique continuously fuzzes the configuration and checks "security invariants" that, if violated, indicate a vulnerability. We discuss the approach and introduce a prototype framework called ConFu (CONfiguration FUzzing testing framework) for implementation. We also present the results of case studies that demonstrate the approach's feasibility and evaluate its performance.
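
    The core loop of the technique can be sketched as follows (invented option names and a toy invariant; ConFu itself instruments the running application at chosen execution points):

        import random

        def fuzz_config(config):
            """Mutate one configuration option, as done at an execution point."""
            mutated = dict(config)
            key = random.choice(list(mutated))
            mutated[key] = random.choice([0, -1, 2 ** 31 - 1, "", None])
            return mutated

        def security_invariant(config):
            """A 'security invariant' that, if violated, signals a vulnerability."""
            limit = config.get("max_upload_kb")
            return isinstance(limit, int) and limit > 0

        config = {"max_upload_kb": 1024, "timeout_s": 30}
        for _ in range(10):
            candidate = fuzz_config(config)
            if not security_invariant(candidate):
                print("vulnerability-revealing configuration:", candidate)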

  4. Distributed caching mechanism for various MPE software services

    CERN Document Server

    Svec, Andrej

    2017-01-01

    The MPE Software Section provides multiple software services to facilitate the testing and the operation of the CERN accelerator complex. Continuous growth in the number of users and the amount of processed data results in the requirement of high scalability. Our current priority is to move towards a distributed and properly load-balanced set of services based on containers. The aim of this project is to implement a generic caching mechanism applicable to our services and chosen architecture. The project will first require research into the different aspects of distributed caching (persistence, no-gc caching, cache consistency, etc.) and the available technologies, followed by the implementation of the chosen solution. In the last phase of the project, a monitoring layer will be implemented and integrated with the current ELK stack in order to validate the correctness and performance of the implementation.

  5. Trust in agile teams in distributed software development

    DEFF Research Database (Denmark)

    Tjørnehøj, Gitte; Fransgård, Mette; Skalkam, Signe

    2012-01-01

    Distributed software development (DSD) is becoming everyday practice in the software market. Difficult challenges and difficulty reaching the expected benefits are well documented. Recently, agile software development has become common in DSD, even though there are important incompatibilities between the two. Trust is a key factor that leads to team success. This article reports on a study of two agile DSD teams with very different organization and collaboration patterns. It addresses the role of trust and distrust in DSD by analyzing how the team members' trust developed and eroded through the lifetime of the two collaborations, and how management actions influenced this. We find that some agile practices can empower teams to take over responsibility for building and sustaining their own trust, and that management neglect of trust building in other situations can hinder the development of beneficial balanced agile DSD...

  6. A distributed framework for inter-domain virtual network embedding

    Science.gov (United States)

    Wang, Zihua; Han, Yanni; Lin, Tao; Tang, Hui

    2013-03-01

    Network virtualization has been a promising technology for overcoming the Internet impasse. A main challenge in network virtualization is the efficient assignment of virtual resources. Existing work has focused on intra-domain solutions, whereas the inter-domain situation is more practical in realistic settings. In this paper, we present a distributed inter-domain framework for mapping virtual networks to physical networks, which can improve the performance of virtual network embedding. The distributed framework is based on a multi-agent approach. A set of messages for information exchange is defined. We design different operations and IPTV use scenarios to validate the advantages of our framework. The use cases show that our framework can solve the inter-domain problem efficiently.

  7. Employing peer-to-peer software distribution in ALICE Grid Services to enable opportunistic use of OSG resources

    CERN Multimedia

    CERN. Geneva; Sakrejda, Iwona

    2012-01-01

    The ALICE Grid infrastructure is based on AliEn, a lightweight open source framework built on Web Services and a Distributed Agent Model in which job agents are submitted onto a grid site to prepare the environment and pull work from a central task queue located at CERN. In the standard configuration, each ALICE grid site supports an ALICE-specific VO box as a single point of contact between the site and the ALICE central services. VO box processes monitor site utilization and job requests (ClusterMonitor), monitor dynamic job and site properties (MonaLisa), perform job agent submission (CE) and deploy job-specific software (PackMan). In particular, requiring a VO box at each site simplifies deployment of job software, done onto a shared file system at the site, and adds redundancy to the overall Grid system. ALICE offline computing, however, has also implemented a peer-to-peer method (based on BitTorrent) for downloading job software directly onto each worker node as needed. By utilizing both this peer-...

  8. Implementation of Grid-computing Framework for Simulation in Multi-scale Structural Analysis

    Directory of Open Access Journals (Sweden)

    Data Iranata

    2010-05-01

    A new grid-computing framework for simulation in multi-scale structural analysis is presented. Two levels of parallel processing are involved in this framework: multiple local distributed computing environments connected by a local network to form a grid-based cluster-to-cluster distributed computing environment. To successfully perform the simulation, a large-scale structural system task is decomposed into the simulations of a simplified global model and several detailed component models using various scales. These correlated multi-scale structural system tasks are distributed among clusters, connected together in a multi-level hierarchy, and then coordinated over the internet. The software framework for supporting the multi-scale structural simulation approach is also presented. The program architecture design allows the integration of several multi-scale models as clients and servers under a single platform. To check its feasibility, a prototype software system has been designed and implemented to perform the proposed concept. The simulation results show that the software framework can increase the speedup of the structural analysis. Based on this result, the proposed grid-computing framework is suitable for performing simulation of multi-scale structural analysis.

  9. An analysis software of tritium distribution in food and environmental water in China

    International Nuclear Information System (INIS)

    Li Wenhong; Xu Cuihua; Ren Tianshan; Deng Guilong

    2006-01-01

    Objective: The purpose of developing this software for analyzing the distribution of tritium in food and environmental water is to collect tritium monitoring data and to analyze the data automatically, statistically and graphically, as well as to study and share the data. Methods: Based on previously obtained data, the analysis software was written using VC++.NET as the development tool. The software first transfers data from EXCEL into a database. It also supports appending data, so operators can easily add new monitoring data. Results: After the monitoring data saved as EXCEL files by the original researchers are turned into a database, they can be accessed easily. The software provides a tool for analyzing the distribution of tritium. Conclusion: This software is a first attempt at analyzing data on tritium levels in food and environmental water in China. With the software, data archiving, searching and analysis become easy and direct. (authors)
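
    The Excel-to-database workflow plus a simple distribution statistic, which the authors implemented in VC++.NET, can be sketched in Python (hypothetical file, table and column names):

        import sqlite3

        import pandas as pd

        frame = pd.read_excel("tritium_monitoring.xlsx")    # original EXCEL records

        with sqlite3.connect("tritium.db") as db:
            # First run creates the table; later runs append new monitoring data.
            frame.to_sql("measurements", db, if_exists="append", index=False)
            by_region = pd.read_sql(
                "SELECT region, AVG(tritium_bq_per_l) AS mean_bq_per_l "
                "FROM measurements GROUP BY region", db)
        print(by_region)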

  10. Automatic Software Generation for Real-Time Systems: An Approach Based on Components, Models and Frameworks

    Directory of Open Access Journals (Sweden)

    Diego Alonso

    2012-04-01

    Real-time systems have characteristics that make them particularly sensitive to architectural decisions. The use of frameworks and components has proven effective in improving productivity and software quality, especially when combined with Software Product Line approaches. However, the results in terms of software reuse and standardization make the lack of portability of both the designs and the component-based implementations clear. This article, based on the Model-Driven Software Development paradigm, presents an approach that separates the component-based description of real-time applications from their possible implementations on different platforms. This separation is supported by the automatic integration of the code obtained from the input models into frameworks implemented using object-oriented technology. The architectural decisions adopted in the implementation of one of these frameworks, which is used as a case study to illustrate the benefits of the proposed approach, are also detailed. Finally, a comparison in terms of development cost with other alternative approaches is made.

  11. GeoFramework: A Modeling Framework for Solid Earth Geophysics

    Science.gov (United States)

    Gurnis, M.; Aivazis, M.; Tromp, J.; Tan, E.; Thoutireddy, P.; Liu, Q.; Choi, E.; Dicaprio, C.; Chen, M.; Simons, M.; Quenette, S.; Appelbe, B.; Aagaard, B.; Williams, C.; Lavier, L.; Moresi, L.; Law, H.

    2003-12-01

    earthquake rupture; SNAC, a developing 3-D code based on the FLAC method for visco-elastoplastic deformation; SNARK, a 3-D FE-PIC method for viscoplastic deformation; and gPLATES, an open source paleogeographic/plate tectonics modeling package. We will demonstrate how codes can be linked with each other, such as a regional and global model of mantle convection and a visco-elastoplastic representation of the crust within viscous mantle flow. Finally, we will describe how http://GeoFramework.org has become a distribution site for a suite of modeling software in geophysics.

  12. The IceCube Data Acquisition Software: Lessons Learned during Distributed, Collaborative, Multi-Disciplined Software Development.

    Energy Technology Data Exchange (ETDEWEB)

    Beattie, Keith S; Beattie, Keith; Day Ph.D., Christopher; Glowacki, Dave; Hanson Ph.D., Kael; Jacobsen Ph.D., John; McParland, Charles; Patton Ph.D., Simon

    2007-09-21

    In this experiential paper we report on lessons learned during the development of the data acquisition software for the IceCube project - specifically, how to effectively address the unique challenges presented by a distributed, collaborative, multi-institutional, multi-disciplined project such as this. While development progress in software projects is often described solely in terms of technical issues, our experience indicates that non- and quasi-technical interactions play a substantial role in the effectiveness of large software development efforts. These include: selection and management of multiple software development methodologies, the effective use of various collaborative communication tools, project management structure and roles, and the impact and apparent importance of these elements when viewed through the differing perspectives of hardware, software, scientific and project office roles. Even in areas clearly technical in nature, success is still influenced by non-technical issues that can escape close attention. In particular we describe our experiences on software requirements specification, development methodologies and communication tools. We make observations on what tools and techniques have and have not been effective in this geographically dispersed (including the South Pole) collaboration and offer suggestions on how similarly structured future projects may build upon our experiences.

  13. The ALMA Common Software as a Basis for a Distributed Software Development

    Science.gov (United States)

    Raffi, Gianni; Chiozzi, Gianluca; Glendenning, Brian

    The Atacama Large Millimeter Array (ALMA) is a joint project involving astronomical organizations in Europe, North America and Japan. ALMA will consist of 64 12-m antennas operating in the millimetre and sub-millimetre wavelength range, with baselines of more than 10 km. It will be located at an altitude above 5000 m in the Chilean Atacama desert. The ALMA Computing group is a joint group with staff scattered on 3 continents and is responsible for all the control and data flow software related to ALMA, including tools ranging from support of proposal preparation to archive access of automatically created images. Early in the project it was decided that an ALMA Common Software (ACS) would be developed as a way to provide to all partners involved in the development a common software platform. The original assumption was that some key middleware like communication via CORBA and the use of XML and Java would be part of the project. It was intended from the beginning to develop this software in an incremental way based on releases, so that it would then evolve into an essential embedded part of all ALMA software applications. In this way we would build a basic unity and coherence into a system that will have been developed in a distributed fashion. This paper evaluates our progress after 1.5 year of work, following a few tests and preliminary releases. It analyzes the advantages and difficulties of such an ambitious approach, which creates an interface across all the various control and data flow applications.

  14. A software architectural framework specification for neutron activation analysis

    International Nuclear Information System (INIS)

    Preston, J.A.; Grant, C.N.

    2013-01-01

    Neutron Activation Analysis (NAA) is a sensitive multi-element nuclear analytical technique that has been routinely applied by research reactor (RR) facilities to environmental, nutritional, health related, geological and geochemical studies. As RR facilities face calls to increase their research output and impact, with existing or reducing budgets, automation of NAA offers a possible solution. However, automation has many challenges, not the least of which is a lack of system architecture standards to establish acceptable mechanisms for the various hardware/software and software/software interactions among data acquisition systems, specialised hardware such as sample changers, sample loaders, and data processing modules. This lack of standardization often results in automation hardware and software being incompatible with existing system components, in a facility looking to automate its NAA operations. This limits the availability of automation to a few RR facilities with adequate budgets or in-house engineering resources. What is needed is a modern open system architecture for NAA that provides the required set of functionalities. This paper describes such an 'architectural framework' (OpenNAA), and portions of a reference implementation. As an example of the benefits, calculations indicate that applying this architecture to the compilation and QA steps associated with the analysis of 35 elements in 140 samples, with 14 SRMs, can reduce the time required by over 80%. The adoption of open standards in the nuclear industry has been very successful over the years in promoting interchangeability and maximising the lifetime and output of nuclear measurement systems. OpenNAA will provide similar benefits within the NAA application space, safeguarding user investments in their current system, while providing a solid path for development into the future. (author)

  15. BEANS - a software package for distributed Big Data analysis

    Science.gov (United States)

    Hypki, Arkadiusz

    2018-03-01

    BEANS software is a web-based, easy-to-install-and-maintain tool to store and analyse massive amounts of data in a distributed way. It provides a clear interface for querying, filtering, aggregating, and plotting data from an arbitrary number of datasets. Its main purpose is to simplify the process of storing, examining and finding new relations in huge datasets. The software is an answer to the growing need of the astronomical community for a versatile tool to store, analyse and compare complex astrophysical numerical simulations with observations (e.g. simulations of the Galaxy or star clusters with the Gaia archive). However, the software was built in a general form and is ready to use in any other research field. It can also be used as a building block for other open source software.

  16. A cloud based model to facilitate software development outsourcing to globally distributed locations

    OpenAIRE

    Hashmi, Sajid Ibrahim; Richardson, Ita

    2013-01-01

    Outsourcing is an essential part of global software development and entails software development distributed across geographical borders. More specifically, it deals with software development teams dispersed across multiple geographical locations to carry out software development activities. By means of this business model, organizations expect to benefit from enhanced corporate value through advantages such as round-the-clock software development, availability of skills and ...

  17. Modular Algorithm Testbed Suite (MATS): A Software Framework for Automatic Target Recognition

    Science.gov (United States)

    2017-01-01

    This technical report (NSWC PCD TR-2017-004, 31-01-2017) describes the Modular Algorithm Testbed Suite (MATS), a flexible platform created by the Naval Surface Warfare Center Panama City Division (NSWC PCD) to facilitate the development and testing of automatic target recognition (ATR) algorithms.

  18. PScan 1.0: flexible software framework for polygon based multiphoton microscopy

    Science.gov (United States)

    Li, Yongxiao; Lee, Woei Ming

    2016-12-01

    Multiphoton laser scanning microscopes exhibit highly localized nonlinear optical excitation and are powerful instruments for in-vivo deep tissue imaging. Customized multiphoton microscopy has significantly superior performance for in-vivo imaging because of precise control over the scanning and detection system. To date, there have been several flexible software platforms catered to custom-built microscopy systems, i.e. ScanImage, HelioScan and MicroManager, that perform at imaging speeds of 30-100 fps. In this paper, we describe a flexible software framework for high speed imaging systems capable of operating from 5 fps to 1600 fps. The software is based on the MATLAB image processing toolbox. It has the capability to communicate directly with a high-performance imaging card (Matrox Solios eA/XA), thus retaining high speed acquisition. The program is also designed to communicate with LabVIEW and Fiji for instrument control and image processing. PScan 1.0 can handle high imaging rates and contains sufficient flexibility for users to adapt it to their high speed imaging systems.

  19. Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework.

    Science.gov (United States)

    Lewis, Steven; Csordas, Attila; Killcoyne, Sarah; Hermjakob, Henning; Hoopmann, Michael R; Moritz, Robert L; Deutsch, Eric W; Boyle, John

    2012-12-05

    For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources.
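
    The shape of such a search can be sketched with plain Python map and reduce steps (a toy scoring function stands in for the K-score algorithm; this is not Hydra's code):

        from collections import defaultdict

        def score(spectrum, peptide):
            """Toy stand-in for K-score: count spectrum peaks matching the peptide."""
            return sum(1 for peak in spectrum if peak in peptide)

        def map_step(spectrum_id, spectrum, peptides):
            """Emit (spectrum, scored candidate) pairs, as a Hadoop mapper would."""
            for pep in peptides:
                yield spectrum_id, (score(spectrum, pep), pep)

        def reduce_step(pairs):
            """Keep the best-scoring peptide per spectrum."""
            best = defaultdict(lambda: (-1, None))
            for sid, scored in pairs:
                best[sid] = max(best[sid], scored)
            return dict(best)

        peptides = ["PEPTIDE", "PROTEIN"]
        spectra = {"s1": "PID", "s2": "RON"}
        pairs = [p for sid, sp in spectra.items() for p in map_step(sid, sp, peptides)]
        print(reduce_step(pairs))   # {'s1': (3, 'PEPTIDE'), 's2': (3, 'PROTEIN')}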

  20. The use of software agents and distributed objects to integrate enterprises: Compatible or competing technologies?

    Energy Technology Data Exchange (ETDEWEB)

    Pancerella, C.M.

    1998-04-01

    Distributed object and software agent technologies are two integration methods for connecting enterprises. The two technologies have overlapping goals--interoperability and architectural support for integrating software components--though to date little or no integration of the two technologies has been made at the enterprise level. The primary difference between these two technologies is that distributed object technologies focus on the problems inherent in connecting distributed heterogeneous systems whereas software agent technologies focus on the problems involved with coordination and knowledge exchange across domain boundaries. This paper addresses the integration of these technologies in support of enterprise integration across organizational and geographic boundaries. The authors discuss enterprise integration issues, review their experiences with both technologies, and make recommendations for future work. Neither technology is a panacea. Good software engineering techniques must be applied to integrate an enterprise because scalability and a distributed software development team are realities.

  1. A combined Component-Based Approach for the Design of Distributed Software Systems

    NARCIS (Netherlands)

    Guareis de farias, Cléver; Ferreira Pires, Luis; van Sinderen, Marten J.; Quartel, Dick; Yang, H.; Gupta, S.

    2001-01-01

    Component-based software development enables the construction of software artefacts by assembling binary units of production, distribution and deployment, the so-called components. Several approaches to component-based development have been proposed recently. Most of these approaches are based on

  2. Documentation and Analysis of the Main Software Architecture Frameworks in Enterprise Applications

    OpenAIRE

    Sarasty España, Hugo Fernando

    2016-01-01

    This document focuses on a topic that is common today in the technological and business environment: software architecture and its applicability, through frameworks, to enterprise projects. This research document will serve as a basis for acquiring knowledge and understanding of the software architecture frameworks most used in the development of enterprise applications, determining their applicability according to the project being addressed.

  3. HealthNode: Software Framework for Efficiently Designing and Developing Cloud-Based Healthcare Applications

    Directory of Open Access Journals (Sweden)

    Ho-Kyeong Ra

    2018-01-01

    With the exponential improvement of software technology during the past decade, many efforts have been made to design remote and personalized healthcare applications. Many of these applications are built on mobile devices connected to the cloud. Although appealing, prototyping and validating the feasibility of an application-level idea is challenging without a solid understanding of the cloud, mobile, and interconnectivity infrastructure. In this paper, we provide a solution to this by proposing a framework called HealthNode, a general-purpose framework for developing healthcare applications on cloud platforms using Node.js. To fully exploit the potential of Node.js when developing cloud applications, we focus on easing the implementation process. HealthNode presents an explicit guideline while supporting the features necessary to achieve quick and expandable cloud-based healthcare applications. A case study applying HealthNode to various real-world health applications suggests that HealthNode can express architectural structure effectively within an implementation and that the proposed platform can support system understanding and software evolution.

  4. DiSC: A Simulation Framework for Distribution System Voltage Control

    DEFF Research Database (Denmark)

    Pedersen, Rasmus; Sloth, Christoffer Eg; Andresen, Gorm

    2015-01-01

    This paper presents the MATLAB simulation framework, DiSC, for verifying voltage control approaches in power distribution systems. It consists of real consumption data, stochastic models of renewable resources, flexible assets, an electrical grid, and models of the underlying communication channels. The simulation framework makes it possible to validate control approaches, and thus advance realistic and robust control algorithms for distribution system voltage control. Two examples demonstrate the potential voltage issues from penetration of renewables in the distribution grid, along with simple control...

  5. Building a world-wide open source community around a software framework: progress, dos, and don'ts

    Science.gov (United States)

    Ibsen, Jorge; Antognini, Jonathan; Avarias, Jorge; Caproni, Alessandro; Fuessling, Matthias; Gimenez, Guillermo; Verma, Khushbu; Mora, Matias; Schwarz, Joseph; Staig, Tomás.

    2016-08-01

    As we all know too well, building up a collaborative community around a software infrastructure is not easy. Besides recruiting enthusiasts to work as part of it, mostly for free, to succeed you also need to overcome a number of technical, sociological, and, to our surprise, some political hurdles. The ALMA Common Software (ACS) was developed at ESO and partner institutions over the course of more than 10 years. While it was mainly intended for the ALMA Observatory, it was early on thought as a generic distributed control framework. ACS has been periodically released to the public through an LGPL license, which encouraged around a dozen non-ALMA institutions to make use of ACS for both industrial and educational applications. In recent years, the Cherenkov Telescope Array and the LLAMA Observatory have also decided to adopt the framework for their own control systems. The aim of the "ACS Community" is to support independent initiatives in making use of the ACS framework and to further contribute to its development. The Community provides access to a growing network of volunteers eager to develop ACS in areas that are not necessarily in ALMA's interests, and/or were not within the original system scope. Current examples are: support for additional OS platforms, extension of supported hardware interfaces, a public code repository and a build farm. The ACS Community makes use of existing collaborations with Chilean and Brazilian universities, reaching out to promising engineers in the making. At the same time, projects actively using ACS have committed valuable resources to assist the Community's work. Well established training programs like the ACS Workshops are also being continued through the Community's work. This paper aims to give a detailed account of the ongoing (second) journey towards establishing a world-wide open source collaboration around ACS. The ACS Community is growing into a horizontal partnership across a decentralized and diversified group of

  6. Concurrent and Distributed Applications with ActoDeS

    Directory of Open Access Journals (Sweden)

    Bergenti Federico

    2016-01-01

    ActoDeS is a software framework for the development of large concurrent and distributed systems. This software framework takes advantage of the actor model and of an implementation that eases the development of actor code by delegating the management of events (i.e., the reception of messages) to the execution environment. Moreover, it allows the development of scalable and efficient applications through the possibility of using different implementations of the components that drive the execution of actors. In particular, the paper introduces the software framework and presents the results of its experimentation.
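
    ActoDeS itself targets Java, but the delegation it exploits can be shown with a generic actor sketch in Python: user code defines only the reaction to a message, while the execution environment owns the mailbox and the event loop:

        import queue
        import threading
        import time

        class Actor:
            """Minimal actor: the runtime delivers messages; the actor only reacts."""
            def __init__(self):
                self.mailbox = queue.Queue()
                threading.Thread(target=self._loop, daemon=True).start()
            def send(self, message):
                self.mailbox.put(message)
            def _loop(self):
                while True:
                    message = self.mailbox.get()   # event management by the runtime
                    self.receive(message)          # user code defines only this

        class Greeter(Actor):
            def receive(self, message):
                print(f"hello, {message}!")

        greeter = Greeter()
        greeter.send("world")
        time.sleep(0.1)     # give the daemon thread time to process the message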

  7. Software/hardware distributed processing network supporting the Ada environment

    Science.gov (United States)

    Wood, Richard J.; Pryk, Zen

    1993-09-01

    A high-performance, fault-tolerant, distributed network has been developed, tested, and demonstrated. The network is based on the MIPS Computer Systems, Inc. R3000 RISC processor for processing, VHSIC ASICs for high speed, reliable, inter-node communications, and compatible commercial memory and I/O boards. The network is an evolution of the Advanced Onboard Signal Processor (AOSP) architecture. It supports Ada application software with an Ada-implemented operating system. A six-node implementation (capable of expansion up to 256 nodes) of the RISC multiprocessor architecture provides 120 MIPS of scalar throughput, 96 Mbytes of RAM and 24 Mbytes of non-volatile memory. The network provides for all ground processing applications, has merit for a space-qualified RISC-based network, and interfaces to advanced Computer Aided Software Engineering (CASE) tools for application software development.

  8. Open software architecture for east articulated maintenance arm

    International Nuclear Information System (INIS)

    Wu, Jing; Wu, Huapeng; Song, Yuntao; Li, Ming; Yang, Yang; Alcina, Daniel A.M.

    2016-01-01

    Highlights: • A software requirement for a serial-articulated robot for EAST assembly and maintenance is presented. • An open software architecture for the robot is developed. • A component-based model distribution system with real-time communication for the robot is constructed. - Abstract: For the inside inspection and maintenance of the vacuum vessel in the EAST, an articulated maintenance arm is developed. In this article, an open software architecture developed for the EAST articulated maintenance arm (EAMA) is described, which offers robust and proper performance and an easy-going experience based on the standard open robotic platform OROCOS. The paper presents a component-based software architecture using a multi-layer structure: end layer, up layer, middle layer, and down layer. In the end layer the components are defined off-line in the task-planner manner. The components in the up layer complete the function of trajectory planning. CORBA, as a communication framework, is adopted to exchange the data between the distributed components. The contributors use Real-Time Workshop from MATLAB/Simulink to generate the components in the middle layer. The Real-Time Toolkit guarantees that control applications run in hard real-time mode. Ethernet and the CAN bus are used for data transfer in the down layer, where the components implement the hardware functions. The distributed architecture of the control system associates each processing node with each joint, which is mapped to a component with all functioning features of the framework.

  9. Open software architecture for east articulated maintenance arm

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Jing, E-mail: wujing@ipp.ac.cn [Institute of Plasma Physics Chinese Academy of Sciences, 350 Shushanhu Rd Hefei Anhui (China); Lappeenranta University of Technology, Skinnarilankatu 34 Lappeenranta (Finland); Wu, Huapeng [Lappeenranta University of Technology, Skinnarilankatu 34 Lappeenranta (Finland); Song, Yuntao [Institute of Plasma Physics Chinese Academy of Sciences, 350 Shushanhu Rd Hefei Anhui (China); Li, Ming [Lappeenranta University of Technology, Skinnarilankatu 34 Lappeenranta (Finland); Yang, Yang [Institute of Plasma Physics Chinese Academy of Sciences, 350 Shushanhu Rd Hefei Anhui (China); Alcina, Daniel A.M. [Lappeenranta University of Technology, Skinnarilankatu 34 Lappeenranta (Finland)

    2016-11-01

    Highlights: • A software requirement for a serial-articulated robot for EAST assembly and maintenance is presented. • An open software architecture for the robot is developed. • A component-based model distribution system with real-time communication for the robot is constructed. - Abstract: For the inside inspection and maintenance of the vacuum vessel in the EAST, an articulated maintenance arm is developed. In this article, an open software architecture developed for the EAST articulated maintenance arm (EAMA) is described, which offers robust and proper performance and an easy-going experience based on the standard open robotic platform OROCOS. The paper presents a component-based software architecture using a multi-layer structure: end layer, up layer, middle layer, and down layer. In the end layer the components are defined off-line in the task-planner manner. The components in the up layer complete the function of trajectory planning. CORBA, as a communication framework, is adopted to exchange the data between the distributed components. The contributors use Real-Time Workshop from MATLAB/Simulink to generate the components in the middle layer. The Real-Time Toolkit guarantees that control applications run in hard real-time mode. Ethernet and the CAN bus are used for data transfer in the down layer, where the components implement the hardware functions. The distributed architecture of the control system associates each processing node with each joint, which is mapped to a component with all functioning features of the framework.

  10. Software for Distributed Computation on Medical Databases: A Demonstration Project

    Directory of Open Access Journals (Sweden)

    Balasubramanian Narasimhan

    2017-05-01

    Bringing together the information latent in distributed medical databases promises to personalize medical care by enabling reliable, stable modeling of outcomes with rich feature sets (including patient characteristics and treatments received). However, there are barriers to aggregation of medical data, due to lack of standardization of ontologies, privacy concerns, proprietary attitudes toward data, and a reluctance to give up control over end use. Aggregation of data is not always necessary for model fitting. In models based on maximizing a likelihood, the computations can be distributed, with aggregation limited to the intermediate results of calculations on local data, rather than raw data. Distributed fitting is also possible for singular value decomposition. There has been work on the technical aspects of shared computation for particular applications, but little has been published on the software needed to support the "social networking" aspect of shared computing, to reduce the barriers to collaboration. We describe a set of software tools that allow the rapid assembly of a collaborative computational project, based on the flexible and extensible R statistical software and other open source packages, that can work across a heterogeneous collection of database environments, with full transparency to allow local officials concerned with privacy protections to validate the safety of the method. We describe the principles, architecture, and successful test results for the site-stratified Cox model and rank-k singular value decomposition.
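
    The key observation, that likelihood-based fitting needs only aggregated intermediate results, can be illustrated with a toy Gaussian mean estimate in which raw data never leave their site (synthetic data; not the authors' R tooling):

        import numpy as np

        # Three sites keep their raw data; only summed gradients cross site borders.
        rng = np.random.default_rng(0)
        sites = [rng.normal(loc=3.0, scale=1.0, size=n) for n in (50, 80, 30)]

        def local_gradient(data, mu):
            """Each site's gradient of the Gaussian log-likelihood w.r.t. mu."""
            return np.sum(data - mu)

        mu = 0.0
        for _ in range(40):                                  # gradient ascent
            grads = [local_gradient(d, mu) for d in sites]   # computed locally
            mu += 0.001 * sum(grads)                         # only aggregates travel
        print(f"distributed estimate of mu: {mu:.3f}")       # close to the pooled MLE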

  11. Project Management Software for Distributed Industrial Companies

    Science.gov (United States)

    Dobrojević, M.; Medjo, B.; Rakin, M.; Sedmak, A.

    This paper gives an overview of the development of a new software solution for project management, intended mainly to use in industrial environment. The main concern of the proposed solution is application in everyday engineering practice in various, mainly distributed industrial companies. Having this in mind, special care has been devoted to development of appropriate tools for tracking, storing and analysis of the information about the project, and in-time delivering to the right team members or other responsible persons. The proposed solution is Internet-based and uses LAMP/WAMP (Linux or Windows - Apache - MySQL - PHP) platform, because of its stability, versatility, open source technology and simple maintenance. Modular structure of the software makes it easy for customization according to client specific needs, with a very short implementation period. Its main advantages are simple usage, quick implementation, easy system maintenance, short training and only basic computer skills needed for operators.

  12. Humanitarian Aid Distribution Framework for Natural Disaster Management

    OpenAIRE

    Mohd, S.; Fathi, M. S.; Harun, A. N.

    2018-01-01

    Humanitarian aid distribution is associated with many activities, numerous disaster management stakeholders, enormous effort and different processes. For effective communication, humanitarian aid distribution activities require appropriate and up-to-date information to enhance collaboration, and improve integration. The purpose of this paper is to develop a humanitarian aid distribution framework for disaster management in Malaysia. The findings of this paper are based on a review of the huma...

  13. A general framework for updating belief distributions.

    Science.gov (United States)

    Bissiri, P G; Holmes, C C; Walker, S G

    2016-11-01

    We propose a framework for general Bayesian inference. We argue that a valid update of a prior belief distribution to a posterior can be made for parameters which are connected to observations through a loss function rather than the traditional likelihood function, which is recovered as a special case. Modern application areas make it increasingly challenging for Bayesians to attempt to model the true data-generating mechanism. For instance, when the object of interest is low dimensional, such as a mean or median, it is cumbersome to have to achieve this via a complete model for the whole data distribution. More importantly, there are settings where the parameter of interest does not directly index a family of density functions and thus the Bayesian approach to learning about such parameters is currently regarded as problematic. Our framework uses loss functions to connect information in the data to functionals of interest. The updating of beliefs then follows from a decision theoretic approach involving cumulative loss functions. Importantly, the procedure coincides with Bayesian updating when a true likelihood is known yet provides coherent subjective inference in much more general settings. Connections to other inference frameworks are highlighted.
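
    The loss-based update described in the abstract can be written, for a parameter θ connected to observations x₁,…,xₙ through a loss function ℓ, as

        \pi(\theta \mid x_{1:n}) \;\propto\; \exp\Big(-\sum_{i=1}^{n} \ell(\theta, x_i)\Big)\, \pi(\theta)

    Traditional Bayesian updating is recovered as the special case ℓ(θ, x) = −log f(x | θ), i.e. when the loss is the negative log-likelihood.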

  14. Minimizing communication cost among distributed controllers in software defined networks

    Science.gov (United States)

    Arlimatti, Shivaleela; Elbreiki, Walid; Hassan, Suhaidi; Habbal, Adib; Elshaikh, Mohamed

    2016-08-01

    Software Defined Networking (SDN) is a new paradigm that increases the flexibility of today's networks by promising a programmable network. The fundamental idea behind this new architecture is to simplify network complexity by decoupling the control plane and the data plane of the network devices, and by making the control plane centralized. Recently, controllers have been distributed to solve the problem of a single point of failure, and to increase scalability and flexibility during workload distribution. Even though controllers are flexible and scalable enough to accommodate more network switches, the intercommunication cost between distributed controllers is still a challenging issue in the Software Defined Network environment. This paper aims to fill the gap by proposing a new mechanism that minimizes intercommunication cost with a graph partitioning algorithm, an NP-hard problem. The methodology proposed in this paper is the swapping of network elements between controller domains to minimize communication cost by calculating the communication gain. The swapping of elements minimizes inter- and intra-domain communication cost among network domains. We validate our work with the OMNeT++ simulation environment. Simulation results show that the proposed mechanism minimizes the inter-domain communication cost among controllers compared to traditional distributed controllers.
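
    The communication-gain calculation behind the proposed swapping can be illustrated with a toy three-switch network (invented traffic weights; the paper evaluates the mechanism in OMNeT++):

        # Inter-switch traffic and initial controller domains.
        traffic = {("s1", "s2"): 10, ("s2", "s3"): 1, ("s1", "s3"): 2}
        domain = {"s1": "c1", "s2": "c2", "s3": "c1"}

        def inter_domain_cost():
            return sum(w for (a, b), w in traffic.items() if domain[a] != domain[b])

        def try_swap(switch):
            """Re-home one switch; keep the move only if the gain is positive."""
            other = "c1" if domain[switch] == "c2" else "c2"
            before = inter_domain_cost()
            domain[switch] = other
            gain = before - inter_domain_cost()
            if gain <= 0:               # revert moves that do not pay off
                domain[switch] = "c2" if other == "c1" else "c1"
            return gain

        print(try_swap("s2"), domain)   # gain 11: all traffic becomes intra-domain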

  15. Software-Enabled Distributed Network Governance: The PopMedNet Experience.

    Science.gov (United States)

    Davies, Melanie; Erickson, Kyle; Wyner, Zachary; Malenfant, Jessica; Rosen, Rob; Brown, Jeffrey

    2016-01-01

    The expanded availability of electronic health information has led to increased interest in distributed health data research networks. The distributed research network model leaves data with and under the control of the data holder. Data holders, network coordinating centers, and researchers have distinct needs and challenges within this model. The concerns of network stakeholders are addressed in the design and governance models of the PopMedNet software platform. PopMedNet features include distributed querying, customizable workflows, and auditing and search capabilities. Its flexible role-based access control system enables the enforcement of varying governance policies. Four case studies describe how PopMedNet is used to enforce network governance models. Trust is an essential component of a distributed research network and must be built before data partners may be willing to participate further. The complexity of the PopMedNet system must be managed as networks grow and new data, analytic methods, and querying approaches are developed. The PopMedNet software platform supports a variety of network structures, governance models, and research activities through customizable features designed to meet the needs of network stakeholders.

  16. Data acquisition software for the CMS strip tracker

    International Nuclear Information System (INIS)

    Bainbridge, R; Cripps, N; Fulcher, J; Radicci, V; Wingham, M; Baulieu, G; Bel, S; Delaere, C; Drouhin, F; Gill, K; Mirabito, L; Cole, J; Jesus, A C A; Giassi, A; Giordano, D; Gross, L; Hahn, K; Mersi, S; Nikolic, M; Tkaczyk, S

    2008-01-01

    The CMS silicon strip tracker, providing a sensitive area of approximately 200 m² and comprising 10 million readout channels, has recently been completed at the tracker integration facility at CERN. The strip tracker community is currently working to develop and integrate the online and offline software frameworks, known as XDAQ and CMSSW respectively, for the purposes of data acquisition and detector commissioning and monitoring. Recent developments have seen the integration of many new services and tools within the online data acquisition system, such as event building, online distributed analysis, an online monitoring framework, and data storage management. We review the various software components that comprise the strip tracker data acquisition system and the software architectures used for stand-alone and global data-taking modes. Our experiences in commissioning and operating one of the largest ever silicon micro-strip tracking systems are also reviewed.

  17. Optimizing Distribution Problems using WinQSB Software

    Directory of Open Access Journals (Sweden)

    Daniel Mihai Amariei

    2015-07-01

    In the present paper we describe a distribution problem using the Network Modeling Module of the WinQSB software, in which 5 athletes must each be assigned the optimal event, as a function of the times obtained, so as to obtain the maximum output from the athletes. We also analyze the case of an injury to 2 athletes, where the coupling of the remaining 3 athletes with the 5 athletic events to produce the maximum matching is done using the Hungarian algorithm.
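
    As a hedged illustration of this kind of assignment problem (not the WinQSB workflow itself), the Hungarian method is available in SciPy as linear_sum_assignment; the scores below are invented stand-ins for athlete performance:

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Hypothetical performance scores for 5 athletes (rows) across 5 events
    # (columns); higher is better. In practice these would be derived from
    # the measured times.
    score = np.array([
        [9.1, 7.4, 6.8, 8.0, 7.7],
        [8.2, 8.9, 7.1, 6.5, 7.0],
        [7.5, 6.9, 9.3, 7.8, 8.1],
        [6.8, 7.7, 8.0, 9.0, 6.9],
        [8.8, 7.2, 7.6, 7.1, 9.2],
    ])

    # Hungarian-method solution maximizing total output.
    rows, cols = linear_sum_assignment(score, maximize=True)
    for athlete, event in zip(rows, cols):
        print(f"athlete {athlete} -> event {event} (score {score[athlete, event]})")
    print("total output:", score[rows, cols].sum())
    ```

    The same call handles the rectangular 3-athletes-by-5-events case that arises after injuries, since linear_sum_assignment accepts non-square matrices.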

  18. Software reliability assessment

    International Nuclear Information System (INIS)

    Barnes, M.; Bradley, P.A.; Brewer, M.A.

    1994-01-01

    The increased usage and sophistication of computers applied to real time safety-related systems in the United Kingdom has spurred on the desire to provide a standard framework within which to assess dependable computing systems. Recent accidents and ensuing legislation have acted as a catalyst in this area. One particular aspect of dependable computing systems is that of software, which is usually designed to reduce risk at the system level, but which can increase risk if it is unreliable. Various organizations have recognized the problem of assessing the risk imposed to the system by unreliable software, and have taken initial steps to develop and use such assessment frameworks. This paper relates the approach of Consultancy Services of AEA Technology in developing a framework to assess the risk imposed by unreliable software. In addition, the paper discusses the experiences gained by Consultancy Services in applying the assessment framework to commercial and research projects. The framework is applicable to software used in safety applications, including proprietary software. Although the paper is written with Nuclear Reactor Safety applications in mind, the principles discussed can be applied to safety applications in all industries

  19. Dreams: a framework for distributed synchronous coordination

    NARCIS (Netherlands)

    Proença, J.; Clarke, D.; Vink, de E.P.; Arbab, F.

    2012-01-01

    Synchronous coordination systems, such as Reo, exchange data via indivisible actions, while distributed systems are typically asynchronous and assume that messages can be delayed or get lost. To combine these seemingly contradictory notions, we introduce the Dreams framework. Coordination patterns

  20. Large distributed control system using Ada in fusion research

    International Nuclear Information System (INIS)

    Van Arsdall, P J; Woodruff, J P.

    1998-01-01

    Construction of the National Ignition Facility laser at Lawrence Livermore National Laboratory features a distributed control system that uses object-oriented software engineering techniques. Control of 60,000 devices is effected using a network of some 500 computers. The software is being written in Ada and communicates through CORBA. Software controls are implemented in two layers: individual device controllers and a supervisory layer. The software architecture provides services in the form of frameworks that address issues common to event-driven control systems. Those services are allocated to levels that strictly prescribe their interdependency so the levels are separately reusable. The project has completed its final design review. The delivery of the first increment takes place in October 1998. Keywords: Distributed control system, object-oriented development, CORBA, application frameworks, levels of abstraction.

  1. An IMRT dose distribution study using commercial verification software

    International Nuclear Information System (INIS)

    Grace, M.; Liu, G.; Fernando, W.; Rykers, K.

    2004-01-01

    Full text: The introduction of IMRT requires users to confirm that the isodose distributions and relative doses calculated by their planning system match the doses delivered by their linear accelerators. To this end the commercially available software VeriSoft™ (PTW-Freiburg, Germany) was trialled to determine if the tools and functions it offered would be of benefit to this process. The CMS Xio (Computer Medical System) treatment planning system was used to generate IMRT plans that were delivered with an upgraded Elekta SL15 linac. Kodak EDR2 film sandwiched in RW3 solid water (PTW-Freiburg, Germany) was used to measure the IMRT fields delivered with 6 MV photons. The isodoses and profiles measured with the film generally agreed with the planned doses to within ±3% or ±3 mm; in some regions (outside the IMRT field) the match fell to within ±5%. The isodose distributions of the planning system and the film can be compared on screen, allowing electronic records of the comparison to be kept if so desired. The features and versatility of this software have been of benefit to our IMRT QA program. Furthermore, the VeriSoft™ software allows for quick and accurate, automated planar film analysis. Copyright (2004) Australasian College of Physical Scientists and Engineers in Medicine

  2. A Development Framework for Software Security in Nuclear Safety Systems: Integrating Secure Development and System Security Activities

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jaekwan; Suh, Yongsuk [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-02-15

    The protection of nuclear safety software is essential in that a failure can result in significant economic loss and physical damage to the public. However, software security has often been ignored in nuclear safety software development. To enforce security considerations, nuclear regulatory commissions have recently issued and revised security regulations for nuclear computer-based systems. It is a great challenge for nuclear developers to comply with the security requirements. However, there is still no clear software development process regarding security activities. This paper proposes an integrated development process suitable for the secure development requirements and system security requirements described by various regulatory bodies. It provides a three-stage framework with eight security activities as the software development process. The detailed descriptions are useful for software developers and licensees to understand the regulatory requirements and to establish a detailed activity plan for software design and engineering.

  3. Software life cycle methodologies and environments

    Science.gov (United States)

    Fridge, Ernest

    1991-01-01

    Products of this project will significantly improve the quality and productivity of Space Station Freedom Program software processes by improving software reliability and safety and by broadening the range of problems that can be solved with computational solutions. The project brings in Computer Aided Software Engineering (CASE) technology for: environments, such as the Engineering Script Language/Parts Composition System (ESL/PCS) application generator, an intelligent user interface for cost avoidance in setting up operational computer runs, a framework programmable platform for defining process and software development work flow control, a process for bringing CASE technology into an organization's culture, and the CLIPS/CLIPS Ada language for developing expert systems; and methodologies, such as a method for developing fault tolerant, distributed systems and a method for developing systems for common sense reasoning and for solving expert systems problems when only approximate truths are known.

  4. Distributed team innovation - a framework for distributed product development

    OpenAIRE

    Larsson, Andreas; Törlind, Peter; Karlsson, Lennart; Mabogunje, Ade; Leifer, Larry; Larsson, Tobias; Elfström, Bengt-Olof

    2003-01-01

    In response to the need for increased effectiveness in global product development, the Polhem Laboratory at Luleå University of Technology, Sweden, and the Center for Design Research at Stanford University, USA, have created the concept of Distributed Team Innovation (DTI). The overall aim of the DTI framework is to decrease the negative impact of geographic distance on product development efforts and to further enhance current advantages of worldwide, multidisciplinary collaboration. The DTI ...

  5. Automated tools and techniques for distributed Grid Software Development of the testbed infrastructure

    CERN Document Server

    Aguado Sanchez, C

    2007-01-01

    Grid technology is becoming more and more important as the new paradigm for sharing computational resources across different organizations in a secure way. The great power of this solution requires the definition of a generic stack of services and protocols, and this is the scope of the different Grid initiatives. As a result of international collaborations for its development, the Open Grid Forum created the Open Grid Services Architecture (OGSA), which aims to define the common set of services that will enable interoperability across the different implementations. This master's thesis has been developed in this framework, as part of the two European-funded projects ETICS and OMII-Europe. The main objective is to contribute to the design and maintenance of large distributed development projects with automated tools that enable the implementation of software engineering techniques oriented toward achieving an acceptable level of quality at the release process. Specifically, this thesis develops the testbed concept a...

  6. DISCRN: A Distributed Storytelling Framework for Intelligence Analysis.

    Science.gov (United States)

    Shukla, Manu; Dos Santos, Raimundo; Chen, Feng; Lu, Chang-Tien

    2017-09-01

    Storytelling connects entities (people, organizations) using their observed relationships to establish meaningful storylines. This can be extended to spatiotemporal storytelling that incorporates locations, time, and graph computations to enhance coherence and meaning. But when performed sequentially these computations become a bottleneck because the massive number of entities make space and time complexity untenable. This article presents DISCRN, or distributed spatiotemporal ConceptSearch-based storytelling, a distributed framework for performing spatiotemporal storytelling. The framework extracts entities from microblogs and event data, and links these entities using a novel ConceptSearch to derive storylines in a distributed fashion utilizing key-value pair paradigm. Performing these operations at scale allows deeper and broader analysis of storylines. The novel parallelization techniques speed up the generation and filtering of storylines on massive datasets. Experiments with microblog posts such as Twitter data and Global Database of Events, Language, and Tone events show the efficiency of the techniques in DISCRN.
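
    A hedged sketch of the key-value pair paradigm the framework leans on (generic map/reduce-style entity linking, not the actual ConceptSearch): entities extracted from posts become keys, and co-occurrence builds the links that storylines chain together:

    ```python
    from collections import defaultdict

    def map_phase(posts, extract_entities):
        """Emit (entity, post_id) key-value pairs, one per entity mention."""
        for post_id, text in posts.items():
            for entity in extract_entities(text):
                yield entity, post_id

    def reduce_phase(pairs):
        """Group post ids by entity: the shuffle/reduce step."""
        index = defaultdict(set)
        for entity, post_id in pairs:
            index[entity].add(post_id)
        return index

    def link_entities(index, min_shared=1):
        """Link entities that co-occur in enough posts; chains of such
        links are candidate storylines."""
        names = sorted(index)
        return [(a, b, len(index[a] & index[b]))
                for i, a in enumerate(names) for b in names[i + 1:]
                if len(index[a] & index[b]) >= min_shared]

    # Toy posts; the naive extractor treats capitalized words as entities.
    posts = {1: "ACME hires Alice", 2: "Alice meets Bob", 3: "Bob sues ACME"}
    extract = lambda text: [w for w in text.split() if w[0].isupper()]
    print(link_entities(reduce_phase(map_phase(posts, extract))))
    ```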

  7. A planning and analysis framework for evaluating distributed generation and utility strategies

    International Nuclear Information System (INIS)

    Ault, Graham W.

    2000-01-01

    The numbers of smaller scale distributed power generation units connected to the distribution networks of electricity utilities in the UK and elsewhere have grown significantly in recent years. Numerous economic and political drivers have stimulated this growth and continue to provide the environment for future growth in distributed generation. The simple fact that distributed generation is independent from the distribution utility complicates planning and operational tasks for the distribution network. The uncertainty relating to the number, location and type of distributed generating units to connect to the distribution network in the future makes distribution planning a particularly difficult activity. This thesis concerns the problem of distribution network and business planning in the era of distributed generation. A distributed generation strategic analysis framework is proposed to provide the required analytical capability and planning and decision making framework to enable distribution utilities to deal effectively with the challenges and opportunities presented to them by distributed generation. The distributed generation strategic analysis framework is based on the best features of modern planning and decision making methodologies and facilitates scenario based analysis across many utility strategic options and uncertainties. Case studies are presented and assessed to clearly illustrate the potential benefits of such an approach to distributed generation planning in the UK electricity supply industry. (author)

  8. Distributed Sensor Network Software Development Testing through Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Brennan, Sean M. [Univ. of New Mexico, Albuquerque, NM (United States)

    2003-12-01

    The distributed sensor network (DSN) presents a novel and highly complex computing platform with difficulties and opportunities that are just beginning to be explored. The potential of sensor networks extends from monitoring for threat reduction, to conducting instant and remote inventories, to ecological surveys. Developing and testing for robust and scalable applications is currently practiced almost exclusively in hardware. The Distributed Sensors Simulator (DSS) is an infrastructure that allows the user to debug and test software for DSNs independent of hardware constraints. The flexibility of DSS allows developers and researchers to investigate topological, phenomenological, networking, robustness and scaling issues, to explore arbitrary algorithms for distributed sensors, and to defeat those algorithms through simulated failure. The user specifies the topology, the environment, the application, and any number of arbitrary failures; DSS provides the virtual environmental embedding.
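
    A minimal sketch of the simulated-failure idea (hypothetical classes, not the DSS API): the user supplies a topology, an environment model, and a failure rate, and the harness kills nodes at random while the application runs:

    ```python
    import random

    class Environment:
        """Hypothetical phenomenology model returning a reading per node."""
        def reading_at(self, node_id):
            return (node_id * 7) % 100 / 10.0   # stand-in sensor value

    class SensorNode:
        def __init__(self, node_id, neighbors):
            self.node_id = node_id
            self.neighbors = neighbors   # ids reachable by radio (topology)
            self.alive = True

        def sense_and_report(self, environment):
            """Stand-in for the user's application code on one node."""
            return environment.reading_at(self.node_id) if self.alive else None

    def run_simulation(nodes, environment, steps, failure_rate=0.01, seed=42):
        """Drive all nodes for `steps` ticks, injecting random permanent
        failures to see how the distributed algorithm degrades."""
        rng = random.Random(seed)
        reports = []
        for step in range(steps):
            for node in nodes:
                if node.alive and rng.random() < failure_rate:
                    node.alive = False          # injected failure
                value = node.sense_and_report(environment)
                if value is not None:
                    reports.append((step, node.node_id, value))
        return reports

    nodes = [SensorNode(i, neighbors=[]) for i in range(20)]
    print(len(run_simulation(nodes, Environment(), steps=50)))
    ```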

  9. Software cost/resource modeling: Software quality tradeoff measurement

    Science.gov (United States)

    Lawler, R. W.

    1980-01-01

    A conceptual framework for treating software quality from a total system perspective is developed. Examples are given to show how system quality objectives may be allocated to hardware and software; to illustrate trades among quality factors, both hardware and software, to achieve system performance objectives; and to illustrate the impact of certain design choices on software functionality.

  10. Experimental research control software system

    International Nuclear Information System (INIS)

    Cohn, I A; Kovalenko, A G; Vystavkin, A N

    2014-01-01

    A software system, intended for the automation of small-scale research, has been developed. The software allows one to control equipment, acquire and process data by means of simple scripts. The main purpose of that development is to increase the ease of experiment automation, thus significantly reducing experimental setup automation efforts. In particular, minimal programming skills are required and supervisors have no reviewing troubles. Interactions between scripts and equipment are managed automatically, thus allowing multiple scripts to run simultaneously. Unlike well-known commercial data acquisition software systems, the control is performed by an imperative scripting language. This approach eases the implementation of complex control and data acquisition algorithms. A modular interface library performs interaction with external interfaces. While most widely used interfaces are already implemented, a simple framework is developed for fast implementation of new software and hardware interfaces. While the software is in continuous development with new features being implemented, it is already used in our laboratory for automation of a helium-3 cryostat control and data acquisition. The software is open source and distributed under the GNU Public License.

  11. Experimental research control software system

    Science.gov (United States)

    Cohn, I. A.; Kovalenko, A. G.; Vystavkin, A. N.

    2014-05-01

    A software system, intended for the automation of small-scale research, has been developed. The software allows one to control equipment, acquire and process data by means of simple scripts. The main purpose of that development is to increase the ease of experiment automation, thus significantly reducing experimental setup automation efforts. In particular, minimal programming skills are required and supervisors have no reviewing troubles. Interactions between scripts and equipment are managed automatically, thus allowing multiple scripts to run simultaneously. Unlike well-known commercial data acquisition software systems, the control is performed by an imperative scripting language. This approach eases the implementation of complex control and data acquisition algorithms. A modular interface library performs interaction with external interfaces. While most widely used interfaces are already implemented, a simple framework is developed for fast implementation of new software and hardware interfaces. While the software is in continuous development with new features being implemented, it is already used in our laboratory for automation of a helium-3 cryostat control and data acquisition. The software is open source and distributed under the GNU Public License.

  12. Development and use of mathematical models and software frameworks for integrated analysis of agricultural systems and associated water use impacts

    Science.gov (United States)

    Fowler, K. R.; Jenkins, E.W.; Parno, M.; Chrispell, J.C.; Colón, A. I.; Hanson, Randall T.

    2016-01-01

    The development of appropriate water management strategies requires, in part, a methodology for quantifying and evaluating the impact of water policy decisions on regional stakeholders. In this work, we describe the framework we are developing to enhance the body of resources available to policy makers, farmers, and other community members in their efforts to understand, quantify, and assess the often competing objectives water consumers have with respect to usage. The foundation for the framework is the construction of a simulation-based optimization software tool using two existing software packages. In particular, we couple a robust optimization software suite (DAKOTA) with the USGS MF-OWHM water management simulation tool to provide a flexible software environment that will enable the evaluation of one or multiple (possibly competing) user-defined (or stakeholder) objectives. We introduce the individual software components and outline the communication strategy we defined for the coupled development. We present numerical results for case studies related to crop portfolio management with several defined objectives. The objectives are not optimally satisfied for any single user class, demonstrating the capability of the software tool to aid in the evaluation of a variety of competing interests.
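
    A hedged sketch of the simulation-based optimization pattern (all names and numbers below are invented; the paper's actual coupling is DAKOTA driving MF-OWHM): a scalar objective wraps a simulator call and blends competing stakeholder goals, here profit versus a water-use penalty:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def run_water_model(crop_fractions):
        """Stand-in for one simulator run (an MF-OWHM call in the paper's
        setting). Returns hypothetical (profit, water_use) for a portfolio."""
        prices = np.array([420.0, 310.0, 150.0])   # assumed $/acre
        demand = np.array([3.2, 2.1, 1.4])         # assumed acre-ft/acre
        return float(prices @ crop_fractions), float(demand @ crop_fractions)

    def objective(x, water_weight=200.0):
        """Blend competing objectives: maximize profit, penalize water use
        beyond an assumed allocation of 2.5 acre-ft per acre."""
        profit, water = run_water_model(x)
        return -profit + water_weight * max(0.0, water - 2.5)

    # Fractions of land planted in each of three crops, summing to one.
    constraints = ({'type': 'eq', 'fun': lambda x: x.sum() - 1.0},)
    res = minimize(objective, x0=np.full(3, 1 / 3), bounds=[(0, 1)] * 3,
                   constraints=constraints)
    print("portfolio:", res.x, "profit/water:", run_water_model(res.x))
    ```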

  13. Distributed Software Development with One Hand Tied Behind the Back

    DEFF Research Database (Denmark)

    Kuhrmann, Marco; Münch, Jürgen

    2016-01-01

    Software development consists to a large extent of human-based processes with continuously increasing demands regarding interdisciplinary team work. Understanding the dynamics of software teams can be seen as highly important to successful project execution. Hence, for future project managers, knowledge about non-technical processes in teams is significant. In this paper, we present a course unit that provides an environment in which students can learn and experience the role of different communication patterns in distributed agile software development. In particular, students gain awareness ... in virtual teams. We provide a detailed design of the course unit to allow for implementation in further courses. Furthermore, we provide experiences obtained from implementing this course unit with 16 graduate students. We observed students struggling with technical aspects and team coordination in general...

  14. Modeling a distributed environment for a petroleum reservoir engineering application with software product line

    International Nuclear Information System (INIS)

    Scheidt, Rafael de Faria; Vilain, Patrícia; Dantas, M A R

    2014-01-01

    Petroleum reservoir engineering is a complex and interesting field that requires a large amount of computational facilities to achieve successful results. Usually, software environments for this field are developed without taking into account the possible interactions and extensibilities required by reservoir engineers. In this paper, we present a research work characterized by a design and implementation based on a software product line model for a real distributed reservoir engineering environment. Experimental results indicate the successful utilization of this approach for the design of a distributed software architecture. In addition, all components from the proposal provided greater visibility of the organization and processes for the reservoir engineers.

  15. Modeling a distributed environment for a petroleum reservoir engineering application with software product line

    Science.gov (United States)

    de Faria Scheidt, Rafael; Vilain, Patrícia; Dantas, M. A. R.

    2014-10-01

    Petroleum reservoir engineering is a complex and interesting field that requires a large amount of computational facilities to achieve successful results. Usually, software environments for this field are developed without taking into account the possible interactions and extensibilities required by reservoir engineers. In this paper, we present a research work characterized by a design and implementation based on a software product line model for a real distributed reservoir engineering environment. Experimental results indicate the successful utilization of this approach for the design of a distributed software architecture. In addition, all components from the proposal provided greater visibility of the organization and processes for the reservoir engineers.

  16. A QDWH-Based SVD Software Framework on Distributed-Memory Manycore Systems

    KAUST Repository

    Sukkari, Dalal; Ltaief, Hatem; Esposito, Aniello; Keyes, David E.

    2017-01-01

    ... the inherent high level of concurrency associated with Level 3 BLAS compute-bound kernels ultimately compensates for the arithmetic complexity overhead. Using the ScaLAPACK two-dimensional block cyclic data distribution with a rectangular processor topology ...

  17. Towards an Evaluation Framework for Software Process Improvement

    OpenAIRE

    Cheng, Chow Kian; Permadi, Rahadian Bayu

    2009-01-01

    Software has gained an essential role in our daily life in the last decades. This condition demands high quality software. To produce high quality software, many practitioners and researchers put more attention on the software development process. Large investments are poured into improving the software development process. Software Process Improvement (SPI) is a research area which is aimed at addressing the assessment and improvement issues in the software development process. One of the most impor...

  18. Integrating Visualization Applications, such as ParaView, into HEP Software Frameworks for In-situ Event Displays

    Science.gov (United States)

    Lyon, A. L.; Kowalkowski, J. B.; Jones, C. D.

    2017-10-01

    ParaView is a high performance visualization application not widely used in High Energy Physics (HEP). It is a long standing open source project led by Kitware and involves several Department of Energy (DOE) and Department of Defense (DOD) laboratories. Furthermore, it has been adopted by many DOE supercomputing centers and other sites. ParaView is unique in speed and efficiency by using state-of-the-art techniques developed by the academic visualization community that are often not found in applications written by the HEP community. In-situ visualization of events, where event details are visualized during processing/analysis, is a common task for experiment software frameworks. Kitware supplies Catalyst, a library that enables scientific software to serve visualization objects to client ParaView viewers yielding a real-time event display. Connecting ParaView to the Fermilab art framework will be described and the capabilities it brings discussed.

  19. Integrating Visualization Applications, such as ParaView, into HEP Software Frameworks for In-situ Event Displays

    Energy Technology Data Exchange (ETDEWEB)

    Lyon, A. L. [Fermilab; Kowalkowski, J. B. [Fermilab; Jones, C. D. [Fermilab

    2017-11-22

    ParaView is a high performance visualization application not widely used in High Energy Physics (HEP). It is a long standing open source project led by Kitware and involves several Department of Energy (DOE) and Department of Defense (DOD) laboratories. Furthermore, it has been adopted by many DOE supercomputing centers and other sites. ParaView is unique in speed and efficiency by using state-of-the-art techniques developed by the academic visualization community that are often not found in applications written by the HEP community. In-situ visualization of events, where event details are visualized during processing/analysis, is a common task for experiment software frameworks. Kitware supplies Catalyst, a library that enables scientific software to serve visualization objects to client ParaView viewers yielding a real-time event display. Connecting ParaView to the Fermilab art framework will be described and the capabilities it brings discussed.

  20. Creating a Framework for Applying OAIS to Distributed Digital Preservation

    DEFF Research Database (Denmark)

    Zierau, Eld; Schultz, Matt

    2013-01-01

    This paper describes work being done towards a Framework for Applying the Reference Model for an Open Archival Information System (OAIS) to Distributed Digital Preservation (DDP). Such a Framework will be helpful for future analyses and/or audits of repositories that are performing digital...

  1. A software framework for pipelined arithmetic algorithms in field programmable gate arrays

    Science.gov (United States)

    Kim, J. B.; Won, E.

    2018-03-01

    Pipelined algorithms implemented in field programmable gate arrays are extensively used for hardware triggers in the modern experimental high energy physics field and the complexity of such algorithms increases rapidly. For development of such hardware triggers, algorithms are developed in C++, ported to hardware description language for synthesizing firmware, and then ported back to C++ for simulating the firmware response down to the single bit level. We present a C++ software framework which automatically simulates and generates hardware description language code for pipelined arithmetic algorithms.
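
    A hedged, language-shifted sketch of what such bit-level pipeline simulation does (Python here for brevity; the paper's framework is C++ generating hardware description language): every stage owns a register, all registers update on the same clock edge, and results emerge one per cycle after the pipeline latency:

    ```python
    class Stage:
        """One pipeline stage: a combinational function plus an output register."""
        def __init__(self, func):
            self.func = func
            self.reg = None      # register value, updated once per clock

    class Pipeline:
        def __init__(self, stages):
            self.stages = stages

        def clock(self, new_input):
            """Advance one clock edge: every register loads the output of the
            logic in front of it, using values from the previous cycle."""
            prev = [new_input] + [s.reg for s in self.stages[:-1]]
            for stage, value in zip(self.stages, prev):
                stage.reg = None if value is None else stage.func(value)
            return self.stages[-1].reg   # valid after len(stages) cycles

    # Example: a 3-stage pipeline computing ((x + 1) * 3) & 0xFF, one result
    # per cycle with a latency of three cycles (None marks pipeline bubbles).
    pipe = Pipeline([Stage(lambda x: x + 1),
                     Stage(lambda x: x * 3),
                     Stage(lambda x: x & 0xFF)])
    for cycle, x in enumerate([10, 20, 30, None, None]):
        print(cycle, pipe.clock(x))
    ```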

  2. Distributed security framework for modern workforce

    Energy Technology Data Exchange (ETDEWEB)

    Balatsky, G.; Scherer, C. P., E-mail: gbalatsky@lanl.gov, E-mail: scherer@lanl.gov [Los Alamos National Laboratory, Los Alamos, NM (United States)

    2014-07-01

    Safe and sustainable nuclear power production depends on strict adherence to nuclear security as a necessary prerequisite for nuclear power. This paper considers the current challenges for nuclear security, and proposes a conceptual framework to address those challenges. We identify several emerging factors that affect nuclear security: 1. Relatively high turnover rates in the nuclear workforce compared to the earlier years of the nuclear industry, when nuclear workers were more likely to have secure employment, a lifelong career at one company, and retirement on a pension plan. 2. Vulnerabilities stemming from the ubiquitous presence of modern electronics and their patterns of use by the younger workforce. 3. Modern management practices, including outsourcing and short-term contracting (which relates to number 1 above). In such a dynamic and complex environment, nuclear security personnel alone cannot effectively guarantee adequate security. We propose that one solution to this emerging situation is a distributed security model in which the components of nuclear security become the responsibility of each and every worker at a nuclear facility. To implement this model, there needs to be a refurbishment of current workforce training and mentoring practices. The paper will present an example of distributed security framework model, and how it may look in practice. (author)

  3. Distributed security framework for modern workforce

    International Nuclear Information System (INIS)

    Balatsky, G.; Scherer, C. P.

    2014-01-01

    Safe and sustainable nuclear power production depends on strict adherence to nuclear security as a necessary prerequisite for nuclear power. This paper considers the current challenges for nuclear security, and proposes a conceptual framework to address those challenges. We identify several emerging factors that affect nuclear security: 1. Relatively high turnover rates in the nuclear workforce compared to the earlier years of the nuclear industry, when nuclear workers were more likely to have secure employment, a lifelong career at one company, and retirement on a pension plan. 2. Vulnerabilities stemming from the ubiquitous presence of modern electronics and their patterns of use by the younger workforce. 3. Modern management practices, including outsourcing and short-term contracting (which relates to number 1 above). In such a dynamic and complex environment, nuclear security personnel alone cannot effectively guarantee adequate security. We propose that one solution to this emerging situation is a distributed security model in which the components of nuclear security become the responsibility of each and every worker at a nuclear facility. To implement this model, there needs to be a refurbishment of current workforce training and mentoring practices. The paper will present an example of distributed security framework model, and how it may look in practice. (author)

  4. Statistical analysis of water-quality data containing multiple detection limits II: S-language software for nonparametric distribution modeling and hypothesis testing

    Science.gov (United States)

    Lee, L.; Helsel, D.

    2007-01-01

    Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data, perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply censored data is the Kaplan-Meier (K-M) method. This method has seen widespread usage in the medical sciences within a general framework termed "survival analysis", where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and related confidence limits computation. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation and interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.
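
    The paper's software is in S; as a hedged Python sketch of the underlying idea (ignoring tie and boundary conventions), left-censored values can be "flipped" into right-censored survival times, run through an ordinary K-M estimator, and flipped back to yield an empirical CDF:

    ```python
    import numpy as np

    def km_left_censored(values, censored):
        """K-M ECDF for left-censored data via the standard flip trick.
        values: measured concentrations, or detection limits for nondetects
        censored: True where the value is a nondetect ('<DL')."""
        values = np.asarray(values, dtype=float)
        censored = np.asarray(censored, dtype=bool)
        flip = values.max() + 1.0
        t = flip - values                       # now right-censored "times"
        order = np.argsort(t)
        t, event = t[order], ~censored[order]   # detects count as events
        at_risk, s, surv = len(t), 1.0, []
        for is_event in event:
            if is_event:
                s *= 1.0 - 1.0 / at_risk        # K-M product-limit step
            at_risk -= 1
            surv.append(s)
        # Survival on the flipped scale is the CDF on the original scale.
        return (flip - t)[::-1], np.asarray(surv)[::-1]

    xs, cdf = km_left_censored(values=[1.0, 0.5, 2.0, 0.5, 3.1, 0.7],
                               censored=[False, True, False, True, False, False])
    print(np.column_stack([xs, cdf]))
    ```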

  5. Multi-Agent Framework in Visual Sensor Networks

    Directory of Open Access Journals (Sweden)

    J. M. Molina

    2007-01-01

    The recent interest in the surveillance of public, military, and commercial scenarios is increasing the need to develop and deploy intelligent and/or automated distributed visual surveillance systems. Many applications based on distributed resources use the so-called software agent technology. In this paper, a multi-agent framework is applied to coordinate videocamera-based surveillance. The ability to coordinate agents improves the global image and task distribution efficiency. In our proposal, a software agent is embedded in each camera and controls the capture parameters. Then coordination is based on the exchange of high-level messages among agents. Agents use an internal symbolic model to interpret the current situation from the messages from all other agents to improve global coordination.

  6. An integrated development framework for rapid development of platform-independent and reusable satellite on-board software

    Science.gov (United States)

    Ziemke, Claas; Kuwahara, Toshinori; Kossev, Ivan

    2011-09-01

    Even in the field of small satellites, the on-board data handling subsystem has become complex and powerful. With the introduction of powerful CPUs and the availability of considerable amounts of memory on-board a small satellite, it has become possible to utilize the flexibility and power of contemporary platform-independent real-time operating systems. Especially the non-commercial sector, such as university institutes and community projects like AMSAT or SSETI, is characterized by an inherent lack of financial as well as manpower resources. The opportunity to utilize such real-time operating systems will contribute significantly to achieving a successful mission. Nevertheless, the on-board software of a satellite is much more than just an operating system. It has to fulfill a multitude of functional requirements such as: telecommand interpretation and execution, execution of control loops, generation of telemetry data and frames, failure detection isolation and recovery, communication with peripherals, and so on. Most of the aforementioned tasks are of a generic nature and have to be conducted on any satellite with only minor modifications. A general set of functional requirements, as well as a protocol for communication, is defined in the ESA ECSS-E-70-41A standard "Telemetry and telecommand packet utilization". This standard not only defines the communication protocol of the satellite-ground link but also defines a set of so-called services which have to be available on-board of every compliant satellite and which are of a generic nature. In this paper, a platform-independent and reusable framework is described which implements not only the ECSS-E-70-41A standard but also functionalities for interprocess communication, scheduling and a multitude of tasks commonly performed on-board of a satellite. By making use of the capabilities of the high-level programming language C/C++, the powerful open source library BOOST, the real-time operating system RTEMS and

  7. The SSCL framework software plans

    International Nuclear Information System (INIS)

    Frederiksen, S.

    1993-12-01

    In about ten years the Superconducting Super Collider Laboratory (SSCL) will be producing 40 TeV proton-proton interactions. The size and scale of the effort demand new approaches to designing and developing the software used by the experimental collaborations. The Physics Research Division Computing Department (PRCD) of the SSCL is developing (in collaboration with the Solenoidal Detector Collaboration (SDC) and Gamma, Electron and Muon (GEM) collaborations) a support system which will be used to build and run the collaboration software. It will be used for simulating the events needed for detector development and for the analysis of these complicated events. The plans and status of this program will be discussed.

  8. A Universal Communication Framework and Navigation Control Software for Mobile Prototyping Platforms

    Directory of Open Access Journals (Sweden)

    Andreas Mitschele-Thiel

    2010-09-01

    In our contribution we describe two new aspects of our low-cost mobile prototyping platform concept: a new hardware communication framework as well as new software features for navigation and control of our mobile platform. The paper is an extension of the ideas proposed in REV2009 [1] and is based on the hardware platform used therein and its monitoring and management software. This platform is based on the Quadrocopter concept – autonomous flying helicopter-style robots – and includes additional off-the-shelf parts. This leads to a universal mobile prototyping platform for communication tasks providing both mobile phone and WiFi access. However, the platform can provide these functions far more quickly than a technician on the ground might be able to. We will show that with our concept we can easily adapt the platform to the individual needs of the user, which leads to a very flexible and semi-autonomous system.

  9. A Generalized Cauchy Distribution Framework for Problems Requiring Robust Behavior

    Directory of Open Access Journals (Sweden)

    Carrillo RafaelE

    2010-01-01

    Statistical modeling is at the heart of many engineering problems. The importance of statistical modeling emanates not only from the desire to accurately characterize stochastic events, but also from the fact that distributions are the central models utilized to derive sample processing theories and methods. The generalized Cauchy distribution (GCD) family has a closed-form pdf expression across the whole family as well as algebraic tails, which makes it suitable for modeling many real-life impulsive processes. This paper develops a GCD theory-based approach that allows challenging problems to be formulated in a robust fashion. Notably, the proposed framework subsumes generalized Gaussian distribution (GGD) family-based developments, thereby guaranteeing performance improvements over traditional GCD-based problem formulation techniques. This robust framework can be adapted to a variety of applications in signal processing. As examples, we formulate four practical applications under this framework: (1) filtering for power line communications, (2) estimation in sensor networks with noisy channels, (3) reconstruction methods for compressed sensing, and (4) fuzzy clustering.
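
    A hedged illustration of why algebraic tails buy robustness (the classic sample myriad, the maximum-likelihood location estimate under the Cauchy special case, rather than the paper's full GCD machinery):

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def sample_myriad(x, gamma):
        """Sample myriad: argmin_b sum(log(gamma**2 + (x - b)**2)), the ML
        location estimate under a Cauchy model. The log grows slowly, so
        impulsive outliers barely move the estimate; gamma tunes behavior
        from mode-like (small) to mean-like (large). The cost can be
        multimodal for small gamma, so this bounded 1-D search is a sketch."""
        x = np.asarray(x, dtype=float)
        cost = lambda b: float(np.sum(np.log(gamma**2 + (x - b)**2)))
        res = minimize_scalar(cost, bounds=(x.min(), x.max()), method='bounded')
        return res.x

    rng = np.random.default_rng(0)
    data = np.append(rng.normal(5.0, 1.0, 50), [500.0, -300.0])  # impulses
    print("mean:", data.mean(), "myriad:", sample_myriad(data, gamma=1.0))
    ```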

  10. NASA software documentation standard software engineering program

    Science.gov (United States)

    1991-01-01

    The NASA Software Documentation Standard (hereinafter referred to as Standard) can be applied to the documentation of all NASA software. This Standard is limited to documentation format and content requirements. It does not mandate specific management, engineering, or assurance standards or techniques. This Standard defines the format and content of documentation for software acquisition, development, and sustaining engineering. Format requirements address where information shall be recorded and content requirements address what information shall be recorded. This Standard provides a framework to allow consistency of documentation across NASA and visibility into the completeness of project documentation. This basic framework consists of four major sections (or volumes). The Management Plan contains all planning and business aspects of a software project, including engineering and assurance planning. The Product Specification contains all technical engineering information, including software requirements and design. The Assurance and Test Procedures contains all technical assurance information, including Test, Quality Assurance (QA), and Verification and Validation (V&V). The Management, Engineering, and Assurance Reports is the library and/or listing of all project reports.

  11. Proceedings of the Workshop on software tools for distributed intelligent control systems

    Energy Technology Data Exchange (ETDEWEB)

    Herget, C.J. (ed.)

    1990-09-01

    The Workshop on Software Tools for Distributed Intelligent Control Systems was organized by Lawrence Livermore National Laboratory for the United States Army Headquarters Training and Doctrine Command and the Defense Advanced Research Projects Agency. The goals of the workshop were to identify the current state of the art in tools which support control systems engineering design and implementation, identify research issues associated with writing software tools which would provide a design environment to assist engineers in multidisciplinary control design and implementation, formulate a potential investment strategy to resolve the research issues and develop public domain code which can form the core of more powerful engineering design tools, and recommend test cases to focus the software development process and test associated performance metrics. Recognizing that the development of software tools for distributed intelligent control systems will require a multidisciplinary effort, experts in systems engineering, control systems engineering, and computer science were invited to participate in the workshop. In particular, experts who could address the following topics were selected: operating systems, engineering data representation and manipulation, emerging standards for manufacturing data, mathematical foundations, coupling of symbolic and numerical computation, user interface, system identification, system representation at different levels of abstraction, system specification, system design, verification and validation, automatic code generation, and integration of modular, reusable code.

  12. Portable software for distributed readout controllers and event builders in FASTBUS and VME

    International Nuclear Information System (INIS)

    Pordes, R.; Berg, D.; Berman, E.; Bernett, M.; Brown, D.; Constanta-Fanourakis, P.; Dorries, T.; Haire, M.; Joshi, U.; Kaczar, K.; Mackinnon, B.; Moore, C.; Nicinski, T.; Oleynik, G.; Petravick, D.; Sergey, G.; Slimmer, D.; Streets, J.; Votava, M.; White, V.

    1989-12-01

    We report on software developed as part of the PAN-DA system to support the functions of front end readout controllers and event builders in multiprocessor, multilevel, distributed data acquisition systems. For the next generation data acquisition system we have undertaken to design and implement software tools that are easily transportable to new modules. The first implementation of this software is for Motorola 68K series processor boards in FASTBUS and VME and will be used in the Fermilab accelerator run at the beginning of 1990. We use a Real Time Kernel Operating System. The software provides general connectivity tools for control, diagnosis and monitoring. 17 refs., 7 figs

  13. Software for virtual accelerator designing

    International Nuclear Information System (INIS)

    Kulabukhova, N.; Ivanov, A.; Korkhov, V.; Lazarev, A.

    2012-01-01

    The article discusses appropriate technologies for the software implementation of the Virtual Accelerator. The Virtual Accelerator is considered as a set of services and tools enabling transparent execution of computational software for modeling beam dynamics in accelerators on distributed computing resources. Distributed storage and information processing facilities utilized by the Virtual Accelerator make use of the Service-Oriented Architecture (SOA) according to a cloud computing paradigm. Control system tool-kits (such as EPICS and TANGO), computing modules (including high-performance computing), realization of the GUI with existing frameworks and visualization of the data are discussed in the paper. The presented research consists of a software analysis for the realization of interaction between all levels of the Virtual Accelerator and some samples of middleware implementation. A set of servers and clusters at St. Petersburg State University forms the infrastructure of the computing environment for Virtual Accelerator design. The use of component-oriented technology for realizing the interaction between Virtual Accelerator levels is proposed. The article concludes with an overview and substantiation of the choice of technologies that will be used for the design and implementation of the Virtual Accelerator. (authors)

  14. Arcade: A Web-Java Based Framework for Distributed Computing

    Science.gov (United States)

    Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    Distributed heterogeneous environments are being increasingly used to execute a variety of large size simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.

  15. A Real-Time Fault Management Software System for Distributed Environments, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — DyMA-FM (Dynamic Multivariate Assessment for Fault Management) is a software architecture for real-time fault management. Designed to run in a distributed...

  16. ActionMap: A web-based software that automates loci assignments to framework maps.

    Science.gov (United States)

    Albini, Guillaume; Falque, Matthieu; Joets, Johann

    2003-07-01

    Genetic linkage computation may be a repetitive and time consuming task, especially when numerous loci are assigned to a framework map. We thus developed ActionMap, a web-based software that automates genetic mapping on a fixed framework map without adding the new markers to the map. Using this tool, hundreds of loci may be automatically assigned to the framework in a single process. ActionMap was initially developed to map numerous ESTs with a small plant mapping population and is limited to inbred lines and backcrosses. ActionMap is highly configurable and consists of Perl and PHP scripts that automate command steps for the MapMaker program. A set of web forms were designed for data import and mapping settings. Results of automatic mapping can be displayed as tables or drawings of maps and may be exported. The user may create personal access-restricted projects to store raw data, settings and mapping results. All data may be edited, updated or deleted. ActionMap may be used either online or downloaded for free (http://moulon.inra.fr/~bioinfo/).

  17. Distributed Framework for Dynamic Telescope and Instrument Control

    Science.gov (United States)

    Ames, Troy J.; Case, Lynne

    2002-01-01

    Traditionally, instrument command and control systems have been developed specifically for a single instrument. Such solutions are frequently expensive and are inflexible to support the next instrument development effort. NASA Goddard Space Flight Center is developing an extensible framework, known as Instrument Remote Control (IRC), that applies to any kind of instrument that can be controlled by a computer. IRC combines the platform independent processing capabilities of Java with the power of the Extensible Markup Language (XML). A key aspect of the architecture is software that is driven by an instrument description, written using the Instrument Markup Language (IML). IML is an XML dialect used to describe graphical user interfaces to control and monitor the instrument, command sets and command formats, data streams, communication mechanisms, and data processing algorithms. The IRC framework provides the ability to communicate to components anywhere on a network using the JXTA protocol for dynamic discovery of distributed components. JXTA (see http://www.jxta.org) is a generalized protocol that allows any devices connected by a network to communicate in a peer-to-peer manner. IRC uses JXTA to advertise a device's IML and discover devices of interest on the network. Devices can join or leave the network and thus join or leave the instrument control environment of IRC. Currently, several astronomical instruments are working with the IRC development team to develop custom components for IRC to control their instruments. These instruments include: High resolution Airborne Wideband Camera (HAWC), a first light instrument for the Stratospheric Observatory for Infrared Astronomy (SOFIA); Submillimeter And Far Infrared Experiment (SAFIRE), a Principal Investigator instrument for SOFIA; and Fabry-Perot Interferometer Bolometer Research Experiment (FIBRE), a prototype of the SAFIRE instrument, used at the Caltech Submillimeter Observatory (CSO). Most recently, we have

  18. A Modeling Framework for Schedulability Analysis of Distributed Avionics Systems

    DEFF Research Database (Denmark)

    Han, Pujie; Zhai, Zhengjun; Nielsen, Brian

    2018-01-01

    This paper presents a modeling framework for schedulability analysis of distributed integrated modular avionics (DIMA) systems that consist of spatially distributed ARINC-653 modules connected by a unified AFDX network. We model a DIMA system as a set of stopwatch automata (SWA) in UPPAAL...

  19. Distributed Framework for Prototyping of Observability Concepts in Smart Grids

    DEFF Research Database (Denmark)

    Prostejovsky, Alexander; Gehrke, Oliver; Kosek, Anna Magdalena

    2015-01-01

    Development and testing of distributed monitoring, visualisation, and decision support concepts for future power systems require appropriate modelling tools that represent both the electrical side of the grid, as well as the communication and logical relations between the acting entities. This work presents an Observability Framework for distributed data acquisition and knowledge inference that aims to facilitate the development of these distributed concepts. They are realised as applications that run within the framework and are able to access the information on the grid topology and states via an abstract information model. Data is acquired dynamically over low-level data interfaces that allow for easy integration within heterogeneous environments. A Multi-Agent System platform was chosen for implementation, where agents represent the different electrical and logical grid elements

  20. DYNAMIC SOFTWARE TESTING MODELS WITH PROBABILISTIC PARAMETERS FOR FAULT DETECTION AND ERLANG DISTRIBUTION FOR FAULT RESOLUTION DURATION

    Directory of Open Access Journals (Sweden)

    A. D. Khomonenko

    2016-07-01

    Subject of Research. Software reliability and test planning models are studied taking into account the probabilistic nature of error detection and discovery. Modeling of software testing enables the planning of resources and final quality at early stages of project execution. Methods. Two dynamic models of processes (strategies) are suggested for software testing, using an error detection probability for each software module. The Erlang distribution is used to approximate arbitrary distributions of fault resolution duration. The exponential distribution is used to approximate fault discovery. For each strategy, modified labeled graphs are built, along with differential equation systems and their numerical solutions. The latter makes it possible to compute probabilistic characteristics of the test processes and states: state probabilities, distribution functions for fault detection and elimination, mathematical expectations of random variables, and the number of detected or fixed errors. Evaluation of Results. Probabilistic characteristics for software development projects were calculated using the suggested models. The strategies were compared by their quality indexes. The debugging time required to achieve the specified quality goals was calculated. The calculation results are used for time and resource planning for new projects. Practical Relevance. The proposed models give the possibility to use the reliability estimates for each individual module. The Erlang approximation removes restrictions on the use of arbitrary time distributions for fault resolution duration. It improves the accuracy of software test process modeling and helps to take into account the viability (power) of the tests. With the use of these models we can search for ways to improve software reliability by generating tests which detect errors with the highest probability.
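
    For reference, the Erlang density assumed for fault resolution duration (the standard form, with integer shape k and rate λ; k = 1 recovers the exponential case used for fault discovery):

    ```latex
    f(t) = \frac{\lambda^{k}\, t^{k-1}\, e^{-\lambda t}}{(k-1)!}, \qquad t \ge 0,\; k \in \mathbb{N},\; \lambda > 0
    ```

    Its mean k/λ and variance k/λ² let the shape be chosen to match the first two moments of an empirical resolution-time distribution.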

  1. Framework for Small-Scale Experiments in Software Engineering: Guidance and Control Software Project: Software Engineering Case Study

    Science.gov (United States)

    Hayhurst, Kelly J.

    1998-01-01

    Software is becoming increasingly significant in today's critical avionics systems. To achieve safe, reliable software, government regulatory agencies such as the Federal Aviation Administration (FAA) and the Department of Defense mandate the use of certain software development methods. However, little scientific evidence exists to show a correlation between software development methods and product quality. Given this lack of evidence, a series of experiments has been conducted to understand why and how software fails. The Guidance and Control Software (GCS) project is the latest in this series. The GCS project is a case study of the Requirements and Technical Concepts for Aviation RTCA/DO-178B guidelines, Software Considerations in Airborne Systems and Equipment Certification. All civil transport airframe and equipment vendors are expected to comply with these guidelines in building systems to be certified by the FAA for use in commercial aircraft. For the case study, two implementations of a guidance and control application were developed to comply with the DO-178B guidelines for Level A (critical) software. The development included the requirements, design, coding, verification, configuration management, and quality assurance processes. This paper discusses the details of the GCS project and presents the results of the case study.

  2. Development of Ada language control software for the NASA power management and distribution test bed

    Science.gov (United States)

    Wright, Ted; Mackin, Michael; Gantose, Dave

    1989-01-01

    The Ada language software developed to control the NASA Lewis Research Center's Power Management and Distribution testbed is described. The testbed is a reduced-scale prototype of the electric power system to be used on Space Station Freedom. It is designed to develop and test hardware and software for a 20-kHz power distribution system. The distributed, multiprocessor, testbed control system has an easy-to-use operator interface with an understandable English-text format. A simple interface for algorithm writers that uses the same commands as the operator interface is provided, encouraging interactive exploration of the system.

  3. Parallel and Distributed Data Processing Using Autonomous ...

    African Journals Online (AJOL)

    Given the distributed nature of these networks, data is processed by remote login or Remote Procedure Calls (RPC), which causes congestion in the network bandwidth. This paper proposes a framework where software agents are assigned duties to process the distributed data concurrently and assemble the ...

  4. Kepler Science Operations Center Pipeline Framework

    Science.gov (United States)

    Klaus, Todd C.; McCauliff, Sean; Cote, Miles T.; Girouard, Forrest R.; Wohler, Bill; Allen, Christopher; Middour, Christopher; Caldwell, Douglas A.; Jenkins, Jon M.

    2010-01-01

    The Kepler mission is designed to continuously monitor up to 170,000 stars at a 30 minute cadence for 3.5 years searching for Earth-size planets. The data are processed at the Science Operations Center (SOC) at NASA Ames Research Center. Because of the large volume of data and the memory and CPU-intensive nature of the analysis, significant computing hardware is required. We have developed generic pipeline framework software that is used to distribute and synchronize the processing across a cluster of CPUs and to manage the resulting products. The framework is written in Java and is therefore platform-independent, and scales from a single, standalone workstation (for development and research on small data sets) to a full cluster of homogeneous or heterogeneous hardware with minimal configuration changes. A plug-in architecture provides customized control of the unit of work without the need to modify the framework itself. Distributed transaction services provide for atomic storage of pipeline products for a unit of work across a relational database and the custom Kepler DB. Generic parameter management and data accountability services are provided to record the parameter values, software versions, and other meta-data used for each pipeline execution. A graphical console allows for the configuration, execution, and monitoring of pipelines. An alert and metrics subsystem is used to monitor the health and performance of the pipeline. The framework was developed for the Kepler project based on Kepler requirements, but the framework itself is generic and could be used for a variety of applications where these features are needed.
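
    A minimal Java sketch of the plug-in "unit of work" idea described above (hypothetical names; the abstract does not show the actual Kepler framework API):

        import java.util.List;

        /** One chunk of a pipeline task, e.g. a range of target stars. */
        interface UnitOfWork {
            List<UnitOfWork> subdivide(int maxChunks); // split for distribution across the cluster
        }

        /** Plug-in point: modules customize processing without modifying the framework. */
        interface PipelineModule<U extends UnitOfWork> {
            void process(U unit); // framework wraps this call in a distributed transaction
        }

    In such a design the framework instantiates the module class named in the pipeline configuration, which is what allows customized control of the unit of work without changes to the framework itself.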

  5. Optimization of traffic distribution control in software-configurable infrastructure of virtual data center based on a simulation model

    Directory of Open Access Journals (Sweden)

    I. P. Bolodurina

    2017-01-01

    Full Text Available Currently, the share of cloud computing technologies in companies' business processes is growing steadily. Although such technologies reduce the cost of owning and operating IT infrastructure, a number of problems related to the control of data centers remain. One such problem is the efficient use of the available compute and network resources. One direction of optimization is the control of cloud application and service traffic in data centers. Given the multi-tier architecture of modern data centers, this problem is not quite trivial. The advantage of modern virtual infrastructure is the ability to use software-configurable networks and software-configurable data storage. However, existing algorithmic optimization solutions do not take into account a number of features of the network traffic formed by multiple classes of applications. In this study, the problem of optimizing the distribution of cloud application and service traffic in a software-controlled virtual data center infrastructure is solved. A simulation model is described covering the traffic in the data center and the software-configurable network segments involved in processing user requests for applications and services in a network environment that includes a heterogeneous cloud platform and software-configurable data storage. The developed model made it possible to implement a cloud application traffic management algorithm and to optimize access to the storage system through effective use of the data transmission channel. Experimental studies found that the developed algorithm reduces the response time of cloud applications and services and, as a result, improves the performance of processing user requests and reduces the number of failures.
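
    A toy Java sketch of the kind of decision such a traffic management algorithm makes, routing the next request over the least-loaded channel (illustrative only; the paper's algorithm is driven by the simulation model and is more involved):

        import java.util.Map;

        /** Pick the channel with the lowest current utilization for the next request. */
        final class ChannelSelector {
            static String leastLoaded(Map<String, Double> utilization) {
                return utilization.entrySet().stream()
                        .min(Map.Entry.comparingByValue())
                        .orElseThrow()          // assumes at least one channel is known
                        .getKey();
            }
        }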

  6. Web based parallel/distributed medical data mining using software agents

    Energy Technology Data Exchange (ETDEWEB)

    Kargupta, H.; Stafford, B.; Hamzaoglu, I.

    1997-12-31

    This paper describes an experimental parallel/distributed data mining system PADMA (PArallel Data Mining Agents) that uses software agents for local data access and analysis, and a web-based interface for interactive data visualization. It also presents the results of applying PADMA to detecting patterns in unstructured texts of postmortem reports and laboratory test data for Hepatitis C patients.

  7. Analysis of lipid experiments (ALEX): a software framework for analysis of high-resolution shotgun lipidomics data.

    Directory of Open Access Journals (Sweden)

    Peter Husen

    Full Text Available Global lipidomics analysis across large sample sizes produces high-content datasets that require dedicated software tools supporting lipid identification and quantification, efficient data management and lipidome visualization. Here we present a novel software-based platform for streamlined data processing, management and visualization of shotgun lipidomics data acquired using high-resolution Orbitrap mass spectrometry. The platform features the ALEX framework designed for automated identification and export of lipid species intensity directly from proprietary mass spectral data files, and an auxiliary workflow using database exploration tools for integration of sample information, computation of lipid abundance and lipidome visualization. A key feature of the platform is the organization of lipidomics data in "database table format" which provides the user with an unsurpassed flexibility for rapid lipidome navigation using selected features within the dataset. To demonstrate the efficacy of the platform, we present a comparative neurolipidomics study of cerebellum, hippocampus and somatosensory barrel cortex (S1BF) from wild-type and knockout mice devoid of the putative lipid phosphate phosphatase PRG-1 (plasticity related gene-1). The presented framework is generic, extendable to processing and integration of other lipidomic data structures, can be interfaced with post-processing protocols supporting statistical testing and multivariate analysis, and can serve as an avenue for disseminating lipidomics data within the scientific community. The ALEX software is available at www.msLipidomics.info.

  8. Analysis of lipid experiments (ALEX): a software framework for analysis of high-resolution shotgun lipidomics data.

    Science.gov (United States)

    Husen, Peter; Tarasov, Kirill; Katafiasz, Maciej; Sokol, Elena; Vogt, Johannes; Baumgart, Jan; Nitsch, Robert; Ekroos, Kim; Ejsing, Christer S

    2013-01-01

    Global lipidomics analysis across large sample sizes produces high-content datasets that require dedicated software tools supporting lipid identification and quantification, efficient data management and lipidome visualization. Here we present a novel software-based platform for streamlined data processing, management and visualization of shotgun lipidomics data acquired using high-resolution Orbitrap mass spectrometry. The platform features the ALEX framework designed for automated identification and export of lipid species intensity directly from proprietary mass spectral data files, and an auxiliary workflow using database exploration tools for integration of sample information, computation of lipid abundance and lipidome visualization. A key feature of the platform is the organization of lipidomics data in "database table format" which provides the user with an unsurpassed flexibility for rapid lipidome navigation using selected features within the dataset. To demonstrate the efficacy of the platform, we present a comparative neurolipidomics study of cerebellum, hippocampus and somatosensory barrel cortex (S1BF) from wild-type and knockout mice devoid of the putative lipid phosphate phosphatase PRG-1 (plasticity related gene-1). The presented framework is generic, extendable to processing and integration of other lipidomic data structures, can be interfaced with post-processing protocols supporting statistical testing and multivariate analysis, and can serve as an avenue for disseminating lipidomics data within the scientific community. The ALEX software is available at www.msLipidomics.info.

  9. The dBoard: a Digital Scrum Board for Distributed Software Development

    DEFF Research Database (Denmark)

    Esbensen, Morten; Tell, Paolo; Cholewa, Jacob Benjamin

    2015-01-01

    In this paper we present the dBoard - a digital Scrum Board for distributed Agile software development teams. The dBoard is designed as a 'virtual window' between two Scrum team spaces. It connects two locations with live video and audio, which is overlaid with a synchronized and interactive digital Scrum board, and it adapts the fidelity of the video/audio to the presence of people in front of it. The dBoard is designed to work (i) as a passive information radiator from which it is easy to get an overview of the status of work, (ii) as a media space providing awareness about the presence of remote co-workers, and (iii) as an active meeting support tool. The paper presents a case study of distributed Scrum in a large software company that motivates the design of the dBoard, and details the design and technical implementation of the dBoard. The paper also reports on an initial user study...

  10. Collaborative Windows – A User Interface Concept for Distributed Collaboration

    DEFF Research Database (Denmark)

    Esbensen, Morten

    2016-01-01

    where close collaboration and frequent meetings drive the work. One way to achieve this way of working is to implement the Scrum software development framework. Implementing Scrum in a globalized context, however, requires transforming the Scrum development methods to a distributed setup and making extensive use of collaboration technologies. In this dissertation, I explore how novel collaboration technologies can support closely coupled distributed work such as that in distributed Scrum. This research is based on three different studies: an ethnographic field study of distributed Scrum between Danish and Indian software...

  11. Mapserver – Information Flow Management Software for The Border Guard Distributed Data Exchange System

    OpenAIRE

    Blok Marek; Kaczmarek Sylwester; Młynarczuk Magdalena; Narloch Marcin

    2016-01-01

    In this paper the architecture of the software designed for the management of position and identification data of floating and flying objects in maritime areas controlled by the Polish Border Guard is presented. The software was designed for managing information stored in a distributed system with two variants of the software: one for a mobile device installed on a vessel, an airplane or a car, and a second for a central server. The details of the implementation of all functionalities of the MapServer in bo...

  12. A Framework for Federated Two-Factor Authentication Enabling Cost-Effective Secure Access to Distributed Cyberinfrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Ezell, Matthew A [ORNL; Rogers, Gary L [University of Tennessee, Knoxville (UTK); Peterson, Gregory D. [University of Tennessee, Knoxville (UTK)

    2012-01-01

    As cyber attacks become increasingly sophisticated, the security measures used to mitigate the risks must also increase in sophistication. One time password (OTP) systems provide strong authentication because security credentials are not reusable, thus thwarting credential replay attacks. The credential changes regularly, making brute-force attacks significantly more difficult. In high performance computing, end users may require access to resources housed at several different service provider locations. The ability to share a strong token between multiple computing resources reduces cost and complexity. The National Science Foundation (NSF) Extreme Science and Engineering Discovery Environment (XSEDE) provides access to digital resources, including supercomputers, data resources, and software tools. XSEDE will offer centralized strong authentication for services amongst service providers that leverage their own user databases and security profiles. This work implements a scalable framework built on standards to provide federated secure access to distributed cyberinfrastructure.
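
    For reference, the core of a standard one-time password computation (HOTP, RFC 4226) looks like the following Java sketch; this is the generic algorithm, not the XSEDE implementation:

        import java.nio.ByteBuffer;
        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;

        /** HOTP: HMAC-SHA1 over a moving counter, dynamically truncated to n digits. */
        final class Hotp {
            static int generate(byte[] secret, long counter, int digits) throws Exception {
                Mac mac = Mac.getInstance("HmacSHA1");
                mac.init(new SecretKeySpec(secret, "HmacSHA1"));
                byte[] h = mac.doFinal(ByteBuffer.allocate(8).putLong(counter).array());
                int off = h[h.length - 1] & 0x0F;              // dynamic truncation offset
                int bin = ((h[off] & 0x7F) << 24) | ((h[off + 1] & 0xFF) << 16)
                        | ((h[off + 2] & 0xFF) << 8) | (h[off + 3] & 0xFF);
                return bin % (int) Math.pow(10, digits);       // e.g. a 6-digit code
            }
        }

    Because the counter advances on every use, an intercepted code cannot be replayed, which is the property the abstract highlights.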

  13. ALMA software architecture

    Science.gov (United States)

    Schwarz, Joseph; Raffi, Gianni

    2002-12-01

    The Atacama Large Millimeter Array (ALMA) is a joint project involving astronomical organizations in Europe and North America. ALMA will consist of at least 64 12-meter antennas operating in the millimeter and sub-millimeter range. It will be located at an altitude of about 5000m in the Chilean Atacama desert. The primary challenge to the development of the software architecture is the fact that both its development and runtime environments will be distributed. Groups at different institutes will develop the key elements such as Proposal Preparation tools, Instrument operation, On-line calibration and reduction, and Archiving. The Proposal Preparation software will be used primarily at scientists' home institutions (or on their laptops), while Instrument Operations will execute on a set of networked computers at the ALMA Operations Support Facility. The ALMA Science Archive, itself to be replicated at several sites, will serve astronomers worldwide. Building upon the existing ALMA Common Software (ACS), the system architects will prepare a robust framework that will use XML-encoded entity objects to provide an effective solution to the persistence needs of this system, while remaining largely independent of any underlying DBMS technology. Independence of distributed subsystems will be facilitated by an XML- and CORBA-based pass-by-value mechanism for exchange of objects. Proof of concept (as well as a guide to subsystem developers) will come from a prototype whose details will be presented.

  14. LandSAfe: Landing Site Risk Analysis Software Framework

    Directory of Open Access Journals (Sweden)

    R. Schmidt

    2012-08-01

    Full Text Available The European Space Agency (ESA) is planning a Lunar Lander mission in the 2018 timeframe that will demonstrate precise soft landing at the polar regions of the Moon. To ensure a safe and successful landing, a careful risk analysis has to be carried out. This comprises identifying favorable target areas and evaluating the surface conditions in these areas. Features like craters, boulders, steep slopes, rough surfaces and shadow areas have to be identified in order to assess the risk associated with a landing site in terms of a successful touchdown and subsequent surface operation of the lander. In addition, global illumination conditions at the landing site have to be simulated and analyzed. The Landing Site Risk Analysis software framework (LandSAfe) is a system for the analysis, selection and certification of safe landing sites on the lunar surface. LandSAfe generates several data products including high resolution digital terrain models (DTMs), hazard maps, illumination maps, temperature maps and surface reflectance maps which assist the user in evaluating potential landing site candidates. This paper presents the LandSAfe system and describes the methods and products of the different modules. For one candidate landing site on the rim of Shackleton crater at the south pole of the Moon a high resolution DTM is showcased.

  15. Distributed control software of high-performance control-loop algorithm

    CERN Document Server

    Blanc, D

    1999-01-01

    The majority of industrial cooling and ventilation plants require the control of complex processes. All these processes are highly important for the operation of the machines. The stability and reliability of these processes are leading factors in the quality of the service provided. The control system architecture, and the software structure as well, are required to have high dynamic performance and robust behaviour. Intelligent systems based on PID or RST controllers are used for their high level of stability and accuracy. The design and tuning of these complex controllers require the dynamic model of the plant to be known (generally obtained by identification) and the desired performance of the various control loops to be specified in order to achieve good performance. The concept of a distributed control algorithm software provides full automation facilities with well-adapted functionality and good performance, giving methodology, means and tools to master the dynamic process optimization an...
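
    A minimal discrete-time PID update of the kind such control loops implement (a generic textbook sketch, not the plant-specific CERN controllers):

        /** Discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt, sampled every dt seconds. */
        final class Pid {
            private final double kp, ki, kd, dt;
            private double integral, prevError;

            Pid(double kp, double ki, double kd, double dt) {
                this.kp = kp; this.ki = ki; this.kd = kd; this.dt = dt;
            }

            double update(double setpoint, double measurement) {
                double error = setpoint - measurement;
                integral += error * dt;                        // accumulated error
                double derivative = (error - prevError) / dt;  // rate of change
                prevError = error;
                return kp * error + ki * integral + kd * derivative;
            }
        }

    Tuning Kp, Ki and Kd against an identified plant model is exactly the design step the abstract refers to.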

  16. Harmonic Domain Modeling of a Distribution System Using the DIgSILENT PowerFactory Software

    DEFF Research Database (Denmark)

    Wasilewski, J.; Wiechowski, Wojciech Tomasz; Bak, Claus Leth

    The first part of this paper presents a comparison between two models of a distribution system created in the computer simulation software PowerFactory (PF). Model A is an existing simplified equivalent model of the distribution system used by Transmission System Operator (TSO) Eltra for balanced load...

  17. GSIMF: a web service based software and database management system for the next generation grids

    International Nuclear Information System (INIS)

    Wang, N; Ananthan, B; Gieraltowski, G; May, E; Vaniachine, A

    2008-01-01

    To process the vast amount of data from high energy physics experiments, physicists rely on Computational and Data Grids; yet, the distribution, installation, and updating of a myriad of different versions of different programs over the Grid environment is complicated, time-consuming, and error-prone. Our Grid Software Installation Management Framework (GSIMF) is a set of Grid Services that has been developed for managing versioned and interdependent software applications and file-based databases over the Grid infrastructure. This set of Grid services provides a mechanism to install software packages on distributed Grid computing elements, thus automating the software and database installation management process on behalf of the users. This enables users to remotely install programs and tap into the computing power provided by Grids.

  18. The LabVIEW RADE framework distributed architecture

    International Nuclear Information System (INIS)

    Andreassen, O.O.; Kudryavtsev, D.; Raimondo, A.; Rijllart, A.; Shaipov, V.; Sorokoletov, R.

    2012-01-01

    For accelerator GUI (Graphical User Interface) applications there is a need for a rapid development environment (RADE) to create expert tools or to prototype operator applications. Typically a variety of tools are used, such as Matlab or Excel, but their scope is limited, either because of their low flexibility or their limited integration into the accelerator infrastructure. In addition, having several tools obliges users to deal with different programming techniques and data structures. We have addressed these limitations by using LabVIEW, extending it with interfaces to C++ and Java. In this way it fulfills the requirements of ease of use, flexibility and connectivity, which make up what we refer to as the RADE framework. Recent application requirements could only be met by implementing a distributed architecture with multiple servers running multiple services. This brought the additional advantage of implementing redundant services, increasing availability and making updates transparent. We will present two applications requiring high availability. We also report on issues encountered with such a distributed architecture and how we have addressed them. The latest extension of the framework is to industrial equipment, with program templates and drivers for PLCs (Siemens and Schneider) and PXI with LabVIEW Real-Time. (authors)

  19. DOOCS patterns, reusable software components for FPGA based RF GUN field controller

    Energy Technology Data Exchange (ETDEWEB)

    Pucyk, P. [Institute of Electronic Systems, Warsaw (Poland)

    2006-07-01

    Modern accelerator technology combines software and hardware solutions to provide distributed, high-efficiency digital systems for High Energy Physics experiments. Providing flexible, maintainable software is crucial for ensuring high availability of the whole system. In order to fulfil all these requirements, appropriate design and development techniques have to be used. Software patterns are a well-known solution for common programming issues, providing proven development paradigms which can help to avoid many design issues. DOOCS patterns introduce new concepts of reusable software components for control system algorithm development and implementation in the DOOCS framework. The chosen patterns are described and usage examples are presented in this paper. (orig.)

  20. DOOCS patterns, reusable software components for FPGA based RF GUN field controller

    International Nuclear Information System (INIS)

    Pucyk, P.

    2006-01-01

    Modern accelerator technology combines software and hardware solutions to provide distributed, high-efficiency digital systems for High Energy Physics experiments. Providing flexible, maintainable software is crucial for ensuring high availability of the whole system. In order to fulfil all these requirements, appropriate design and development techniques have to be used. Software patterns are a well-known solution for common programming issues, providing proven development paradigms which can help to avoid many design issues. DOOCS patterns introduce new concepts of reusable software components for control system algorithm development and implementation in the DOOCS framework. The chosen patterns are described and usage examples are presented in this paper. (orig.)

  1. WWW-based remote analysis framework for UniSampo and Shaman analysis software

    International Nuclear Information System (INIS)

    Aarnio, P.A.; Ala-Heikkilae, J.J.; Routti, J.T.; Nikkinen, M.T.

    2005-01-01

    UniSampo and Shaman are well-established analytical tools for gamma-ray spectrum analysis and subsequent radionuclide identification. These tools are normally run locally on a Unix or Linux workstation in interactive mode. However, it is also possible to run them in batch/non-interactive mode by starting them with the correct parameters. This is how they are used in standard analysis pipeline operation. This functionality also makes it possible to use them for remote operation over the network. A framework for running UniSampo and Shaman analyses using the standard WWW protocol has been developed. A WWW server receives requests from the client WWW browser and runs the analysis software via a set of CGI scripts. Authentication, input data transfer, and output and display of the final analysis results are all carried out using standard WWW mechanisms. This WWW framework can be utilized, for example, by organizations that have radioactivity surveillance stations over a wide area. A computer with a standard internet/intranet connection suffices for on-site analyses. (author)

  2. A resilient and secure software platform and architecture for distributed spacecraft

    Science.gov (United States)

    Otte, William R.; Dubey, Abhishek; Karsai, Gabor

    2014-06-01

    A distributed spacecraft is a cluster of independent satellite modules flying in formation that communicate via ad-hoc wireless networks. This system in space is a cloud platform that facilitates sharing sensors and other computing and communication resources across multiple applications, potentially developed and maintained by different organizations. Effectively, such an architecture can realize the functions of monolithic satellites at a reduced cost and with improved adaptivity and robustness. The openness of these architectures poses special challenges because the distributed software platform has to support applications from different security domains and organizations, and information flows have to be carefully managed and compartmentalized. If the platform is used as a robust shared resource, its management, configuration, and resilience become a challenge in themselves. We have designed and prototyped a distributed software platform for such architectures. The core element of the platform is a new operating system whose services were designed to restrict access to the network and the file system, and to enforce resource management constraints for all non-privileged processes. Mixed-criticality applications operating at different security labels are deployed and controlled by a privileged management process that also pre-configures all information flows. This paper describes the design and objectives of this layer.

  3. Designing a Software Test Automation Framework

    Directory of Open Access Journals (Sweden)

    Sabina AMARICAI

    2014-01-01

    Full Text Available Testing is an art and science that should ultimately lead to lower-cost businesses through increasing control and reducing risk. Testing specialists should thoroughly understand the system or application from both the technical and the business perspective, and then design, build and implement the minimum-cost, maximum-coverage validation framework. Test automation is an important ingredient for testing large-scale applications. In this paper we discuss several test automation frameworks, their advantages and disadvantages. We also propose a custom automation framework model that is suited for applications with very complex business requirements and numerous interfaces.

  4. A Multi-Functional Fully Distributed Control Framework for AC Microgrids

    DEFF Research Database (Denmark)

    Shafiee, Qobad; Nasirian, Vahidreza; Quintero, Juan Carlos Vasquez

    2018-01-01

    This paper proposes a fully distributed control methodology for secondary control of AC microgrids. The control framework includes three modules: voltage regulator, reactive power regulator, and active power/frequency regulator. The voltage regulator module maintains the average voltage of the microgrid distribution line at the rated value. The reactive power regulator compares the local normalized reactive power of an inverter with its neighbors' powers on a communication graph and, accordingly, fine-tunes Q-V droop coefficients to mitigate any reactive power mismatch. Collectively, these two.../reactive power sharing. An AC microgrid is prototyped to experimentally validate the proposed control methodology against load changes, plug-and-play operation, and communication constraints such as delay, packet loss, and limited bandwidth.
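
    A minimal sketch of the distributed averaging step that such regulator modules typically build on, where each controller exchanges values only with its neighbors on the communication graph (a generic consensus iteration, not the paper's controller):

        import java.util.List;

        /** One consensus step: nudge the local estimate toward the neighbors' values. */
        final class AverageConsensus {
            static double step(double own, List<Double> neighbors, double gain) {
                double disagreement = 0.0;
                for (double v : neighbors) disagreement += v - own;
                return own + gain * disagreement; // gain must be small enough for stability
            }
        }

    Iterating this step on every node drives the local estimates (e.g. of the average grid voltage) to a common value without any central coordinator.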

  5. AthenaMT: upgrading the ATLAS software framework for the many-core world with multi-threading

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00100895; The ATLAS collaboration; Baines, John; Bold, Tomasz; Calafiura, Paolo; Farrell, Steven; Malon, David; Ritsch, Elmar; Stewart, Graeme; Snyder, Scott; Tsulaia, Vakhtang; Wynne, Benjamin; van Gemmeren, Peter

    2017-01-01

    ATLAS’s current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognized for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. After concluding a rigorous requirements phase, where many design components were examined in detail, ATLAS has begun the migration to a new data-flow driven, multi-threaded framework, which enables the simultaneous processing of singleton, thread unsafe legacy Algorithms, cloned Algorithms that execute concurrently in their own threads with different Event contexts, and fully re-entrant, thread safe Algorithms. In this paper we report on the process of modifying the framework to safely process multiple concurrent events in different threads, which entails significant changes in the underlying ha...
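
    A toy Java sketch of the scheduling idea described above, processing different events concurrently with per-event contexts (illustrative only; the real Gaudi/Athena scheduler also handles data-flow dependencies and algorithm cloning):

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        final class MiniEventLoop {
            interface Algorithm { void execute(int eventContext); } // assumed thread safe

            static void run(Algorithm algo, int nEvents, int nThreads) {
                ExecutorService pool = Executors.newFixedThreadPool(nThreads);
                for (int evt = 0; evt < nEvents; evt++) {
                    final int ctx = evt;                  // each event gets its own context
                    pool.submit(() -> algo.execute(ctx)); // events processed concurrently
                }
                pool.shutdown();
            }
        }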

  6. AthenaMT: Upgrading the ATLAS Software Framework for the Many-Core World with Multi-Threading

    CERN Document Server

    Leggett, Charles; The ATLAS collaboration; Bold, Tomasz; Calafiura, Paolo; Farrell, Steven; Malon, David; Ritsch, Elmar; Stewart, Graeme; Snyder, Scott; Tsulaia, Vakhtang; Wynne, Benjamin; van Gemmeren, Peter

    2016-01-01

    ATLAS's current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognised for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. After concluding a rigorous requirements phase, where many design components were examined in detail, ATLAS has begun the migration to a new data-flow driven, multi-threaded framework, which enables the simultaneous processing of singleton, thread unsafe legacy Algorithms, cloned Algorithms that execute concurrently in their own threads with different Event contexts, and fully re-entrant, thread safe Algorithms. In this paper we will report on the process of modifying the framework to safely process multiple concurrent events in different threads, which entails significant changes in the underlying...

  7. Mobile agent-enabled framework for structuring and building distributed systems on the internet

    Institute of Scientific and Technical Information of China (English)

    CAO Jiannong; ZHOU Jingyang; ZHU Weiwei; LI Xuhui

    2006-01-01

    Mobile agents (MA) have shown their promise as a powerful means to complement and enhance existing technology in various application areas. In particular, existing work has demonstrated that MA can simplify the development and improve the performance of certain classes of distributed applications, especially those running in a wide-area, heterogeneous, and dynamic networking environment like the Internet. In our previous work, we extended the application of MA to the design of distributed control functions, which require the maintenance of logical relationships among and/or coordination of processing entities in a distributed system. A novel framework is presented for structuring and building distributed systems, which uses cooperating mobile agents as an aid to carry out coordination and cooperation tasks in distributed systems. The framework has been used for designing various distributed control functions such as load balancing and mutual exclusion in our previous work. In this paper, we use the framework to propose a novel approach to detecting deadlocks in distributed systems by using mobile agents, which demonstrates the adaptivity and flexibility of mobile agents. We first describe the MAEDD (Mobile Agent Enabled Deadlock Detection) scheme, in which mobile agents are dispatched to collect and analyze deadlock information distributed across the network sites and, based on the analysis, to detect and resolve deadlocks. Then the design of an adaptive hybrid algorithm derived from the framework is presented. The algorithm can dynamically adapt itself to changes in system state by using different deadlock detection strategies. The performance of the proposed algorithm has been evaluated using simulations. The results show that the algorithm outperforms existing algorithms that use a fixed deadlock detection strategy.
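
    For reference, the classic check that distributed deadlock detection ultimately reduces to, finding a cycle in the wait-for graph, is shown below (a textbook sketch, not the MAEDD algorithm; in MAEDD the agents first gather the graph edges from the network sites):

        import java.util.*;

        /** Deadlock exists iff the wait-for graph (process -> processes it waits on) has a cycle. */
        final class WaitForGraph {
            static boolean hasCycle(Map<Integer, List<Integer>> waitsFor) {
                Set<Integer> done = new HashSet<>(), onPath = new HashSet<>();
                for (Integer p : waitsFor.keySet())
                    if (dfs(p, waitsFor, done, onPath)) return true;
                return false;
            }
            private static boolean dfs(Integer p, Map<Integer, List<Integer>> g,
                                       Set<Integer> done, Set<Integer> onPath) {
                if (onPath.contains(p)) return true;   // back edge: cycle found
                if (done.contains(p)) return false;    // already explored, no cycle here
                onPath.add(p);
                for (Integer q : g.getOrDefault(p, List.of()))
                    if (dfs(q, g, done, onPath)) return true;
                onPath.remove(p);
                done.add(p);
                return false;
            }
        }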

  8. Implementation of the ATLAS trigger within the multi-threaded software framework AthenaMT

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00225867; The ATLAS collaboration

    2017-01-01

    We present an implementation of the ATLAS High Level Trigger, HLT, that provides parallel execution of trigger algorithms within the ATLAS multithreaded software framework, AthenaMT. This development will enable the ATLAS HLT to meet future challenges due to the evolution of computing hardware and upgrades of the Large Hadron Collider, LHC, and ATLAS Detector. During the LHC data-taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further, to up to 7.5 times the design value, in 2026 following LHC and ATLAS upgrades. This includes an upgrade of the ATLAS trigger architecture that will result in an increase in the HLT input rate by a factor of 4 to 10 compared to the current maximum rate of 100 kHz. The current ATLAS multiprocess framework, AthenaMP, manages a number of processes that each execute algorithms sequentially for different events. AthenaMT will provide a fully multi-threaded environment that will additionally enable concurrent ...

  9. Developing Distributed System With Service Resource Oriented Architecture

    Directory of Open Access Journals (Sweden)

    Hermawan Hermawan

    2012-06-01

    Full Text Available Service Oriented Architecture (SOA) is a design paradigm in software engineering with which a distributed system is built for an enterprise. This paradigm aims at providing the system as a service through a protocol in web service technology, namely the Simple Object Access Protocol (SOAP). However, SOA addresses only the service-level agreements of web services. For this reason, this research aims at combining SOA with Resource Oriented Architecture in order to expand the scalability of services. This combination creates Service Resource Oriented Architecture (SROA), with which a distributed system is developed that integrates services within project management software. Following this design, the software is developed according to a framework of Agile Model Driven Development, which can reduce the complexity of the whole software development process.

  10. The Unified Software Development Process and Framework Development = Birleşik Yazılım Geliştirme Süreci ve İskelet Yapılarının Geliştirilmesi

    Directory of Open Access Journals (Sweden)

    Abdelaziz KHAMIS

    2002-01-01

    Full Text Available Application frameworks are a very promising software reuse technology. The development of application frameworks is a complex process. Many methodologies and approaches have been proposed with the purpose of minimizing the complexities. The Unified Software Development Process directly addresses the complexity challenge of today's software applications. In this paper, we explore the role of the Unified Software Development Process, together with a popular CASE tool, Rational Rose, in managing the complexity of developing application frameworks.

  11. An Effective Framework for Distributed Geospatial Query Processing in Grids

    Directory of Open Access Journals (Sweden)

    CHEN, B.

    2010-08-01

    Full Text Available The emergence of the Internet has greatly revolutionized the way that geospatial information is collected, managed, processed and integrated. There are several important research issues to be addressed for distributed geospatial applications. First, the performance of geospatial applications needs to be considered in the Internet environment. In this regard, the Grid, as an effective distributed computing paradigm, is a good choice. The Grid uses a series of middleware to interconnect and merge various distributed resources into a super-computer with high-performance computation capability. Secondly, it is necessary to ensure the secure use of independent geospatial applications in the Internet environment. The Grid provides just this utility of secure access to distributed geospatial resources. Additionally, it makes good sense to overcome the heterogeneity between individual geospatial information systems on the Internet. The Open Geospatial Consortium (OGC) proposes a number of generalized geospatial standards, e.g. OGC Web Services (OWS), to achieve interoperable access to geospatial applications. The OWS solution is feasible and widely adopted by both the academic and the industrial community. Therefore, we propose an integrated framework that incorporates OWS standards into Grids. With this framework, distributed geospatial queries can be performed in an interoperable, high-performance and secure Grid environment.

  12. Control, Test and Monitoring Software Framework for the ATLAS Level-1 Calorimeter Trigger

    CERN Document Server

    Achenbach, R; Aharrouche, M; Andrei, V; Åsman, B; Barnett, B M; Bauss, B; Bendel, M; Bohm, C; Booth, J R A; Bracinik, J; Brawn, I P; Charlton, D G; Childers, J T; Collins, N J; Curtis, C J; Davis, A O; Eckweiler, S; Eisenhandler, E F; Faulkner, P J W; Fleckner, J; Föhlisch, F; Gee, C N P; Gillman, A R; Goringer, C; Groll, M; Hadley, D R; Hanke, P; Hellman, S; Hidvegi, A; Hillier, S J; Johansen, M; Kluge, E E; Kühl, T; Landon, M; Lendermann, V; Lilley, J N; Mahboubi, K; Mahout, G; Meier, K; Middleton, R P; Moa, T; Morris, J D; Müller, F; Neusiedl, A; Ohm, C; Oltmann, B; Perera, V J O; Prieur, D P F; Qian, W; Rieke, S; Rühr, F; Sankey, D P C; Schäfer, U; Schmitt, K; Schultz-Coulon, H C; Silverstein, S; Sjölin, J; Staley, R J; Stamen, R; Stockton, M C; Tan, C L A; Tapprogge, S; Thomas, J P; Thompson, P D; Watkins, P M; Watson, A; Weber, P; Wessels, M; Wildt, M

    2008-01-01

    The ATLAS first-level calorimeter trigger is a hardware-based system designed to identify high-pT jets, electron/photon and tau candidates and to measure total and missing ET in the ATLAS calorimeters. The complete trigger system consists of over 300 custom-designed VME modules of varying complexity. These modules are based around FPGAs or ASICs with many configurable parameters, both to initialize the system with correct calibrations and timings and to allow flexibility in the trigger algorithms. The control, testing and monitoring of these modules requires a comprehensive, but well-designed and modular, software framework, which we will describe in this paper.

  13. Modelling Framework and the Quantitative Analysis of Distributed Energy Resources in Future Distribution Networks

    DEFF Research Database (Denmark)

    Han, Xue; Sandels, Claes; Zhu, Kun

    2013-01-01

    There has been a large body of statements claiming that the large-scale deployment of Distributed Energy Resources (DERs) could eventually reshape future distribution grid operation in numerous ways. Thus, it is necessary to introduce a framework to measure to what extent the power system..., comprising distributed generation, active demand and electric vehicles. Subsequently, quantitative analysis was made on the basis of the current and envisioned DER deployment scenarios proposed for Sweden. Simulations are performed in two typical distribution network models for four seasons. The simulation results show that, in general, the DER deployment brings in possibilities to reduce power losses and voltage drops by compensating power from local generation and optimizing local load profiles.

  14. Support for User Interfaces for Distributed Systems

    Science.gov (United States)

    Eychaner, Glenn; Niessner, Albert

    2005-01-01

    An extensible Java™ software framework supports the construction and operation of graphical user interfaces (GUIs) for distributed computing systems typified by ground control systems that send commands to, and receive telemetric data from, spacecraft. Heretofore, such GUIs have been custom built for each new system at considerable expense. In contrast, the present framework affords generic capabilities that can be shared by different distributed systems. Dynamic class loading, reflection, and other run-time capabilities of the Java language and JavaBeans component architecture enable the creation of a GUI for each new distributed computing system with a minimum of custom effort. By use of this framework, GUI components in control panels and menus can send commands to a particular distributed system with a minimum of system-specific code. The framework receives, decodes, processes, and displays telemetry data; custom telemetry data handling can be added for a particular system. The framework supports saving and later restoration of users' configurations of control panels and telemetry displays with a minimum of effort in writing system-specific code. GUIs constructed within this framework can be deployed in any operating system with a Java run-time environment, without recompilation or code changes.
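
    A minimal sketch of the dynamic class loading mechanism the framework relies on (the class name shown is hypothetical; the framework's actual API is not given in this abstract):

        import javax.swing.JComponent;

        /** Load a GUI panel class by name at run time and instantiate it reflectively. */
        final class PanelLoader {
            static JComponent load(String className) throws ReflectiveOperationException {
                Class<?> cls = Class.forName(className);    // dynamic class loading
                return (JComponent) cls.getDeclaredConstructor().newInstance();
            }
        }

        // e.g. JComponent panel = PanelLoader.load("gov.nasa.gcs.TelemetryPanel");

    Because the panel class is named in configuration rather than compiled in, a new distributed system gets its GUI with little system-specific code.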

  15. Property-Based Software Engineering Measurement

    Science.gov (United States)

    Briand, Lionel C.; Morasca, Sandro; Basili, Victor R.

    1997-01-01

    Little theory exists in the field of software system measurement. Concepts such as complexity, coupling, cohesion or even size are very often subject to interpretation and appear to have inconsistent definitions in the literature. As a consequence, there is little guidance provided to the analyst attempting to define proper measures for specific problems. Many controversies in the literature are simply misunderstandings and stem from the fact that some people talk about different measurement concepts under the same label (complexity is the most common case). There is a need to define unambiguously the most important measurement concepts used in the measurement of software products. One way of doing so is to define precisely what mathematical properties characterize these concepts, regardless of the specific software artifacts to which these concepts are applied. Such a mathematical framework could generate a consensus in the software engineering community and provide a means for better communication among researchers, better guidelines for analysts, and better evaluation methods for commercial static analyzers for practitioners. In this paper, we propose a mathematical framework which is generic, because it is not specific to any particular software artifact, and rigorous, because it is based on precise mathematical concepts. We use this framework to propose definitions of several important measurement concepts (size, length, complexity, cohesion, coupling). It is not intended to be complete or fully objective; other frameworks could have been proposed and different choices could have been made. However, we believe that the formalisms and properties we introduce are convenient and intuitive. This framework contributes constructively to a firmer theoretical ground of software measurement.
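
    To illustrate the flavor of such property-based definitions, one axiom commonly required of a size measure in this line of work (a paraphrase, not the paper's exact formulation) is non-negativity plus additivity over disjoint modules:

        \[ \mathrm{Size}(S) \ge 0, \qquad \mathrm{Size}(S_1 \cup S_2) = \mathrm{Size}(S_1) + \mathrm{Size}(S_2) \quad \text{if } S_1 \cap S_2 = \emptyset, \]

    whereas a complexity measure is not required to be additive, which is one way such a framework keeps the two concepts distinct.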

  16. A Linear Algebra Framework for Static High Performance Fortran Code Distribution

    Directory of Open Access Journals (Sweden)

    Corinne Ancourt

    1997-01-01

    Full Text Available High Performance Fortran (HPF) was developed to support data parallel programming for single-instruction multiple-data (SIMD) and multiple-instruction multiple-data (MIMD) machines with distributed memory. The programmer is provided a familiar uniform logical address space and specifies the data distribution by directives. The compiler then exploits these directives to allocate arrays in the local memories, to assign computations to elementary processors, and to migrate data between processors when required. We show here that linear algebra is a powerful framework to encode HPF directives and to synthesize distributed code with space-efficient array allocation, tight loop bounds, and vectorized communications for INDEPENDENT loops. The generated code includes traditional optimizations such as guard elimination, message vectorization and aggregation, and overlap analysis. The systematic use of an affine framework makes it possible to prove the compilation scheme correct.
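
    For instance, HPF's BLOCK and CYCLIC distributions are affine enough to be encoded this way; the familiar block-cyclic owner mapping (a textbook formula, not taken from the paper) is

        \[ \mathrm{owner}(i) \;=\; \left\lfloor \frac{i}{b} \right\rfloor \bmod p \]

    for array index $i$, block size $b$, and $p$ processors, with pure BLOCK as $b = \lceil n/p \rceil$ and pure CYCLIC as $b = 1$.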

  17. Mapserver – Information Flow Management Software for The Border Guard Distributed Data Exchange System

    Directory of Open Access Journals (Sweden)

    Blok Marek

    2016-09-01

    Full Text Available In this paper the architecture of the software designed for the management of position and identification data of floating and flying objects in maritime areas controlled by the Polish Border Guard is presented. The software was designed for managing information stored in a distributed system with two variants of the software: one for a mobile device installed on a vessel, an airplane or a car, and a second for a central server. The details of the implementation of all functionalities of the MapServer in both the mobile and central versions are briefly presented on the basis of information flow diagrams.

  18. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae

    2008-01-01

    Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics... of abstract system descriptions. Usually, in mechatronic systems, design proceeds by iterating model construction, model analysis, and model transformation. When constructing a MATLAB/Simulink model, plant and controller behavior is simulated using graphical blocks to represent mathematical and logical constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework, developed by the software engineering group of the Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has...

  19. CMS software and computing for LHC Run 2

    CERN Document Server

    INSPIRE-00067576

    2016-11-09

    The CMS offline software and computing system has successfully met the challenge of LHC Run 2. In this presentation, we will discuss how the entire system was improved in anticipation of the increased trigger output rate, the increased rate of pileup interactions, and the evolution of computing technology. The primary goals behind these changes were to increase the flexibility of computing facilities wherever possible, to increase our operational efficiency, and to decrease the computing resources needed to accomplish the primary offline computing workflows. These changes have resulted in a new approach to distributed computing in CMS for Run 2 and for the future, as the LHC luminosity should continue to increase. We will discuss changes and plans to our data federation, which was one of the key changes towards a more flexible computing model for Run 2. Our software framework and algorithms also underwent significant changes. We will summarize our experience with a new multi-threaded framework as deployed on ou...

  20. First-year experience with the ATLAS online monitoring framework

    International Nuclear Information System (INIS)

    Corso-Radu, A

    2010-01-01

    ATLAS is one of the four experiments at the Large Hadron Collider (LHC) at CERN, which was put into operation this year. The challenging experimental environment and the extreme detector complexity required the development of a highly scalable distributed monitoring framework, which is currently being used to monitor the quality of the data being taken as well as the operational conditions of the hardware and software elements of the detector, trigger and data acquisition systems. At the moment the ATLAS Trigger/DAQ system is distributed over more than 1000 computers, about one third of the final ATLAS size. At every minute of an ATLAS data taking session the monitoring framework serves several thousand physics events to monitoring data analysis applications, handles more than 4 million histogram updates coming from more than 4 thousand applications, executes 10 thousand advanced data quality checks for a subset of those histograms, and displays histograms and the results of these checks on several dozen monitors installed in the main and satellite ATLAS control rooms. This note presents an overview of the online monitoring software framework and describes the experience gained during an extensive commissioning period as well as in the first phase of LHC beam in September 2008. Performance results obtained on the current ATLAS DAQ system will also be presented, showing that the performance of the framework is adequate for the final ATLAS system.

  1. The design of a real-time software system for the distributed control of power station plant

    International Nuclear Information System (INIS)

    Maples, G.C.

    1980-01-01

    As the application of computers to the control of generating plants widens, the problems of resourcing several individual projects over their life cycle can become formidable. This paper indicates the factors relevant to containing the resource requirements associated with software, and outlines the benefits of adopting a standard machine-independent software system which enables engineers rather than computer specialists to develop programs for specific projects. The design objectives which have led to the current development within the C.E.G.B. of CUTLASS (Computer Users Technical Languages and Applications Software System) are then considered. CUTLASS is intended to be a standard software system applicable to the majority of future on-line computing projects in the area of generation and is appropriate to stand-alone schemes or distributed schemes having a host/target configuration. The CUTLASS system software provides the necessary environment in which to develop, test, and run the applications software, the latter being created by the user by means of a set of engineer-orientated languages. The paper describes the various facilities within CUTLASS, i.e. those considered essential to meet the requirements of future process control applications, concentrating on the system software relating to the executive functions and the organisation of global data and communications within distributed systems. The salient features of the engineer-orientated language sets are also discussed. (auth)

  2. Visual querying and analysis of large software repositories

    NARCIS (Netherlands)

    Voinea, Lucian; Telea, Alexandru

    We present a software framework for mining software repositories. Our extensible framework enables the integration of data extraction from repositories with data analysis and interactive visualization. We demonstrate the applicability of the framework by presenting several case studies performed on

  3. Delivering LHC software to HPC compute elements

    CERN Document Server

    Blomer, Jakob; Hardi, Nikola; Popescu, Radu

    2017-01-01

    In recent years, there has been growing interest in improving the utilization of supercomputers by running applications of experiments at the Large Hadron Collider (LHC) at CERN when idle cores cannot be assigned to traditional HPC jobs. At the same time, the upcoming LHC machine and detector upgrades will produce some 60 times higher data rates and challenge LHC experiments to use so far untapped compute resources. LHC experiment applications are tailored to run on high-throughput computing resources and have a different anatomy than HPC applications. LHC applications comprise a core framework that allows hundreds of researchers to plug in their specific algorithms. The software stacks easily accumulate to many gigabytes for a single release. New releases are often produced on a daily basis. To facilitate the distribution of these software stacks to worldwide distributed computing resources, LHC experiments use a purpose-built, global, POSIX file system, the CernVM File System. CernVM-FS pre-processes dat...

  4. Multiscale, multiphysics beam dynamics framework design and applications

    International Nuclear Information System (INIS)

    Amundson, J F; Spentzouris, P; Dechow, D; Stoltz, P; McInnes, L; Norris, B

    2008-01-01

    Modern beam dynamics simulations require nontrivial implementations of multiple physics models. We discuss how component framework design, in combination with the Common Component Architecture's component model and implementation, eases the process of incorporating existing state-of-the-art models together with newly developed models. We discuss current developments in componentized beam dynamics software, emphasizing design issues and distribution issues.

  5. Framework for implementation of maintenance management in distribution network service providers

    International Nuclear Information System (INIS)

    Gomez Fernandez, Juan Francisco; Crespo Marquez, Adolfo

    2009-01-01

    Distribution network service providers (DNSP) are companies dealing with network infrastructure, such as the distribution of gas, water, electricity or telecommunications, and they require the development of special maintenance management (MM) capabilities in order to satisfy the needs of their customers. In this sector, maintenance management information systems are essential to ensure control, gain knowledge and improve decision making. The aim of this paper is the study of the specific characteristics of maintenance in these types of companies. We investigate existing standards and best management practices with the scope of defining a suitable ad-hoc framework for the implementation of maintenance management. The conclusion of the work supports the proposal of a framework consisting of a process framework based on a structure of systems, integrated for the continuous improvement of maintenance activities. The paper offers a very practical approach to the problem, as the result of more than 10 years of professional experience within this sector, and is specially focused on network maintenance.

  6. MAUS: MICE Analysis User Software

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The Muon Ionization Cooling Experiment (MICE) has developed the MICE Analysis User Software (MAUS) to simulate and analyse experimental data. It serves as the primary codebase for the experiment, providing online data quality checks and offline batch simulation and reconstruction. The code is structured in a Map-Reduce framework to allow parallelization, whether on a personal machine or in the control room. Various software engineering practices from industry are also used to ensure correct and maintainable physics code; these include unit, functional and integration tests, continuous integration and load testing, code reviews, and distributed version control systems. Lastly, there are various small design decisions, like using JSON as the data structure, using SWIG to allow developers to write components in either Python or C++, or using the Python-based SCons build system, that may be of interest to other experiments.
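
    A minimal sketch of the map-reduce shape described above (generic; written in Java for consistency with the other sketches here, whereas MAUS itself uses Python and C++):

        import java.util.List;
        import java.util.function.BinaryOperator;
        import java.util.function.Function;

        /** Map each event independently (parallelizable), then fold the results together. */
        final class MapReduce {
            static <E, R> R run(List<E> events, Function<E, R> map,
                                R identity, BinaryOperator<R> reduce) {
                // reduce must be associative for parallel execution to be correct
                return events.parallelStream().map(map).reduce(identity, reduce);
            }
        }

    The independence of the map step is what lets the same code run serially on a laptop or in parallel in the control room.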

  7. Development of requirements tracking and verification system for the software design of distributed control system

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Chul Hwan; Kim, Jang Yeol; Kim, Jung Tack; Lee, Jang Soo; Ham, Chang Shik [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1999-12-31

    In this paper a prototype of the Requirements Tracking and Verification System (RTVS) for a Distributed Control System was implemented and tested. The RTVS is a software design and verification tool. The main functions required of the RTVS are the management, tracking and verification of the software requirements listed in the documentation of the DCS. An analysis of the DCS software design procedures and interfaces with documents was performed to define the user of the RTVS, and the design requirements for the RTVS were developed. 4 refs., 3 figs. (Author)

  9. A Distributed Framework for Real Time Path Planning in Practical Multi-agent Systems

    KAUST Repository

    Abdelkader, Mohamed; Jaleel, Hassan; Shamma, Jeff S.

    2017-01-01

    We present a framework for distributed, energy efficient, and real time implementable algorithms for path planning in multi-agent systems. The proposed framework is presented in the context of a motivating example of capture the flag, which is an adversarial game played between two teams of autonomous agents called defenders and attackers.

  10. NASA JPL Distributed Systems Technology (DST) Object-Oriented Component Approach for Software Inter-Operability and Reuse

    Science.gov (United States)

    Hall, Laverne; Hung, Chaw-Kwei; Lin, Imin

    2000-01-01

    The purpose of this paper is to provide a description of the NASA JPL Distributed Systems Technology (DST) Section's object-oriented component approach to open, interoperable systems software development and software reuse. It will address what is meant by the term object component software, give an overview of the component-based development approach and how it relates to infrastructure support of software architectures and promotes reuse, enumerate the benefits of this approach, and give examples of application prototypes demonstrating its usage and advantages. Utilization of the object-oriented component technology approach for system development and software reuse will apply to several areas within JPL, and possibly across other NASA Centers.

  11. A Framework of the Use of Information in Software Testing

    Science.gov (United States)

    Kaveh, Payman

    2010-01-01

    With the increasing role that software systems play in our daily lives, software quality has become extremely important. Software quality is impacted by the efficiency of the software testing process. There are a growing number of software testing methodologies, models, and initiatives to satisfy the need to improve software quality. The main…

  12. Interface-based software integration

    Directory of Open Access Journals (Sweden)

    Aziz Ahmad Rais

    2016-07-01

    Enterprise architecture frameworks define the goals of enterprise architecture in order to make business processes and IT operations more effective, and to reduce the risk of future investments. These enterprise architecture frameworks offer different architecture development methods that help in building enterprise architecture. In practice, the larger organizations become, the larger their enterprise architecture and IT become. This leads to an increasingly complex system of enterprise architecture development and maintenance. Application software architecture is one type of architecture that, along with business architecture, data architecture and technology architecture, composes enterprise architecture. From the perspective of integration, enterprise architecture can be considered a system of interaction between multiple examples of application software. Therefore, effective software integration is a very important basis for the future success of the enterprise architecture in question. This article will provide interface-based integration practice in order to help simplify the process of building such a software integration system. The main goal of interface-based software integration is to solve problems that may arise with software integration requirements and developing software integration architecture.
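
    A minimal sketch of the interface-based idea: client code depends only on an agreed interface, and each application is integrated behind an adapter. All names here are hypothetical:

        from abc import ABC, abstractmethod

        class CustomerDirectory(ABC):
            """The agreed integration interface; clients depend only on this."""
            @abstractmethod
            def find_email(self, customer_id: str) -> str:
                ...

        class LegacyCrmAdapter(CustomerDirectory):
            """Adapter hiding one application's native API behind the interface."""
            def __init__(self, crm_records: dict):
                self._records = crm_records  # stands in for a real CRM client

            def find_email(self, customer_id: str) -> str:
                return self._records[customer_id]["mail_addr"]

        class RestServiceAdapter(CustomerDirectory):
            """A second system integrated behind the same interface."""
            def __init__(self, fetch):
                self._fetch = fetch  # stands in for an HTTP call

            def find_email(self, customer_id: str) -> str:
                return self._fetch(customer_id)["email"]

        def send_invoice(directory: CustomerDirectory, customer_id: str) -> str:
            # The client never sees which concrete system answers.
            return f"invoice sent to {directory.find_email(customer_id)}"

        print(send_invoice(LegacyCrmAdapter({"c1": {"mail_addr": "a@x.org"}}), "c1"))
        print(send_invoice(RestServiceAdapter(lambda cid: {"email": "b@y.org"}), "c2"))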

  13. Object-oriented data analysis framework for neutron scattering experiments

    International Nuclear Information System (INIS)

    Suzuki, Jiro; Nakatani, Takeshi; Ohhara, Takashi; Inamura, Yasuhiro; Yonemura, Masao; Morishima, Takahiro; Aoyagi, Tetsuo; Manabe, Atsushi; Otomo, Toshiya

    2009-01-01

    The Materials and Life Science Facility (MLF) of the Japan Proton Accelerator Research Complex (J-PARC) is one of the facilities that provide the highest intensity pulsed neutron and muon beams. The MLF computing environment design group organizes the computing environments of MLF and its instruments. It is important that the computing environment is provided by the facility side, because meta-data formats, analysis functions and also the data analysis strategy should be shared among the many instruments in MLF. The C++ class library named Manyo-lib is a framework software for developing data reduction and analysis software. The framework is composed of the class library for data reduction and analysis operators, network distributed data processing modules and data containers. The class library is wrapped by a Python interface created by SWIG. All classes of the framework can be called from the Python language, and Manyo-lib cooperates with the data acquisition and data-visualization components through the MLF-platform, a user interface unified in MLF, which works on the Python language. Raw data in the event-data format obtained by the data acquisition systems are converted into histogram-format data on Manyo-lib with high performance, and data reduction and analysis are performed with user-application software developed based on Manyo-lib. We enforce standardization of data containers with Manyo-lib, and many additional fundamental data containers in Manyo-lib have been designed and developed. Experimental and analysis data in the data containers can be converted into NeXus files. Manyo-lib is the standard framework for developing analysis software in MLF, and prototypes of data-analysis software for each instrument are being developed by the instrument teams.
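
    The event-to-histogram reduction described above can be sketched generically as follows; this is an illustration of the idea, not Manyo-lib's actual classes:

        class Histogram:
            """A minimal data container: fixed bins plus bin-wise contents."""
            def __init__(self, nbins, lo, hi):
                self.nbins, self.lo, self.hi = nbins, lo, hi
                self.counts = [0] * nbins

            def fill(self, value):
                if self.lo <= value < self.hi:
                    width = (self.hi - self.lo) / self.nbins
                    self.counts[int((value - self.lo) / width)] += 1

        # Event-format data (e.g. neutron time-of-flight values, microseconds).
        events = [105.2, 230.7, 231.1, 980.4, 450.0]

        # Reduction: convert the event stream into a histogram container.
        tof_hist = Histogram(nbins=10, lo=0.0, hi=1000.0)
        for tof in events:
            tof_hist.fill(tof)
        print(tof_hist.counts)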

  14. An alternative model to distribute VO software to WLCG sites based on CernVM-FS: a prototype at PIC Tier1

    International Nuclear Information System (INIS)

    Lanciotti, E; Merino, G; Blomer, J; Bria, A

    2011-01-01

    In a distributed computing model such as WLCG, experiment-specific application software has to be efficiently distributed to every site of the Grid. Application software is currently installed in a shared area of the site, visible to all Worker Nodes (WNs) of the site through some protocol (NFS, AFS or other). The software is installed at the site by jobs which run on a privileged node of the computing farm where the shared area is mounted in write mode. This model presents several drawbacks which cause a non-negligible rate of job failure. An alternative model for software distribution based on the CERN Virtual Machine File System (CernVM-FS) has been tried at PIC, the Spanish Tier1 site of WLCG. The test bed used and the results are presented in this paper.

  15. Agent-Based Data Integration Framework

    Directory of Open Access Journals (Sweden)

    Łukasz Faber

    2014-01-01

    Combining data from diverse, heterogeneous sources while facilitating unified access to it is an important (albeit difficult) task. There are various possibilities for performing it. In this publication, we propose and describe an agent-based framework dedicated to acquiring and processing distributed, heterogeneous data collected from diverse sources (e.g., the Internet, external software, relational and document databases). Using this multi-agent-based approach in the aspects of the general architecture (the organization and management of the framework), we create a proof-of-concept implementation. The approach is presented using a sample scenario in which the system is used to search for personal and professional profiles of scientists.
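
    A toy sketch of the multi-agent pattern: each agent wraps one heterogeneous source, and a coordinator merges their answers. All names and sources are invented for illustration:

        from abc import ABC, abstractmethod

        class Agent(ABC):
            """Each agent knows how to query exactly one kind of source."""
            @abstractmethod
            def search(self, name: str) -> list[dict]:
                ...

        class WebAgent(Agent):
            def search(self, name):
                # Stand-in for scraping or calling a web API.
                return [{"source": "web", "name": name, "field": "physics"}]

        class DatabaseAgent(Agent):
            def __init__(self, rows):
                self._rows = rows  # stands in for a relational/document DB

            def search(self, name):
                return [r | {"source": "db"} for r in self._rows if r["name"] == name]

        def unified_search(agents: list[Agent], name: str) -> list[dict]:
            """Coordinator: fan the query out and merge heterogeneous results."""
            return [hit for agent in agents for hit in agent.search(name)]

        agents = [WebAgent(), DatabaseAgent([{"name": "A. Turing", "papers": 42}])]
        print(unified_search(agents, "A. Turing"))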

  16. A Distributed Python HPC Framework: ODIN, PyTrilinos, & Seamless

    Energy Technology Data Exchange (ETDEWEB)

    Grant, Robert [Enthought, Inc., Austin, TX (United States)

    2015-11-23

    Under this grant, three significant software packages were developed or improved, all with the goal of improving the ease-of-use of HPC libraries. The first component is a Python package, named DistArray (originally named Odin), that provides a high-level interface to distributed array computing. This interface is based on the popular and widely used NumPy package and is integrated with the IPython project for enhanced interactive parallel distributed computing. The second Python package is the Distributed Array Protocol (DAP) that enables separate distributed array libraries to share arrays efficiently without copying or sending messages. If a distributed array library supports the DAP, it is then automatically able to communicate with any other library that also supports the protocol. This protocol allows DistArray to communicate with the Trilinos library via PyTrilinos, which was also enhanced during this project. A third package, PyTrilinos, was extended to support distributed structured arrays (in addition to the unstructured arrays of its original design), allow more flexible distributed arrays (i.e., the restriction to double precision data was lifted), and implement the DAP. DAP support includes both exporting the protocol so that external packages can use distributed Trilinos data structures, and importing the protocol so that PyTrilinos can work with distributed data from external packages.
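
    The protocol idea, sharing a distributed array without copying by exporting a buffer plus distribution metadata, can be sketched generically; the actual Distributed Array Protocol defines its own method name and keys, so the dictionary below is only illustrative:

        import numpy as np

        class LocalBlock:
            """One process's block of a 1-D array distributed over several ranks."""
            def __init__(self, data, global_size, offset):
                self.data, self.global_size, self.offset = data, global_size, offset

            def __sharedarray__(self):
                # Export: a view (no copy) plus distribution metadata, in the
                # spirit of the Distributed Array Protocol (keys illustrative).
                return {"buffer": self.data, "global_size": self.global_size,
                        "offset": self.offset}

        def consume(exporter):
            """Any library that understands the protocol can use the block."""
            desc = exporter.__sharedarray__()
            block = np.asarray(desc["buffer"])   # wraps the buffer, no copy
            return desc["offset"], block.sum()

        block = LocalBlock(np.arange(5.0), global_size=20, offset=5)
        print(consume(block))  # -> (5, 10.0)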

  17. Harmonic Domain Modeling of a Distribution System Using the DIgSILENT PowerFactory Software

    OpenAIRE

    Wasilewski, J.; Wiechowski, Wojciech Tomasz; Bak, Claus Leth

    2005-01-01

    The first part of this paper presents the comparison between two models of a distribution system created in the computer simulation software PowerFactory (PF). Model A is an existing simplified equivalent model of the distribution system used by the Transmission System Operator (TSO) Eltra for balanced load-flow calculation and stability studies. Model B is an accurate model of the distribution system created on the basis of the detailed data of the investigated network and is used as a reference. The har...

  18. AgesGalore-A software program for evaluating spatially resolved luminescence data

    International Nuclear Information System (INIS)

    Greilich, S.; Harney, H.-L.; Woda, C.; Wagner, G.A.

    2006-01-01

    Low-light luminescence is usually recorded by photomultiplier tubes (PMTs) yielding integrated photon-number data. Highly sensitive CCD (charge-coupled device) detectors allow for the spatially resolved recording of luminescence. The resulting two-dimensional images require suitable software for data processing. We present a recently developed software program specially designed for equivalent-dose evaluation in the framework of optically stimulated luminescence (OSL) dating. The software is capable of appropriate CCD data handling, parameter estimation using a Bayesian approach, and the pixel-wise fitting of functions for time and dose dependencies to the luminescence signal. The results of the fitting procedure and the equivalent-dose evaluation can be presented and analyzed both as spatial and as frequency distributions.

  19. Frameworks for user - developer interactions in a software ...

    African Journals Online (AJOL)

    The dependence of today's society on information and communications technology has made it necessary for software project managers to strive for continuous process improvement. A major challenge faced by most software project managers, especially in developing countries, however, centers on effective ...

  20. Framework for implementing product portfolio management in software business

    NARCIS (Netherlands)

    Jagroep, Erik; Van De Weerd, Inge; Brinkkemper, Sjaak; Dobbe, Ton

    2014-01-01

    Whether a software product company takes up a project depends on the strategic decisions that are made with regard to an organization's products. A software project needs to fit strategic goals and enable an organization to realize a vision through its software products. Making decisions on a

  1. BioNet Digital Communications Framework

    Science.gov (United States)

    Gifford, Kevin; Kuzminsky, Sebastian; Williams, Shea

    2010-01-01

    BioNet v2 is a peer-to-peer middleware that enables digital communication devices to talk to each other. It provides a software development framework, standardized application, network-transparent device integration services, a flexible messaging model, and network communications for distributed applications. BioNet is an implementation of the Constellation Program Command, Control, Communications and Information (C3I) Interoperability specification, given in CxP 70022-01. The system architecture provides the necessary infrastructure for the integration of heterogeneous wired and wireless sensing and control devices into a unified data system with a standardized application interface, providing plug-and-play operation for hardware and software systems. BioNet v2 features a naming schema for mobility and coarse-grained localization information, data normalization within a network-transparent device driver framework, enabling of network communications to non-IP devices, and fine-grained application control of data subscription bandwidth usage. BioNet directly integrates Disruption Tolerant Networking (DTN) as a communications technology, enabling networked communications with assets that are only intermittently connected, including orbiting relay satellites and planetary rover vehicles.

  2. Towards Archetypes-Based Software Development

    Science.gov (United States)

    Piho, Gunnar; Roost, Mart; Perkins, David; Tepandi, Jaak

    We present a framework for the archetypes based engineering of domains, requirements and software (Archetypes-Based Software Development, ABD). An archetype is defined as a primordial object that occurs consistently and universally in business domains and in business software systems. An archetype pattern is a collaboration of archetypes. Archetypes and archetype patterns are used to capture conceptual information into domain specific models that are utilized by ABD. The focus of ABD is on software factories - family-based development artefacts (domain specific languages, patterns, frameworks, tools, micro processes, and others) that can be used to build the family members. We demonstrate the usage of ABD for developing laboratory information management system (LIMS) software for the Clinical and Biomedical Proteomics Group, at the Leeds Institute of Molecular Medicine, University of Leeds.

  3. A framework for business oriented software quality approaches

    NARCIS (Netherlands)

    Trienekens, J.J.M.; Veenendaal, van E.P.W.M.; McMullan, J.

    1997-01-01

    The importance of software for business systems continues to grow. Software products play an increasingly important role in industry and society. The need for delivering "quality products" and "quality services" has become as relevant for companies in the field of software development as for any

  4. Development of a distributed air pollutant dry deposition modeling framework

    International Nuclear Information System (INIS)

    Hirabayashi, Satoshi; Kroll, Charles N.; Nowak, David J.

    2012-01-01

    A distributed air pollutant dry deposition modeling system was developed with a geographic information system (GIS) to enhance the functionality of i-Tree Eco (i-Tree, 2011). With the developed system, temperature, leaf area index (LAI) and air pollutant concentration in a spatially distributed form can be estimated, and based on these and other input variables, dry deposition of carbon monoxide (CO), nitrogen dioxide (NO 2 ), sulfur dioxide (SO 2 ), and particulate matter less than 10 microns (PM10) to trees can be spatially quantified. Employing nationally available road network, traffic volume, air pollutant emission/measurement and meteorological data, the developed system provides a framework for the U.S. city managers to identify spatial patterns of urban forest and locate potential areas for future urban forest planting and protection to improve air quality. To exhibit the usability of the framework, a case study was performed for July and August of 2005 in Baltimore, MD. - Highlights: ► A distributed air pollutant dry deposition modeling system was developed. ► The developed system enhances the functionality of i-Tree Eco. ► The developed system employs nationally available input datasets. ► The developed system is transferable to any U.S. city. ► Future planting and protection spots were visually identified in a case study. - Employing nationally available datasets and a GIS, this study will provide urban forest managers in U.S. cities a framework to quantify and visualize urban forest structure and its air pollution removal effect.
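
    The per-cell dry deposition calculation behind such a system can be sketched with the standard flux relation F = Vd * C, scaled here by leaf area index; the values and the simple scaling are illustrative only, not the i-Tree Eco formulation:

        # Spatially distributed dry deposition sketch: flux F = Vd * C per cell,
        # scaled by leaf area index (simplified "big-leaf" illustration only).

        def dry_deposition(conc_ug_m3, vd_m_s, lai, seconds):
            """Pollutant removal (ug/m2) for one grid cell over a period."""
            return vd_m_s * conc_ug_m3 * lai * seconds

        # Toy 2x2 grids of NO2 concentration (ug/m3) and LAI for a city.
        conc = [[40.0, 55.0], [30.0, 25.0]]
        lai = [[1.2, 0.4], [2.5, 3.1]]
        VD_NO2 = 0.002  # deposition velocity in m/s (illustrative value)

        removal = [[dry_deposition(conc[i][j], VD_NO2, lai[i][j], 3600)
                    for j in range(2)] for i in range(2)]
        print(removal)  # ug/m2 removed per cell in one hour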

  5. Center for Adaptive Optics | Software

    Science.gov (United States)

    The Center for Adaptive Optics acts as a clearinghouse for distributing software to institutes; it gives specialists in adaptive optics a place to distribute their software. All software is shared on an "as-is" basis and users should consult the software authors with any

  6. Upgrading the Interface and Developer Tools of the Trigger Supervisor Software Framework of the CMS experiment at CERN

    CERN Document Server

    AUTHOR|(CDS)2097518; Karsmakers, Peter

    The Compact Muon Solenoid (CMS) Trigger Supervisor (TS) is a software framework that has been designed to handle the CMS Level-1 trigger setup, configuration and monitoring during data taking, as well as all communications with the main run control of CMS. The interface consists of a web-based GUI rendered by a back-end C++ framework (AjaXell) and a front-end JavaScript framework (Dojo). These provide developers with the tools they need to write their own custom control panels. However, currently there is much frustration with this framework, given the age of the Dojo library and the various hacks needed to implement modern use cases. The task at hand is to renew this library and its developer tools, updating it to use the newest standards and technologies, while maintaining full compatibility with legacy code. This document describes the requirements, development process, and changes to this framework that were included in the upgrade from v2.x to v3.x. Keywords: CERN, CMS, L1 Trigger, C++, Polymer, Web Com...

  7. SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology.

    Science.gov (United States)

    Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E; Troein, Carl; Millar, Andrew J; Goryanin, Igor; Gilmore, Stephen

    2013-03-01

    Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI's use of standard data formats. All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials.
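
    The core of such a parameter-fitting service can be sketched as an embarrassingly parallel scan of candidate parameters against experimental data; the model and data below are invented for illustration, not SBSI's actual algorithms:

        import math
        from multiprocessing import Pool

        DATA = [(0.0, 1.00), (1.0, 0.61), (2.0, 0.36), (3.0, 0.22)]  # (t, y)

        def sum_sq_error(k):
            """Objective: squared residuals of a decay model y = exp(-k t)."""
            return k, sum((y - math.exp(-k * t)) ** 2 for t, y in DATA)

        if __name__ == "__main__":
            candidates = [i * 0.01 for i in range(1, 200)]
            with Pool() as pool:        # parallel evaluation of candidates
                scored = pool.map(sum_sq_error, candidates)
            best_k, err = min(scored, key=lambda kv: kv[1])
            print(f"best k = {best_k:.2f}, error = {err:.4f}")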

  8. StakeMeter: value-based stakeholder identification and quantification framework for value-based software systems.

    Science.gov (United States)

    Babar, Muhammad Imran; Ghazali, Masitah; Jawawi, Dayang N A; Bin Zaheer, Kashif

    2015-01-01

    Value-based requirements engineering plays a vital role in the development of value-based software (VBS). Stakeholders are the key players in the requirements engineering process, and the selection of critical stakeholders for the VBS systems is highly desirable. Based on the stakeholder requirements, the innovative or value-based idea is realized. The quality of the VBS system is associated with the concrete set of valuable requirements, and the valuable requirements can only be obtained if all the relevant valuable stakeholders participate in the requirements elicitation phase. The existing value-based approaches focus on the design of the VBS systems. However, the focus on the valuable stakeholders and requirements is inadequate. The current stakeholder identification and quantification (SIQ) approaches are neither state-of-the-art nor systematic for the VBS systems. The existing approaches are time-consuming, complex and inconsistent which makes the initiation process difficult. Moreover, the main motivation of this research is that the existing SIQ approaches do not provide the low level implementation details for SIQ initiation and stakeholder metrics for quantification. Hence, keeping in view the existing SIQ problems, this research contributes in the form of a new SIQ framework called 'StakeMeter'. The StakeMeter framework is verified and validated through case studies. The proposed framework provides low-level implementation guidelines, attributes, metrics, quantification criteria and application procedure as compared to the other methods. The proposed framework solves the issues of stakeholder quantification or prioritization, higher time consumption, complexity, and process initiation. The framework helps in the selection of highly critical stakeholders for the VBS systems with less judgmental error.
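
    The quantification step can be illustrated with a simple weighted-sum metric over stakeholder attributes; the attributes and weights below are invented for illustration and are not StakeMeter's actual metrics:

        # Weighted-sum stakeholder quantification sketch (illustrative metrics).
        WEIGHTS = {"influence": 0.4, "interest": 0.3, "domain_knowledge": 0.3}

        stakeholders = {
            "product_owner": {"influence": 9, "interest": 9, "domain_knowledge": 7},
            "end_user":      {"influence": 4, "interest": 8, "domain_knowledge": 6},
            "regulator":     {"influence": 8, "interest": 3, "domain_knowledge": 5},
        }

        def score(attrs):
            """Aggregate attribute ratings (0-10) into one stakeholder value."""
            return sum(WEIGHTS[a] * v for a, v in attrs.items())

        ranked = sorted(stakeholders.items(), key=lambda kv: score(kv[1]),
                        reverse=True)
        for name, attrs in ranked:
            print(f"{name}: {score(attrs):.1f}")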

  11. Frameworks in CS1

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Caspersen, Michael Edelgaard

    2002-01-01

    In this paper we argue that introducing object-oriented frameworks as a subject already in the CS1 curriculum is important if we are to train the programmers of tomorrow to become just as much software reusers as software producers. We present a simple, graphical framework that we have successfully ... point for introducing graphical user interface frameworks such as Java Swing and AWT, as the students are not overwhelmed by all the details of such frameworks right away but given a conceptual road-map and practical experience that allow them to cope with the complexity.

  13. Development, analysis, and evaluation of a commercial software framework for the study of Extremely Low Probability of Rupture (xLPR) events at nuclear power plants.

    Energy Technology Data Exchange (ETDEWEB)

    Kalinich, Donald A.; Helton, Jon Craig; Sallaberry, Cedric M.; Mattie, Patrick D.

    2010-12-01

    Sandia National Laboratories (SNL) participated in a Pilot Study to examine the process and requirements to create a software system to assess the extremely low probability of pipe rupture (xLPR) in nuclear power plants. This project was tasked to develop a prototype xLPR model leveraging existing fracture mechanics models and codes coupled with a commercial software framework to determine the framework, model, and architecture requirements appropriate for building a modular-based code. The xLPR pilot study was conducted to demonstrate the feasibility of the proposed developmental process and framework for a probabilistic code to address degradation mechanisms in piping system safety assessments. The pilot study includes a demonstration problem to assess the probability of rupture of DM pressurizer surge nozzle welds degraded by primary water stress-corrosion cracking (PWSCC). The pilot study was designed to define and develop the framework and model; then construct a prototype software system based on the proposed model. The second phase of the project will be a longer term program and code development effort focusing on the generic, primary piping integrity issues (xLPR code). The results and recommendations presented in this report will be used to help the U.S. Nuclear Regulatory Commission (NRC) define the requirements for the longer term program.

  14. Software Geometry in Simulations

    Science.gov (United States)

    Alion, Tyler; Viren, Brett; Junk, Tom

    2015-04-01

    The Long Baseline Neutrino Experiment (LBNE) involves many detectors. The experiment's near detector (ND) facility may ultimately involve several detectors. The far detector (FD) will be significantly larger than any other Liquid Argon (LAr) detector yet constructed; many prototype detectors are being constructed and studied to motivate a plethora of proposed FD designs. Whether it be a constructed prototype or a proposed ND/FD design, every design must be simulated and analyzed. This presents a considerable challenge to LBNE software experts; each detector geometry must be described to the simulation software in an efficient way which allows multiple authors to easily collaborate. Furthermore, different geometry versions must be tracked throughout their use. We present a framework called General Geometry Description (GGD), written and developed by LBNE software collaborators for managing software to generate geometries. Though GGD is flexible enough to be used by any experiment working with detectors, we present its first use in generating Geometry Description Markup Language (GDML) files to interface with LArSoft, a framework of detector simulations, event reconstruction, and data analyses written for all LAr technology users at Fermilab. Brett Viren is the author of the framework discussed here, the General Geometry Description (GGD).
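
    Generating a geometry description programmatically can be sketched with a minimal GDML-like XML emitter; the skeleton below follows GDML's general solids/structure/setup layout but omits most required detail (materials, units, defines), so treat it as illustrative rather than a valid GDML file:

        import xml.etree.ElementTree as ET

        # Emit a minimal GDML-like geometry description (schematic only).
        gdml = ET.Element("gdml")
        solids = ET.SubElement(gdml, "solids")
        ET.SubElement(solids, "box", name="cryostat",
                      x="1000", y="1000", z="2000", lunit="cm")

        structure = ET.SubElement(gdml, "structure")
        vol = ET.SubElement(structure, "volume", name="volCryostat")
        ET.SubElement(vol, "materialref", ref="LAr")
        ET.SubElement(vol, "solidref", ref="cryostat")

        setup = ET.SubElement(gdml, "setup", name="Default", version="1.0")
        ET.SubElement(setup, "world", ref="volCryostat")

        print(ET.tostring(gdml, encoding="unicode"))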

  15. Online Data Monitoring Framework Based on Histogram Packaging in Network Distributed Data Acquisition Systems

    International Nuclear Information System (INIS)

    Konno, T; Ishitsuka, M; Kuze, M; Cabarera, A; Sakamoto, Y

    2011-01-01

    'Online monitor framework' is a new general software framework for online data monitoring, which provides a way to collect information from online systems, including data acquisition, and to display it to shifters far from the experimental sites. 'Monitor Server', a core system in this framework, gathers the monitoring information from the online subsystems; the information is handled as collections of histograms named 'Histogram Packages'. The Monitor Server broadcasts the histogram packages to 'Monitor Viewers', the graphical user interfaces of the framework. We developed two types of viewers with different technologies: Java and web browser. We adopted XML-based files for the configuration of GUI components on the windows and graphical objects on the canvases. A Monitor Viewer creates its GUIs automatically from the configuration files. This monitoring framework has been developed for the Double Chooz reactor neutrino oscillation experiment in France, but can be extended for general application in other experiments. This document reports the structure of the online monitor framework with some examples from the adaptation to the Double Chooz experiment.
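
    The histogram-package idea, bundling named histograms and shipping them from the monitor server to the viewers, can be sketched as follows; JSON serialization and all names are chosen here for brevity and are not the framework's actual wire format:

        import json

        # A "histogram package": named histograms bundled for one broadcast.
        def make_package(run_number, histograms):
            return {"run": run_number,
                    "histograms": [{"name": name, "edges": edges, "counts": counts}
                                   for name, (edges, counts) in histograms.items()]}

        hists = {
            "hit_charge": ([0, 10, 20, 30], [124, 380, 57]),
            "trigger_rate": ([0, 60, 120], [911, 887]),
        }
        package = make_package(run_number=4711, histograms=hists)

        payload = json.dumps(package).encode()  # what the server would broadcast
        print(len(payload), "bytes to send to each Monitor Viewer")
        print(json.loads(payload)["histograms"][0]["name"])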

  16. Hybrid molecular–continuum methods: From prototypes to coupling software

    KAUST Repository

    Neumann, Philipp

    2014-02-01

    In this contribution, we review software requirements in hybrid molecular-continuum simulations. For this purpose, we analyze a prototype implementation which combines two frameworks - the Molecular Dynamics framework MarDyn and the framework Peano for spatially adaptive mesh-based simulations - and point out particular challenges of a general coupling software. Based on this analysis, we discuss the software design of our recently published coupling tool. We explain details on its overall structure and show how the challenges that arise in respective couplings are resolved by the software. © 2013 Elsevier Ltd. All rights reserved.

  17. Software configuration management

    CERN Document Server

    Keyes, Jessica

    2004-01-01

    Software Configuration Management discusses the framework from a standards viewpoint, using the original DoD MIL-STD-973 and EIA-649 standards to describe the elements of configuration management within a software engineering perspective. Divided into two parts, the first section is composed of 14 chapters that explain every facet of configuration management related to software engineering. The second section consists of 25 appendices that contain many valuable real world CM templates.

  18. Mobile Autonomous Sensing Unit (MASU): A Framework That Supports Distributed Pervasive Data Sensing

    Directory of Open Access Journals (Sweden)

    Esunly Medina

    2016-07-01

    Pervasive data sensing is a major issue that traverses various research areas and application domains. It allows identifying people's behaviour and patterns without overwhelming the monitored persons. Although there are many pervasive data sensing applications, they are typically focused on addressing specific problems in a single application domain, making them difficult to generalize or reuse. On the other hand, the platforms for supporting pervasive data sensing impose restrictions on the devices and operational environments that make them unsuitable for monitoring loosely-coupled or fully distributed work. In order to help address this challenge, this paper presents a framework that supports distributed pervasive data sensing in a generic way. Developers can use this framework to facilitate the implementation of their applications, thus reducing the complexity and effort of such an activity. The framework was evaluated using simulations and also through an empirical test, and the obtained results indicate that it is useful for supporting such a sensing activity in loosely-coupled or fully distributed work scenarios.

  19. Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    Science.gov (United States)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: aerospace system and component representation using a hierarchical object-oriented component model which enables the use of multimodels and enforces component interoperability; a collaborative software environment that streamlines the process of developing, sharing and integrating aerospace design and analysis models; and development of a distributed infrastructure which enables Web-based exchange of models to simplify the collaborative design process, and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.

  20. Software Engineering Reviews and Audits

    CERN Document Server

    Summers, Boyd L

    2011-01-01

    Accurate software engineering reviews and audits have become essential to the success of software companies and military and aerospace programs. These reviews and audits define the framework and specific requirements for verifying software development efforts. Authored by an industry professional with three decades of experience, Software Engineering Reviews and Audits offers authoritative guidance for conducting and performing software first article inspections, and functional and physical configuration software audits. It prepares readers to answer common questions for conducting and perform

  1. SDN-Enabled Communication Network Framework for Energy Internet

    Directory of Open Access Journals (Sweden)

    Zhaoming Lu

    2017-01-01

    To support distributed energy generators and improve energy utilization, the energy Internet has attracted global research focus. In China, the energy Internet has been put forward as an important issue by government and institutes. However, managing a large number of distributed generators requires smart, low-latency, reliable, and safe networking infrastructure, which cannot be provided by the traditional networks in power grids. In order to design and construct a smart and flexible energy Internet, we propose a software defined network framework, designed in a hierarchical manner with both a microgrid cluster level and a global grid level, which will bring flexibility, efficiency, and reliability to power grid networks. Finally, we evaluate and verify the performance of this framework in terms of latency, reliability, and security by both theoretical analysis and real-world experiments.

  2. MCdevelop - a universal framework for Stochastic Simulations

    Science.gov (United States)

    Slawinska, M.; Jadach, S.

    2011-03-01

    We present MCdevelop, a universal computer framework for developing and exploiting the wide class of Stochastic Simulations (SS) software. This powerful universal SS software development tool has been derived from a series of scientific projects for precision calculations in high energy physics (HEP), which feature a wide range of functionality in the SS software needed for advanced precision Quantum Field Theory calculations for the past LEP experiments and for the ongoing LHC experiments at CERN, Geneva. MCdevelop is a "spin-off" product of HEP to be exploited in other areas, while it will still serve to develop new SS software for HEP experiments. Typically SS involve independent generation of large sets of random "events", often requiring considerable CPU power. Since SS jobs usually do not share memory it makes them easy to parallelize. The efficient development, testing and running in parallel of SS software requires a convenient framework to develop software source code, deploy and monitor batch jobs, and merge and analyse results from multiple parallel jobs, even before the production runs are terminated. Throughout the years of development of stochastic simulations for HEP, a sophisticated framework featuring all the above mentioned functionality has been implemented. MCdevelop represents its latest version, written mostly in C++ (GNU compiler gcc). It uses Autotools to build binaries (optionally managed within the KDevelop 3.5.3 Integrated Development Environment (IDE)). It uses the open-source ROOT package for histogramming, graphics and the mechanism of persistency for the C++ objects. MCdevelop helps to run multiple parallel jobs on any computer cluster with an NQS-type batch system. Program summary - Program title: MCdevelop; Catalogue identifier: AEHW_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHW_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http
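
    Merging results from many independent SS jobs, as MCdevelop automates, amounts to bin-wise combination of the partial histograms; a minimal sketch (the job-output layout is invented for illustration):

        # Merge partial histograms produced by independent parallel MC jobs.
        # Independent jobs => counts simply add bin by bin.

        def merge(partials):
            merged = [0] * len(partials[0])
            for counts in partials:
                assert len(counts) == len(merged), "jobs must share binning"
                merged = [m + c for m, c in zip(merged, counts)]
            return merged

        # Outputs of three parallel jobs, same observable and binning.
        job_outputs = [
            [12, 40, 33, 7],
            [15, 38, 29, 9],
            [11, 44, 31, 6],
        ]
        print(merge(job_outputs))  # -> [38, 122, 93, 22]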

  3. Behavior and Convergence of Wasserstein Metric in the Framework of Stable Distributions

    Czech Academy of Sciences Publication Activity Database

    Omelchenko, Vadym

    2012-01-01

    Roč. 2012, č. 30 (2012), s. 124-138 ISSN 1212-074X R&D Projects: GA ČR GAP402/10/0956 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords : Wasserstein Metric * Stable Distributions * Empirical Distribution Function Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/omelchenko-behavior and convergence of wasserstein metric in the framework of stable distributions.pdf
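
    For orientation, the metric studied in this record: the p-Wasserstein distance between distributions mu and nu, which for distributions on the real line with distribution functions F and G reduces to a quantile-function integral. These are standard definitions stated here for reference, not taken from the record; note that W_p is finite only under finite p-th moments, and an alpha-stable law with alpha < 2 has finite moments only of order p < alpha.

        W_p(\mu, \nu) = \left( \inf_{\gamma \in \Gamma(\mu, \nu)}
            \int |x - y|^p \, \mathrm{d}\gamma(x, y) \right)^{1/p}
          = \left( \int_0^1 \left| F^{-1}(u) - G^{-1}(u) \right|^p
            \, \mathrm{d}u \right)^{1/p}, \qquad p \ge 1.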

  4. Virtual Prototyping and Validation of Cpps within a New Software Framework

    Directory of Open Access Journals (Sweden)

    Sebastian Neumeyer

    2017-02-01

    As a result of the growing demand for highly customized and individual products, companies need to enable flexible and intelligent manufacturing. Cyber-physical production systems (CPPS) will act autonomously in the future in an interlinked production and enable such flexibility. However, German mid-sized plant manufacturers rarely use virtual technologies for design and validation in order to design CPPS. The research project Virtual Commissioning with Smart Hybrid Prototyping (VIB-SHP) investigated the usage of virtual technologies for manufacturing systems and CPPS design. Aspects of asynchronously communicating, intelligent- and autonomous-acting production equipment in an immersive validation environment have been investigated. To enable manufacturing system designers to validate CPPS, a software framework for virtual prototyping has been developed. A mechatronic construction kit for production system design integrates discipline-specific models and manages them in a product lifecycle management (PLM) solution. With this construction kit manufacturing designers are able to apply virtual technologies and the validation of communication processes with the help of behavior models. The presented approach resolves the sequential design process for the development of mechanical, electrical, and software elements and ensures the consistency of these models. With the help of a bill of material (BOM)- and signal-based alignment of the discipline-specific models in an integrated mechatronic product model, the communication of the design status and changes is improved. The re-use of already-specified and -designed modules enables quick behavior modeling, code evaluation, as well as interaction with the virtualized assembly system in an immersive environment.

  5. NEAMS Software Licensing, Release, and Distribution: Implications for FY2013 Work Package Planning

    International Nuclear Information System (INIS)

    Bernholdt, David E.

    2012-01-01

    The vision of the NEAMS program is to bring truly predictive modeling and simulation (M&S) capabilities to the nuclear engineering community in order to enable a new approach to the analysis of nuclear systems. NEAMS anticipates issuing in FY 2018 a full release of its computational 'Fermi Toolkit' aimed at advanced reactor and fuel cycles. The NEAMS toolkit involves extensive software development activities, some of which have already been underway for several years; however, the Advanced Modeling and Simulation Office (AMSO), which sponsors the NEAMS program, has not yet issued any official guidance regarding software licensing, release, and distribution policies. This motivated an FY12 task in the Capability Transfer work package to develop and recommend an appropriate set of policies. The current preliminary report is intended to provide awareness of issues with implications for work package planning for FY13. We anticipate a small amount of effort associated with putting into place formal licenses and contributor agreements for NEAMS software which doesn't already have them. We do not anticipate any additional effort or costs associated with software release procedures or schedules beyond those dictated by the quality expectations for the software. The largest potential costs we anticipate would be associated with the setup and maintenance of shared code repositories for development and early access to NEAMS software products. We also anticipate an opportunity, with modest associated costs, to work with the Radiation Safety Information Computational Center (RSICC) to clarify export control assessment policies for software under development.

  6. Free software, Open source software, licenses. A short presentation including a procedure for research software and data dissemination

    OpenAIRE

    Gomez-Diaz, Teresa

    2014-01-01

    4 pages. Spanish version: 'Software libre, software de código abierto, licencias', which proposes a procedure for the distribution of research software and data. The main goal of this document is to help the research community to understand the basic concepts of software distribution: free software, open source software, licenses. This document also includes a procedure for research software and data dissemination.

  7. Advanced Modular Software Performance Monitoring

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The LHCb software is based on the Gaudi framework, on top of which are built several large and complex software applications. The LHCb experiment is now in the active phase of collecting and analyzing data, and significant performance problems arise in the Gaudi-based software, beginning with the High Level Trigger (HLT) programs and ending with the data analysis frameworks (DaVinci). It is not easy to find hot spots in the code - only special tools can help to understand where CPU or memory usage is not reasonable. There exist many performance analyzing tools, but the main problem is that they show reports in terms of class and function names, and such information usually is not very useful - the majority of algorithm developers use the Gaudi framework abstractions and usually do not know about functions which lie at the lower level. We will show a new approach which adds to performance reports a higher abstraction level based on knowledge of the framework architecture and run-time object properties. A set of profiling to...

  8. Advanced modular software performance monitoring

    CERN Document Server

    Mazurov, A

    2012-01-01

    The LHCb software is based on the Gaudi framework, on top of which are built several large and complex software applications. As the LHCb experiment is now in the active phase of collecting and analyzing data, performance problems arise in various parts of the software, from the High Level Trigger (HLT) programs to data analysis frameworks. It is not easy to find hotspots in the code - only specialized tools can help to understand where CPU or memory usage are not reasonable. There exist many performance analyzing tools, but the main problem is that they show reports in terms of class and function names and such information usually is not very useful - the majority of algorithm developers use the Gaudi framework abstractions and usually do not know about functions which lie at the lower level. We will show a new approach which adds to performance reports a higher abstraction level based on knowledge of framework architecture and run-time object properties. A set of profiling tools (based on Intel VTune Amplif...

  9. Tailorable software architectures in the accelerator control system environment

    International Nuclear Information System (INIS)

    Mejuev, Igor; Kumagai, Akira; Kadokura, Eiichi

    2001-01-01

    Tailoring is the further evolution of an application after deployment in order to adapt it to requirements that were not accounted for in the original design. End-user tailorability has been extensively researched in applied computer science from HCI and software engineering perspectives. Tailorability allows coping with flexibility requirements, decreasing maintenance and development costs of software products. In general, dynamic or diverse software requirements constitute the need for implementing end-user tailorability in computer systems. In accelerator physics research the factor of dynamic requirements is especially important, due to frequent software and hardware modifications resulting in correspondingly high upgrade and maintenance costs. In this work we introduce the results of a feasibility study on implementing end-user tailorability in the software for an accelerator control system, considering the design and implementation of a distributed monitoring application for the 12 GeV KEK Proton Synchrotron as an example. The software prototypes used in this work are based on a generic tailoring platform (VEDICI), which allows decoupling of tailoring interfaces and runtime components. While representing a reusable application-independent framework, VEDICI can potentially be applied for tailoring arbitrary compositional Web-based applications.

  10. High-Level Application Framework for LCLS

    Energy Technology Data Exchange (ETDEWEB)

    Chu, P; Chevtsov, S.; Fairley, D.; Larrieu, C.; Rock, J.; Rogind, D.; White, G.; Zalazny, M.; /SLAC

    2008-04-22

    A framework for high level accelerator application software is being developed for the Linac Coherent Light Source (LCLS). The framework is based on plug-in technology developed by an open source project, Eclipse. Many existing functionalities provided by Eclipse are available to high-level applications written within this framework. The framework also contains static data storage configuration and dynamic data connectivity. Because the framework is Eclipse-based, it is highly compatible with any other Eclipse plug-ins. The entire infrastructure of the software framework will be presented. Planned applications and plug-ins based on the framework are also presented.

  11. Nuclear model codes and related software distributed by the OECD/NEA Data Bank

    International Nuclear Information System (INIS)

    Sartori, E.

    1993-01-01

    Software and data for nuclear energy applications is acquired, tested and distributed by several information centres; in particular, relevant computer codes are distributed internationally by the OECD/NEA Data Bank (France) and by ESTSC and EPIC/RSIC (United States). This activity is coordinated among the centres and is extended outside the OECD area through an arrangement with the IAEA. This article covers more specifically the availability of nuclear model codes and also those codes which further process their results into data sets needed for specific nuclear application projects. (author). 2 figs

  12. Charging Customers or Making Profit? Business Model Change in the Software Industry

    Directory of Open Access Journals (Sweden)

    Margit Malmmose Peyton

    2014-08-01

    Purpose: Advancements in technology, changing customer demands or new market entrants are often seen as a necessary condition to trigger the creation of new Business Models, or disruptive change in existing ones. Yet, the sufficient condition is often determined by pricing and how customers are willing to pay for the technology (Chesbrough and Rosenbloom, 2002). As a consequence, much research on Business Models has focused on innovation and technology management (Rajala et al., 2012; Zott et al., 2011), and software-specific frameworks for Business Models have emerged (Popp, 2011; Rajala et al., 2003; Rajala et al., 2004; Stahl, 2004). This paper attempts to illustrate Business Model change in the software industry. Design: Drawing on Rajala et al. (2003), this case study explores the (1) antecedents and (2) consequences of a Business Model change in a logistics software company. The company decided to abolish their profitable fee-based licensing for an internet-based version of its core product and to offer it as freeware including unlimited service. Findings: Firstly, we illustrate how external developments in technology and customer demands (pricing), as well as the desire for a sustainable Business Model, have led to this drastic change. Secondly, we initially find that much of the company's new Business Model is congruent with the company-focused framework of Rajala et al. (2003) [product strategy; distribution model; services and implementation; revenue logic]. Value: The existing frameworks for Business Models in the software industry cannot fully explain the disruptive change in the Business Model. Therefore, we suggest extending the framework by the element of 'innovation'.

  13. A Distributed Framework for Real Time Path Planning in Practical Multi-agent Systems

    KAUST Repository

    Abdelkader, Mohamed

    2017-10-19

    We present a framework for distributed, energy efficient, and real time implementable algorithms for path planning in multi-agent systems. The proposed framework is presented in the context of a motivating example of capture the flag, which is an adversarial game played between two teams of autonomous agents called defenders and attackers. We start with the centralized formulation of the problem as a linear program because of its computational efficiency. Then we present an approximation framework in which each agent solves a local version of the centralized linear program by communicating with its neighbors only. The premise in this work is that for practical multi-agent systems, real time implementability of distributed algorithms is more crucial than global optimality. Thus, instead of verifying the proposed framework by performing offline simulations in MATLAB, we run extensive simulations in the robotic simulator V-REP, which includes a detailed dynamic model of quadrotors. Moreover, to create a realistic scenario, we allow a human operator to control the attacker quadrotor through a joystick in a single-attacker setup. These simulations confirm that the proposed framework is real time implementable and results in a performance that is comparable with the globally optimal solution under the considered scenarios.
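
    The centralized step, assigning defenders to intercept attackers via a linear program, can be sketched with scipy.optimize.linprog; the cost matrix and constraints below are an illustrative relaxed assignment, not the paper's exact formulation:

        import numpy as np
        from scipy.optimize import linprog

        # Relaxed assignment LP: x[i][j] = fraction of defender i assigned to
        # attacker j, minimizing total distance (illustrative formulation).
        dist = np.array([[4.0, 9.0],    # defender 0 -> attackers 0, 1
                         [7.0, 3.0]])   # defender 1 -> attackers 0, 1
        n_def, n_att = dist.shape

        c = dist.flatten()                     # objective: total assigned distance
        A_eq = np.zeros((n_def + n_att, n_def * n_att))
        for i in range(n_def):                 # each defender fully assigned
            A_eq[i, i * n_att:(i + 1) * n_att] = 1.0
        for j in range(n_att):                 # each attacker fully covered
            A_eq[n_def + j, j::n_att] = 1.0
        b_eq = np.ones(n_def + n_att)

        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
        print(res.x.reshape(n_def, n_att))     # -> defender 0->0, defender 1->1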

  14. The Newcastle connection: A software subsystem for constructing distributed UNIX systems

    International Nuclear Information System (INIS)

    Randell, B.

    1985-01-01

    The Newcastle connection is a software subsystem that can be added to each of a set of physically interconnected UNIX or UNIX look-alike systems, so as to construct a distributed system which is functionally indistinguishable at both the user and the program level from a conventional single-processor UNIX system. The techniques used are applicable to a variety and multiplicity of both local and wide area networks, and enable all issues of inter-processor communication, network protocols, etc., to be hidden. A brief account is given of experience with such distributed systems, the first of which was constructed in 1982 using a set of PDP11s running UNIX Version 7, and connected by a Cambridge Ring - since this date the Connection has been used to construct distributed systems based on various other computers and versions of UNIX, both at Newcastle and elsewhere. The final sections compare our scheme to various precursor schemes and discuss its potential relevance to other operating systems. (orig.)

  15. A Reference Architecture for Providing Tools as a Service to Support Global Software Development

    DEFF Research Database (Denmark)

    Chauhan, Aufeef

    2014-01-01

    Global Software Development (GSD) teams encounter challenges that are associated with distribution of software development activities across multiple geographic regions. The limited support for performing collaborative development and engineering activities and lack of sufficient support ... -based solutions. The restricted ability of the organizations to have desired alignment of tools with software engineering and development processes results in administrative and managerial overhead that incurs increased development cost and poor product quality. Moreover, stakeholders involved in the projects have ... computing paradigm for addressing the above-mentioned issues by providing a framework to select appropriate tools as well as associated services, and a reference architecture of the cloud-enabled middleware platform that allows on-demand provisioning of software engineering Tools as a Service (TaaS) with focus ...

  16. Software engineering processes principles and applications

    CERN Document Server

    Wang, Yingxu

    2000-01-01

    Contents: Fundamentals of the Software Engineering Process; Introduction; A Unified Framework of the Software Engineering Process; Process Algebra; Process-Based Software Engineering; Software Engineering Process System Modeling; The CMM Model; The ISO 9001 Model; The BOOTSTRAP Model; The ISO/IEC 15504 (SPICE) Model; The Software Engineering Process Reference Model: SEPRM; Software Engineering Process System Analysis; Benchmarking the SEPRM Processes; Comparative Analysis of Current Process Models; Transformation of Capability Levels Between Current Process Models; Software Engineering Process Establishment; Software Process Establish...

  17. A REVIEW ON SOFTWARE PRONE DETECTION AND ITS PREVENTION TECHNIQUES

    OpenAIRE

    Laxmi Dewangan & Anish Lazrus

    2018-01-01

    The demand for distributed and complex business applications in large enterprises requires error-free, high-quality application frameworks. This makes it critical in software development to produce quality, fault-free software. It is also important to design software that is reliable and easy to maintain, as development involves a great deal of human effort, cost, and time during the software life cycle. A software development process performs various activities to limit faults, for example, fault prediction, ...

  18. The social disutility of software ownership.

    Science.gov (United States)

    Douglas, David M

    2011-09-01

    Software ownership allows the owner to restrict the distribution of software and to prevent others from reading the software's source code and building upon it. However, free software is released to users under software licenses that give them the right to read the source code, modify it, reuse it, and distribute the software to others. Proponents of free software such as Richard M. Stallman and Eben Moglen argue that the social disutility of software ownership is a sufficient justification for prohibiting it. This social disutility includes the social instability of disregarding laws and agreements covering software use and distribution, inequality of software access, and the inability to help others by sharing software with them. Here I consider these and other social disutility claims against withholding specific software rights from users, in particular, the rights to read the source code, duplicate, distribute, modify, imitate, and reuse portions of the software within new programs. I find that generally while withholding these rights from software users does cause some degree of social disutility, only the rights to duplicate, modify and imitate cannot legitimately be denied to users on this basis. The social disutility of withholding the rights to distribute the software, read its source code and reuse portions of it in new programs is insufficient to prohibit software owners from denying them to users. A compromise between the software owner and user can minimise the social disutility of withholding these particular rights from users. However, the social disutility caused by software patents is sufficient for rejecting such patents as they restrict the methods of reducing social disutility possible with other forms of software ownership.

  19. Design and implementation of a standard framework for KSTAR control system

    International Nuclear Information System (INIS)

    Lee, Woongryol; Park, Mikyung; Lee, Taegu; Lee, Sangil; Yun, Sangwon; Park, Jinseop; Park, Kaprai

    2014-01-01

    Highlights: • We performed a standardization of the control system in KSTAR. • An EPICS-based software framework was developed for the realization of various control systems. • The applicability of the framework has widened from a simple command dispatcher to real-time applications. • Our framework supports the implementation of an embedded IOC on an FPGA board. - Abstract: Standardization of the control system is an important issue in KSTAR, which is organized from various heterogeneous systems. Diverse control systems in KSTAR have been adopting new application software since 2010. Development of this software was launched for easy implementation of a data acquisition system, but it has been extended into a Standard Framework (SFW) for control systems in KSTAR. It is composed of a single library, database, templates, and descriptor files. SFW-based controllers share common features. They use a non-blocking control command method based on a thread, and an internal sequence handler allows them to be synchronized with KSTAR experiments. They also use a ring buffer pool mechanism for handling streaming input data. Recently, there have been two important functional improvements in the framework. A processor-embedded FPGA was proposed as a standard hardware platform for specific applications; these are also managed by SFW-based embedded applications. This approach gives a single-board system the ability to perform low-level distributed control under the EPICS environment. We also developed a real-time monitoring system as a real-time network inspection tool during the 2012 campaign using the SFW
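
    The ring buffer pool mentioned above is a standard pattern for decoupling a streaming producer from slower consumers. The following is a minimal illustrative sketch of a fixed-size ring buffer, not the SFW code itself (which is EPICS/C based); all names are hypothetical.

        class RingBuffer:
            """Fixed-capacity circular buffer; oldest samples are overwritten."""

            def __init__(self, capacity):
                self.buf = [None] * capacity
                self.capacity = capacity
                self.head = 0      # next write position
                self.count = 0     # valid entries, <= capacity

            def push(self, sample):
                self.buf[self.head] = sample
                self.head = (self.head + 1) % self.capacity
                self.count = min(self.count + 1, self.capacity)

            def snapshot(self):
                # Return the valid samples, oldest first.
                start = (self.head - self.count) % self.capacity
                return [self.buf[(start + i) % self.capacity]
                        for i in range(self.count)]

        rb = RingBuffer(4)
        for s in range(6):
            rb.push(s)
        print(rb.snapshot())  # [2, 3, 4, 5] -- the two oldest samples were dropped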

  20. Distributed Arithmetic for Efficient Base-Band Processing in Real-Time GNSS Software Receivers

    Directory of Open Access Journals (Sweden)

    Grégoire Waelchli

    2010-01-01

    The growing market of GNSS-capable mobile devices is driving interest in GNSS software solutions, as they can share many system resources (processor, memory), reducing both the size and the cost of their integration. Indeed, with the increasing performance of modern processors, it has become feasible to implement a multichannel GNSS receiver operating in real time in software. However, a major issue with this approach is the large computing resources required for the base-band processing, in particular for the correlation operations. Therefore, new algorithms need to be developed in order to reduce the overall complexity of the receiver architecture. Towards that aim, this paper first introduces the challenges of the software implementation of a GPS receiver, with a main focus on the base-band processing and correlation operations. It then describes the already existing solutions and, from this, introduces a new algorithm based on distributed arithmetic.
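
    Distributed arithmetic replaces multiply-accumulate operations with table lookups: the inner product of fixed coefficients with B-bit samples is accumulated bit-plane by bit-plane from a precomputed table of coefficient subset sums. A minimal illustrative sketch follows (unsigned samples for simplicity; the paper's optimized receiver implementation will differ).

        def build_lut(coeffs):
            # LUT[pattern] = sum of coefficients whose bit is set in `pattern`.
            n = len(coeffs)
            return [sum(c for i, c in enumerate(coeffs) if (pattern >> i) & 1)
                    for pattern in range(1 << n)]

        def da_inner_product(coeffs, samples, bits):
            """Compute sum_i coeffs[i] * samples[i] without multiplications.

            samples: non-negative integers of width `bits`.
            """
            lut = build_lut(coeffs)
            acc = 0
            for b in range(bits):
                # Gather bit `b` of every sample into one LUT address.
                pattern = 0
                for i, x in enumerate(samples):
                    pattern |= ((x >> b) & 1) << i
                acc += lut[pattern] * (1 << b)
            return acc

        print(da_inner_product([3, 5], [2, 1], bits=2))  # 3*2 + 5*1 = 11

    In hardware, the inner loop collapses to a single LUT access per bit plane, which is why the technique suits correlators with fixed code coefficients.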

  1. Interconnection test framework for the CMS level-1 trigger system

    International Nuclear Information System (INIS)

    Hammer, J.; Magrans de Abril, M.; Wulz, C.E.

    2012-01-01

    The Level-1 Trigger Control and Monitoring System is a software package designed to configure, monitor and test the Level-1 Trigger System of the Compact Muon Solenoid (CMS) experiment at CERN's Large Hadron Collider. It is a large and distributed system that runs over 50 PCs and controls about 200 hardware units. The objective of this paper is to describe and evaluate the architecture of a distributed testing framework - the Interconnection Test Framework (ITF). This generic and highly flexible framework for creating and executing hardware tests within the Level-1 Trigger environment is meant to automate the testing of the 13 major subsystems interconnected with more than 1000 links. Features include a web interface to create and execute tests, modeling using finite state machines, dependency management, automatic configuration, and loops. Furthermore, the ITF will replace the existing heterogeneous testing procedures and help reduce both the maintenance and the complexity of operation tasks. (authors)

  2. Isobio software: biological dose distribution and biological dose volume histogram from physical dose conversion using linear-quadratic-linear model.

    Science.gov (United States)

    Jaikuna, Tanwiwat; Khadsiri, Phatchareewan; Chawapun, Nisa; Saekho, Suwit; Tharavichitkul, Ekkasit

    2017-02-01

    To develop an in-house software program that is able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distribution and biological dose volume histograms. The physical dose from each voxel in treatment planning was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by the difference between the dose volume histogram from CERR and that from the treatment planning system. An equivalent dose in 2 Gy fractions (EQD2) was calculated using the biologically effective dose (BED) based on the LQL model. The software calculation and the manual calculation were compared for EQD2 verification with paired t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Different physical doses were found between CERR and the treatment planning system (TPS) in Oncentra, with 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum when determined by D2cc, and less than 1% in Pinnacle. The difference in EQD2 between the software calculation and the manual calculation was not significant (0.00%), with p-values of 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320, and 0.849 for brachytherapy (BT) in the HR-CTV, bladder, and rectum, respectively. The Isobio software is a feasible tool to generate the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.
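
    The EQD2 conversion at the heart of such tools follows directly from the biologically effective dose. As a minimal sketch, the standard linear-quadratic (LQ) relations are shown below applied per fractionation scheme; the Isobio software itself uses the LQL extension for high doses per fraction, which is not reproduced here.

        def bed(n_fractions, dose_per_fraction, alpha_beta):
            """Biologically effective dose (LQ model): BED = n*d*(1 + d/(a/b))."""
            return n_fractions * dose_per_fraction * (
                1.0 + dose_per_fraction / alpha_beta)

        def eqd2(n_fractions, dose_per_fraction, alpha_beta):
            """Equivalent dose in 2 Gy fractions: EQD2 = BED / (1 + 2/(a/b))."""
            return bed(n_fractions, dose_per_fraction, alpha_beta) / (
                1.0 + 2.0 / alpha_beta)

        # Example: 25 fractions of 2.4 Gy to a target with alpha/beta = 10 Gy
        print(round(eqd2(25, 2.4, 10.0), 2))  # 62.0 Gy

    Applied voxel by voxel to an extracted physical dose grid, this conversion yields the biological dose distribution the abstract describes.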

  3. The Joint COntrols Project Framework

    CERN Document Server

    González-Berges, M

    2003-01-01

    The Framework is one of the subprojects of the Joint COntrols Project (JCOP), which is a collaboration between the four LHC experiments and CERN. By sharing development, this will reduce the overall effort required to build and maintain the experiment control systems. As such, the main aim of the Framework is to deliver a common set of software components, tools and guidelines that can be used by the four LHC experiments to build their control systems. Although commercial components are used wherever possible, further added value is obtained by customisation for HEP-specific applications. The supervisory layer of the Framework is based on the SCADA tool PVSS, which was selected after a detailed evaluation. This is integrated with the front-end layer via both OPC (OLE for Process Control), an industrial standard, and the CERN-developed DIM (Distributed Information Management System) protocol. Several components are already in production and being used by running fixed-target experiments at CERN as well as for th...

  4. Evaluation of Distribution Analysis Software for DER Applications

    Energy Technology Data Exchange (ETDEWEB)

    Staunton, RH

    2003-01-23

    ... unstoppable. In response, energy providers will be forced to both fully acknowledge the trend and plan for accommodating DER [3]. With bureaucratic barriers [4], lack of time/resources, tariffs, etc. still seen in certain regions of the country, changes still need to be made. Given continued technical advances in DER, the time is fast approaching when the industry, nation-wide, must not only accept DER freely but also provide or review in-depth technical assessments of how DER should be integrated into and managed throughout the distribution system. Characterization studies are needed to fully understand how both the utility system and DER devices themselves will respond to all reasonable events (e.g., grid disturbances, faults, rapid growth, diverse and multiple DER systems, large reactive loads). Some of this work has already begun as it relates to operation and control of DER [5] and microturbine performance characterization [6,7]. One of the most urgently needed tools that can provide these types of analyses is a distribution network analysis program in combination with models for various DER. Together, they can be used for (1) analyzing DER placement in distribution networks and (2) helping to ensure that adequate transmission reliability is maintained. Surveys of the market show products that represent a partial match to these needs; specifically, software that has been developed to plan electrical distribution systems and analyze reliability (in a near total absence of DER). The first part of this study (Sections 2 and 3 of the report) looks at a number of these software programs and provides both summary descriptions and comparisons. The second part of this study (Section 4 of the report) considers the suitability of these analysis tools for DER studies. It considers steady state modeling and assessment work performed by ORNL using one commercially available tool on feeder data provided by a southern utility. Appendix A provides a technical report on the results of

  5. Frameworks in CS1

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Caspersen, Michael Edelgaard

    2002-01-01

    In this paper we argue that introducing object-oriented frameworks as a subject already in the CS1 curriculum is important if we are to train the programmers of tomorrow to become just as much software reusers as software producers. We present a simple, graphical framework that we have successfully used to introduce the principles of object-oriented frameworks to students at the introductory programming level. Our framework, while simple, introduces central abstractions such as inversion of control, event-driven programming, and variability points/hot-spots. This has provided a good starting point for introducing graphical user interface frameworks such as Java Swing and AWT, as the students are not overwhelmed by all the details of such frameworks right away but are given a conceptual road-map and practical experience that allow them to cope with the complexity.
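
    Inversion of control - the framework owns the control loop and calls back into student code at designated hot spots - can be illustrated in a few lines. The following is a hypothetical sketch in Python, not the authors' framework.

        class Game:
            """Tiny framework: it owns the event loop (inversion of control)."""

            def run(self, ticks=3):
                for t in range(ticks):
                    self.update(t)   # hot spot: the framework calls user code

            def update(self, tick):  # variability point, overridden by users
                raise NotImplementedError

        class Blinker(Game):
            def update(self, tick):
                print("tick", tick, "on" if tick % 2 == 0 else "off")

        Blinker().run()  # the framework, not the student code, drives execution

    The pedagogical point is that students supply only the hot-spot body while the framework dictates when it runs, exactly the contract they later meet in Swing or AWT event handling.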

  6. A Scorecard Framework Proposal for Improving Software Factories’ Sustainability: A Case Study of a Spanish Firm in the Financial Sector

    Directory of Open Access Journals (Sweden)

    César Álvarez

    2015-12-01

    Financial institutions and especially banks have always been at the forefront of innovation in management policies in order to improve their performance, and banking is probably one of the sectors that most effectively measures productivity and efficiency in virtually all aspects of its business. However, there is one area that still fails: the productivity of its software development projects. For years banking institutions have chosen to outsource their software projects to software firms created by them for this purpose, but up until a few years ago the deadline for the delivery of the projects was more important than the efficiency with which they were developed. The last economic crisis has forced financial institutions to review and improve the software development efficiency of their software factories to achieve a sustainable and feasible model. The sustainability of these software factories can be achieved by improving their strategic management, and the Balanced Scorecard (BSC) framework can be very useful in order to obtain this. Based on the concepts and practices of the BSC, this paper proposes a specific model to establish this kind of software factory as a way of improving their sustainability and applies it to a large Spanish firm specializing in financial sector software. We have included a preliminary validation plan as well as the first monitoring results. The adoption is still very recent and more data are needed to measure all the perspectives, so no definitive conclusions can be drawn.

  7. Applicability of the FASTBUS standard to distributed control

    International Nuclear Information System (INIS)

    Deiss, S.R.; Downing, R.W.; Gustavson, D.B.; Larsen, R.S.; Logg, C.A.; Paffrath, L.

    1981-03-01

    The new FASTBUS standard has been designed to provide a framework for distributed processing in both experimental data acquisition and accelerator control. The features of FASTBUS which support distributed control are a priority arbitration scheme which allows intercrate as well as intracrate message flow between processors and slave devices; and a high bandwidth to permit efficient sharing of the data paths by high-speed devices. Sophisticated diagnostic aids permit system-wide error checking and/or correction. Software has been developed for large distributed systems. This consists of a system data base description, and initialization algorithms to allocate address space and establish preferred message routes. A diagnostics package is also being developed, based on an independent Ethernet-like serial link. The paper describes available hardware and software, on-going developments, and current applications

  8. Software architecture evolution

    DEFF Research Database (Denmark)

    Barais, Olivier; Le Meur, Anne-Francoise; Duchien, Laurence

    2008-01-01

    Software architectures must frequently evolve to cope with changing requirements, and this evolution often implies integrating new concerns. Unfortunately, when the new concerns are crosscutting, existing architecture description languages provide little or no support for this kind of evolution. The software architect must modify multiple elements of the architecture manually, which risks introducing inconsistencies. This chapter provides an overview, comparison and detailed treatment of the various state-of-the-art approaches to describing and evolving software architectures. Furthermore, we discuss one particular framework named TranSAT, which addresses the above problems of software architecture evolution. TranSAT provides a new element in the software architecture description language, called an architectural aspect, for describing new concerns and their integration into an existing...

  9. Software And Systems Engineering Risk Management

    Science.gov (United States)

    2010-04-01

    Software And Systems Engineering Risk Management - John Walz, VP Technical and Conferences Activities, IEEE Computer Society; Vice-Chair Planning, Software & Systems Engineering Standards Committee, IEEE Computer Society; US TAG to ISO TMB Risk Management Working Group. Standards timeline: 2004, COSO Enterprise RSKM Framework; 2006, ISO/IEC 16085 Risk Management Process; 2008, ISO/IEC 12207 Software Lifecycle Processes; 2009, ISO/IEC ...

  10. A Methodological Framework for Software Safety in Safety Critical Computer Systems

    OpenAIRE

    P. V. Srinivas Acharyulu; P. Seetharamaiah

    2012-01-01

    Software safety must deal with the principles of safety management, safety engineering and software engineering for developing safety-critical computer systems, with the target of making the system safe, risk-free and fail-safe, in addition to providing a clarified differentiation for assessing and evaluating the risk according to the principles of software risk management. Problem statement: Prevailing software quality models and standards do not adequately address software safety ...

  11. Spiking Activity of a LIF Neuron in Distributed Delay Framework

    Directory of Open Access Journals (Sweden)

    Saket Kumar Choudhary

    2016-06-01

    Evolution of the membrane potential and spiking activity for a single leaky integrate-and-fire (LIF) neuron in the distributed delay framework (DDF) is investigated. DDF provides a mechanism to incorporate a memory element, in terms of a delay (kernel) function, into single-neuron models. This investigation includes the LIF neuron model with two different kinds of delay kernel functions, namely a gamma-distributed delay kernel and a hypo-exponentially distributed delay kernel. Evolution of the membrane potential for the considered models is studied in terms of the stationary state probability distribution (SPD). The stationary state probability distributions of membrane potential (SPDV) for the considered neuron models are found to be asymptotically similar and Gaussian distributed. In order to investigate the effect of membrane potential delay, a rate code scheme for neuronal information processing is applied. The firing rate and Fano factor for the considered neuron models are calculated, and the standard LIF model is used for comparison. It is noticed that distributed delay increases the spiking activity of a neuron. The increase in spiking activity in DDF is larger for the hypo-exponential delay kernel than for the gamma delay kernel. Moreover, in the case of the hypo-exponential delay kernel, a LIF neuron generates spikes with a Fano factor less than 1.
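
    The mechanism is easy to caricature numerically: the synaptic input is smeared through a delay kernel before it drives the membrane equation. Below is a minimal Euler-scheme sketch with a gamma kernel; all parameter values and names are illustrative assumptions, not those of the paper.

        import numpy as np

        def lif_with_gamma_delay(i_in, dt=1e-4, tau_m=0.02, v_th=1.0,
                                 v_reset=0.0, k=2, theta=5e-3):
            """Euler simulation of a LIF neuron whose input current is
            filtered through a gamma-distributed delay kernel (a DDF-style
            sketch). Returns the spike times in seconds."""
            t = np.arange(len(i_in)) * dt
            # Gamma delay kernel, normalized to unit area.
            kernel = t**(k - 1) * np.exp(-t / theta)
            kernel /= kernel.sum() * dt
            # Delayed (kernel-smeared) input current.
            i_eff = np.convolve(i_in, kernel)[:len(i_in)] * dt
            v, spikes = 0.0, []
            for n, i_n in enumerate(i_eff):
                v += dt * (-v / tau_m + i_n)
                if v >= v_th:          # threshold crossing: spike and reset
                    spikes.append(n * dt)
                    v = v_reset
            return spikes

        spikes = lif_with_gamma_delay(np.full(10000, 120.0))
        print(len(spikes), "spikes in 1 s of constant drive")

    Replacing the gamma kernel with a hypo-exponential one, and comparing firing rates and Fano factors against the undelayed LIF, reproduces the kind of comparison the abstract reports.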

  12. Exploring a business to business recurring revenue framework for the delivery of software as a service through a cloud computing channel

    OpenAIRE

    Dempsey, David

    2015-01-01

    Cloud Computing (CC) is creating a new paradigm for the distribution of computer software applications. Within this context CC enabled Software as a Service (SaaS) fundamentally changes the revenue expectations and business model for the application software industry. This study considers the revenue expectation of the CC industry and its dependency on renewal subscriptions, while the study focuses on SaaS in the Business-to-Business (B2B) domain, delivered through the CC chann...

  13. DeepSpark: A Spark-Based Distributed Deep Learning Framework for Commodity Clusters

    OpenAIRE

    Kim, Hanjoo; Park, Jaehong; Jang, Jaehee; Yoon, Sungroh

    2016-01-01

    The increasing complexity of deep neural networks (DNNs) has made it challenging to exploit existing large-scale data processing pipelines for handling massive data and parameters involved in DNN training. Distributed computing platforms and GPGPU-based acceleration provide a mainstream solution to this computational challenge. In this paper, we propose DeepSpark, a distributed and parallel deep learning framework that exploits Apache Spark on commodity clusters. To support parallel operation...
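
    The core idea - training replicas on data partitions and periodically exchanging parameters - can be caricatured with plain PySpark. The following is an illustrative synchronous parameter-averaging sketch on a toy linear model; it is not DeepSpark's actual asynchronous protocol or API.

        import numpy as np
        from pyspark import SparkContext

        sc = SparkContext(appName="param-averaging-sketch")

        def sgd_on_partition(weights, rows, lr=0.1):
            # One local SGD pass over a partition (linear model, squared loss).
            w = weights.copy()
            for x, y in rows:
                w -= lr * (w.dot(x) - y) * x
            return w

        # Toy data: y = 2*x0 + 1, encoded with a constant bias feature.
        data = [(np.array([x, 1.0]), 2.0 * x + 1.0)
                for x in np.linspace(0, 1, 400)]
        rdd = sc.parallelize(data, numSlices=4).cache()

        w = np.zeros(2)
        for epoch in range(20):
            # Broadcast current weights, train a replica per partition, average.
            w_b = sc.broadcast(w)
            replicas = rdd.mapPartitions(
                lambda rows: [sgd_on_partition(w_b.value, list(rows))]).collect()
            w = np.mean(replicas, axis=0)

        print(w.round(2))  # approaches [2.0, 1.0]
        sc.stop()

    DeepSpark's contribution, per the abstract, is in relaxing exactly this synchronization barrier; the sketch only shows the baseline structure being improved upon.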

  14. Fermi Offline Software: The Pros and Cons of Reusing Free Software

    International Nuclear Information System (INIS)

    Kelly, H

    2012-01-01

    The Fermi Gamma-ray Observatory, including the Large Area Telescope (LAT), was launched June 11, 2008. We are a relatively small collaboration, with a maximum of 25 software developers in our heyday. Within the LAT collaboration we support Red Hat Linux and Windows, and are moving towards Mac OS as well, for offline simulation, reconstruction and analysis tools. Early on it was decided to use one software system to run our simulations as well as ultimately handle the event processing for real data. We leveraged many existing HEP external libraries (Geant4, Gaudi Framework, ROOT, CLHEP, CMT) to ease the burden on our developers. This strategy of re-using existing software helped us pull together our system quickly and test during our beam tests and data challenges. Now, after launch, we are in a new phase of the project, where we must move forward to support modern operating systems and compilers to get us through the life of the mission. This means upgrading our external libraries as well, which are not under our direct control. Meanwhile, it is crucial to our production system that we carefully orchestrate all upgrades to ensure stability. An additional hurdle is that our number of active developers has dwindled dramatically. Many of those left are Windows developers reliant on the Visual Studio development environment, while our user base and production system depend on our Linux distributions. There have been a number of lessons learned, with undoubtedly more to come.

  15. Creating a Framework for Applying OAIS to Distributed Digital Preservation

    DEFF Research Database (Denmark)

    Zierau, Eld; Schultz, Matt; Skinner, Katherine

    ... apparatuses in order to achieve the reliable persistence of digital content. Although the use of distribution is common within the preservation field, there is not yet an accepted definition for “distributed digital preservation”. As the preservation field has matured, the term “distributed digital preservation” has been applied to myriad preservation approaches. In the white paper we define DDP as the use of replication, independence, and coordination to address the known threats to digital content through time to ensure their accessibility. The preservation field relies heavily upon an international ..., delineating the various trends and practices that compel an elaboration upon OAIS, identifying the challenges ahead for advancing this endeavor, and putting forward a series of recommendations for making progress toward developing a formal framework for a DDP environment.

  16. The assessment of water loss from a damaged distribution pipe using the FEFLOW software

    Directory of Open Access Journals (Sweden)

    Iwanek Małgorzata

    2017-01-01

    Common reasons for real water loss in distribution systems are leakages caused by failures or pipe breakages. Depending on the intensity of the leakage from a damaged buried pipe, water can flow to the soil surface just after the failure occurs, much later, or never at all. Localizing the place where a pipe breakage occurred is relatively easy when water outflow occurs on the soil surface. The volume of lost water strongly depends on the time it takes to localize the place of the pipe breakage. The aim of this paper was to predict the volume of water lost between the moment a failure occurs and the moment of water outflow on the soil surface during a prospective failure in a distribution system. The basis of the analysis was a numerical simulation of a water pipe failure using the FEFLOW v. 5.3 software (Finite Element subsurface FLOW system) for a real middle-sized distribution system. Simulations were conducted for variants depending on pipe diameter (80÷200 mm) for the minimal and maximal hydraulic pressure head in the system (20.14 and 60.41 m H2O, respectively). The FEFLOW software application made it possible to select places in the water system where possible failures would be difficult to detect.

  17. Implementation of the ATLAS trigger within the ATLAS Multi-Threaded Software Framework AthenaMT

    CERN Document Server

    Wynne, Benjamin; The ATLAS collaboration

    2016-01-01

    We present an implementation of the ATLAS High Level Trigger that provides parallel execution of trigger algorithms within the ATLAS multi-threaded software framework, AthenaMT. This development will enable the ATLAS High Level Trigger to meet future challenges due to the evolution of computing hardware and upgrades of the Large Hadron Collider, LHC, and ATLAS Detector. During the LHC data-taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further, to up to 7.5 times the design value, in 2026 following LHC and ATLAS upgrades. This includes an upgrade of the ATLAS trigger architecture that will result in an increase in the High Level Trigger input rate by a factor of 4 to 10 compared to the current maximum rate of 100 kHz. The current ATLAS multiprocess framework, AthenaMP, manages a number of processes that process events independently, executing algorithms sequentially in each process. AthenaMT will provide a fully multi-threaded env...

  18. Residence time distribution software analysis. User's manual

    International Nuclear Information System (INIS)

    1996-01-01

    Radiotracer applications cover a wide range of industrial activities in chemical and metallurgical processes, water treatment, mineral processing, environmental protection and civil engineering. Experiment design, data acquisition, treatment and interpretation are the basic elements of tracer methodology. The application of radiotracers to determine the impulse response, i.e. the residence time distribution (RTD), as well as the technical conditions for conducting experiments in industry and in the environment, creates a need for data processing using special software. Important progress has been made during recent years in the preparation of software programs for data treatment and interpretation. The software package developed for industrial process analysis and diagnosis by the stimulus-response methods contains all the methods for data processing for radiotracer experiments
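
    The basic RTD computation such software performs is the normalization of a measured tracer response and the evaluation of its moments. A minimal illustrative sketch, with hypothetical names and synthetic data:

        import numpy as np

        def rtd_moments(t, c):
            """Normalize a tracer response C(t) to E(t); return (MRT, variance).

            E(t) = C(t) / integral(C dt); MRT = integral(t * E dt);
            variance = integral((t - MRT)^2 * E dt).
            """
            e = c / np.trapz(c, t)
            mrt = np.trapz(t * e, t)
            var = np.trapz((t - mrt) ** 2 * e, t)
            return mrt, var

        t = np.linspace(0, 60, 601)        # time, s
        c = t * np.exp(-t / 5.0)           # tanks-in-series-like response
        print(rtd_moments(t, c))           # MRT ~ 10 s, variance ~ 50 s^2

    The mean residence time and variance obtained this way feed directly into flow-model diagnosis (dead volume, bypassing, mixing regime).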

  19. BEX Mejora continua framework

    OpenAIRE

    García Ramírez, David

    2014-01-01

    Report on the implementation of a software system that supports the management and control of the entire framework required by a continuous improvement department (BEX, Business Excellence). Master thesis for the Free Software program.

  20. A CONCEPTUAL FRAMEWORK OF DISTRIBUTIVE JUSTICE IN ISLAMIC ECONOMICS

    Directory of Open Access Journals (Sweden)

    Shafinah Begum Abdul Rahim

    2015-06-01

    ... political, behavioural and social sciences, both mainstream and Islamic. Given its increasing relevance to the global village we share and the intensity of socio-economic problems invariably related to the distribution of resources amongst us, this work is aimed at adding value through a deeper understanding and appreciation of the justice placed by the Syariah in all domains of our economic lives. The existing works within this area appear to lean mostly towards redistributive mechanisms available in the revealed knowledge. Hence, a comprehensive analysis of the notion of distributive justice from the theoretical level translated into practical terms is expected to contribute significantly to policymakers committed to finding permanent solutions to economic problems, especially in the Muslim world. It is a modest yet serious attempt to bridge the gap between distributive justice in letter and in spirit as clearly ordained in the Holy Quran. The entire analysis is based on critical reviews and appraisals of all the relevant literature on distributive justice in Islamic Economics. The final product is a conceptual framework that can be used as a blueprint in establishing the notion of justice in the distribution of economic resources, i.e. income and wealth, as aspired to by the Syariah.

  1. New software of the control and data acquisition system for the Nuclotron internal target station

    International Nuclear Information System (INIS)

    Isupov, A.Yu.

    2012-01-01

    The control and data acquisition system for the Internal Target Station (ITS) of the Nuclotron (LHEP, JINR) is implemented. The new software is based on the ngdp framework under the Unix-like operating system FreeBSD, allowing easy network distribution of the on-line data collected from the ITS, as well as remote control of the internal target

  2. A Component-Oriented Programming for Embedded Mobile Robot Software

    Directory of Open Access Journals (Sweden)

    Safaai Deris

    2008-11-01

    Applying software reuse to many Embedded Real-Time (ERT) systems poses significant challenges to industrial software processes due to the resource-constrained and real-time requirements of the systems. The Autonomous Mobile Robot (AMR) system is a class of ERT systems and hence inherits the challenge of applying software reuse in general ERT systems. Furthermore, software reuse in AMR systems is challenged by diversity in terms of robot physical size and shape, environmental interaction and implementation platform. Thus, it is envisioned that component-based software engineering will be a suitable way to promote software reuse in AMR systems, with consideration of the general requirements to be self-contained, platform-independent and real-time predictable. A framework for component-oriented programming for AMR software development using the PECOS component model is proposed in this paper. The main features of this framework are: (1) use of a graphical representation for component definition and composition; (2) targeting the C language for optimal code generation on resource-constrained micro-controllers; and (3) minimal requirements for run-time support. Real-time implementation indicates that the PECOS component model together with the proposed framework is suitable for software development for resource-constrained embedded AMR systems.
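
    A component model of this kind couples self-contained components through declared ports and executes them under a scheduler. The sketch below illustrates the general idea in Python; the PECOS model itself targets C and graphical composition, and all names here are hypothetical.

        class Component:
            """Self-contained unit with data ports; a scheduler calls execute()."""

            def __init__(self, name):
                self.name = name
                self.ports = {}              # port name -> latest value

            def execute(self):               # overridden by concrete components
                raise NotImplementedError

        class DistanceSensor(Component):
            def execute(self):
                self.ports["distance"] = 0.42    # stub reading, meters

        class Controller(Component):
            def __init__(self, name, sensor):
                super().__init__(name)
                self.sensor = sensor             # composition via a connected port

            def execute(self):
                near = self.sensor.ports["distance"] < 0.5
                self.ports["speed"] = 0.0 if near else 1.0

        sensor = DistanceSensor("ir")
        ctrl = Controller("avoid", sensor)
        for component in (sensor, ctrl):         # static, time-triggered schedule
            component.execute()
        print(ctrl.ports["speed"])               # 0.0 -- obstacle close, stop

    Keeping all inter-component traffic on ports is what makes components individually replaceable and platform-independent, the reuse properties the abstract argues for.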

  3. Framework for Securing Mobile Software Agents

    OpenAIRE

    Mwakalinga, G Jeffy; Yngström, Louise

    2006-01-01

    Information systems are growing in size and complexity, making it infeasible for human administrators to manage them. The aim of this work is to study ways of securing and using mobile software agents to deter attackers, protect information systems, detect intrusions, automatically respond to intrusions and attacks, and provide recovery services to systems after attacks. Current systems provide intrusion detection, prevention, protection, response, and recovery services, but most of thes...

  4. Software for the LHCb experiment

    CERN Document Server

    Corti, Gloria; Belyaev, Ivan; Cattaneo, Marco; Charpentier, Philippe; Frank, Markus; Koppenburg, Patrick; Mato-Vila, P; Ranjard, Florence; Roiser, Stefan

    2006-01-01

    LHCb is an experiment for precision measurements of CP-violation and rare decays in B mesons at the LHC collider at CERN. The LHCb software development strategy follows an architecture-centric approach as a way of creating a resilient software framework that can withstand changes in requirements and technology over the expected long lifetime of the experiment. The software architecture, called GAUDI, supports event data processing applications that run in different processing environments ranging from the real-time high-level triggers in the online system to the final physics analysis performed by more than one hundred physicists. The major architectural design choices and the arguments that lead to these choices will be outlined. Object oriented technologies have been used throughout. Initially developed for the LHCb experiment, GAUDI has been adopted and extended by other experiments. Several iterations of the GAUDI software framework have been released and are now being used routinely by the physicists of...

  5. How Social Software Supports Cooperative Practices in a Globally Distributed Software Project

    DEFF Research Database (Denmark)

    Giuffrida, Rosalba; Dittrich, Yvonne

    2014-01-01

    In Global Software Development (GSD), the lack of face-to-face communication is a major challenge and effective computer-mediated practices are necessary. This paper analyzes cooperative practices supported by Social Software (SoSo) in a GSD student project. The empirical results show that the role of SoSo is to support informal communication, enabling social talks and metawork, both necessary for establishing and for maintaining effective coordination mechanisms, and thus successful cooperation.

  6. Software Process Improvement

    DEFF Research Database (Denmark)

    Kuhrmann, Marco; Diebold, Philipp; Münch, Jürgen

    2016-01-01

    Software process improvement (SPI) has been around for decades: frameworks are proposed, success factors are studied, and experiences have been reported. However, the sheer mass of concepts, approaches, and standards published over the years overwhelms practitioners as well as researchers. What is out ... to new specialized frameworks. New and specialized frameworks account for the majority of the contributions found (approx. 38%). Furthermore, we find a growing interest in success factors (approx. 16%) to aid companies in conducting SPI and in adapting agile principles and practices for SPI (approx. 10...

  7. Framework for Computer-Aided Evolution of Object-Oriented Designs

    NARCIS (Netherlands)

    Ciraci, S.; van den Broek, P.M.; Aksit, Mehmet

    2008-01-01

    In this paper, we describe a framework for the computer-aided evolution of the designs of object-oriented software systems. Evolution mechanisms are software structures that prepare software for certain types of evolution. The framework uses a database which holds the evolution mechanisms, modeled

  8. Special software for computing the special functions of wave catastrophes

    Directory of Open Access Journals (Sweden)

    Andrey S. Kryukovsky

    2015-01-01

    The method of ordinary differential equations in the context of calculating the special functions of wave catastrophes is considered. Complementary numerical methods and algorithms are described. The paper shows approaches to accelerating such calculations using the capabilities of modern computing systems. Methods for calculating the special functions of wave catastrophes are considered in the framework of parallel computing and distributed systems. The paper covers the development process of special software for calculating special functions, and questions of portability, extensibility and interoperability.

  9. A Systematic Mapping Study of Tools for Distributed Software Development Teams

    DEFF Research Database (Denmark)

    Tell, Paolo; Ali Babar, Muhammad

    Context: A wide variety of technologies have been developed to support Global Software Development (GSD). However, the information about the dozens of available solutions is quite diverse and scattered, making it difficult to form an overview able to identify common trends and unveil research gaps. Objective: The objective of this research is to systematically identify and classify a comprehensive list of the technologies that have been developed and/or used for supporting GSD teams. Method: This study has been undertaken as a Systematic Mapping Study (SMS). Our searches identified 1958 ... schemas for providing a framework that can help identify the categories that have attracted a significant amount of research and commercial effort, and the research areas where there are gaps to be filled. Conclusions: The findings show that whilst commercial and open source solutions are predominantly ...

  10. Federated software defined network operations for LHC experiments

    Science.gov (United States)

    Kim, Dongkyun; Byeon, Okhwan; Cho, Kihyeon

    2013-09-01

    The most well-known high-energy physics collaboration, the Large Hadron Collider (LHC), which is based on e-Science, has been facing several challenges presented by its extraordinary instruments in terms of the generation, distribution, and analysis of large amounts of scientific data. Currently, data distribution issues are being resolved by adopting an advanced Internet technology called software defined networking (SDN). Stable SDN operations and management are required to keep the federated LHC data distribution networks reliable. Therefore, in this paper, an SDN operation architecture based on the distributed virtual network operations center (DvNOC) is proposed to enable LHC researchers to assume full control of their own global end-to-end data dissemination. This may achieve enhanced data delivery performance based on data traffic offloading with delay variation. The evaluation results indicate that the overall end-to-end data delivery performance can be improved over multi-domain SDN environments based on the proposed federated SDN/DvNOC operation framework.

  11. Model-based software process improvement

    Science.gov (United States)

    Zettervall, Brenda T.

    1994-01-01

    The activities of a field test site for the Software Engineering Institute's software process definition project are discussed. Products tested included the improvement model itself, descriptive modeling techniques, the CMM level 2 framework document, and the use of process definition guidelines and templates. The software process improvement model represents a five stage cyclic approach for organizational process improvement. The cycles consist of the initiating, diagnosing, establishing, acting, and leveraging phases.

  12. An Application of a Game Development Framework in Higher Education

    Directory of Open Access Journals (Sweden)

    Alf Inge Wang

    2009-01-01

    This paper describes how a game development framework was used as a learning aid in a software engineering course. Games can be used within higher education in various ways to promote student participation, enable variation in how lectures are taught, and improve student interest. In this paper, we describe a case study at the Norwegian University of Science and Technology (NTNU) where a game development framework was applied to make students learn software architecture by developing a computer game. We provide a model for how game development frameworks can be integrated with a software engineering or computer science course. We describe important requirements to consider when choosing a game development framework for a course and an evaluation of four frameworks based on these requirements. Further, we describe some extensions we made to the existing game development framework to let the students focus more on software architectural issues than on technical implementation issues. Finally, we describe a case study of how a game development framework was integrated in a software architecture course and the experiences from doing so.

  13. Understanding Green Software Development: A Conceptual Framework

    NARCIS (Netherlands)

    Ardito, Luca; Procaccianti, Giuseppe; Torchiano, Marco; Vetrò, Antonio

    2015-01-01

    The energy efficiency of IT has become one of the hottest topics in the last few years. The problem has been typically addressed by hardware manufacturers and designers, but recently the attention of industry and academia has shifted to the role of software for IT sustainability. Writing

  14. A dynamic system for ATLAS software installation on OSG grid sites

    International Nuclear Information System (INIS)

    Zhao, X; Maeno, T; Wenaus, T; Leuhring, F; Youssef, S; Brunelle, J; De Salvo, A; Thompson, A S

    2010-01-01

    A dynamic and reliable system for installing the ATLAS software releases on Grid sites is crucial to guarantee the timely and smooth start of ATLAS production and reduce its failure rate. In this paper, we discuss the issues encountered in the previous software installation system, and introduce the new approach, which is built upon the new development in the areas of the ATLAS workload management system (PanDA), and software package management system (pacman). It is also designed to integrate with the EGEE ATLAS software installation framework. In the new system, ATLAS software releases are packaged as pacball, a uniquely identifiable and reproducible self-installing data file. The distribution of pacballs to remote sites is managed by ATLAS data management system (DQ2) and PanDA server. The installation on remote sites is automatically triggered by the PanDA pilot jobs. The installation job payload connects to a central ATLAS software installation portal, making the information of installation status easily accessible across OSG and EGEE Grids. The issues encountered in running the new system in production, and our future plan for improvement, will also be discussed.

  15. Effectiveness of Software Quality Assurance in Offshore Development Enterprises in Sri Lanka

    OpenAIRE

    Malinda G. Sirisena

    2014-01-01

    The aim of this research is to evaluate the effectiveness of the software quality assurance approaches of Sri Lankan offshore software development organizations, and to propose a framework which could be used across all offshore software development organizations. An empirical study was conducted using a framework derived from popular software quality evaluation models. The research instrument employed was a questionnaire survey among thirty-seven Sri Lankan registered offshore software develop...

  16. Stimulating Creativity Through Opportunistic Software Development

    NARCIS (Netherlands)

    Z. Obrenovic; D. Gasevic; A. P. W. Eliëns (Anton)

    2008-01-01

    Using opportunistic software development principles in computer engineering education encourages students to be creative and to develop solutions that cross the boundaries of diverse technologies. A framework for opportunistic software development education helps to create a space in

  17. ACTS: from ATLAS software towards a common track reconstruction software

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00349786; The ATLAS collaboration; Salzburger, Andreas; Kiehn, Moritz; Hrdinka, Julia; Calace, Noemi

    2017-01-01

    Reconstruction of charged particles' trajectories is a crucial task for most particle physics experiments. The high instantaneous luminosity achieved at the LHC leads to a high number of proton-proton collisions per bunch crossing, which has put the track reconstruction software of the LHC experiments through a thorough test. Preserving track reconstruction performance under increasingly difficult experimental conditions, while keeping the usage of computational resources at a reasonable level, is an inherent problem for many HEP experiments. Exploiting concurrent algorithms and using multivariate techniques for track identification are the primary strategies to achieve that goal. Starting from current ATLAS software, the ACTS project aims to encapsulate track reconstruction software into a generic, framework- and experiment-independent software package. It provides a set of high-level algorithms and data structures for performing track reconstruction tasks as well as fast track simulation. The software is de...

  18. THE LABVIEW RADE FRAMEWORK DISTRIBUTED ARCHITECTURE

    CERN Document Server

    Andreassen, O O; Raimondo, A; Rijllart, A; Shaipov, V; Sorokoletov, R

    2011-01-01

    For accelerator GUI applications there is a need for a rapid development environment to create expert tools or to prototype operator applications. Typically a variety of tools are being used, such as Matlab or Excel, but their scope is limited, either because of their low flexibility or limited integration into the accelerator infrastructure. In addition, having several tools obliges users to deal with different programming techniques and data structures. We have addressed these limitations by using LabVIEW, extending it with interfaces to C++ and Java. In this way it fulfils requirements of ease of use, flexibility and connectivity, which makes up what we refer to as the RADE framework. Recent application requirements could only be met by implementing a distributed architecture with multiple servers running multiple services. This brought us the additional advantage to implement redundant services, to increase the availability and to make transparent updates. We will present two applications requiring high a...

  19. Framework for teleoperated microassembly systems

    Science.gov (United States)

    Reinhart, Gunther; Anton, Oliver; Ehrenstrasser, Michael; Patron, Christian; Petzold, Bernd

    2002-02-01

    Manual assembly of minute parts is currently done using simple devices such as tweezers or magnifying glasses. The operator therefore requires a great deal of concentration for successful assembly. Teleoperated micro-assembly systems are a promising method for overcoming the scaling barrier. However, most of today's telepresence systems are based on proprietary and one-of-a-kind solutions. Frameworks which supply the basic functions of a telepresence system, e.g. to establish flexible communication links that depend on bandwidth requirements or to synchronize distributed components, are not currently available. Large amounts of time and money have to be invested in order to create task-specific teleoperated micro-assembly systems from scratch. For this reason, an object-oriented framework for telepresence systems that is based on CORBA as a common middleware was developed at the Institute for Machine Tools and Industrial Management (iwb). The framework is based on a distributed architectural concept and is realized in C++. External hardware components such as haptic, video or sensor devices are coupled to the system by means of defined software interfaces. In this case, the special requirements of teleoperation systems have to be considered, e.g. dynamic parameter settings for sensors during operation. Consequently, an architectural concept based on logical sensors has been developed to achieve maximum flexibility and to enable a task-oriented integration of hardware components.

  20. Recommendations for institutional policy and network regulatory frameworks towards distributed generation in EU Member States

    International Nuclear Information System (INIS)

    Ten Donkelaar, M.; Van Oostvoorn, F.

    2005-01-01

    Recommendations regarding the development of regulatory frameworks and institutional policies towards an optimal integration of distributed generation (DG) into electricity networks are presented. These recommendations are based on findings from a benchmarking study conducted in the framework of the ENIRDG-net project. The aim of the benchmarking exercise was to identify examples of well-defined pro-DG policies, with clear targets and adequate implementation mechanisms. In this study an adequate pro-DG policy is defined on the basis of a level playing field, a situation where distributed and centralised generation receive equal incentives and have equal access to the liberalised markets for electricity. The benchmark study includes the results of a similar study conducted in the framework of the SUSTELNET project. When comparing the results a certain discrepancy can be noticed between the actual regulation and policy in a number of countries, the medium to long-term targets and the ideal situation described by the level playing field objective. To overcome this discrepancy, a number of recommendations have been drafted for future policy and regulation towards distributed generation

  1. Gazing through Windows at component software development

    International Nuclear Information System (INIS)

    Foster, David

    1996-01-01

    What has been presented here is an overview of the architectural plan for distributed computing by Microsoft. The business opportunity is tied to the rapid growth of consumer computing, which is happening now and will continue far into the future. Being able to create a computing environment that is logically centralized, through the use of interface standards, yet physically distributed, where anyone can provide services, is a major challenge. Managing complexity and creating a consistent framework through the use of componentware technology is paramount to its success. The ability to scale distributed processing, manage diverse groups involved in data analysis and facilitate collaboration at all levels are the business processes of particular interest to the HEP community. In realizing the business opportunity they see, Microsoft and others will help solve many of the basic problems facing HEP in the next ten years. By closely tracking the software developments and investing in understanding the technologies presented here, HEP will gain great benefit from commodity computing. (author)

  2. The Principals and Practice of Distributed High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The potential of Distributed Processing Systems to deliver computing capabilities with qualities ranging from high availability and reliability to easy expansion in functionality and capacity was recognized and formalized in the 1970s. For more than three decades these principles of Distributed Computing have guided the development of the HTCondor resource and job management system. The widely adopted suite of software tools offered by HTCondor is based on novel distributed computing technologies and is driven by the evolving needs of High Throughput scientific applications. We will review the principles that underpin our work, the distributed computing frameworks and technologies we developed, and the lessons we learned from delivering effective and dependable software tools in an ever-changing landscape of computing technologies and needs that range today from a desktop computer to tens of thousands of cores offered by commercial clouds. About the speaker: Miron Livny received a B.Sc. degree in Physics and Mat...

  3. A Transparent Framework for Evaluating the Effects of DGPV on Distribution System Costs

    Energy Technology Data Exchange (ETDEWEB)

    Horowitz, Kelsey A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Mather, Barry A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ding, Fei [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Denholm, Paul L [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Palmintier, Bryan S [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-04-02

    Assessing the costs and benefits of distributed photovoltaic generators (DGPV) to the power system and electricity consumers is key to determining appropriate policies, tariff designs, and power system upgrades for the modern grid. We advance understanding of this topic by providing a transparent framework, terminology, and data set for evaluating distribution system upgrade costs, line losses, and interconnection costs as a function of DGPV penetration level.

  4. Stimulating creativity through opportunistic software development

    NARCIS (Netherlands)

    Obrenovic, Z.; Gasevic, D.; Eliëns, A.

    2008-01-01

    Using opportunistic software development principles in computer engineering education encourages students to be creative and to develop solutions that cross the boundaries of diverse technologies. A framework for opportunistic software development education helps to create a space in which students

  5. The Dynamics of Agile Practices for Safety-Critical Software Development

    DEFF Research Database (Denmark)

    Nielsen, Peter Axel; Tordrup Heeager, Lise

    2017-01-01

    This short paper reports from a case study of the agile development of safety-critical software. It utilizes a framework of dynamic relationships between agile practices with the purpose of demonstrating the utility of the framework for understanding a case in its context, and it shows significant dynamics. The study concludes by pointing out the further research on the framework that is required before it can be used to manage the agile development of safety-critical software.

  6. Programming model for distributed intelligent systems

    Science.gov (United States)

    Sztipanovits, J.; Biegl, C.; Karsai, G.; Bogunovic, N.; Purves, B.; Williams, R.; Christiansen, T.

    1988-01-01

    A programming model and architecture which was developed for the design and implementation of complex, heterogeneous measurement and control systems is described. The Multigraph Architecture integrates artificial intelligence techniques with conventional software technologies, offers a unified framework for distributed and shared memory based parallel computational models and supports multiple programming paradigms. The system can be implemented on different hardware architectures and can be adapted to strongly different applications.

  7. Progressive retry for software error recovery in distributed systems

    Science.gov (United States)

    Wang, Yi-Min; Huang, Yennun; Fuchs, W. K.

    1993-01-01

    In this paper, we describe a method of execution retry for bypassing software errors based on checkpointing, rollback, message reordering and replaying. We demonstrate how rollback techniques, previously developed for transient hardware failure recovery, can also be used to recover from software faults by exploiting message reordering to bypass software errors. Our approach intentionally increases the degree of nondeterminism and the scope of rollback when a previous retry fails. Examples from our experience with telecommunications software systems illustrate the benefits of the scheme.
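
    The escalation idea - deterministic replay first, then deliberately widening nondeterminism and rollback scope - can be sketched compactly. The following is a hypothetical illustration, not the authors' telecommunications implementation; the process/message interface is assumed.

        import copy
        import random

        def progressive_retry(process, checkpoint, messages, max_level=3):
            """Retry from a checkpoint, widening nondeterminism at each level.

            Level 1 replays messages in their original order; higher levels
            reorder them to perturb the execution and bypass a transient
            software error that a deterministic replay would reproduce.
            """
            for level in range(1, max_level + 1):
                state = copy.deepcopy(checkpoint)   # roll back to the checkpoint
                replay = list(messages)
                if level > 1:
                    random.shuffle(replay)          # widen nondeterminism
                try:
                    for msg in replay:
                        process(state, msg)         # re-deliver each message
                    return state                    # retry succeeded
                except Exception:
                    continue                        # escalate to the next level
            raise RuntimeError("retry exhausted; escalate the rollback scope")

    In a real system the escalation would eventually extend the rollback beyond one process, which is the "scope of rollback" the abstract refers to.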

  8. Developing frameworks for protocol implementation

    NARCIS (Netherlands)

    de Barros Barbosa, C.; de barros Barbosa, C.; Ferreira Pires, Luis

    1999-01-01

    This paper presents a method to develop frameworks for protocol implementation. Frameworks are software structures developed for a specific application domain, which can be reused in the implementation of various different concrete systems in this domain. The use of frameworks support a protocol

  9. Gammasphere software development

    International Nuclear Information System (INIS)

    Piercey, R.B.

    1994-01-01

    This report describes the activities of the nuclear physics group at Mississippi State University which were performed during 1993. Significant progress has been made in the focus areas: chairing the Gammasphere Software Working Group (SWG); assisting with the porting and enhancement of the ORNL UPAK histogramming software package; and developing standard formats for Gammasphere data products. In addition, they have established a new public ftp archive to distribute software and software development tools and information

  10. Decision criteria for software component sourcing: steps towards a framework

    NARCIS (Netherlands)

    Kusters, R.J.; Pouwelse, L.; Martin, H.; Trienekens, J.J.M.; Hammoudi, Sl.; Maciaszek, L.; Missikoff, M.M.; Camp, O.; Cordeiro, J.

    2016-01-01

    Software developing organizations nowadays have a wide choice when it comes to sourcing software components. This choice ranges from developing or adapting in-house developed components via buying closed source components to utilizing open source components. This study seeks to determine criteria

  11. The ngdp framework for data acquisition systems

    International Nuclear Information System (INIS)

    Isupov, A.Yu.

    2010-01-01

    The ngdp framework is intended to provide a base for data acquisition (DAQ) system software. Its key design features are: high modularity and scalability; use of the kernel context (particularly kernel threads) of the operating system (OS), which avoids preemptive scheduling and unnecessary memory-to-memory copying between contexts; and elimination of intermediate data storage on media slower than main memory, such as hard disks. Having these properties, ngdp is well suited to organizing and managing data transport and processing for the needs of essentially distributed DAQ systems

  12. A Framework for the Generation and Dissemination of Drop Size Distribution (DSD) Characteristics Using Multiple Platforms

    Science.gov (United States)

    Wolf, David B.; Tokay, Ali; Petersen, Walt; Williams, Christopher; Gatlin, Patrick; Wingo, Mathew

    2010-01-01

    Proper characterization of the precipitation drop size distribution (DSD) is integral to providing realistic and accurate space- and ground-based precipitation retrievals. Current technology allows for the development of DSD products from a variety of platforms, including disdrometers, vertical profilers and dual-polarization radars. Up to now, however, the dissemination or availability of such products has been limited to individual sites and/or field campaigns, in a variety of formats, often using inconsistent algorithms for computing the integral DSD parameters, such as the median- and mass-weighted drop diameter, total number concentration, liquid water content, rain rate, etc. We propose to develop a framework for the generation and dissemination of DSD characteristic products using a unified structure, capable of handling the myriad collection of disdrometers, profilers, and dual-polarization radar data currently available and to be collected during several upcoming GPM Ground Validation field campaigns. This DSD super-structure paradigm is an adaptation of the radar super-structure developed for NASA's Radar Software Library (RSL) and RSL_in_IDL. The goal is to provide the DSD products in a well-documented format, most likely NetCDF, along with tools to ingest and analyze the products. In so doing, we can develop a robust archive of DSD products from multiple sites and platforms, which should greatly benefit the development and validation of precipitation retrieval algorithms for GPM and other precipitation missions. An outline of this proposed framework is provided, as well as a discussion of the algorithms used to calculate the DSD parameters.
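
    As a worked illustration of the integral DSD parameters listed above, the sketch below computes total number concentration, liquid water content, mass-weighted mean diameter and rain rate from binned disdrometer data; the bin layout is invented and the Atlas et al. (1973) fall-speed relation is an assumption, not the framework's own algorithm.

        import numpy as np

        # Hypothetical disdrometer bins: center diameter D (mm), width dD (mm),
        # and measured concentration N(D) (m^-3 mm^-1).
        D  = np.array([0.3, 0.6, 1.0, 1.6, 2.4, 3.4])
        dD = np.array([0.2, 0.3, 0.5, 0.7, 0.9, 1.1])
        N  = np.array([800.0, 500.0, 200.0, 60.0, 12.0, 1.5])

        v = 9.65 - 10.3 * np.exp(-0.6 * D)    # fall speed (m/s), Atlas et al. (1973)

        Nt  = np.sum(N * dD)                                     # total concentration (m^-3)
        LWC = (np.pi / 6.0) * 1e-3 * np.sum(N * D**3 * dD)       # liquid water content (g/m^3)
        Dm  = np.sum(N * D**4 * dD) / np.sum(N * D**3 * dD)      # mass-weighted diameter (mm)
        R   = 6.0 * np.pi * 1e-4 * np.sum(N * D**3 * v * dD)     # rain rate (mm/h)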

  13. A Case Study of Coordination in Distributed Agile Software Development

    Science.gov (United States)

    Hole, Steinar; Moe, Nils Brede

    Global Software Development (GSD) has gained significant popularity as an emerging paradigm. Companies also show interest in applying agile approaches in distributed development to combine the advantages of both approaches. However, in their most radical forms, agile and GSD can be placed at opposite ends of a plan-based/agile spectrum because of how work is coordinated. We describe how three GSD projects applying agile methods coordinate their work. We found that trust is needed to reduce the need for standardization and direct supervision when coordinating work in a GSD project, and that electronic chatting supports mutual adjustment. Further, co-location and modularization mitigate communication problems, enable agility in at least part of a GSD project, and render the implementation of Scrum of Scrums possible.

  14. Software Quality Assurance and Controls Standard

    Science.gov (United States)

    2010-04-27

    Software quality assurance provides assurance that work products and processes comply with predefined provisions and plans. According to International Standard (IS) 12207, which defines 44 software life cycle (SLC) processes, the emphasis moves from a document (plan) focus to a process focus, in alignment with the IS 12207 framework standard. Related resources include IEEE books and publications, the IEEE Software and Systems Engineering curriculum, ABET, the Certified Software Development Professional credential, and ISO/IEC standards

  15. A flexible object-based software framework for modeling complex systems with interacting natural and societal processes.

    Energy Technology Data Exchange (ETDEWEB)

    Christiansen, J. H.

    2000-06-15

    The Dynamic Information Architecture System (DIAS) is a flexible, extensible, object-based framework for developing and maintaining complex multidisciplinary simulations. The DIAS infrastructure makes it feasible to build and manipulate complex simulation scenarios in which many thousands of objects can interact via dozens to hundreds of concurrent dynamic processes. The flexibility and extensibility of the DIAS software infrastructure stem mainly from (1) the abstraction of object behaviors, (2) the encapsulation and formalization of model functionality, and (3) the mutability of domain object contents. DIAS simulation objects are inherently capable of highly flexible and heterogeneous spatial realizations. Geospatial graphical representation of DIAS simulation objects is addressed via the GeoViewer, an object-based GIS toolkit application developed at ANL. DIAS simulation capabilities have been extended by inclusion of societal process models generated by the Framework for Addressing Cooperative Extended Transactions (FACET), another object-based framework developed at Argonne National Laboratory. By using FACET models to implement societal behaviors of individuals and organizations within larger DIAS-based natural systems simulations, it has become possible to conveniently address a broad range of issues involving interaction and feedback among natural and societal processes. Example DIAS application areas discussed in this paper include a dynamic virtual oceanic environment, detailed simulation of clinical, physiological, and logistical aspects of health care delivery, and studies of agricultural sustainability of urban centers under environmental stress in ancient Mesopotamia.

  16. Design and development of a methodological and instrumental framework to support software evaluation

    OpenAIRE

    Angeleri, Paula; Sorgen, Amos; Bidone, Pablo; Fava, Agustín; Grasso, Walter

    2014-01-01

    This article presents a joint research project between academia and industry. The goal of the project was to create a comprehensive framework for software product evaluation that takes into account all the factors influencing the quality evaluation process and includes at least one evaluation method, a quality model, and tools and guides that support the evaluation process. The article describes the Framework, its ...

  17. A Scalable Distribution Network Risk Evaluation Framework via Symbolic Dynamics

    Science.gov (United States)

    Yuan, Kai; Liu, Jian; Liu, Kaipei; Tan, Tianyuan

    2015-01-01

    Background: Evaluations of electric power distribution network risks must address the problems of incomplete information and changing dynamics. A risk evaluation framework should be adaptable to a specific situation and an evolving understanding of risk. Methods: This study investigates the use of symbolic dynamics to abstract raw data. After introducing symbolic dynamics operators, Kolmogorov-Sinai entropy and Kullback-Leibler relative entropy are used to quantitatively evaluate relationships between risk sub-factors and main factors. For layered risk indicators, where the factors are categorized into four main factors – device, structure, load and special operation – a merging algorithm using the operators to calculate the risk factors is discussed. Finally, an example from the Sanya Power Company is given to demonstrate the feasibility of the proposed method. Conclusion: Distribution networks are highly exposed and subject to many influences. The topology and the operating mode of a distribution network are dynamic, so faults and their consequences are probabilistic. PMID:25789859
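
    A small sketch of the entropy machinery, assuming the raw measurements of a risk sub-factor and a main factor are first coarse-grained into symbols; the quantile-based symbolization below is an illustrative choice, not the paper's operators.

        import numpy as np

        def symbolize(series, n_symbols=4):
            # Partition raw measurements into equiprobable symbols (quantile bins).
            edges = np.quantile(series, np.linspace(0, 1, n_symbols + 1)[1:-1])
            return np.digitize(series, edges)

        def kl_divergence(p_sym, q_sym, n_symbols=4, eps=1e-12):
            # Kullback-Leibler relative entropy between two symbol distributions.
            p = np.bincount(p_sym, minlength=n_symbols) / len(p_sym)
            q = np.bincount(q_sym, minlength=n_symbols) / len(q_sym)
            return float(np.sum(p * np.log((p + eps) / (q + eps))))

        rng = np.random.default_rng(0)
        sub_factor, main_factor = rng.normal(size=500), rng.normal(size=500)
        print(kl_divergence(symbolize(sub_factor), symbolize(main_factor)))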

  18. Experience Supporting the Integration of LHC Experiments Software Framework with the LCG Middleware

    CERN Document Server

    Santinelli, Roberto

    2006-01-01

    The LHC experiments are currently preparing for data acquisition in 2007 and, because of the large amount of required computing and storage resources, they decided to embrace the grid paradigm. The LHC Computing Project (LCG) provides and operates a computing infrastructure suitable for data handling, Monte Carlo production and analysis. While LCG offers a set of high-level services, intended to be generic enough to accommodate the needs of different Virtual Organizations, the LHC experiments' software frameworks and applications are very specific and focused on the computing and data models. The LCG Experiment Integration Support team works in close contact with the experiments, the middleware developers and the LCG certification and operations teams to integrate the underlying grid middleware with the experiment-specific components. The strategic position between the experiments and the middleware suppliers allows the EIS team to play a key role at the communications level between the customers and the service provi...

  19. Capataz: a framework for distributing algorithms via the World Wide Web

    Directory of Open Access Journals (Sweden)

    Gonzalo J. Martínez

    2015-08-01

    Full Text Available In recent years, some scientists have embraced the distributed computing paradigm. As experiments and simulations demand ever more computing power, coordinating the efforts of many different processors is often the only reasonable resort. We developed an open-source distributed computing framework based on web technologies, and named it Capataz. It acts as an HTTP server to which web browsers running on many different devices can connect to contribute to the execution of distributed algorithms written in JavaScript. Capataz takes advantage of architectures with many cores using web workers. This paper presents an improvement in Capataz's usability and why it was needed. In previous experiments, the total time of distributed algorithms proved to be susceptible to changes in the execution time of the jobs. The system now adapts by bundling jobs together if they are too simple. The computational experiment to test the solution is a brute-force estimation of pi. The benchmark results show that by bundling jobs, the overall performance is greatly increased.
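
    The bundling adaptation could look like the following greedy sketch (Python for illustration; Capataz itself is JavaScript and its actual heuristic is not given in the record): jobs whose estimated cost is too small are packed together until a bundle is worth one HTTP round trip.

        def bundle_jobs(jobs, estimate, min_bundle_cost):
            # Pack cheap jobs into bundles so HTTP round-trip overhead is amortized.
            bundles, current, cost = [], [], 0.0
            for job in jobs:
                current.append(job)
                cost += estimate(job)
                if cost >= min_bundle_cost:
                    bundles.append(current)
                    current, cost = [], 0.0
            if current:
                bundles.append(current)
            return bundles

        # e.g. 1000 tiny pi-estimation jobs of ~2 ms each, bundled into ~100 ms units
        print(len(bundle_jobs(range(1000), lambda job: 0.002, 0.1)))   # -> 20 bundles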

  20. Deterministic Design Optimization of Structures in OpenMDAO Framework

    Science.gov (United States)

    Coroneos, Rula M.; Pai, Shantaram S.

    2012-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Several such algorithms have been implemented in the OpenMDAO framework developed at NASA Glenn Research Center (GRC). OpenMDAO is an open-source engineering analysis framework, written in Python, for analyzing and solving Multi-Disciplinary Analysis and Optimization (MDAO) problems. It provides a number of solvers and optimizers, referred to as components and drivers, which users can leverage to build new tools and processes quickly and efficiently. Users may download, use, modify, and distribute the OpenMDAO software at no cost. This paper summarizes the process involved in analyzing and optimizing structural components by utilizing the framework's structural solvers and several gradient-based optimizers along with a multi-objective genetic algorithm. For comparison purposes, the same structural components were analyzed and optimized using CometBoards, a NASA GRC-developed code. The reliability and efficiency of the OpenMDAO framework were compared and are reported here.
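
    The sketch below illustrates the kind of gradient-based structural optimization described, though with SciPy's SLSQP rather than OpenMDAO's own component/driver API; the two-bar geometry, load and material numbers are invented illustration values.

        import numpy as np
        from scipy.optimize import minimize

        # Minimize the mass of a two-bar structure under a stress limit.
        LENGTH, LOAD, SIGMA_MAX, RHO = 1.0, 1.0e4, 2.5e8, 7850.0   # m, N, Pa, kg/m^3

        def mass(areas):
            return RHO * LENGTH * np.sum(areas)

        def stress_margin(areas):
            # Stress in each bar = its load share / cross-sectional area; must stay >= 0.
            return SIGMA_MAX - (LOAD / 2.0) / areas

        result = minimize(mass, x0=np.array([1e-3, 1e-3]), method="SLSQP",
                          bounds=[(1e-6, None)] * 2,
                          constraints=[{"type": "ineq", "fun": stress_margin}])
        print(result.x)   # optimal cross-sectional areas (m^2)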

  1. A pattern framework for software quality assessment and tradeoff analysis

    NARCIS (Netherlands)

    Folmer, Eelke; Boscht, Jan

    The earliest design decisions often have a significant impact on software quality and are the most costly to revoke. One of the challenges in architecture design is to reduce the frequency of retrofit problems in software designs; not being able to improve the quality of a system cost effectively, a

  2. Conversion and distribution of bibliographic information for further use on microcomputers with database software such as CDS/ISIS

    International Nuclear Information System (INIS)

    Nieuwenhuysen, P.; Besemer, H.

    1990-05-01

    This paper describes methods to work on microcomputers with data obtained from bibliographic and related databases distributed by online data banks, on CD-ROM or on tape. Also, we mention some user reactions to this technique. We list the different types of software needed to perform these services. Afterwards, we report about our development of software, to convert data so that they can be entered into UNESCO's program named CDS/ISIS (Version 2.3) for local database management on IBM microcomputers or compatibles; this software allows the preservation of the structure of the source data in records, fields, subfields and field occurrences. (author). 10 refs, 1 fig

  3. Software architecture 2

    CERN Document Server

    Oussalah, Mourad Chabanne

    2014-01-01

    Over the past 20 years, software architectures have significantly contributed to the development of complex and distributed systems. Nowadays, it is recognized that one of the critical problems in the design and development of any complex software system is its architecture, i.e. the organization of its architectural elements. Software Architecture presents the software architecture paradigms based on objects, components, services and models, as well as the various architectural techniques and methods, the analysis of architectural qualities, models of representation of architectural templa

  4. Software architecture 1

    CERN Document Server

    Oussalah, Mourad Chabane

    2014-01-01

    Over the past 20 years, software architectures have significantly contributed to the development of complex and distributed systems. Nowadays, it is recognized that one of the critical problems in the design and development of any complex software system is its architecture, i.e. the organization of its architectural elements. Software Architecture presents the software architecture paradigms based on objects, components, services and models, as well as the various architectural techniques and methods, the analysis of architectural qualities, models of representation of architectural template

  5. Software Engineering Environment for Component-based Design of Embedded Software

    DEFF Research Database (Denmark)

    Guo, Yu

    2010-01-01

    as well as application models in a computer-aided software engineering environment. Furthermore, component models have been realized following carefully developed design patterns, which provide for an efficient and reusable implementation. The components have been ultimately implemented as prefabricated...... executable objects that can be linked together into an executable application. The development of embedded software using the COMDES framework is supported by the associated integrated engineering environment consisting of a number of tools, which support basic functionalities, such as system modelling......, validation, and executable code generation for specific hardware platforms. Developing such an environment and the associated tools is a highly complex engineering task. Therefore, this thesis has investigated key design issues and analysed existing platforms supporting model-driven software development...

  6. Hybrid molecular–continuum methods: From prototypes to coupling software

    KAUST Repository

    Neumann, Philipp; Eckhardt, Wolfgang; Bungartz, Hans-Joachim

    2014-01-01

    In this contribution, we review software requirements in hybrid molecular-continuum simulations. For this purpose, we analyze a prototype implementation which combines two frameworks-the Molecular Dynamics framework MarDyn and the framework Peano

  7. Madagascar: open-source software project for multidimensional data analysis and reproducible computational experiments

    Directory of Open Access Journals (Sweden)

    Sergey Fomel

    2013-11-01

    Full Text Available The Madagascar software package is designed for analysis of large-scale multidimensional data, such as those occurring in exploration geophysics. Madagascar provides a framework for reproducible research. By “reproducible research” we refer to the discipline of attaching software codes and data to computational results reported in publications. The package contains a collection of (a) computational modules, (b) data-processing scripts, and (c) research papers. Madagascar is distributed on SourceForge under a GPL v2 license https://sourceforge.net/projects/rsf/. By October 2013, more than 70 people from different organizations around the world had contributed to the project, with increasing year-to-year activity. The Madagascar website is http://www.ahay.org/.

  8. Assume-Guarantee Verification of Software Components in SOFA 2 Framework

    Czech Academy of Sciences Publication Activity Database

    Parízek, P.; Plášil, František

    2010-01-01

    Vol. 4, No. 3 (2010), p. 210-221 ISSN 1751-8806 R&D Projects: GA AV ČR 1ET400300504 Grant - others: GA MŠk(CZ) 7E08004 Institutional research plan: CEZ:AV0Z10300504 Keywords: components * software verification * model checking Subject RIV: JC - Computer Hardware; Software Impact factor: 0.671, year: 2010

  9. Orthographic Software Modelling: A Novel Approach to View-Based Software Engineering

    Science.gov (United States)

    Atkinson, Colin

    The need to support multiple views of complex software architectures, each capturing a different aspect of the system under development, has been recognized for a long time. Even the very first object-oriented analysis/design methods, such as the Booch method and OMT, supported a number of different diagram types (e.g. structural, behavioral, operational), and subsequent methods such as Fusion, Kruchten's 4+1 views and the Rational Unified Process (RUP) have added many more views over time. Today's leading modeling languages, such as UML and SysML, are also oriented towards supporting different views (i.e. diagram types), each able to portray a different facet of a system's architecture. More recently, so-called enterprise architecture frameworks such as the Zachman Framework, TOGAF and RM-ODP have become popular. These add a whole set of new non-functional views to the views typically emphasized in traditional software engineering environments.

  10. The Ragnarok Architectural Software Configuration Management Model

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    1999-01-01

    The architecture is the fundamental framework for designing and implementing large-scale software, and the ability to trace and control its evolution is essential. However, many traditional software configuration management tools view 'software' merely as a set of files, not as an architecture.... This introduces an unfortunate impedance mismatch between the design domain (architecture level) and the configuration management domain (file level). This paper presents a software configuration management model that allows tight version control and configuration management of the architecture of a software system...

  11. Global Software Engineering

    DEFF Research Database (Denmark)

    Ebert, Christof; Kuhrmann, Marco; Prikladnicki, Rafael

    2016-01-01

    Professional software products and IT systems and services today are developed mostly by globally distributed teams, projects, and companies. Successfully orchestrating Global Software Engineering (GSE) has become the major success factor both for organizations and practitioners. Yet, more than...... and experience reported at the IEEE International Conference on Software Engineering (ICGSE) series. The outcomes of our analysis show GSE as a field highly attached to industry and, thus, a considerable share of ICGSE papers address the transfer of Software Engineering concepts and solutions to the global stage...

  12. Bottlenecks in Software Defect Prediction Implementation in Industrial Projects

    OpenAIRE

    Hryszko Jarosław; Madeyski Lech

    2015-01-01

    Case studies focused on software defect prediction in real, industrial software development projects are extremely rare. We report on a dedicated R&D project established in cooperation between Wroclaw University of Technology and one of the leading automotive software development companies to research the possibilities of introducing software defect prediction using an open-source, extensible software measurement and defect prediction framework called DePress (Defect Prediction in Software Syst...

  13. Architecture-driven Migration of Legacy Systems to Cloud-enabled Software

    DEFF Research Database (Denmark)

    Ahmad, Aakash; Babar, Muhammad Ali

    2014-01-01

    of legacy systems to cloud computing. The framework leverages the software reengineering concepts that aim to recover the architecture from legacy source code. Then the framework exploits the software evolution concepts to support architecture-driven migration of legacy systems to cloud-based architectures....... The Legacy-to-Cloud Migration Horseshoe comprises of four processes: (i) architecture migration planning, (ii) architecture recovery and consistency, (iii) architecture transformation and (iv) architecture-based development of cloud-enabled software. We aim to discover, document and apply the migration...

  14. An Open Architecture Framework for Electronic Warfare Based Approach to HLA Federate Development

    Directory of Open Access Journals (Sweden)

    HyunSeo Kang

    2018-01-01

    Full Text Available A variety of electronic warfare models are developed in the Electronic Warfare Research Center. An Open Architecture Framework for Electronic Warfare (OAFEw) has been developed for reusability of the various object models participating in electronic warfare simulation and for extensibility of the electronic warfare simulator. OAFEw is a kind of component-based software (SW) lifecycle management support framework. This OAFEw is defined by six components and ten rules. The purpose of this study is to construct a Distributed Simulation Interface Model, according to the rules of OAFEw, and to create a Use Case Model of the OAFEw Reference Conceptual Model version 1.0. This is embodied in the OAFEw-FOM (Federate Object Model) for High-Level Architecture (HLA)-based distributed simulation. Therefore, we design and implement an EW real-time distributed simulation that can work with a model in C++ and the MATLAB API (Application Programming Interface). In addition, the OAFEw-FOM, the electronic component model, and the electronic warfare domain scenario were designed through simple scenarios for verification, and real-time distributed simulation between C++ and MATLAB was performed through the OAFEw Distributed Simulation Interface.

  15. GCE Data Toolbox for MATLAB - a software framework for automating environmental data processing, quality control and documentation

    Science.gov (United States)

    Sheldon, W.; Chamblee, J.; Cary, R. H.

    2013-12-01

    developing automated workflows for unattended processing. Finalized data and structured metadata can be exported in a wide variety of text and MATLAB formats or uploaded to a relational database for long-term archiving and distribution. The GCE Data Toolbox can be used as a complete, light-weight solution for environmental data and metadata management, but it can also be used in conjunction with other cyber infrastructure to provide a more comprehensive solution. For example, newly acquired data can be retrieved from a Data Turbine or Campbell LoggerNet Database server for quality control and processing, then transformed to CUAHSI Observations Data Model format and uploaded to a HydroServer for distribution through the CUAHSI Hydrologic Information System. The GCE Data Toolbox can also be leveraged in analytical workflows developed using Kepler or other systems that support MATLAB integration or tool chaining. This software can therefore be leveraged in many ways to help researchers manage, analyze and distribute the data they collect.

  16. Requirements Engineering in Building Climate Science Software

    Science.gov (United States)

    Batcheller, Archer L.

    Software has an important role in supporting scientific work. This dissertation studies teams that build scientific software, focusing on the way that they determine what the software should do. These requirements engineering processes are investigated through three case studies of climate science software projects. The Earth System Modeling Framework assists modeling applications, the Earth System Grid distributes data via a web portal, and the NCAR (National Center for Atmospheric Research) Command Language is used to convert, analyze and visualize data. Document analysis, observation, and interviews were used to investigate the requirements-related work. The first research question is about how and why stakeholders engage in a project, and what they do for the project. Two key findings arise. First, user counts are a vital measure of project success, which makes adoption important and makes counting tricky and political. Second, despite the importance of quantities of users, a few particular "power users" develop a relationship with the software developers and play a special role in providing feedback to the software team and integrating the system into user practice. The second research question focuses on how project objectives are articulated and how they are put into practice. The team seeks to both build a software system according to product requirements but also to conduct their work according to process requirements such as user support. Support provides essential communication between users and developers that assists with refining and identifying requirements for the software. It also helps users to learn and apply the software to their real needs. User support is a vital activity for scientific software teams aspiring to create infrastructure. The third research question is about how change in scientific practice and knowledge leads to changes in the software, and vice versa. The "thickness" of a layer of software infrastructure impacts whether the

  17. The leaf angle distribution of natural plant populations: assessing the canopy with a novel software tool.

    Science.gov (United States)

    Müller-Linow, Mark; Pinto-Espinosa, Francisco; Scharr, Hanno; Rascher, Uwe

    2015-01-01

    Three-dimensional canopies form complex architectures with temporally and spatially changing leaf orientations. Variations in canopy structure are linked to canopy function, and they occur within the scope of genetic variability as well as in reaction to environmental factors like light, water and nutrient supply, and stress. An important key measure to characterize these structural properties is the leaf angle distribution, which in turn requires knowledge of the 3-dimensional single leaf surface. Despite a large number of 3-d sensors and methods, only a few systems are applicable for fast and routine measurements in plants and natural canopies. A suitable approach is stereo imaging, which combines depth and color information and allows for easy segmentation of green leaf material and the extraction of plant traits, such as the leaf angle distribution. We developed a software package which provides tools for the quantification of leaf surface properties within natural canopies via 3-d reconstruction from stereo images. Our approach includes a semi-automatic selection process for single leaves and different modes of surface characterization via polygon smoothing or surface model fitting. Based on the resulting surface meshes, leaf angle statistics are computed on the whole-leaf level or from local derivations. We include a case study to demonstrate the functionality of our software. 48 images of small sugar beet populations (4 varieties) have been analyzed on the basis of their leaf angle distribution in order to investigate seasonal, genotypic and fertilization effects on leaf angle distributions. We could show that leaf angle distributions change during the course of the season, with all varieties having a comparable development. Additionally, different varieties had different leaf angle orientations that could be separated in principal component analysis. In contrast, nitrogen treatment had no effect on leaf angles. We show that a stereo imaging setup together with the

  18. Establishing the Common Community Physics Package by Transitioning the GFS Physics to a Collaborative Software Framework

    Science.gov (United States)

    Xue, L.; Firl, G.; Zhang, M.; Jimenez, P. A.; Gill, D.; Carson, L.; Bernardet, L.; Brown, T.; Dudhia, J.; Nance, L. B.; Stark, D. R.

    2017-12-01

    The Global Model Test Bed (GMTB) has been established to support the evolution of atmospheric physical parameterizations in NCEP global modeling applications. To accelerate the transition to the Next Generation Global Prediction System (NGGPS), a collaborative model development framework known as the Common Community Physics Package (CCPP) is created within the GMTB to facilitate engagement from the broad community on physics experimentation and development. A key component to this Research to Operation (R2O) software framework is the Interoperable Physics Driver (IPD) that hooks the physics parameterizations from one end to the dynamical cores on the other end with minimum implementation effort. To initiate the CCPP, scientists and engineers from the GMTB separated and refactored the GFS physics. This exercise demonstrated the process of creating IPD-compliant code and can serve as an example for other physics schemes to do the same and be considered for inclusion into the CCPP. Further benefits to this process include run-time physics suite configuration and considerably reduced effort for testing modifications to physics suites through GMTB's physics test harness. The implementation will be described and the preliminary results will be presented at the conference.

  19. Improving Flight Software Module Validation Efforts : a Modular, Extendable Testbed Software Framework

    Science.gov (United States)

    Lange, R. Connor

    2012-01-01

    Ever since Explorer-1, the United States' first Earth satellite, was developed and launched in 1958, JPL has developed many more spacecraft, including landers and orbiters. While these spacecraft vary greatly in their missions, capabilities, and destinations, they all have something in common. All of the components of these spacecraft had to be comprehensively tested. While thorough testing is important to mitigate risk, it is also a very expensive and time-consuming process. Thankfully, since virtually all of the software testing procedures for SMAP are computer controlled, these procedures can be automated. Most people testing SMAP flight software (FSW) would only need to write tests that exercise specific requirements and then check the filtered results to verify everything occurred as planned. This gives developers the ability to automatically launch tests on the testbed, distill the resulting logs into only the important information, generate validation documentation, and then deliver the documentation to management. With many of the steps in FSW testing automated, developers can use their limited time more effectively and can validate SMAP FSW modules more quickly and test them more rigorously. As a result of the various benefits of automating much of the testing process, management is considering using these automated tools in future FSW validation efforts.

  20. Distributed Software-Attestation Defense against Sensor Worm Propagation

    Directory of Open Access Journals (Sweden)

    Jun-Won Ho

    2015-01-01

    Full Text Available Wireless sensor networks are vulnerable to sensor worm attacks in which the attacker compromises a few nodes and makes these compromised nodes initiate worm spread over the network, targeting worm infection of all nodes in the network. Several defense mechanisms have been proposed to prevent worm propagation in wireless sensor networks. Although these proposed schemes use software diversity techniques for worm propagation prevention, under the belief that different software versions do not share a common vulnerability, they have a fundamental drawback: this belief is difficult to realize in sensor motes. To resolve this problem, we propose an on-demand software-attestation-based scheme to defend against worm propagation in sensor networks. The main idea of our proposed scheme is to perform software attestations against sensor nodes on demand and detect the nodes infected by the worm, thereby blocking worm propagation in the network. Through analysis, we show that our proposed scheme defends against worm propagation in an efficient and robust manner. Through simulation, we demonstrate that our proposed scheme stops worm propagation at a reasonable overhead while preventing a majority of sensor nodes from being infected by the worm.
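
    A toy challenge-response attestation in this spirit (a sketch only; practical mote attestation must also bound response time and traverse memory pseudorandomly): the verifier issues a fresh nonce, and the prover must hash it together with its firmware image, so an infected node cannot replay a precomputed digest.

        import hashlib
        import os

        def attest(firmware_image, nonce):
            # Prover side: digest of the nonce plus the firmware contents.
            return hashlib.sha256(nonce + firmware_image).hexdigest()

        def verify(response, reference_image, nonce):
            # Verifier side: compare against the digest of the known-good image.
            return response == attest(reference_image, nonce)

        nonce = os.urandom(16)                # fresh challenge per attestation
        good = b"\x90" * 4096                 # expected firmware image
        infected = good[:-1] + b"\x00"        # a worm flipped one byte
        print(verify(attest(good, nonce), good, nonce))      # True
        print(verify(attest(infected, nonce), good, nonce))  # False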

  1. Tool support for distributed software engineering

    NARCIS (Netherlands)

    Spanjers, H.; Ter Huurne, M.; Bendas, D.; Graaf, B.; Lormans, M.; Van Solingen, R.

    2006-01-01

    Developing a software system in collaboration with other partners, and on different geographical locations is a big challenge for organizations. In this article we first discuss a system that automates build and test processes: SoftFab. This system has been successfully applied in practice in the

  2. EMMA: a new paradigm in configurable software

    International Nuclear Information System (INIS)

    Nogiec, J. M.; Trombly-Freytag, K.

    2017-01-01

    EMMA is a framework designed to create a family of configurable software systems, with emphasis on extensibility and flexibility. It is based on a loosely coupled, event driven architecture. The EMMA framework has been built upon the premise of composing software systems from independent components. It opens up opportunities for reuse of components and their functionality and composing them together in many different ways. As a result, it provides the developer of test and measurement applications with a lightweight alternative to microservices, while sharing their various advantages, including composability, loose coupling, encapsulation, and reuse.

  3. EMMA: a new paradigm in configurable software

    Science.gov (United States)

    Nogiec, J. M.; Trombly-Freytag, K.

    2017-10-01

    EMMA is a framework designed to create a family of configurable software systems, with emphasis on extensibility and flexibility. It is based on a loosely coupled, event driven architecture. The EMMA framework has been built upon the premise of composing software systems from independent components. It opens up opportunities for reuse of components and their functionality and composing them together in many different ways. It provides the developer of test and measurement applications with a lightweight alternative to microservices, while sharing their various advantages, including composability, loose coupling, encapsulation, and reuse.
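
    The record does not show EMMA's API, but the loosely coupled, event-driven composition it describes can be sketched in a few lines of Python: components never call each other directly, they only publish and subscribe to named events.

        from collections import defaultdict

        class EventBus:
            def __init__(self):
                self._subscribers = defaultdict(list)

            def subscribe(self, topic, handler):
                self._subscribers[topic].append(handler)

            def publish(self, topic, payload=None):
                for handler in self._subscribers[topic]:
                    handler(payload)

        # Two independent components composed only through events.
        bus = EventBus()
        bus.subscribe("measurement.done", lambda data: print("archiving", data))
        bus.publish("measurement.done", {"magnet": "Q1", "field_T": 1.2})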

  4. Software development for teleroentgenogram analysis

    Science.gov (United States)

    Goshkoderov, A. A.; Khlebnikov, N. A.; Obabkov, I. N.; Serkov, K. V.; Gajniyarov, I. M.; Aliev, A. A.

    2017-09-01

    A framework for the analysis and calculation of teleroentgenograms was developed. Software development was carried out in the Department of Children's Dentistry and Orthodontics at Ural State Medical University. The software calculates teleroentgenograms using an original method developed in this department, and it also allows users to design their own calculation methods. It is planned to apply machine learning (neural networks) in the software, which will simplify the calculation of teleroentgenograms by placing the methodological points automatically.

  5. Supporting Trust in Globally Distributed Software Teams: The Impact of Visualized Collaborative Traces on Perceived Trustworthiness

    Science.gov (United States)

    Trainer, Erik Harrison

    2012-01-01

    Trust plays an important role in collaborations because it creates an environment in which people can openly exchange ideas and information with one another and engineer innovative solutions together with less perceived risk. The rise in globally distributed software development has created an environment in which workers are likely to have less…

  6. Assessment of grid-friendly collective optimization framework for distributed energy resources

    Energy Technology Data Exchange (ETDEWEB)

    Pensini, Alessandro; Robinson, Matthew; Heine, Nicholas; Stadler, Michael; Mammoli, Andrea

    2015-11-04

    Distributed energy resources have the potential to provide services to facilities and buildings at lower cost and environmental impact in comparison to traditional electric-grid-only services. The reduced cost could result from a combination of higher system efficiency and exploitation of electricity tariff structures. Traditionally, electricity tariffs are designed to encourage the use of 'off-peak' power and discourage the use of 'on-peak' power, although recent developments in renewable energy resources and distributed generation systems (such as their increasing levels of penetration and their increased controllability) are resulting in pressure to adopt tariffs of increasing complexity. Independently of the tariff structure, more or less sophisticated methods exist that allow distributed energy resources to take advantage of such tariffs, ranging from simple pre-planned schedules to Software-as-a-Service schedule optimization tools. However, as the penetration of distributed energy resources increases, there is an increasing chance of a 'tragedy of the commons' mechanism taking place, where taking advantage of tariffs for local benefit can ultimately result in degradation of service and higher energy costs for all. In this work, we use a scheduling optimization tool, in combination with a power distribution system simulator, to investigate techniques that could mitigate the deleterious effects of 'selfish' optimization, so that high-penetration use of distributed energy resources to reduce operating costs remains advantageous while the quality of service and overall energy cost to the community are not affected.
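
    A toy version of the 'selfish' tariff-driven scheduling at issue (invented numbers, not the study's optimizer or simulator): each site charges its battery off-peak and discharges on-peak, which minimizes its own bill but, adopted by every site, synchronizes step changes in demand at the tariff boundaries.

        import numpy as np

        tariff = np.array([0.08] * 8 + [0.20] * 12 + [0.08] * 4)   # $/kWh over 24 hours
        load = np.full(24, 5.0)                                    # kW site demand

        # 'Selfish' rule: charge (+2 kW) when energy is cheap, discharge (-2 kW) when dear.
        battery = np.where(tariff < 0.10, 2.0, -2.0)
        grid_draw = load + battery
        print("daily cost: $%.2f" % float(np.sum(grid_draw * tariff)))
        # Applied by every facility, the same rule makes aggregate demand jump by
        # 4 kW per site at each tariff boundary: the coordination problem studied here.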

  7. Fighting Software Piracy: Some Global Conditional Policy Instruments

    OpenAIRE

    Asongu, Simplice A; Singh, Pritam; Le Roux, Sara

    2016-01-01

    This study examines the efficiency of tools for fighting software piracy in the conditional distributions of software piracy. Our paper examines software piracy in 99 countries for the period 1994-2010, using contemporary and non-contemporary quantile regressions. The intuition for modelling distributions contingent on existing levels of software piracy is that the effectiveness of tools against piracy may consistently decrease or increase simultaneously with increasing levels of software pir...

  8. Light-Weight and Versatile Monitor for a Self-Adaptive Software Framework for IoT Systems

    Directory of Open Access Journals (Sweden)

    Young-Joo Kim

    2016-01-01

    Full Text Available Today, various Internet of Things (IoT) devices and applications are being developed. Such IoT devices have different hardware (HW) and software (SW) capabilities; therefore, most applications require customization when IoT devices are changed or new applications are created. However, the applications executed on these devices are not optimized for power and performance, because IoT device systems do not provide suitable static and dynamic information about fast-changing system resources and applications. Therefore, this paper proposes a light-weight and versatile monitor for a self-adaptive software framework to automatically control system resources according to the system status. The monitor helps running applications guarantee low power consumption and high performance in an optimal environment. The proposed monitor has two components: a monitoring component, which provides real-time static and dynamic information about system resources and applications, and a controlling component, which supports real-time control of system resources. For experimental verification, we created a video transport system based on IoT devices and measured the CPU utilization under dynamic voltage and frequency scaling (DVFS) for the monitor. The results demonstrate that, for up to 50 monitored processes, the monitor shows an average CPU utilization of approximately 4% in the three DVFS modes and demonstrates maximum optimization in the Performance mode of DVFS.
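
    A minimal monitoring-plus-controlling loop in this style, assuming a Linux target with the standard procfs/cpufreq interfaces (a generic sketch, not the paper's monitor; writing the governor requires root):

        import time

        GOVERNOR_PATH = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"

        def cpu_busy_fraction(interval=1.0):
            # Monitoring side: sample /proc/stat twice and compute the busy share.
            def snapshot():
                with open("/proc/stat") as f:
                    fields = [int(x) for x in f.readline().split()[1:]]
                return fields[3], sum(fields)      # (idle jiffies, total jiffies)
            idle0, total0 = snapshot()
            time.sleep(interval)
            idle1, total1 = snapshot()
            return 1.0 - (idle1 - idle0) / (total1 - total0)

        def adapt(threshold=0.7):
            # Controlling side: pick a DVFS governor from the measured load.
            governor = "performance" if cpu_busy_fraction() > threshold else "powersave"
            with open(GOVERNOR_PATH, "w") as f:    # needs root privileges
                f.write(governor)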

  9. Integrating Remote Sensing with Species Distribution Models; Mapping Tamarisk Invasions Using the Software for Assisted Habitat Modeling (SAHM).

    Science.gov (United States)

    West, Amanda M; Evangelista, Paul H; Jarnevich, Catherine S; Young, Nicholas E; Stohlgren, Thomas J; Talbert, Colin; Talbert, Marian; Morisette, Jeffrey; Anderson, Ryan

    2016-10-11

    Early detection of invasive plant species is vital for the management of natural resources and protection of ecosystem processes. The use of satellite remote sensing for mapping the distribution of invasive plants is becoming more common; however, conventional imaging software and classification methods have been shown to be unreliable. In this study, we test and evaluate the use of five species distribution model techniques fit with satellite remote sensing data to map invasive tamarisk (Tamarix spp.) along the Arkansas River in Southeastern Colorado. The models tested included boosted regression trees (BRT), Random Forest (RF), multivariate adaptive regression splines (MARS), generalized linear model (GLM), and Maxent. These analyses were conducted using a newly developed software package called the Software for Assisted Habitat Modeling (SAHM). All models were trained with 499 presence points, 10,000 pseudo-absence points, and predictor variables acquired from the Landsat 5 Thematic Mapper (TM) sensor over an eight-month period to distinguish tamarisk from native riparian vegetation using detection of phenological differences. From the Landsat scenes, we used individual bands and calculated the Normalized Difference Vegetation Index (NDVI), Soil-Adjusted Vegetation Index (SAVI), and tasseled cap transformations. All five models successfully identified current tamarisk distribution on the landscape, based on threshold-independent and threshold-dependent evaluation metrics with independent location data. To account for model-specific differences, we produced an ensemble of all five models with map output highlighting areas of agreement and areas of uncertainty. Our results demonstrate the usefulness of species distribution models in analyzing remotely sensed data and the utility of ensemble mapping, and showcase the capability of SAHM in pre-processing and executing multiple complex models.

  10. Integrating remote sensing with species distribution models; Mapping tamarisk invasions using the Software for Assisted Habitat Modeling (SAHM)

    Science.gov (United States)

    West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Young, Nicholas E.; Stohlgren, Thomas J.; Talbert, Colin; Talbert, Marian; Morisette, Jeffrey; Anderson, Ryan

    2016-01-01

    Early detection of invasive plant species is vital for the management of natural resources and protection of ecosystem processes. The use of satellite remote sensing for mapping the distribution of invasive plants is becoming more common; however, conventional imaging software and classification methods have been shown to be unreliable. In this study, we test and evaluate the use of five species distribution model techniques fit with satellite remote sensing data to map invasive tamarisk (Tamarix spp.) along the Arkansas River in Southeastern Colorado. The models tested included boosted regression trees (BRT), Random Forest (RF), multivariate adaptive regression splines (MARS), generalized linear model (GLM), and Maxent. These analyses were conducted using a newly developed software package called the Software for Assisted Habitat Modeling (SAHM). All models were trained with 499 presence points, 10,000 pseudo-absence points, and predictor variables acquired from the Landsat 5 Thematic Mapper (TM) sensor over an eight-month period to distinguish tamarisk from native riparian vegetation using detection of phenological differences. From the Landsat scenes, we used individual bands and calculated the Normalized Difference Vegetation Index (NDVI), Soil-Adjusted Vegetation Index (SAVI), and tasseled cap transformations. All five models successfully identified current tamarisk distribution on the landscape, based on threshold-independent and threshold-dependent evaluation metrics with independent location data. To account for model-specific differences, we produced an ensemble of all five models with map output highlighting areas of agreement and areas of uncertainty. Our results demonstrate the usefulness of species distribution models in analyzing remotely sensed data and the utility of ensemble mapping, and showcase the capability of SAHM in pre-processing and executing multiple complex models.
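
    A compressed sketch of the ensemble idea using scikit-learn stand-ins for three of the five model types (gradient boosting for BRT, Random Forest, and logistic regression as the GLM; MARS and Maxent have no stock scikit-learn implementation), with synthetic pixels in place of the Landsat predictors:

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 6))        # stand-in pixel predictors (NDVI, SAVI, bands)
        y = rng.integers(0, 2, size=500)     # presence / pseudo-absence labels

        models = [GradientBoostingClassifier(),        # BRT stand-in
                  RandomForestClassifier(),            # RF
                  LogisticRegression(max_iter=1000)]   # GLM (logit link)
        votes = np.zeros(len(X))
        for model in models:
            votes += model.fit(X, y).predict(X)

        agreement = votes / len(models)      # 1.0 = all models agree on presence
        print(np.mean(agreement == 1.0), np.mean((agreement > 0) & (agreement < 1)))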

  11. Continuous software engineering – a microservices architecture perspective

    OpenAIRE

    O'Connor, Rory; Elger, Peter; Clarke, Paul

    2017-01-01

    From its earliest days, software development has been beset with challenges in relation to timely delivery, appropriateness of features and quality of deliverables. Many advances in software development processes have helped to address these concerns. For example, agile software development has helped to deliver working software more frequently and capability maturity frameworks have brought about improved consistency in quality levels. However, the age-old challenge of better, cheaper, faste...

  12. Trends in software testing

    CERN Document Server

    Mohanty, J; Balakrishnan, Arunkumar

    2017-01-01

    This book is focused on the advancements in the field of software testing and the innovative practices that the industry is adopting. Considering the widely varied nature of software testing, the book addresses contemporary aspects that are important for both academia and industry. There are dedicated chapters on seamless high-efficiency frameworks, automation on regression testing, software by search, and system evolution management. There are a host of mathematical models that are promising for software quality improvement by model-based testing. There are three chapters addressing this concern. Students and researchers in particular will find these chapters useful for their mathematical strength and rigor. Other topics covered include uncertainty in testing, software security testing, testing as a service, test technical debt (or test debt), disruption caused by digital advancement (social media, cloud computing, mobile application and data analytics), and challenges and benefits of outsourcing. The book w...

  13. Supporting Collective Inquiry: A Technology Framework for Distributed Learning

    Science.gov (United States)

    Tissenbaum, Michael

    This design-based study describes the implementation and evaluation of a technology framework to support smart classrooms and Distributed Technology Enhanced Learning (DTEL), called SAIL Smart Space (S3). S3 is an open-source technology framework designed to support students engaged in inquiry investigations as a knowledge community. To evaluate the effectiveness of S3 as a generalizable technology framework, a curriculum named PLACE (Physics Learning Across Contexts and Environments) was developed to support two grade-11 physics classes (n = 22; n = 23) engaged in a multi-context inquiry curriculum based on the Knowledge Community and Inquiry (KCI) pedagogical model. This dissertation outlines three initial design studies that established a set of design principles for DTEL curricula and related technology infrastructures. These principles guided the development of PLACE, a twelve-week inquiry curriculum in which students drew upon their community-generated knowledge base as a source of evidence for solving ill-structured physics problems based on the physics of Hollywood movies. During the culminating smart classroom activity, the S3 framework played a central role in orchestrating student activities, including managing the flow of materials and students using real-time data mining and intelligent agents that responded to emergent class patterns. S3 supported students' construction of knowledge through the use of individual, collective and collaborative scripts and technologies, including tablets and interactive large-format displays. Aggregate and real-time ambient visualizations helped the teacher act as a wondering facilitator, supporting students in their inquiry where needed. A teacher orchestration tablet gave the teacher some control over the flow of the scripted activities and alerted him to critical moments for intervention. Analysis focuses on S3's effectiveness in supporting students' inquiry across multiple learning contexts and scales of time, and in

  14. Code-first development with Entity Framework

    CERN Document Server

    Barskiy, Sergey

    2015-01-01

    This book is intended for software developers with some prior experience with the Microsoft .NET framework who want to learn how to use Entity Framework. This book will get you up and running quickly, providing many examples that illustrate all the key concepts of Entity Framework.

  15. Software Design of SMD LEDs for Homogeneous Distribution of Irradiation in the Model of Dark Room

    Directory of Open Access Journals (Sweden)

    Andrej Liner

    2014-01-01

    Full Text Available This article describes wireless optical data networks that use the visible spectrum of optical radiation, with a focus on interior areas with a direct line of sight (LOS). This type of network represents a progressively evolving area of information technology. The development of lighting technologies based on white power LEDs was the impulse for the development of wireless optical data networks based on the visible spectrum of optical radiation (VLC). Their basic advantage is user flexibility: users no longer have to stay in one place while sharing data. Wireless optical data networks represent an alternative solution to metallic and fiber networks [1], [2]. This paper deals with the software simulation of the homogeneous distribution of optical irradiation in a dark room model, carried out in the LightTools software. First, an optical source composed of 9 SMD LEDs of type LW G6SP-EAFA-JKQL-1 was designed. In the various simulations, different numbers and distributions of LEDs were used, placed on the ceiling of the dark room. Finally, the results for the homogeneity of the optical irradiation are compared.
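
    The irradiation homogeneity being simulated follows directly from the Lambertian LED model E = (m+1)/(2*pi) * P * cos^m(phi) * cos(theta) / d^2; the sketch below evaluates it for a 3 x 3 LED grid on the ceiling (room size, LED power and half-power angle are invented values, and LightTools is not involved):

        import numpy as np

        H, P, HALF_ANGLE = 2.5, 0.45, np.radians(60)   # ceiling height (m), W, half-power angle
        m = -np.log(2) / np.log(np.cos(HALF_ANGLE))    # Lambertian mode number (= 1 here)

        leds = [(x, y) for x in (1.0, 2.5, 4.0) for y in (1.0, 2.5, 4.0)]   # 3 x 3 grid

        xs, ys = np.meshgrid(np.linspace(0, 5, 100), np.linspace(0, 5, 100))
        E = np.zeros_like(xs)
        for lx, ly in leds:
            d2 = (xs - lx) ** 2 + (ys - ly) ** 2 + H ** 2
            # For downward-facing LEDs over a horizontal plane the emission angle
            # equals the incidence angle, so cos^m(phi)*cos(theta) = cos^(m+1).
            cos_ang = H / np.sqrt(d2)
            E += (m + 1) / (2 * np.pi) * P * cos_ang ** (m + 1) / d2

        print("homogeneity (min/max irradiance):", E.min() / E.max())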

  16. Citation and Recognition of contributions using Semantic Provenance Knowledge Captured in the OPeNDAP Software Framework

    Science.gov (United States)

    West, P.; Michaelis, J.; Lebot, T.; McGuinness, D. L.; Fox, P. A.

    2014-12-01

    Providing proper citation and attribution for published data, derived data products, and the software tools used to generate them has always been an important aspect of scientific research. However, this type of detailed citation and attribution is often lacking, in part because it requires manual markup: dynamic generation of this type of provenance information is not typically done by the tools used to access, manipulate, transform, and visualize data. In addition, the tools themselves lack the information needed for them to be properly cited. The OPeNDAP Hyrax Software Framework is a tool that provides access to, and the ability to constrain, manipulate, and transform, different types of data from different data formats into a common format, the DAP (Data Access Protocol), in order to derive new data products. A user, or another software client, specifies an HTTP URL in order to access a particular piece of data and appropriately transform it to suit a specific purpose of use. The resulting data products, however, do not contain any information about what data was used to create them, or the software process used to generate them, let alone information that would allow downstream researchers and tool developers to provide proper citation and attribution. We will present our approach to provenance capture in Hyrax, including a mechanism that can be used to report back to the hosting site any derived products, such as publications and reports, using the W3C PROV recommendation pingback service. We will demonstrate our utilization of Semantic Web and Web standards, the development of an information model that extends the PROV model for provenance capture, and the development of the pingback service. We will present our findings, as well as our practices for providing provenance information, visualization of the provenance information, and the development of pingback services, to better enable scientists and tool developers to be

  17. Developments and applications of DAQ framework DABC v2

    International Nuclear Information System (INIS)

    Adamczewski-Musch, J; Kurz, N; Linev, S

    2015-01-01

    The Data Acquisition Backbone Core (DABC) is a software framework for distributed data acquisition. In 2013, version 2 of DABC was released with several improvements. For monitoring and control, an HTTP web server and a proprietary command-channel socket are provided. Web-browser GUIs have been implemented for configuration and control of DABC and MBS DAQ nodes via this HTTP server. Several specific plug-ins, for example those interfacing PEXOR/KINPEX optical readout PCIe boards, or HADES trbnet input and hld file output, have been further developed. In 2014, DABC v2 was used for production data taking during the HADES collaboration's pion beam time at GSI. It fully replaced the functionality of the previous event-builder software and added new features for online monitoring. (paper)

  18. Leveraging intellectual capital through Lewin's Force Field Analysis: The case of software development companies

    Directory of Open Access Journals (Sweden)

    Alexandru Capatina

    2017-09-01

    Full Text Available This article presents an original conceptual framework for the strategic management of intellectual capital assets in software development companies. The framework is based on Lewin's Force Field Analysis, and it makes it possible to assess software company managers' opinions on how driving and restraining forces affect the pillars of intellectual capital. The capacity to adapt to change is vital for companies in knowledge-intensive industries. Accordingly, this study examined a sample of 74 Romanian software development companies, with the aim of helping companies benefit from managing the driving and restraining forces acting upon the pillars of intellectual capital (human, structural, and relational). The effects of the driving forces, quantified by PathMaker software's Force Field Tool, were observed to be greater than those of the restraining forces for each pillar of intellectual capital. This paper contributes by showing the explanatory power of the framework, which offers managers a tool for driving change in their organizations through effective intellectual capital management.
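
    Quantitatively, a force-field assessment reduces to comparing summed scores of driving and restraining forces per pillar; a toy example with invented scores (not the survey data from the article):

        # Hypothetical 1-5 scores for one pillar (human capital).
        driving = {"training programs": 4, "knowledge sharing": 5, "R&D incentives": 3}
        restraining = {"staff turnover": 4, "time pressure": 3}

        net_force = sum(driving.values()) - sum(restraining.values())
        print(f"net force on human capital: {net_force:+d}")  # > 0 favours the change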

  19. Continuous Integration for Concurrent MOOSE Framework and Application Development on GitHub

    OpenAIRE

    Slaughter, Andrew E.; Peterson, John W.; Gaston, Derek R.; Permann, Cody J.; Andrš, David; Miller, Jason M.

    2015-01-01

    For the past several years, Idaho National Laboratory’s MOOSE framework team has employed modern software engineering techniques (continuous integration, joint application/framework source code repositories, automated regression testing, etc.) in developing closed-source multiphysics simulation software (Gaston et al., 'Journal of Open Research Software' vol. 2, article e10, 2014). In March 2014, the MOOSE framework was released under an open source license on GitHub, significantly expandin...

  20. The GOLM-database standard- a framework for time-series data management based on free software

    Science.gov (United States)

    Eichler, M.; Francke, T.; Kneis, D.; Reusser, D.

    2009-04-01

    Monitoring and modelling projects usually involve time series data originating from different sources. File formats, temporal resolution and meta-data documentation rarely adhere to a common standard. As a result, much effort is spent on converting, harmonizing, merging, checking, resampling and reformatting these data. Moreover, in work groups or during the course of time, these tasks tend to be carried out redundantly and repeatedly, especially when new data becomes available. The resulting duplication of data in various formats strains additional resources. We propose a database structure and complementary scripts for facilitating these tasks. The GOLM (General Observation and Location Management) framework allows for import and storage of time series data of different types while assisting in meta-data documentation, plausibility checking and harmonization. The imported data can be visually inspected and its coverage among locations and variables may be visualized. Supplementary scripts provide options for data export for selected stations and variables and resampling of the data to the desired temporal resolution. These tools can, for example, be used for generating model input files or reports. Since GOLM fully supports network access, the system can be used efficiently by distributed working groups accessing the same data over the internet. GOLM's database structure and the complementary scripts can easily be customized to specific needs. All involved software, such as MySQL, R, PHP and OpenOffice, as well as the scripts for building and using the data base, including documentation, are free for download. GOLM was developed out of the practical requirements of the OPAQUE project. It has been tested and further refined in the ERANET-CRUE and SESAM projects, all of which used GOLM to manage meteorological, hydrological and/or water quality data.
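
    The harmonization and resampling steps described above are illustrated by the short pandas sketch below; it stands in for the kind of task GOLM automates and is not the project's own MySQL/R scripts.

        # Resampling a raw series to a common temporal resolution
        # (illustrative of the GOLM workflow, not its actual scripts).
        import pandas as pd

        raw = pd.DataFrame(
            {"precip_mm": [0.2, 0.0, 1.4, 0.8]},
            index=pd.to_datetime([
                "2009-01-01 00:05", "2009-01-01 00:20",
                "2009-01-01 00:35", "2009-01-01 01:10",
            ]),
        )

        hourly = raw.resample("1h").sum()  # aggregate to hourly totals
        print(hourly)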

  1. The equipment access software for a distributed UNIX-based accelerator control system

    International Nuclear Information System (INIS)

    Trofimov, Nikolai; Zelepoukine, Serguei; Zharkov, Eugeny; Charrue, Pierre; Gareyte, Claire; Poirier, Herve

    1994-01-01

    This paper presents a generic equipment access software package for a distributed control system using computers with UNIX or UNIX-like operating systems. The package consists of three main components, an application Equipment Access Library, Message Handler and Equipment Data Base. An application task, which may run in any computer in the network, sends requests to access equipment through Equipment Library calls. The basic request is in the form Equipment-Action-Data and is routed via a remote procedure call to the computer to which the given equipment is connected. In this computer the request is received by the Message Handler. According to the type of the equipment connection, the Message Handler either passes the request to the specific process software in the same computer or forwards it to a lower level network of equipment controllers using MIL1553B, GPIB, RS232 or BITBUS communication. The answer is then returned to the calling application. Descriptive information required for request routing and processing is stored in the real-time Equipment Data Base. The package has been written to be portable and is currently available on DEC Ultrix, LynxOS, HPUX, XENIX, OS-9 and Apollo domain. ((orig.))
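
    The Equipment-Action-Data request form maps naturally onto a remote procedure call. The sketch below mimics that routing with Python's standard xmlrpc modules; all names are invented, and the original package is a C library, not this Python code.

        # Equipment-Action-Data request routed via RPC (invented names).
        import xmlrpc.client

        # Equipment database entry: which front-end computer owns which device.
        EQUIPMENT_DB = {"PS-DIPOLE-17": "http://feserver03:8000/"}

        def equipment_access(equipment, action, data=None):
            """Route an Equipment-Action-Data request to the owning computer."""
            proxy = xmlrpc.client.ServerProxy(EQUIPMENT_DB[equipment])
            # The remote Message Handler then dispatches the request to local
            # process software or to a fieldbus-level equipment controller.
            return proxy.handle_request(equipment, action, data)

        # e.g. equipment_access("PS-DIPOLE-17", "SET-CURRENT", 1250.0)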

  2. Trustworthiness Measurement Algorithm for TWfMS Based on Software Behaviour Entropy

    Directory of Open Access Journals (Sweden)

    Qiang Han

    2018-03-01

    Full Text Available As the virtual mirror of complex real-time business processes of organisations’ underlying information systems, the workflow management system (WfMS has emerged in recent decades as a new self-autonomous paradigm in the open, dynamic, distributed computing environment. In order to construct a trustworthy workflow management system (TWfMS, the design of a software behaviour trustworthiness measurement algorithm is an urgent task for researchers. Accompanying the trustworthiness mechanism, the measurement algorithm, with uncertain software behaviour trustworthiness information of the WfMS, should be resolved as an infrastructure. Based on the framework presented in our research prior to this paper, we firstly introduce a formal model for the WfMS trustworthiness measurement, with the main property reasoning based on calculus operators. Secondly, this paper proposes a novel measurement algorithm from the software behaviour entropy of calculus operators through the principle of maximum entropy (POME and the data mining method. Thirdly, the trustworthiness measurement algorithm for incomplete software behaviour tests and runtime information is discussed and compared by means of a detailed explanation. Finally, we provide conclusions and discuss certain future research areas of the TWfMS.
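
    The behaviour-entropy ingredient can be pictured as Shannon entropy over observed operator frequencies. The sketch below computes that quantity; it is a cartoon of one ingredient, not the paper's POME-based measurement algorithm.

        # Shannon entropy of observed software-behaviour frequencies
        # (a cartoon of the behaviour-entropy idea, not the POME algorithm).
        from math import log2

        def behaviour_entropy(counts):
            total = sum(counts)
            probs = [c / total for c in counts if c > 0]
            return -sum(p * log2(p) for p in probs)

        # Observation counts per calculus operator during a workflow run:
        print(behaviour_entropy([40, 25, 20, 10, 5]))  # entropy in bits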

  3. Dtest Testing Software

    Science.gov (United States)

    Jain, Abhinandan; Cameron, Jonathan M.; Myint, Steven

    2013-01-01

    This software runs a suite of arbitrary software tests spanning various software languages and types of tests (unit level, system level, or file comparison tests). The dtest utility can be set to automate periodic testing of large suites of software, as well as running individual tests. It supports distributing multiple tests over multiple CPU cores, if available. The dtest tool is a utility program (written in Python) that scans through a directory (and its subdirectories) and finds all directories that match a certain pattern and then executes any tests in that directory as described in simple configuration files.
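
    The scanning behaviour described above fits in a few lines: walk a directory tree, find directories holding a test configuration file, and run each test over a process pool. The configuration filename and conventions below are assumptions; this is not the actual dtest source.

        # Directory-scanning test runner sketch (the config filename and its
        # contents are assumptions; this is not the dtest implementation).
        import os
        import subprocess
        from multiprocessing import Pool

        CONFIG_NAME = "TEST.cfg"  # hypothetical per-directory test config

        def find_test_dirs(root):
            for dirpath, _dirnames, filenames in os.walk(root):
                if CONFIG_NAME in filenames:
                    yield dirpath

        def run_test(dirpath):
            # Assume each config file names one command to execute there.
            cmd = open(os.path.join(dirpath, CONFIG_NAME)).read().strip()
            result = subprocess.run(cmd, shell=True, cwd=dirpath)
            return dirpath, result.returncode

        if __name__ == "__main__":
            with Pool() as pool:  # distribute tests over available CPU cores
                for dirpath, rc in pool.map(run_test, list(find_test_dirs("."))):
                    print("PASS" if rc == 0 else "FAIL", dirpath)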

  4. The distributed development environment for SDSS software

    International Nuclear Information System (INIS)

    Berman, E.; Gurbani, V.; Mackinnon, B.; Newberg, H. Nicinski, T.; Petravick, D.; Pordes, R.; Sergey, G.; Stoughton, C.; Lupton, R.

    1994-04-01

    The authors present an integrated science software development environment, code maintenance and support system for the Sloan Digital Sky Survey (SDSS) now being actively used throughout the collaboration

  5. Modernization of software quality assurance

    Science.gov (United States)

    Bhaumik, Gokul

    1988-01-01

    The customer's satisfaction depends not only on functional performance; it also depends on the quality characteristics of the software products. An examination of this quality aspect of software products will provide a clear, well-defined framework for quality assurance functions, which improve the life-cycle activities of software development. Software developers must be aware of the following aspects which have been expressed by many quality experts: quality cannot be added on; the level of quality built into a program is a function of the quality attributes employed during the development process; and finally, quality must be managed. These concepts have guided our development of the following definition for a Software Quality Assurance function: Software Quality Assurance is a formal, planned approach of actions designed to evaluate the degree of an identifiable set of quality attributes present in all software systems and their products. This paper is an explanation of how this definition was developed and how it is used.

  6. A multi-agent approach to professional software engineering

    NARCIS (Netherlands)

    M. Lützenberger; T. Küster; T. Konnerth; A. Thiele; N. Masuch; A. Heßler; J. Keiser; M. Burkhardt; S. Kaiser (Silvan); J. Tonn; M. Kaisers (Michael); S. Albayrak; M. Cossentino; A. Seghrouchni; M. Winikoff

    2013-01-01

    The community of agent researchers and engineers has produced a number of interesting and mature results. However, agent technology is still not widely adopted by industrial software developers or software companies - possibly because existing frameworks are infused with academic

  7. Software metrics a rigorous and practical approach

    CERN Document Server

    Fenton, Norman

    2014-01-01

    A Framework for Managing, Measuring, and Predicting Attributes of Software Development Products and Processes. Reflecting the immense progress in the development and use of software metrics in the past decades, Software Metrics: A Rigorous and Practical Approach, Third Edition provides an up-to-date, accessible, and comprehensive introduction to software metrics. Like its popular predecessors, this third edition discusses important issues, explains essential concepts, and offers new approaches for tackling long-standing problems. New to the Third Edition: This edition contains new material relevant

  8. ACTS: from ATLAS software towards a common track reconstruction software

    Science.gov (United States)

    Gumpert, C.; Salzburger, A.; Kiehn, M.; Hrdinka, J.; Calace, N.; ATLAS Collaboration

    2017-10-01

    Reconstruction of charged particles’ trajectories is a crucial task for most particle physics experiments. The high instantaneous luminosity achieved at the LHC leads to a high number of proton-proton collisions per bunch crossing, which has put the track reconstruction software of the LHC experiments through a thorough test. Preserving track reconstruction performance under increasingly difficult experimental conditions, while keeping the usage of computational resources at a reasonable level, is an inherent problem for many HEP experiments. Exploiting concurrent algorithms and using multivariate techniques for track identification are the primary strategies to achieve that goal. Starting from current ATLAS software, the ACTS project aims to encapsulate track reconstruction software into a generic, framework- and experiment-independent software package. It provides a set of high-level algorithms and data structures for performing track reconstruction tasks as well as fast track simulation. The software is developed with special emphasis on thread-safety to support parallel execution of the code and data structures are optimised for vectorisation to speed up linear algebra operations. The implementation is agnostic to the details of the detection technologies and magnetic field configuration which makes it applicable to many different experiments.

  9. Value Driven Outcomes (VDO): a pragmatic, modular, and extensible software framework for understanding and improving health care costs and outcomes

    Science.gov (United States)

    Kawamoto, Kensaku; Martin, Cary J; Williams, Kip; Tu, Ming-Chieh; Park, Charlton G; Hunter, Cheri; Staes, Catherine J; Bray, Bruce E; Deshmukh, Vikrant G; Holbrook, Reid A; Morris, Scott J; Fedderson, Matthew B; Sletta, Amy; Turnbull, James; Mulvihill, Sean J; Crabtree, Gordon L; Entwistle, David E; McKenna, Quinn L; Strong, Michael B; Pendleton, Robert C; Lee, Vivian S

    2015-01-01

    Objective: To develop expeditiously a pragmatic, modular, and extensible software framework for understanding and improving healthcare value (costs relative to outcomes). Materials and methods: In 2012, a multidisciplinary team was assembled by the leadership of the University of Utah Health Sciences Center and charged with rapidly developing a pragmatic and actionable analytics framework for understanding and enhancing healthcare value. Based on an analysis of relevant prior work, a value analytics framework known as Value Driven Outcomes (VDO) was developed using an agile methodology. Evaluation consisted of measurement against project objectives, including implementation timeliness, system performance, completeness, accuracy, extensibility, adoption, satisfaction, and the ability to support value improvement. Results: A modular, extensible framework was developed to allocate clinical care costs to individual patient encounters. For example, labor costs in a hospital unit are allocated to patients based on the hours they spent in the unit; actual medication acquisition costs are allocated to patients based on utilization; and radiology costs are allocated based on the minutes required for study performance. Relevant process and outcome measures are also available. A visualization layer facilitates the identification of value improvement opportunities, such as high-volume, high-cost case types with high variability in costs across providers. Initial implementation was completed within 6 months, and all project objectives were fulfilled. The framework has been improved iteratively and is now a foundational tool for delivering high-value care. Conclusions: The framework described can be expeditiously implemented to provide a pragmatic, modular, and extensible approach to understanding and improving healthcare value. PMID:25324556
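
    The allocation rule in the example above (unit labor cost spread over patient-hours) is simple pro-rata arithmetic, sketched below with invented numbers:

        # Pro-rata cost allocation sketch, following the VDO example of labor
        # cost spread over patient-hours (all numbers invented).
        unit_labor_cost = 50_000.0                      # one unit, one period
        hours = {"enc-001": 30.0, "enc-002": 50.0, "enc-003": 20.0}

        total_hours = sum(hours.values())
        allocated = {enc: unit_labor_cost * h / total_hours
                     for enc, h in hours.items()}
        print(allocated)  # enc-002 carries half the cost: 25000.0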

  10. Visualization framework for CAVE virtual reality systems

    OpenAIRE

    Kageyama, Akira; Tomiyama, Asako

    2016-01-01

    We have developed a software framework for scientific visualization in immersive-type, room-sized virtual reality (VR) systems, or Cave automatic virtual environment (CAVEs). This program, called Multiverse, allows users to select and invoke visualization programs without leaving CAVE’s VR space. Multiverse is a kind of immersive “desktop environment” for users, with a three-dimensional graphical user interface. For application developers, Multiverse is a software framework with useful class ...

  11. Exploring the Role of Social Software in Global Software Development Projects

    DEFF Research Database (Denmark)

    Giuffrida, Rosalba; Dittrich, Y.

    2011-01-01

    We present a PhD project that investigates the use of Social Software (SoSo) in Global Software Development (GSD) teams. Since SoSo is unstructured and informal in its very nature, we explore how informal communication, which is challenging in GSD, is supported by SoSo in distributed teams and how...

  12. Knowledge-Based Software Management

    International Nuclear Information System (INIS)

    Sally Schaffner; Matthew Bickley; Brian Bevins; Leon Clancy; Karen White

    2003-01-01

    Management of software in a dynamic environment such as is found at Jefferson Lab can be a daunting task. Software development tasks are distributed over a wide range of people with varying skill levels. The machine configuration is constantly changing requiring upgrades to software at both the hardware control level and the operator control level. In order to obtain high quality support from vendor service agreements, which is vital to maintaining 24/7 operations, hardware and software must be kept at industry's current levels. This means that periodic upgrades independent of machine configuration changes must take place. It is often difficult to identify and organize the information needed to guide the process of development, upgrades and enhancements. Dependencies between support software and applications need to be consistently identified to prevent introducing errors during upgrades and to allow adequate testing to be planned and performed. Developers also need access to information regarding compilers, make files and organized distribution directories. This paper describes a system under development at Jefferson Lab which will provide software developers and managers this type of information in a timely user-friendly fashion. The current status and future plans for the system will be detailed

  13. SOFTWARE EFFORT ESTIMATION FRAMEWORK TO IMPROVE ORGANIZATION PRODUCTIVITY USING EMOTION RECOGNITION OF SOFTWARE ENGINEERS IN SPONTANEOUS SPEECH

    Directory of Open Access Journals (Sweden)

    B.V.A.N.S.S. Prabhakar Rao

    2015-10-01

    Full Text Available Productivity is a very important part of any organisation in general, and of the software industry in particular. Nowadays, software effort estimation is a challenging task, and effort and productivity are closely inter-related; both depend on the employees of the organisation. Every organisation requires emotionally stable employees for seamless and progressive working. In other industries this may be achievable without manpower, but software project development is a labour-intensive activity: each line of code must be delivered by a software engineer, with tools and techniques acting only as aids or supplements. Whatever the reason, the software industry has been suffering from low success rates, facing many problems in delivering projects on time and within the estimated budget. If we want to estimate the required effort of a project, it is important to know the emotional state of the team members. The responsibility of ensuring emotional contentment falls on the human resource department, which can deploy a series of systems to carry out its survey. This analysis can be done using a variety of tools; one such is the study of emotion recognition. The data needed for this are readily available and collectable and can be an excellent source for feedback systems. The challenge of recognizing emotion in speech is convoluted, primarily due to noisy recording conditions, variations in sentiment across the sample space, and the exhibition of multiple emotions in a single sentence. Ambiguity in the labels of the training set also increases the complexity of the problem addressed. Existing probabilistic models have dominated the study but present a flaw in scalability due to statistical inefficiency. The problem of sentiment prediction in spontaneous speech can thus be addressed using a hybrid system comprising a Convolution Neural Network and

  14. Integrating Remote Sensing with Species Distribution Models; Mapping Tamarisk Invasions Using the Software for Assisted Habitat Modeling (SAHM)

    OpenAIRE

    West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Young, Nicholas E.; Stohlgren, Thomas J.; Talbert, Colin; Talbert, Marian; Morisette, Jeffrey; Anderson, Ryan

    2016-01-01

    Early detection of invasive plant species is vital for the management of natural resources and protection of ecosystem processes. The use of satellite remote sensing for mapping the distribution of invasive plants is becoming more common, however conventional imaging software and classification methods have been shown to be unreliable. In this study, we test and evaluate the use of five species distribution model techniques fit with satellite remote sensing data to map invasive tamarisk (Tama...

  15. Component-based development of software language engineering tools

    NARCIS (Netherlands)

    Ssanyu, J.; Hemerik, C.

    2011-01-01

    In this paper we outline how Software Language Engineering (SLE) could benefit from Component-based Software Development (CBSD) techniques and present an architecture aimed at developing a coherent set of lightweight SLE components, fitting into a general-purpose component framework. In order to

  16. MOIRA Software Framework - Integrated User-friendly Shell for The Environmental Decision Support Systems

    International Nuclear Information System (INIS)

    Hofman, Dmitry; Nordlinder, Sture

    2003-01-01

    MOIRA DSS is a model-based computerised system for the identification of optimal remedial strategies to restore radionuclide-contaminated fresh water environments. Examples of the questions a decision-maker could address to the system are: 'Is lake liming effective in reducing the radiocesium uptake by fish?', 'Can control of catchment run-off be an effective measure against further redistribution of radionuclides by a river?', and 'Is sediment removal worthwhile to reduce further contamination of the aquatic environment?'. The MOIRA system can help the decision-maker avoid implementing inappropriate and expensive countermeasures. MOIRA makes it possible to predict the effects of implementing different types of countermeasures and to evaluate both the 'ecological' and 'social' effects of the countermeasures. The decision support process using MOIRA DSS can be subdivided into the following steps: definition of the site-specific environmental and socio-economic parameters using GIS-based data (unknown site-specific data can be estimated using GIS-based models, default data for the socio-economic parameters, or data directly provided by the user); provision of data about the fallout of radionuclides; definition of the time interval for which the prognosis will be made; definition of the alternative countermeasure strategies; evaluation of the consequences of implementing the user-defined strategies and a 'no actions' strategy using predictive models; ranking of strategies using the Multi-Attribute Analysis Module (MAA); and preparation of recommendations in the form of a report. This process requires the use of several computerised tools, such as predictive models, multi-attribute analysis software, a geographical information system and a data base. The MOIRA software framework could be used as the basis for the creation of a wide range of user-friendly and easy-to-learn decision support systems. It can also provide the advanced graphical user interface and data checking system for the

  17. Upper Secondary and Vocational Level Teachers at Social Software

    Science.gov (United States)

    Valtonen, Teemu; Kontkanen, Sini; Dillon, Patrick; Kukkonen, Jari; Väisänen, Pertti

    2014-01-01

    This study focuses on upper secondary and vocational level teachers as users of social software, i.e., what software they use during their leisure and work and for what purposes they use software in teaching. The study is theorised within a technological pedagogical content knowledge framework; the emphasis is especially on technological knowledge…

  18. An Open Source Extensible Smart Energy Framework

    Energy Technology Data Exchange (ETDEWEB)

    Rankin, Linda [V-Squared, Portland, OR (United States)

    2017-03-23

    Aggregated distributed energy resources are the subject of much interest in the energy industry and are expected to play an important role in meeting our future energy needs by changing how we use, distribute and generate electricity. This energy future includes an increased amount of energy from renewable resources, load management techniques to improve resiliency and reliability, and distributed energy storage and generation capabilities that can be managed to meet the needs of the grid as well as individual customers. These energy assets are commonly referred to as Distributed Energy Resources (DER). DERs rely on a means to communicate information between an energy provider and multitudes of devices. Today, DER control systems are typically vendor-specific, using custom hardware and software solutions. As a result, customers are locked into communication transport protocols, applications, tools, and data formats. Today’s systems are often difficult to extend to meet new application requirements, resulting in stranded assets when business requirements or energy management models evolve. By partnering with industry advisors and researchers, a DER implementation research platform, called the Smart Energy Framework (SEF), was developed. The hypothesis of this research was that an open source Internet of Things (IoT) framework could play a role in creating a commodity-based ecosystem for DER assets that would reduce costs and provide interoperable products. SEF is based on the AllJoyn™ IoT open source framework. The demonstration system incorporated DER assets, specifically batteries and smart water heaters. To verify the behavior of the distributed system, models of water heaters and batteries were also developed. An IoT interface for communicating between the assets and a control server was defined. This interface supports a series of “events” and telemetry reporting, similar to those defined by current smart grid communication standards. The results of this

  19. Software Formal Inspections Guidebook

    Science.gov (United States)

    1993-01-01

    The Software Formal Inspections Guidebook is designed to support the inspection process of software developed by and for NASA. This document provides information on how to implement a recommended and proven method for conducting formal inspections of NASA software. This Guidebook is a companion document to NASA Standard 2202-93, Software Formal Inspections Standard, approved April 1993, which provides the rules, procedures, and specific requirements for conducting software formal inspections. Application of the Formal Inspections Standard is optional to NASA program or project management. In cases where program or project management decide to use the formal inspections method, this Guidebook provides additional information on how to establish and implement the process. The goal of the formal inspections process as documented in the above-mentioned Standard and this Guidebook is to provide a framework and model for an inspection process that will enable the detection and elimination of defects as early as possible in the software life cycle. An ancillary aspect of the formal inspection process incorporates the collection and analysis of inspection data to effect continual improvement in the inspection process and the quality of the software subjected to the process.

  20. Development of DC-TOF control software framework

    International Nuclear Information System (INIS)

    Kim, Hong Joo; Kim, Hyun Ok

    2010-06-01

    Disk-Chopper Time-of-Flight spectrometer (DC-TOF) is a new cold neutron instrument under construction at the Korea Atomic Energy Research Institute (KAERI). It will be equipped with a total of 352 2 m PSDs (Position Sensitive Detectors), which are grouped into 11 panels. We developed the main DAQ/control software, which works well between the electronics' multiple DSPs and the user. It is convenient to operate the DC-TOF system and monitor its data quality using the GUI (Graphical User Interface). The software also satisfies the design throughput, with a test result of 100K events/s.

  1. Efficient and Flexible Climate Analysis with Python in a Cloud-Based Distributed Computing Framework

    Science.gov (United States)

    Gannon, C.

    2017-12-01

    As climate models become progressively more advanced, and spatial resolution further improved through various downscaling projects, climate projections at a local level are increasingly insightful and valuable. However, the raw size of climate datasets presents numerous hurdles for analysts wishing to develop customized climate risk metrics or perform site-specific statistical analysis. Four Twenty Seven, a climate risk consultancy, has implemented a Python-based distributed framework to analyze large climate datasets in the cloud. With the freedom afforded by efficiently processing these datasets, we are able to customize and continually develop new climate risk metrics using the most up-to-date data. Here we outline our process for using Python packages such as XArray and Dask to evaluate netCDF files in a distributed framework, StarCluster to operate in a cluster-computing environment, cloud computing services to access publicly hosted datasets, and how this setup is particularly valuable for generating climate change indicators and performing localized statistical analysis.
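
    A minimal version of that workflow, lazily opening a multi-file netCDF dataset with Dask-backed XArray and deriving a simple annual indicator, might look like the following; the file pattern, variable name, and threshold are assumptions for illustration.

        # Distributed climate-indicator sketch with xarray + dask
        # (file pattern, variable name, and threshold are assumptions).
        import xarray as xr

        # Lazily open many netCDF files as one Dask-chunked dataset.
        ds = xr.open_mfdataset("tasmax_day_*.nc", chunks={"time": 365})

        # Indicator: days per year with tasmax above 35 degC (308.15 K).
        hot_days = (ds["tasmax"] > 308.15).groupby("time.year").sum("time")

        print(hot_days.compute())  # triggers the distributed computation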

  2. A framework for developing remote sensing applications

    International Nuclear Information System (INIS)

    Ahmad, T.; Hayat, M.F.; Afzal, M.; Asif, H.M.S.; Asif, K.H.

    2014-01-01

    Remote Sensing Applications (RSAs) are important as one of the critical enablers of e-systems such as e-government, e-commerce, and e-science. In this study, we argue that owing to the specialized needs of RSAs, such as volatility and their interactive nature, a customized Software Engineering (SE) approach should be adopted for their development. Based on this argument, we identify the shortcomings of conventional SE approaches and of the classical waterfall software development life cycle model. We then propose a modification to the classical waterfall model, yielding a customized software development framework for RSAs. We identify four (4) different types of changes that can occur to an already developed RS application; the proposed framework is capable of incorporating all four types. Keywords: remote sensing, software engineering, functional requirements, types of changes. (author)

  3. Design and Analysis of Electrical Distribution Networks and Balancing Markets in the UK: A New Framework with Applications

    Directory of Open Access Journals (Sweden)

    Vijayanarasimha Hindupur Pakka

    2016-02-01

    Full Text Available We present a framework for the design and simulation of electrical distribution systems and short term electricity markets specific to the UK. The modelling comprises packages relating to the technical and economic features of the electrical grid. The first package models the medium/low distribution networks with elements such as transformers, voltage regulators, distributed generators, composite loads, distribution lines and cables. This model forms the basis for elementary analysis such as load flow and short circuit calculations and also enables the investigation of effects of integrating distributed resources, voltage regulation, resource scheduling and the like. The second part of the modelling exercise relates to the UK short term electricity market with specific features such as balancing mechanism and bid-offer strategies. The framework is used for investigating methods of voltage regulation using multiple control technologies, to demonstrate the effects of high penetration of wind power on balancing prices and finally use these prices towards achieving demand response through aggregated prosumers.

  4. Pybus - A Python Software Bus

    International Nuclear Information System (INIS)

    Lavrijsen, Wim T.L.P.

    2004-01-01

    A software bus, just like its hardware equivalent, allows for the discovery, installation, configuration, loading, unloading, and run-time replacement of software components, as well as channeling of inter-component communication. Python, a popular open-source programming language, encourages a modular design in software written in it, but it offers little or no component functionality. However, the language and its interpreter provide sufficient hooks to implement a thin, integral layer of component support. This functionality can be presented to the developer in the form of a module, making it very easy to use. This paper describes a Python module, PyBus, with which the concept of a 'software bus' can be realized in Python. It demonstrates, within the context of the ATLAS software framework Athena, how PyBus can be used for the installation and (run-time) configuration of software, not necessarily Python modules, from a Python application in a way that is transparent to the end-user.
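
    The essence of such a bus (component discovery, loading, and channelled inter-component communication) fits in a small sketch. The class below is an invented illustration of the concept, not the PyBus API itself.

        # Minimal "software bus" sketch: load components by module name and
        # channel messages between them (invented; not the PyBus API).
        import importlib
        from collections import defaultdict

        class Bus:
            def __init__(self):
                self.components = {}
                self.channels = defaultdict(list)

            def load(self, name, module_path):
                """Discover and install a component from a Python module."""
                self.components[name] = importlib.import_module(module_path)

            def subscribe(self, channel, callback):
                self.channels[channel].append(callback)

            def publish(self, channel, message):
                for callback in self.channels[channel]:
                    callback(message)

        bus = Bus()
        bus.subscribe("events", print)
        bus.publish("events", {"source": "tracker", "status": "ready"})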

  5. Optimal Electricity Distribution Framework for Public Space: Assessing Renewable Energy Proposals for Freshkills Park, New York City

    Directory of Open Access Journals (Sweden)

    Kaan Ozgun

    2015-03-01

    Full Text Available Integrating renewable energy into public space is becoming more common as a climate change solution. However, this approach is often guided by the environmental pillar of sustainability, with less focus on the economic and social pillars. The purpose of this paper is to examine this issue in the speculative renewable energy propositions for Freshkills Park in New York City submitted for the 2012 Land Art Generator Initiative (LAGI) competition. This paper first proposes an optimal electricity distribution (OED) framework in and around public spaces based on relevant ecology and energy theory (Odum’s fourth and fifth laws of thermodynamics). This framework addresses social engagement related to public interaction, and economic engagement related to the estimated quantity of electricity produced, in conjunction with environmental engagement related to the embodied energy required to construct the renewable energy infrastructure. Next, the study uses the OED framework to analyse the top twenty-five projects submitted for the LAGI 2012 competition. The findings reveal an electricity distribution imbalance and suggest a lack of in-depth understanding about sustainable electricity distribution within public space design. The paper concludes with suggestions for future research.

  6. An Automated Defect Prediction Framework using Genetic Algorithms: A Validation of Empirical Studies

    Directory of Open Access Journals (Sweden)

    Juan Murillo-Morera

    2016-05-01

    Full Text Available Today, it is common for software projects to collect measurement data through development processes. With these data, defect prediction software can try to estimate the defect proneness of a software module, with the objective of assisting and guiding software practitioners. With timely and accurate defect predictions, practitioners can focus their limited testing resources on higher-risk areas. This paper reports the results of three empirical studies that use an automated genetic defect prediction framework. This framework generates and compares different learning schemes (preprocessing + attribute selection + learning algorithm) and selects the best one using a genetic algorithm, with the objective of estimating the defect proneness of a software module. The first empirical study is a performance comparison of our framework with the most important framework in the literature. The second is a performance and runtime comparison between our framework and an exhaustive framework. The third, a sensitivity analysis, is our main contribution in this paper. Performance of the software defect prediction models (measured using AUC, Area Under the Curve) was validated using the NASA-MDP and PROMISE data sets. Seventeen data sets from NASA-MDP (13) and PROMISE (4) projects were analyzed by running an NxM-fold cross-validation. A genetic algorithm was used to select the components of the learning schemes automatically, and to assess and report the results. Our results show similar performance between the two frameworks, with our framework requiring less runtime than the exhaustive one. Finally, we report the best configuration according to the sensitivity analysis.
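
    A compressed version of the scheme search, candidate (preprocessing, attribute selection, learner) triples scored by cross-validated AUC inside a simple genetic loop, is sketched below with scikit-learn; it is a toy stand-in for the paper's framework, not its implementation.

        # Toy genetic search over learning schemes scored by cross-validated
        # AUC (a stand-in for the paper's framework, using scikit-learn).
        import random
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import MinMaxScaler, StandardScaler

        X, y = make_classification(n_samples=400, n_features=20, random_state=0)

        PREPROC = [StandardScaler(), MinMaxScaler()]
        SELECT = [SelectKBest(f_classif, k=5), SelectKBest(f_classif, k=10)]
        LEARN = [LogisticRegression(max_iter=1000),
                 RandomForestClassifier(random_state=0)]

        def fitness(genes):
            p, s, l = genes
            pipe = make_pipeline(PREPROC[p], SELECT[s], LEARN[l])
            return cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()

        def mutate(genes):
            bounds = [len(PREPROC), len(SELECT), len(LEARN)]
            genes = list(genes)
            i = random.randrange(3)
            genes[i] = random.randrange(bounds[i])
            return tuple(genes)

        random.seed(0)
        population = [tuple(random.randrange(2) for _ in range(3))
                      for _ in range(6)]
        for _generation in range(5):
            survivors = sorted(population, key=fitness, reverse=True)[:3]
            population = survivors + [mutate(g) for g in survivors]

        best = max(population, key=fitness)
        print("best scheme:", best, "AUC:", round(fitness(best), 3))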

  7. Maintenance simulation: Software issues

    Energy Technology Data Exchange (ETDEWEB)

    Luk, C.H.; Jette, M.A.

    1995-07-01

    The maintenance of a distributed software system in a production environment involves: (1) maintaining software integrity, (2) maintaining database integrity, (3) adding new features, and (4) adding new systems. These issues will be discussed in general terms: what they are and how they are handled. This paper will present our experience with a distributed resource management system that accounts for resources consumed, in real time, on a network of heterogeneous computers. The simulated environments used to maintain this system will be presented in relation to the four maintenance areas.

  8. A development framework for semantically interoperable health information systems.

    Science.gov (United States)

    Lopez, Diego M; Blobel, Bernd G M E

    2009-02-01

    Semantic interoperability is a basic challenge to be met for new generations of distributed, communicating and co-operating health information systems (HIS) enabling shared care and e-Health. Analysis, design, implementation and maintenance of such systems and intrinsic architectures have to follow a unified development methodology. The Generic Component Model (GCM) is used as a framework for modeling any system to evaluate and harmonize state of the art architecture development approaches and standards for health information systems as well as to derive a coherent architecture development framework for sustainable, semantically interoperable HIS and their components. The proposed methodology is based on the Rational Unified Process (RUP), taking advantage of its flexibility to be configured for integrating other architectural approaches such as Service-Oriented Architecture (SOA), Model-Driven Architecture (MDA), ISO 10746, and HL7 Development Framework (HDF). Existing architectural approaches have been analyzed, compared and finally harmonized towards an architecture development framework for advanced health information systems. Starting with the requirements for semantic interoperability derived from paradigm changes for health information systems, and supported in formal software process engineering methods, an appropriate development framework for semantically interoperable HIS has been provided. The usability of the framework has been exemplified in a public health scenario.

  9. Thermodynamic framework for compact q-Gaussian distributions

    Science.gov (United States)

    Souza, Andre M. C.; Andrade, Roberto F. S.; Nobre, Fernando D.; Curado, Evaldo M. F.

    2018-02-01

    Recent works have associated systems of particles, characterized by short-range repulsive interactions and evolving under overdamped motion, to a nonlinear Fokker-Planck equation within the class of nonextensive statistical mechanics, with a nonlinear diffusion contribution whose exponent is given by ν = 2 - q. The particular case ν = 2 applies to interacting vortices in type-II superconductors, whereas ν > 2 covers systems of particles characterized by short-range power-law interactions, where correlations among particles are taken into account. In the former case, several studies presented a consistent thermodynamic framework based on the definition of an effective temperature θ (presenting experimental values much higher than typical room temperatures T, so that thermal noise could be neglected), conjugated to a generalized entropy s_ν (with ν = 2). Herein, the whole thermodynamic scheme is revisited and extended to systems of particles interacting repulsively, through short-ranged potentials, described by an entropy s_ν, with ν > 1, covering the ν = 2 (vortices in type-II superconductors) and ν > 2 (short-range power-law interactions) physical examples. One basic requirement concerns a cutoff in the equilibrium distribution P_eq(x), approached due to a confining external harmonic potential, ϕ(x) = αx^2/2 (α > 0). The main results achieved are: (a) the definition of an effective temperature θ conjugated to the entropy s_ν; (b) the construction of a Carnot cycle, whose efficiency is shown to be η = 1 - (θ_2/θ_1), where θ_1 and θ_2 are the effective temperatures associated with two isothermal transformations, with θ_1 > θ_2; (c) thermodynamic potentials, Maxwell relations, and response functions. The present thermodynamic framework, for a system of interacting particles under the above-mentioned conditions, and associated to an entropy s_ν, with ν > 1, certainly enlarges the possibility of experimental verifications.
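
    For readability, the key relations quoted in the abstract can be transcribed into display form (a transcription only, with θ_1 > θ_2 the effective temperatures of the two isothermal branches):

        % Key relations from the abstract, transcribed into LaTeX.
        \nu = 2 - q, \qquad
        \phi(x) = \frac{\alpha x^{2}}{2} \quad (\alpha > 0), \qquad
        \eta = 1 - \frac{\theta_{2}}{\theta_{1}} \quad (\theta_{1} > \theta_{2}).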

  10. Value Driven Outcomes (VDO): a pragmatic, modular, and extensible software framework for understanding and improving health care costs and outcomes.

    Science.gov (United States)

    Kawamoto, Kensaku; Martin, Cary J; Williams, Kip; Tu, Ming-Chieh; Park, Charlton G; Hunter, Cheri; Staes, Catherine J; Bray, Bruce E; Deshmukh, Vikrant G; Holbrook, Reid A; Morris, Scott J; Fedderson, Matthew B; Sletta, Amy; Turnbull, James; Mulvihill, Sean J; Crabtree, Gordon L; Entwistle, David E; McKenna, Quinn L; Strong, Michael B; Pendleton, Robert C; Lee, Vivian S

    2015-01-01

    To develop expeditiously a pragmatic, modular, and extensible software framework for understanding and improving healthcare value (costs relative to outcomes). In 2012, a multidisciplinary team was assembled by the leadership of the University of Utah Health Sciences Center and charged with rapidly developing a pragmatic and actionable analytics framework for understanding and enhancing healthcare value. Based on an analysis of relevant prior work, a value analytics framework known as Value Driven Outcomes (VDO) was developed using an agile methodology. Evaluation consisted of measurement against project objectives, including implementation timeliness, system performance, completeness, accuracy, extensibility, adoption, satisfaction, and the ability to support value improvement. A modular, extensible framework was developed to allocate clinical care costs to individual patient encounters. For example, labor costs in a hospital unit are allocated to patients based on the hours they spent in the unit; actual medication acquisition costs are allocated to patients based on utilization; and radiology costs are allocated based on the minutes required for study performance. Relevant process and outcome measures are also available. A visualization layer facilitates the identification of value improvement opportunities, such as high-volume, high-cost case types with high variability in costs across providers. Initial implementation was completed within 6 months, and all project objectives were fulfilled. The framework has been improved iteratively and is now a foundational tool for delivering high-value care. The framework described can be expeditiously implemented to provide a pragmatic, modular, and extensible approach to understanding and improving healthcare value. © The Author 2014. Published by Oxford University Press on behalf of the American Medical Informatics Association.

  11. Algorithm and Implementation of Distributed ESN Using Spark Framework and Parallel PSO

    Directory of Open Access Journals (Sweden)

    Kehe Wu

    2017-04-01

    Full Text Available The echo state network (ESN) employs a huge reservoir with sparsely and randomly connected internal nodes and only trains the output weights, which avoids the suboptimal solutions, exploding and vanishing gradients, high complexity and other disadvantages faced by traditional recurrent neural network (RNN) training. In light of its outstanding adaptation to nonlinear dynamical systems, the ESN has been applied in a wide range of applications. However, in the era of Big Data, with an enormous amount of data being generated continuously every day, the data in real applications are often distributed in storage, and thus the centralized ESN training process becomes technologically unsuitable. In order to meet the requirements of Big Data applications in the real world, in this study we propose an algorithm, and its implementation, for distributed ESN training. The algorithm is based on the parallel particle swarm optimization (P-PSO) technique, and the implementation uses Spark, a well-known large-scale data processing framework. Four extremely large-scale datasets, including artificial benchmarks, real-world data and image data, are adopted to verify our framework on an elastic platform. Experimental results indicate that the proposed work performs well at Big Data scale in terms of speed, accuracy and generalization capability.
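
    The P-PSO ingredient can be miniaturized as follows: a particle swarm searches for readout weights that minimize the output error, and the loop over fitness evaluations is exactly the step a Spark job would parallelise. Everything here (data, dimensions, coefficients) is an invented toy, not the paper's implementation.

        # Toy PSO over ESN-style readout weights; the per-particle fitness
        # evaluations are the steps a Spark job would distribute (toy data).
        import numpy as np

        rng = np.random.default_rng(0)
        states = rng.standard_normal((200, 30))   # stand-in reservoir states
        target = states @ rng.standard_normal(30) + 0.1 * rng.standard_normal(200)

        def fitness(w):                            # mean squared readout error
            return float(np.mean((states @ w - target) ** 2))

        n_particles, dim = 20, 30
        pos = rng.standard_normal((n_particles, dim))
        vel = np.zeros_like(pos)
        pbest, pbest_f = pos.copy(), np.array([fitness(w) for w in pos])
        gbest = pbest[pbest_f.argmin()].copy()

        for _ in range(100):
            r1, r2 = rng.random((2, n_particles, dim))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos += vel
            f = np.array([fitness(w) for w in pos])  # parallelisable step
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = pos[improved], f[improved]
            gbest = pbest[pbest_f.argmin()].copy()

        print("best readout MSE:", round(float(pbest_f.min()), 4))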

  12. Integrating a Trust Framework with a Distributed Certificate Validation Scheme for MANETs

    Directory of Open Access Journals (Sweden)

    Marias Giannis F

    2006-01-01

    Full Text Available Many trust establishment solutions in mobile ad hoc networks (MANETs) rely on public key certificates. Therefore, they should be accompanied by an efficient mechanism for certificate revocation and validation. Ad hoc distributed OCSP for trust (ADOPT) is a lightweight, distributed, on-demand scheme based on cached OCSP responses, which provides certificate status information to the nodes of a MANET. In this paper we discuss the ADOPT scheme and issues on its deployment over MANETs. We present some possible threats to ADOPT and suggest the use of a trust assessment and establishment framework, named ad hoc trust framework (ATF), to support ADOPT's robustness and efficiency. ADOPT is deployed as a trust-aware application that provides feedback to ATF, which calculates the trustworthiness of the peer nodes' functions and helps ADOPT to improve its performance by rapidly locating valid certificate status information. Moreover, we introduce the TrustSpan algorithm to reduce the overhead that ATF produces, and the TrustPath algorithm to identify and use trusted routes for propagating sensitive information, such as third parties' accusations. Simulation results show that ATF adds limited overhead compared to its efficiency in detecting and isolating malicious and selfish nodes. ADOPT's reliability is increased, since it can rapidly locate a legitimate response by using information provided by ATF.

  13. Frameworks Coordinate Scientific Data Management

    Science.gov (United States)

    2012-01-01

    Jet Propulsion Laboratory computer scientists developed a unique software framework to help NASA manage its massive amounts of science data. Through a partnership with the Apache Software Foundation of Forest Hill, Maryland, the technology is now available as an open-source solution and is in use by cancer researchers and pediatric hospitals.

  14. Distributed, Embedded and Real-time Java Systems

    CERN Document Server

    Wellings, Andy

    2012-01-01

    Research on real-time Java technology has been prolific over the past decade, leading to a large number of corresponding hardware and software solutions, and frameworks for distributed and embedded real-time Java systems. This book is aimed primarily at researchers in real-time embedded systems, particularly those who wish to understand the current state of the art in using Java in this domain. Much of the work in distributed, embedded and real-time Java has focused on the Real-time Specification for Java (RTSJ) as the underlying base technology, and consequently many of the chapters in this book address issues with, or solve problems using, this framework. Describes innovative techniques in: scheduling, memory management, quality of service and communication systems supporting real-time Java applications; Includes coverage of multiprocessor embedded systems and parallel programming; Discusses state-of-the-art resource management for embedded systems, including Java’s real-time garbage collect...

  15. Free Software and Free Textbooks

    Science.gov (United States)

    Takhteyev, Yuri

    2012-01-01

    Some of the world's best and most sophisticated software is distributed today under "free" or "open source" licenses, which allow the recipients of such software to use, modify, and share it without paying royalties or asking for permissions. If this works for software, could it also work for educational resources, such as books? The economics of…

  16. Colaborated Architechture Framework for Composition UML 2.0 in Zachman Framework

    Science.gov (United States)

    Hermawan; Hastarista, Fika

    2016-01-01

    Zachman Framework (ZF) is the enterprise architecture framework most widely adopted in Enterprise Information System (EIS) development. In this study, a Colaborated Architechture Framework (CAF) has been developed to combine ZF with Unified Modeling Language (UML) 2.0 modeling. The CAF provides a composition of the ZF matrix in which each cell consists of Model Driven Architecture (MDA) artifacts drawn from various UML models and Software Requirement Specification (SRS) documents. This modeling approach was used to develop an Enterprise Resource Planning (ERP) system. Because the ERP covers a large number of applications with complex relations, an Agile Model Driven Design (AMDD) approach is needed as an advanced method to transform the MDA into application-module components efficiently and accurately. Finally, use of the CAF led to good fulfilment of the needs of all stakeholders involved in the overall Rational Unified Process (RUP), as well as high satisfaction with the functional features of the ERP software at PT. Iglas (Persero) Gresik.

  17. Generic Software Architecture for Prognostics (GSAP) User Guide

    Science.gov (United States)

    Teubert, Christopher Allen; Daigle, Matthew John; Watkins, Jason; Sankararaman, Shankar; Goebel, Kai

    2016-01-01

    The Generic Software Architecture for Prognostics (GSAP) is a framework for applying prognostics. It makes applying prognostics easier by implementing many of the common elements across prognostic applications. The standard interface enables reuse of prognostic algorithms and models across systems using the GSAP framework.

  18. Gammasphere software development. Progress report

    Energy Technology Data Exchange (ETDEWEB)

    Piercey, R.B.

    1994-01-01

    This report describes the activities of the nuclear physics group at Mississippi State University which were performed during 1993. Significant progress has been made in the focus areas: chairing the Gammasphere Software Working Group (SWG); assisting with the porting and enhancement of the ORNL UPAK histogramming software package; and developing standard formats for Gammasphere data products. In addition, they have established a new public ftp archive to distribute software and software development tools and information.

  19. Model-centric software architecture reconstruction

    NARCIS (Netherlands)

    Stoermer, C.; Rowe, A.; O'Brien, L.; Verhoef, C.

    2006-01-01

    Much progress has been achieved in defining methods, techniques, and tools for software architecture reconstruction (SAR). However, less progress has been achieved in constructing reasoning frameworks from existing systems that support organizations in architecture analysis and design decisions.

  20. The SOPHY Framework

    DEFF Research Database (Denmark)

    Laursen, Karl Kaas; Pedersen, Martin Fejrskov; Bendtsen, Jan Dimon

    The goal of the Sophy framework (Simulation, Observation and Planning in Hybrid Systems) is to implement a multi-level framework for description, simulation, observation, fault detection and recovery, diagnosis and autonomous planning in distributed embedded hybrid systems. A Java-based distributed...

  1. The SOPHY framework

    DEFF Research Database (Denmark)

    Laursen, Karl Kaas; Pedersen, M. F.; Bendtsen, Jan Dimon

    2005-01-01

    The goal of the Sophy framework (Simulation, Observation and Planning in Hybrid Systems) is to implement a multi-level framework for description, simulation, observation, fault detection and recovery, diagnosis and autonomous planning in distributed embedded hybrid systems. A Java-based distributed...

  2. Dynamic, Interactive and Visual Analysis of Population Distribution and Mobility Dynamics in an Urban Environment Using the Mobility Explorer Framework

    Directory of Open Access Journals (Sweden)

    Jan Peters-Anders

    2017-05-01

    Full Text Available This paper investigates the extent to which a mobile data source can be utilised to generate new information intelligence for decision-making in smart city planning processes. In this regard, the Mobility Explorer framework is introduced and applied to the City of Vienna (Austria by using anonymised mobile phone data from a mobile phone service provider. This framework identifies five necessary elements that are needed to develop complex planning applications. As part of the investigation and experiments a new dynamic software tool, called Mobility Explorer, has been designed and developed based on the requirements of the planning department of the City of Vienna. As a result, the Mobility Explorer enables city stakeholders to interactively visualise the dynamic diurnal population distribution, mobility patterns and various other complex outputs for planning needs. Based on the experiences during the development phase, this paper discusses mobile data issues, presents the visual interface, performs various user-defined analyses, demonstrates the application’s usefulness and critically reflects on the evaluation results of the citizens’ motion exploration that reveal the great potential of mobile phone data in smart city planning but also depict its limitations. These experiences and lessons learned from the Mobility Explorer application development provide useful insights for other cities and planners who want to make informed decisions using mobile phone data in their city planning processes through dynamic visualisation of Call Data Record (CDR data.

  3. Software for the international linear collider: Simulation and ...

    Indian Academy of Sciences (India)

    Software plays an increasingly important role already in the early stages of a large project like the ILC. In international collaboration, a data format for the ILC detector and physics studies has been developed. Building upon this, software frameworks are made available which ease event reconstruction and analysis.

  4. When to make proprietary software open source

    NARCIS (Netherlands)

    Caulkins, J.P.; Feichtinger, G.; Grass, D.; Hartl, R.F.; Kort, P.M.; Seidl, A.

    Software can be distributed closed source (proprietary) or open source (developed collaboratively). While a firm cannot sell open source software, and so loses potential sales revenue, the open source software development process can have a substantial positive impact on the quality of the software.

  5. Software Development and Test Methodology for a Distributed Ground System

    Science.gov (United States)

    Ritter, George; Guillebeau, Pat; McNair, Ann R. (Technical Monitor)

    2002-01-01

    The Marshall Space Flight Center's (MSFC) Payload Operations Center (POC) ground system has evolved over a period of about 10 years. During this time the software processes have migrated from more traditional to more contemporary development processes in an effort to minimize unnecessary overhead while maximizing process benefits. The Software processes that have evolved still emphasize requirements capture, software configuration management, design documenting, and making sure the products that have been developed are accountable to initial requirements. This paper will give an overview of how the Software Processes have evolved, highlighting the positives as well as the negatives. In addition, we will mention the COTS tools that have been integrated into the processes and how the COTS have provided value to the project.

  6. Java Web Frameworks Which One to Choose?

    OpenAIRE

    Nassourou, Mohamadou

    2010-01-01

    This article discusses web frameworks that are available to a software developer in the Java language. It introduces the MVC paradigm and some frameworks that implement it. The article presents an overview of the Struts, Spring MVC, and JSF frameworks, as well as guidelines for selecting one of them as a development environment.

  7. Trust in Co-sourced Software Development

    DEFF Research Database (Denmark)

    Schlichter, Bjarne Rerup; Persson, John Stouby

    2014-01-01

    Software development projects are increasingly geographically distributed through offshoring. Co-sourcing is a highly integrative and cohesive approach to software development offshoring that has been seen to be successful. However, research on how dynamic aspects of trust are shaped in co-sourcing activities is limited. The paper suggests how certain work practices among developers and managers can be explained using a dynamic trust lens based on Abstract Systems, especially dis- and re-embedding mechanisms.

  8. Modeling and Implementation of Cattle/Beef Supply Chain Traceability Using a Distributed RFID-Based Framework in China

    Science.gov (United States)

    Liang, Wanjie; Cao, Jing; Fan, Yan; Zhu, Kefeng; Dai, Qiwei

    2015-01-01

    In recent years, traceability systems have been developed as effective tools for improving the transparency of supply chains, thereby guaranteeing the quality and safety of food products. In this study, we proposed a cattle/beef supply chain traceability model and a traceability system based on radio frequency identification (RFID) technology and the EPCglobal network. First of all, the transformations of traceability units were defined and analyzed throughout the cattle/beef chain. Secondly, we described the internal and external traceability information acquisition, transformation, and transmission processes throughout the beef supply chain in detail, and explained a methodology for modeling traceability information using the electronic product code information service (EPCIS) framework. Then, the traceability system was implemented based on the Fosstrak and FreePastry software packages, and animal ear tag codes and electronic product codes (EPC) were employed to identify traceability units. Finally, a cattle/beef supply chain comprising a breeding business, a slaughter and processing business, a distribution business, and a sales outlet was used as a case study to evaluate the beef supply chain traceability system. The results demonstrated that the major advantages of the traceability system are the effective sharing of information among businesses and the gapless traceability of the cattle/beef supply chain. PMID:26431340

  9. Modeling and Implementation of Cattle/Beef Supply Chain Traceability Using a Distributed RFID-Based Framework in China.

    Science.gov (United States)

    Liang, Wanjie; Cao, Jing; Fan, Yan; Zhu, Kefeng; Dai, Qiwei

    2015-01-01

    In recent years, traceability systems have been developed as effective tools for improving the transparency of supply chains, thereby guaranteeing the quality and safety of food products. In this study, we proposed a cattle/beef supply chain traceability model and a traceability system based on radio frequency identification (RFID) technology and the EPCglobal network. First of all, the transformations of traceability units were defined and analyzed throughout the cattle/beef chain. Secondly, we described the internal and external traceability information acquisition, transformation, and transmission processes throughout the beef supply chain in detail, and explained a methodology for modeling traceability information using the electronic product code information service (EPCIS) framework. Then, the traceability system was implemented based on the Fosstrak and FreePastry software packages, and animal ear tag codes and electronic product codes (EPC) were employed to identify traceability units. Finally, a cattle/beef supply chain comprising a breeding business, a slaughter and processing business, a distribution business, and a sales outlet was used as a case study to evaluate the beef supply chain traceability system. The results demonstrated that the major advantages of the traceability system are the effective sharing of information among businesses and the gapless traceability of the cattle/beef supply chain.
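
    To make the traceability idea concrete, here is a minimal Java sketch of the kind of information an EPCIS-style object event records as a traceability unit moves through the chain. The class, the business-step labels and the example EPC are illustrative; they do not reflect Fosstrak's actual capture or query API.

    import java.time.Instant;
    import java.util.ArrayList;
    import java.util.List;

    public class TraceDemo {

        // One observation of a traceability unit: what, where, when, which step.
        record ObjectEvent(String epc, String bizStep, String readPoint, Instant time) {}

        public static void main(String[] args) {
            List<ObjectEvent> trace = new ArrayList<>();
            String epc = "urn:epc:id:sgtin:0614141.107346.2018"; // illustrative EPC
            trace.add(new ObjectEvent(epc, "slaughtering", "plant-01", Instant.now()));
            trace.add(new ObjectEvent(epc, "shipping", "distributor-07", Instant.now()));
            trace.add(new ObjectEvent(epc, "retail_selling", "outlet-12", Instant.now()));

            // Gapless trace for one unit: filter the event stream by its EPC.
            trace.stream()
                 .filter(e -> e.epc().equals(epc))
                 .forEach(e -> System.out.println(e.bizStep() + " @ " + e.readPoint()));
        }
    }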

  10. Software Maintenance Management Evaluation and Continuous Improvement

    CERN Document Server

    April, Alain

    2008-01-01

    This book explores the domain of software maintenance management and provides road maps for improving software maintenance organizations. It describes full maintenance maturity models organized by levels 1, 2, and 3, which allow for benchmarking and continuous improvement paths. Goals for each key practice area are also provided, and the model presented is fully aligned with the architecture and framework of software development maturity models of CMMI and ISO 15504. It is complete with case studies, figures, tables, and graphs.

  11. View of software for HEP experiments

    Energy Technology Data Exchange (ETDEWEB)

    Johnstad, H.; Lebrun, P.; Lessner, E.S.; Montgomery, H.E.

    1986-05-01

    A view of the software structure typical of a High Energy Physics experiment is given and the availability of general software modules in most of the important regions is discussed. The aim is to provide a framework for discussion of capabilities and inadequacies and thereby define areas where effort should be assigned, and perhaps also to serve as a useful source document for the newcomer to High Energy Physics. 74 refs.

  12. View of software for HEP experiments

    International Nuclear Information System (INIS)

    Johnstad, H.; Lebrun, P.; Lessner, E.S.; Montgomery, H.E.

    1986-05-01

    A view of the software structure typical of a High Energy Physics experiment is given and the availability of general software modules in most of the important regions is discussed. The aim is to provide a framework for discussion of capabilities and inadequacies and thereby define areas where effort should be assigned, and perhaps also to serve as a useful source document for the newcomer to High Energy Physics. 74 refs

  13. Semantic Web technologies in software engineering

    OpenAIRE

    Gall, H C; Reif, G

    2008-01-01

    Over the years, the software engineering community has developed various tools to support the specification, development, and maintenance of software. Many of these tools use proprietary data formats to store artifacts, which hampers interoperability. However, the Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. Ontologies are used to define the concepts in the domain of discourse and their relationships an...
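
    As a small illustration of the approach, the following sketch uses Apache Jena to describe a dependency between two software artifacts in RDF and serialise it in a tool-neutral format. The se: vocabulary (namespace and property names) is invented for this example.

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdf.model.Property;
    import org.apache.jena.rdf.model.Resource;

    public class ArtifactGraph {
        public static void main(String[] args) {
            String ns = "http://example.org/se-ontology#"; // hypothetical vocabulary
            Model model = ModelFactory.createDefaultModel();
            model.setNsPrefix("se", ns);

            // Two artifacts and a dependency relation between them.
            Property dependsOn = model.createProperty(ns, "dependsOn");
            Resource parser = model.createResource(ns + "ParserModule");
            Resource lexer  = model.createResource(ns + "LexerModule");
            parser.addProperty(dependsOn, lexer);

            // Shared, tool-neutral serialisation other tools can reuse.
            model.write(System.out, "TURTLE");
        }
    }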

  14. A framework for stochastic simulation of distribution practices for hotel reservations

    Energy Technology Data Exchange (ETDEWEB)

    Halkos, George E.; Tsilika, Kyriaki D. [Laboratory of Operations Research, Department of Economics, University of Thessaly, Korai 43, 38 333, Volos (Greece)

    2015-03-10

    The focus of this study is primarily on the Greek hotel industry. The objective is to design and develop a framework for stochastic simulation of reservation requests, reservation arrivals, cancellations and hotel occupancy with a planning horizon of a tourist season. In the Greek hospitality industry there were two competing policies for the reservation planning process up to 2003: reservations coming directly from customers and reservations management relying on tour operator(s). Recently, the Internet along with other emerging technologies has offered the potential to disrupt enduring distribution arrangements. The focus of the study is on the choice of distribution intermediaries. We present an empirical model for the hotel reservation planning process that makes use of symbolic simulation, the Monte Carlo method, as requests for reservations, cancellations, and arrival rates are all sources of uncertainty. We consider as a case study the problem of determining the optimal booking strategy for a medium-size hotel on Skiathos Island, Greece. Probability distributions and parameter estimation result from the historical data available and from suggestions made in the relevant literature. The results of this study may assist hotel managers in defining distribution strategies for hotel rooms and in evaluating the performance of the reservations management system.

  15. A framework for stochastic simulation of distribution practices for hotel reservations

    International Nuclear Information System (INIS)

    Halkos, George E.; Tsilika, Kyriaki D.

    2015-01-01

    The focus of this study is primarily on the Greek hotel industry. The objective is to design and develop a framework for stochastic simulation of reservation requests, reservation arrivals, cancellations and hotel occupancy with a planning horizon of a tourist season. In the Greek hospitality industry there were two competing policies for the reservation planning process up to 2003: reservations coming directly from customers and reservations management relying on tour operator(s). Recently, the Internet along with other emerging technologies has offered the potential to disrupt enduring distribution arrangements. The focus of the study is on the choice of distribution intermediaries. We present an empirical model for the hotel reservation planning process that makes use of symbolic simulation, the Monte Carlo method, as requests for reservations, cancellations, and arrival rates are all sources of uncertainty. We consider as a case study the problem of determining the optimal booking strategy for a medium-size hotel on Skiathos Island, Greece. Probability distributions and parameter estimation result from the historical data available and from suggestions made in the relevant literature. The results of this study may assist hotel managers in defining distribution strategies for hotel rooms and in evaluating the performance of the reservations management system.
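
    A minimal Monte Carlo sketch in the spirit of this framework is given below: daily reservation requests arrive at random, a fraction of held bookings is cancelled each day, and season occupancy is averaged over many runs. All distributions and parameter values are illustrative assumptions, not the paper's calibrated estimates.

    import java.util.Random;

    public class ReservationSim {

        public static void main(String[] args) {
            int rooms = 60, seasonDays = 120, runs = 10_000;   // hypothetical hotel
            double meanRequestsPerDay = 5.0, cancelProb = 0.04; // assumed parameters
            Random rng = new Random(42);
            double totalOccupancy = 0;

            for (int run = 0; run < runs; run++) {
                int booked = 0;
                double occupiedRoomDays = 0;
                for (int day = 0; day < seasonDays; day++) {
                    booked -= binomial(rng, booked, cancelProb);   // cancellations
                    int requests = poisson(rng, meanRequestsPerDay);
                    booked = Math.min(rooms, booked + requests);   // accept up to capacity
                    occupiedRoomDays += booked;
                }
                totalOccupancy += occupiedRoomDays / (rooms * (double) seasonDays);
            }
            System.out.printf("mean occupancy: %.1f%%%n", 100 * totalOccupancy / runs);
        }

        // Poisson sample via Knuth's method.
        static int poisson(Random rng, double mean) {
            double l = Math.exp(-mean), p = 1.0;
            int k = 0;
            do { k++; p *= rng.nextDouble(); } while (p > l);
            return k - 1;
        }

        // Binomial sample: n independent trials with success probability p.
        static int binomial(Random rng, int n, double p) {
            int x = 0;
            for (int i = 0; i < n; i++) if (rng.nextDouble() < p) x++;
            return x;
        }
    }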

  16. Management of Globally Distributed Software Development Projects in Multiple-Vendor Constellations

    Science.gov (United States)

    Schott, Katharina; Beck, Roman; Gregory, Robert Wayne

    Global information systems development outsourcing is an apparent trend that is expected to continue in the foreseeable future. IS-related services are thereby not only increasingly provided from different geographical sites simultaneously, but also by multiple service providers based in different countries. The purpose of this paper is to understand how the involvement of multiple service providers affects the management of globally distributed information systems development projects. As research on this topic is scarce, we applied an exploratory, in-depth single-case study design as our research approach. The case we analyzed comprises a global software development outsourcing project initiated by a German bank together with several globally distributed vendors. For data collection and analysis we adopted techniques suggested by the grounded theory method. Whereas the extant literature points out the increased management overhead associated with multi-sourcing, the analysis of our case suggests that the effort required to manage global outsourcing projects with multiple vendors depends, among other things, on the maturity of the cooperation within the vendor portfolio. Furthermore, our data indicate that this interplay maturity is positively influenced by knowledge about the client derived from already existing client-vendor relationships. The paper concludes by offering theoretical and practical implications.

  17. Applications of the BEam Cross section Analysis Software (BECAS)

    DEFF Research Database (Denmark)

    Blasques, José Pedro Albergaria Amaral; Bitsche, Robert; Fedorov, Vladimir

    2013-01-01

    A newly developed framework is presented for structural design and analysis of long slender beam-like structures, e.g., wind turbine blades. The framework is based on the BEam Cross section Analysis Software (BECAS), a finite element based cross section analysis tool. BECAS is used for the generation of beam finite element models which correctly account for effects stemming from material anisotropy and inhomogeneity in cross sections of arbitrary geometry. This type of modelling approach allows for an accurate yet computationally inexpensive representation of a general class of three...

  18. TOGAF usage in outsourcing of software development

    Directory of Open Access Journals (Sweden)

    Aziz Ahmad Rais

    2013-12-01

    Full Text Available TOGAF is an Enterprise Architecture framework that provides a method for developing Enterprise Architecture called the Architecture Development Method (ADM). The purpose of this paper is to examine whether TOGAF ADM can be used for developing software application architecture. Because software application architecture is one of the disciplines in the application development life cycle, it is important to find out how an enterprise architecture development method can support application architecture development. Having an open standard that can be used in application architecture development could help in the outsourcing of software development. If ADM can be used for software application architecture development, then we can consider its usability in the outsourcing of software development.

  19. A theoretical framework for convergence and continuous dependence of estimates in inverse problems for distributed parameter systems

    Science.gov (United States)

    Banks, H. T.; Ito, K.

    1988-01-01

    Numerical techniques for parameter identification in distributed-parameter systems are developed analytically. A general convergence and stability framework (for continuous dependence on observations) is derived for first-order systems on the basis of (1) a weak formulation in terms of sesquilinear forms and (2) the resolvent convergence form of the Trotter-Kato approximation. The extension of this framework to second-order systems is considered.
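
    The first-order weak formulation in such frameworks typically takes the following generic shape, sketched here in LaTeX as a reading aid (a sketch of the standard setting, not the paper's exact statement): for an admissible parameter q in a set Q, find u(t) in the state space V such that

    \[
      \langle \dot{u}(t), v \rangle \;+\; \sigma(q)\bigl(u(t), v\bigr)
      \;=\; \langle f(t), v \rangle
      \qquad \text{for all } v \in V,
    \]

    where \sigma(q)(\cdot,\cdot) is the parameter-dependent sesquilinear form. Convergence of Galerkin approximations u^N in subspaces V^N \subset V, and with it continuous dependence of the parameter estimates, is then argued via resolvent convergence of the approximating semigroups in the Trotter-Kato sense.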

  20. FRAMEWORK FOR AD HOC NETWORK COMMUNICATION IN MULTI-ROBOT SYSTEMS

    Directory of Open Access Journals (Sweden)

    Khilda Slyusar

    2016-11-01

    Full Text Available Assume a team of mobile robots operating in environments where no communication infrastructure like routers or access points is available. In that case, the robots have to create a mobile ad hoc network, which provides communication on a peer-to-peer basis. The paper gives an overview of existing solutions for routing messages in such ad hoc networks between robots that are not directly connected, and introduces the design of a software framework for the realization of such communication. The feasibility of the proposed framework is shown on the example of distributed multi-robot exploration of an a priori unknown environment. Testing of the developed functionality in an exploration scenario is based on the results of several experiments with various input conditions of the exploration process and various team sizes, and is described herein.
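
    The simplest of the routing strategies such overviews cover is sequence-numbered flooding, sketched below in Java: each robot re-broadcasts a message to its current neighbours unless it has already seen that message id, so messages reach robots that are not directly connected. The topology and names are illustrative, not the framework's API.

    import java.util.*;

    public class FloodingDemo {

        static class Robot {
            final String id;
            final List<Robot> neighbours = new ArrayList<>();
            final Set<String> seen = new HashSet<>();
            Robot(String id) { this.id = id; }

            void receive(String msgId, String payload) {
                if (!seen.add(msgId)) return;        // drop already-seen messages
                System.out.println(id + " got: " + payload);
                for (Robot n : neighbours) n.receive(msgId, payload); // re-broadcast
            }
        }

        public static void main(String[] args) {
            Robot a = new Robot("A"), b = new Robot("B"), c = new Robot("C");
            a.neighbours.add(b); b.neighbours.add(a);  // A <-> B
            b.neighbours.add(c); c.neighbours.add(b);  // B <-> C; A and C not linked
            a.receive("msg-1", "frontier at (3,4)");   // reaches C via B
        }
    }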