WorldWideScience

Sample records for large-scale software systems

  1. Software quality assurance: in large scale and complex software-intensive systems

    NARCIS (Netherlands)

    Mistrik, I.; Soley, R.; Ali, N.; Grundy, J.; Tekinerdogan, B.

    2015-01-01

    Software Quality Assurance in Large Scale and Complex Software-intensive Systems presents novel, high-quality research on approaches that relate the quality of software architecture to system requirements, system architecture and enterprise architecture, or software testing. Modern software

  2. Automatic management software for large-scale cluster system

    International Nuclear Information System (INIS)

    Weng Yunjian; Chinese Academy of Sciences, Beijing; Sun Gongxing

    2007-01-01

    At present, large-scale cluster systems are difficult to manage: administrators carry a heavy workload, and much time must be spent on the management and maintenance of the cluster. The nodes of a large-scale cluster system easily fall into disorder; with thousands of nodes housed in big machine rooms, it is easy for administrators to confuse machines. How can a large-scale cluster system be managed accurately and effectively? The article introduces ELFms for the large-scale cluster system and proposes an approach to realize automatic management of such systems. (authors)

  3. Large-scale Complex IT Systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2011-01-01

    This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challen...

  4. Large-scale complex IT systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2012-01-01

    12 pages, 2 figures. This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that ident...

  5. Large Scale Software Building with CMake in ATLAS

    Science.gov (United States)

    Elmsheuser, J.; Krasznahorkay, A.; Obreshkov, E.; Undrus, A.; ATLAS Collaboration

    2017-10-01

    The offline software of the ATLAS experiment at the Large Hadron Collider (LHC) serves as the platform for detector data reconstruction, simulation and analysis. It is also used in the detector’s trigger system to select LHC collision events during data taking. The ATLAS offline software consists of several million lines of C++ and Python code organized in a modular design of more than 2000 specialized packages. Because of different workflows, many stable numbered releases are in parallel production use. To accommodate specific workflow requests, software patches with modified libraries are distributed on top of existing software releases on a daily basis. The different ATLAS software applications also require a flexible build system that strongly supports unit and integration tests. Within the last year this build system was migrated to CMake. A CMake configuration has been developed that allows one to easily set up and build the above mentioned software packages. This also makes it possible to develop and test new and modified packages on top of existing releases. The system also allows one to detect and execute partial rebuilds of the release based on single package changes. The build system makes use of CPack for building RPM packages out of the software releases, and CTest for running unit and integration tests. We report on the migration and integration of the ATLAS software to CMake and show working examples of this large scale project in production.
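
    The build-and-release cycle described above can be pictured with a short driver script. This is a minimal sketch, assuming placeholder source and build directories rather than the actual ATLAS release layout; it only strings together standard CMake, CTest and CPack invocations of the kind the abstract mentions.

```python
# Minimal sketch of a configure/build/test/package cycle with CMake, CTest and
# CPack. Paths and options are illustrative placeholders, not the ATLAS setup.
import os
import subprocess

SRC_DIR = "/path/to/release/source"   # hypothetical checkout of a release
BUILD_DIR = "/path/to/build"          # out-of-source build directory

def run(cmd, cwd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

os.makedirs(BUILD_DIR, exist_ok=True)

# Configure the release; CMake discovers the packages to build.
run(["cmake", "-DCMAKE_BUILD_TYPE=Release", SRC_DIR], cwd=BUILD_DIR)

# Build; a partial rebuild only recompiles packages whose sources changed.
run(["cmake", "--build", ".", "--", "-j8"], cwd=BUILD_DIR)

# Run the unit and integration tests registered with CTest.
run(["ctest", "--output-on-failure"], cwd=BUILD_DIR)

# Package the built release as RPMs via CPack.
run(["cpack", "-G", "RPM"], cwd=BUILD_DIR)
```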

  6. Large scale and performance tests of the ATLAS online software

    International Nuclear Information System (INIS)

    Alexandrov; Kotov, V.; Mineev, M.; Roumiantsev, V.; Wolters, H.; Amorim, A.; Pedro, L.; Ribeiro, A.; Badescu, E.; Caprini, M.; Burckhart-Chromek, D.; Dobson, M.; Jones, R.; Kazarov, A.; Kolos, S.; Liko, D.; Lucio, L.; Mapelli, L.; Nassiakou, M.; Schweiger, D.; Soloviev, I.; Hart, R.; Ryabov, Y.; Moneta, L.

    2001-01-01

    One of the sub-systems of the Trigger/DAQ system of the future ATLAS experiment is the Online Software system. It encompasses the functionality needed to configure, control and monitor the DAQ. Its architecture is based on a component structure described in the ATLAS Trigger/DAQ technical proposal. Regular integration tests ensure its smooth operation in test beam setups during its evolutionary development towards the final ATLAS online system, and the feedback flows back into the development process. Studies of the system behavior have been performed on a set of up to 111 PCs in a configuration approaching the final size. Large scale and performance tests of the integrated system were performed on this setup, with emphasis on investigating the inter-dependence of the components and the performance of the communication software. Of particular interest were the run control state transitions in various configurations of the run control hierarchy. For the purpose of the tests, the software from other Trigger/DAQ sub-systems was emulated. The authors present a brief overview of the online system structure, its components and the large scale integration tests and their results.

  7. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data

  8. Software challenges in extreme scale systems

    International Nuclear Information System (INIS)

    Sarkar, Vivek; Harrod, William; Snavely, Allan E

    2009-01-01

    Computer systems anticipated in the 2015 - 2020 timeframe are referred to as Extreme Scale because they will be built using massive multi-core processors with 100s of cores per chip. The largest capability Extreme Scale system is expected to deliver Exascale performance of the order of 10^18 operations per second. These systems pose new critical challenges for software in the areas of concurrency, energy efficiency and resiliency. In this paper, we discuss the implications of the concurrency and energy efficiency challenges on future software for Extreme Scale Systems. From an application viewpoint, the concurrency and energy challenges boil down to the ability to express and manage parallelism and locality by exploring a range of strong scaling and new-era weak scaling techniques. For expressing parallelism and locality, the key challenges are the ability to expose all of the intrinsic parallelism and locality in a programming model, while ensuring that this expression of parallelism and locality is portable across a range of systems. For managing parallelism and locality, the OS-related challenges include parallel scalability, spatial partitioning of OS and application functionality, direct hardware access for inter-processor communication, and asynchronous rather than interrupt-driven events, which are accompanied by runtime system challenges for scheduling, synchronization, memory management, communication, performance monitoring, and power management. We conclude by discussing the importance of software-hardware co-design in addressing the fundamental challenges for application enablement on Extreme Scale systems.

  9. Large Scale Software Building with CMake in ATLAS

    CERN Document Server

    Elmsheuser, Johannes; The ATLAS collaboration; Obreshkov, Emil; Undrus, Alexander

    2016-01-01

    The offline software of the ATLAS experiment at the LHC (Large Hadron Collider) serves as the platform for detector data reconstruction, simulation and analysis. It is also used in the detector trigger system to select LHC collision events during data taking. ATLAS offline software consists of several million lines of C++ and Python code organized in a modular design of more than 2000 specialized packages. Because of different workflows many stable numbered releases are in parallel production use. To accommodate specific workflow requests, software patches with modified libraries are distributed on top of existing software releases on a daily basis. The different ATLAS software applications require a flexible build system that strongly supports unit and integration tests. Within the last year this build system was migrated to CMake. A CMake configuration has been developed that allows one to easily set up and build the mentioned software packages. This also makes it possible to develop and test new and modifi...

  10. Large scale software building with CMake in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00218447; The ATLAS collaboration; Elmsheuser, Johannes; Obreshkov, Emil; Undrus, Alexander

    2017-01-01

    The offline software of the ATLAS experiment at the LHC (Large Hadron Collider) serves as the platform for detector data reconstruction, simulation and analysis. It is also used in the detector trigger system to select LHC collision events during data taking. ATLAS offline software consists of several million lines of C++ and Python code organized in a modular design of more than 2000 specialized packages. Because of different workflows many stable numbered releases are in parallel production use. To accommodate specific workflow requests, software patches with modified libraries are distributed on top of existing software releases on a daily basis. The different ATLAS software applications require a flexible build system that strongly supports unit and integration tests. Within the last year this build system was migrated to CMake. A CMake configuration has been developed that allows one to easily set up and build the mentioned software packages. This also makes it possible to develop and test new and modifi...

  11. Performance Health Monitoring of Large-Scale Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rajamony, Ram [IBM Research, Austin, TX (United States)]

    2014-11-20

    This report details the progress made on the ASCR funded project Performance Health Monitoring for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main aspects. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, either due to an anomaly present in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.
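
    As an illustration of the kind of check such a monitor performs, the minimal sketch below flags samples of a performance metric that fall far below a trailing baseline; the window size, threshold and metric are assumptions for the example, not the PHM framework's actual interface.

```python
# Toy performance-fault detector: flag samples that deviate strongly from the
# running baseline of a metric (e.g., per-node I/O bandwidth). Illustrative only.
from statistics import mean, stdev

def detect_anomalies(samples, window=20, z_threshold=3.0):
    """Return indices of samples that fall z_threshold sigmas below the
    trailing-window baseline (i.e., degraded performance)."""
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (mu - samples[i]) / sigma > z_threshold:
            anomalies.append(i)
    return anomalies

bandwidth = [5.1, 5.0, 4.9, 5.2, 5.0] * 5 + [1.2]   # sudden drop at the end
print(detect_anomalies(bandwidth, window=10))       # -> [25]
```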

  12. Benefits of transactive memory systems in large-scale development

    OpenAIRE

    Aivars, Sablis

    2016-01-01

    Context. Large-scale software development projects are those consisting of a large number of teams, maybe even spread across multiple locations, and working on large and complex software tasks. That means that neither a team member individually nor an entire team holds all the knowledge about the software being developed and teams have to communicate and coordinate their knowledge. Therefore, teams and team members in large-scale software development projects must acquire and manage expertise...

  13. Optical interconnect for large-scale systems

    Science.gov (United States)

    Dress, William

    2013-02-01

    This paper presents a switchless, optical interconnect module that serves as a node in a network of identical distribution modules for large-scale systems. Thousands to millions of hosts or endpoints may be interconnected by a network of such modules, avoiding the need for multi-level switches. Several common network topologies are reviewed and their scaling properties assessed. The concept of message-flow routing is discussed in conjunction with the unique properties enabled by the optical distribution module where it is shown how top-down software control (global routing tables, spanning-tree algorithms) may be avoided.

  14. Testing on a Large Scale Running the ATLAS Data Acquisition and High Level Trigger Software on 700 PC Nodes

    CERN Document Server

    Burckhart-Chromek, Doris; Adragna, P; Alexandrov, L; Amorim, A; Armstrong, S; Badescu, E; Baines, J T M; Barros, N; Beck, H P; Bee, C; Blair, R; Bogaerts, J A C; Bold, T; Bosman, M; Caprini, M; Caramarcu, C; Ciobotaru, M; Comune, G; Corso-Radu, A; Cranfield, R; Crone, G; Dawson, J; Della Pietra, M; Di Mattia, A; Dobinson, Robert W; Dobson, M; Dos Anjos, A; Dotti, A; Drake, G; Ellis, Nick; Ermoline, Y; Ertorer, E; Falciano, S; Ferrari, R; Ferrer, M L; Francis, D; Gadomski, S; Gameiro, S; Garitaonandia, H; Gaudio, G; George, S; Gesualdi-Mello, A; Gorini, B; Green, B; Haas, S; Haberichter, W N; Hadavand, H; Haeberli, C; Haller, J; Hansen, J; Hauser, R; Hillier, S J; Höcker, A; Hughes-Jones, R E; Joos, M; Kazarov, A; Kieft, G; Klous, S; Kohno, T; Kolos, S; Korcyl, K; Kordas, K; Kotov, V; Kugel, A; Landon, M; Lankford, A; Leahu, L; Leahu, M; Lehmann-Miotto, G; Le Vine, M J; Liu, W; Maeno, T; Männer, R; Mapelli, L; Martin, B; Masik, J; McLaren, R; Meessen, C; Meirosu, C; Mineev, M; Misiejuk, A; Morettini, P; Mornacchi, G; Müller, M; Garcia-Murillo, R; Nagasaka, Y; Negri, A; Padilla, C; Pasqualucci, E; Pauly, T; Perera, V; Petersen, J; Pope, B; Albuquerque-Portes, M; Pretzl, K; Prigent, D; Roda, C; Ryabov, Yu; Salvatore, D; Schiavi, C; Schlereth, J L; Scholtes, I; Sole-Segura, E; Seixas, M; Sloper, J; Soloviev, I; Spiwoks, R; Stamen, R; Stancu, S; Strong, S; Sushkov, S; Szymocha, T; Tapprogge, S; Teixeira-Dias, P; Torres, R; Touchard, F; Tremblet, L; Ünel, G; Van Wasen, J; Vandelli, W; Vaz-Gil-Lopes, L; Vermeulen, J C; von der Schmitt, H; Wengler, T; Werner, P; Wheeler, S; Wickens, F; Wiedenmann, W; Wiesmann, M; Wu, X; Yasu, Y; Yu, M; Zema, F; Zobernig, H; Computing In High Energy and Nuclear Physics

    2006-01-01

    The ATLAS Data Acquisition (DAQ) and High Level Trigger (HLT) software system will initially comprise 2000 PC nodes which take part in the control, event readout, second level trigger and event filter operations. This large number of PCs will only be purchased before data taking in 2007. The large CERN IT LXBATCH facility provided the opportunity to run online functionality tests in July 2005 over a period of 5 weeks on a farm whose size was increased stepwise from 100 up to 700 dual-processor PC nodes. The interplay of the control and monitoring software with the event readout, event building and the trigger software was exercised for the first time as an integrated system on this large scale. Running algorithms for the trigger selection and the event filter processing tasks in the online environment at this larger scale was also new. A mechanism has been developed to package the offline software together with the DAQ/HLT software and to distribute it efficiently to this large PC cluster via peer-to-peer software. T...

  15. Testing on a Large Scale running the ATLAS Data Acquisition and High Level Trigger Software on 700 PC Nodes

    CERN Document Server

    Burckhart-Chromek, Doris; Adragna, P; Albuquerque-Portes, M; Alexandrov, L; Amorim, A; Armstrong, S; Badescu, E; Baines, J T M; Barros, N; Beck, H P; Bee, C; Blair, R; Bogaerts, J A C; Bold, T; Bosman, M; Caprini, M; Caramarcu, C; Ciobotaru, M; Comune, G; Corso-Radu, A; Cranfield, R; Crone, G; Dawson, J; Della Pietra, M; Di Mattia, A; Dobinson, Robert W; Dobson, M; Dos Anjos, A; Dotti, A; Drake, G; Ellis, Nick; Ermoline, Y; Ertorer, E; Falciano, S; Ferrari, R; Ferrer, M L; Francis, D; Gadomski, S; Gameiro, S; Garcia-Murillo, R; Garitaonandia, H; Gaudio, G; George, S; Gesualdi-Mello, A; Gorini, B; Green, B; Haas, S; Haberichter, W N; Hadavand, H; Haeberli, C; Haller, J; Hansen, J; Hauser, R; Hillier, S J; Hughes-Jones, R E; Höcker, A; Joos, M; Kazarov, A; Kieft, G; Klous, S; Kohno, T; Kolos, S; Korcyl, K; Kordas, K; Kotov, V; Kugel, A; Landon, M; Lankford, A; Le Vine, M J; Leahu, L; Leahu, M; Lehmann-Miotto, G; Liu, W; Maeno, T; Mapelli, L; Martin, B; Masik, J; McLaren, R; Meessen, C; Meirosu, C; Mineev, M; Misiejuk, A; Morettini, P; Mornacchi, G; Männer, R; Müller, M; Nagasaka, Y; Negri, A; Padilla, C; Pasqualucci, E; Pauly, T; Perera, V; Petersen, J; Pope, B; Pretzl, K; Prigent, D; Roda, C; Ryabov, Yu; Salvatore, D; Schiavi, C; Schlereth, J L; Scholtes, I; Seixas, M; Sloper, J; Sole-Segura, E; Soloviev, I; Spiwoks, R; Stamen, R; Stancu, S; Strong, S; Sushkov, S; Szymocha, T; Tapprogge, S; Teixeira-Dias, P; Torres, R; Touchard, F; Tremblet, L; Van Wasen, J; Vandelli, W; Vaz-Gil-Lopes, L; Vermeulen, J C; Wengler, T; Werner, P; Wheeler, S; Wickens, F; Wiedenmann, W; Wiesmann, M; Wu, X; Yasu, Y; Yu, M; Zema, F; Zobernig, H; von der Schmitt, H; Ünel, G; Computing In High Energy and Nuclear Physics

    2006-01-01

    The ATLAS Data Acquisition (DAQ) and High Level Trigger (HLT) software system will initially comprise 2000 PC nodes which take part in the control, event readout, second level trigger and event filter operations. This large number of PCs will only be purchased before data taking in 2007. The large CERN IT LXBATCH facility provided the opportunity to run online functionality tests in July 2005 over a period of 5 weeks on a farm whose size was increased stepwise from 100 up to 700 dual-processor PC nodes. The interplay of the control and monitoring software with the event readout, event building and the trigger software was exercised for the first time as an integrated system on this large scale. Running algorithms for the trigger selection and the event filter processing tasks in the online environment at this larger scale was also new. A mechanism has been developed to package the offline software together with the DAQ/HLT software and to distribute it efficiently to this large PC cluster via peer-to-peer software. T...

  16. Conceptual study of calibration software for large scale input accountancy tank

    International Nuclear Information System (INIS)

    Uchikoshi, Seiji; Yasu, Kan-ichi; Watanabe, Yuichi; Matsuda, Yuji; Kawai, Akio; Tamura, Toshiyuki; Shimizu, Hidehiko.

    1996-01-01

    Demonstration experiments for a large scale input accountancy tank are going to be carried out by the Nuclear Material Control Center. Development of calibration software for an accountancy system with a dip-tube manometer is an important task in these experiments. A conceptual study of the software has been carried out in order to construct a high precision accountancy system, based on ANSI N15.19-1989. The items of the study were the overall configuration, the correction method for the influence of bubble formation, the function model of calibration, and the fitting method for the calibration curve. The results of this study are as follows. 1) The overall configuration of the software was constructed. 2) It was shown by numerical solution that the influence of bubble formation can be corrected using the period of the pressure wave. 3) Two function models of calibration, for well capacity and for inner structure volume, were prepared from the tank design, and good fitness of the model for net capacity (the balance of both models) was confirmed by fitting to the designed shape of the tank. 4) The necessity of further consideration of both-variables-in-error and cumulative-error models was recognized. We are going to develop practical software on the basis of these results and to verify it in the demonstration experiments. (author)
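
    A minimal sketch of fitting a tank calibration curve by ordinary least squares is shown below; the quadratic form and the sample (level, volume) data are assumptions for illustration and do not reproduce the function models or the error models (both-variables-in-error, cumulative-error) discussed in the paper.

```python
# Illustrative calibration-curve fit: volume vs. level from dip-tube manometer data.
# The quadratic form and the data points are made up for the example.
import numpy as np

level_mm = np.array([100.0, 300.0, 500.0, 700.0, 900.0])      # liquid level
volume_l = np.array([210.0, 640.0, 1080.0, 1530.0, 1990.0])   # measured volume

# Least-squares fit of a low-order polynomial calibration curve V(h).
coeffs = np.polyfit(level_mm, volume_l, deg=2)
calib = np.poly1d(coeffs)

# Residuals give a first indication of fit quality (ANSI N15.19 calls for a
# more careful treatment, e.g. errors in both variables).
residuals = volume_l - calib(level_mm)
print("fit coefficients:", coeffs)
print("max residual [l]:", np.abs(residuals).max())
print("predicted volume at 600 mm:", calib(600.0))
```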

  17. REQUIREMENTS FOR SYSTEMS DEVELOPMENT LIFE CYCLE MODELS FOR LARGE-SCALE DEFENSE SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kadir Alpaslan DEMIR

    2015-10-01

    Full Text Available Large-scale defense system projects are strategic for maintaining and increasing the national defense capability. Therefore, governments spend billions of dollars on the acquisition and development of large-scale defense systems. The scale of defense systems is always increasing and the costs to build them are skyrocketing. Today, defense systems are software intensive and they are either a system of systems or a part of one. Historically, the project performances observed in the development of these systems have been significantly poor when compared to other types of projects. It is obvious that the currently used systems development life cycle models are insufficient to address today’s challenges of building these systems. Using a systems development life cycle model that is specifically designed for large-scale defense system developments and is effective in dealing with today’s and near-future challenges will help to improve project performances. The first step in the development of a large-scale defense systems development life cycle model is the identification of requirements for such a model. This paper contributes to the body of literature in the field by providing a set of requirements for systems development life cycle models for large-scale defense systems. Furthermore, a research agenda is proposed.

  18. Development of solution monitoring software for enhanced safeguards at a large scale reprocessing facility

    Energy Technology Data Exchange (ETDEWEB)

    Van Handenhove, Carl; Breban, Domnica; Creusot, Christophe [International Atomic Energy Agency, Vienna (Austria)]; Dransart, Pascal; Dechamp, Luc [Joint Research Centre, European Commission, Ispra, Varese (Italy)]; Jarde, Eric [Euriware, Equeurdreville (France)]

    2011-12-15

    The implementation of an effective and efficient IAEA safeguards approach at large scale reprocessing facilities with large throughput and continuous flow of nuclear material requires the introduction of enhanced safeguards measures to provide added assurance about the absence of diversion of nuclear material and confirmation that the facility is operated as declared. One of the enhanced safeguards measures, a Solution Monitoring and Measurement System (SMMS), comprising data collection instruments, data transmission equipment and an advanced Solution Monitoring Software (SMS), is being implemented at a large scale reprocessing plant in Japan. SMS is designed as a tool to enable automatic calculations of volumes, densities and flow-rates in selected process vessels, including most of the vessels of the main nuclear material stream. This software also includes automatic features to support the inspectorate in verifying inventories and inventory changes. The software further enables analysis of the flows of nuclear material within the process and of specified 'cycles' of operation, and comparison of these with the expected flows (reference signatures), in order to provide assurance that the facility is being operated as declared. The configuration and parameterization work (especially the analytical and comparative work) for the implementation and configuration of the SMS has been carried out jointly by the IAEA, Euriware-France (the software developer) and the Joint Research Centre (JRC)-Ispra. This paper describes the main features of the SMS, including the principles underlying the automatic analysis functionalities. It then focuses on the collaborative work performed by the JRC-Ispra, Euriware and the IAEA for the parameterization of the software (vessels and cycles of operation), including the current status and the future challenges.
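
    The automatic volume and density calculations rest on standard dip-tube (bubbler) hydrostatics. The sketch below shows those textbook relations with made-up readings and probe spacing; it is not a description of the SMS internals.

```python
# Standard dip-tube (bubbler) relations used for solution monitoring:
# density from the differential pressure across two probes of known separation,
# liquid level from the pressure at the lower probe. Numbers are illustrative.
G = 9.80665                      # m/s^2

def density_from_dp(dp_pa, probe_separation_m):
    """rho = dP / (g * dh)"""
    return dp_pa / (G * probe_separation_m)

def level_from_pressure(p_pa, rho_kg_m3):
    """h = P / (rho * g), height of liquid above the probe tip."""
    return p_pa / (rho_kg_m3 * G)

dp = 2450.0                                   # Pa between probes 0.20 m apart
rho = density_from_dp(dp, 0.20)
level = level_from_pressure(15_500.0, rho)    # Pa at the lower probe
print(f"density = {rho:.1f} kg/m^3, level = {level:.3f} m")
# A volume then follows from the vessel's calibration curve V(h), and flow
# rates from successive (time, volume) pairs.
```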

  19. Software Toolchain for Large-Scale RE-NFA Construction on FPGA

    Directory of Open Access Journals (Sweden)

    Yi-Hua E. Yang

    2009-01-01

    and O(n×m) memory by our software. A large number of RE-NFAs are placed onto a two-dimensional staged pipeline, allowing scalability to thousands of RE-NFAs with linear area increase and little clock rate penalty due to scaling. On a PC with a 2 GHz Athlon64 processor and 2 GB memory, our prototype software constructs hundreds of RE-NFAs used by Snort in less than 10 seconds. We also designed a benchmark generator which can produce RE-NFAs with configurable pattern complexity parameters, including state count, state fan-in, loop-back and feed-forward distances. Several regular expressions with various complexities are used to test the performance of our RE-NFA construction software.
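
    For readers unfamiliar with RE-NFAs, the sketch below shows the textbook software construction (Thompson's construction) of an epsilon-NFA from a postfix regular expression. It illustrates the data structure only; the paper's FPGA pipeline and spatial stacking are not represented.

```python
# Minimal Thompson construction: build an epsilon-NFA from a postfix regular
# expression with concatenation '.', alternation '|' and Kleene star '*'.
EPS = None

class State:
    def __init__(self):
        self.edges = []            # list of (symbol or EPS, target State)

def thompson(postfix):
    stack = []                     # stack of fragments (start, accept)
    for ch in postfix:
        if ch == '.':                               # concatenation
            f2, f1 = stack.pop(), stack.pop()
            f1[1].edges.append((EPS, f2[0]))
            stack.append((f1[0], f2[1]))
        elif ch == '|':                             # alternation
            f2, f1 = stack.pop(), stack.pop()
            s, a = State(), State()
            s.edges += [(EPS, f1[0]), (EPS, f2[0])]
            f1[1].edges.append((EPS, a))
            f2[1].edges.append((EPS, a))
            stack.append((s, a))
        elif ch == '*':                             # Kleene star
            f = stack.pop()
            s, a = State(), State()
            s.edges += [(EPS, f[0]), (EPS, a)]
            f[1].edges += [(EPS, f[0]), (EPS, a)]
            stack.append((s, a))
        else:                                       # literal symbol
            s, a = State(), State()
            s.edges.append((ch, a))
            stack.append((s, a))
    return stack.pop()

def matches(nfa, text):
    start, accept = nfa
    def closure(states):
        seen, todo = set(states), list(states)
        while todo:
            st = todo.pop()
            for sym, nxt in st.edges:
                if sym is EPS and nxt not in seen:
                    seen.add(nxt); todo.append(nxt)
        return seen
    current = closure({start})
    for ch in text:
        current = closure({nxt for st in current for sym, nxt in st.edges if sym == ch})
    return accept in current

nfa = thompson("ab|*c.")                            # postfix for (a|b)*c
print(matches(nfa, "ababc"), matches(nfa, "abd"))   # True False
```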

  20. A measurement system for large, complex software programs

    Science.gov (United States)

    Rone, Kyle Y.; Olson, Kitty M.; Davis, Nathan E.

    1994-01-01

    This paper describes measurement systems required to forecast, measure, and control activities for large, complex software development and support programs. Initial software cost and quality analysis provides the foundation for meaningful management decisions as a project evolves. In modeling the cost and quality of software systems, the relationship between the functionality, quality, cost, and schedule of the product must be considered. This explicit relationship is dictated by the criticality of the software being developed. This balance between cost and quality is a viable software engineering trade-off throughout the life cycle. Therefore, the ability to accurately estimate the cost and quality of software systems is essential to providing reliable software on time and within budget. Software cost models relate the product error rate to the percent of the project labor that is required for independent verification and validation. The criticality of the software determines which cost model is used to estimate the labor required to develop the software. Software quality models yield an expected error discovery rate based on the software size, criticality, software development environment, and the level of competence of the project and developers with respect to the processes being employed.
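
    The sketch below shows the general shape of such parametric cost and quality models: effort grows as a power of size, and the expected defect count scales with size, criticality and the IV&V share. The functional form and coefficients are COCOMO-style placeholders, not the models used by the authors.

```python
# Illustrative parametric estimate; coefficients are placeholders, not the
# cost and quality models described in the paper.
def estimate(ksloc, criticality=1.0, ivv_fraction=0.15):
    effort_pm = 2.8 * ksloc ** 1.1 * criticality        # person-months
    ivv_pm = effort_pm * ivv_fraction                   # independent V&V share
    expected_defects = 6.0 * ksloc * criticality * (1.0 - 0.5 * ivv_fraction)
    return effort_pm, ivv_pm, expected_defects

for size in (50, 200, 800):                             # KSLOC
    e, v, d = estimate(size, criticality=1.2)
    print(f"{size:4d} KSLOC: effort {e:7.0f} PM, IV&V {v:6.0f} PM, defects ~{d:6.0f}")
```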

  1. The cognitive dynamics of computer science cost-effective large scale software development

    CERN Document Server

    De Gyurky, Szabolcs Michael; John Wiley & Sons

    2006-01-01

    This book has three major objectives: To propose an ontology for computer software; To provide a methodology for development of large software systems to cost and schedule that is based on the ontology; To offer an alternative vision regarding the development of truly autonomous systems.

  2. Implementation of highly parallel and large scale GW calculations within the OpenAtom software

    Science.gov (United States)

    Ismail-Beigi, Sohrab

    The need to describe electronic excitations with better accuracy than provided by band structures produced by Density Functional Theory (DFT) has been a long-term enterprise for the computational condensed matter and materials theory communities. In some cases, appropriate theoretical frameworks have existed for some time but have been difficult to apply widely due to computational cost. For example, the GW approximation incorporates a great deal of important non-local and dynamical electronic interaction effects but has been too computationally expensive for routine use in large materials simulations. OpenAtom is an open source massively parallel ab initio density functional software package based on plane waves and pseudopotentials (http://charm.cs.uiuc.edu/OpenAtom/) that takes advantage of the Charm++ parallel framework. At present, it is developed via a three-way collaboration, funded by an NSF SI2-SSI grant (ACI-1339804), between Yale (Ismail-Beigi), IBM T. J. Watson (Glenn Martyna) and the University of Illinois at Urbana Champaign (Laxmikant Kale). We will describe the project and our current approach towards implementing large scale GW calculations with OpenAtom. Potential applications of large scale parallel GW software for problems involving electronic excitations in semiconductor and/or metal oxide systems will also be pointed out.

  3. An interactive display system for large-scale 3D models

    Science.gov (United States)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult for common 3D display software, such as MeshLab, to achieve real-time display of and interaction with large scale 3D models. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core view-dependent multi-resolution rendering scheme to realize the real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming in the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal and external memory exchange mechanism, so that it is possible to display a large scale reconstructed scene with over millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.
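
    The core of any view-dependent multi-resolution scheme is the per-node decision of whether a level of detail is fine enough for the current viewpoint. The sketch below shows a simple screen-space-error test under a pinhole camera assumption; the constants are illustrative, and the paper's out-of-core paging of geometry is omitted.

```python
# Toy view-dependent LOD selection: a node may be rendered at its current level
# if its projected geometric error is below a pixel tolerance. Illustrative only.
import math

def select_lod(node_error_m, node_distance_m, fov_deg=60.0, viewport_px=1920,
               pixel_tolerance=2.0):
    """Return True if this node's level of detail is fine enough to render."""
    # Metres-per-pixel at the node's distance for a pinhole camera model.
    metres_per_px = 2.0 * node_distance_m * math.tan(math.radians(fov_deg) / 2.0) / viewport_px
    projected_error_px = node_error_m / metres_per_px
    return projected_error_px <= pixel_tolerance

# A node with 5 cm geometric error is acceptable far away but must be refined up close.
print(select_lod(0.05, node_distance_m=200.0))   # True  -> render this level
print(select_lod(0.05, node_distance_m=5.0))     # False -> descend to finer level
```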

  4. A Field Study of Scale Economies in Software Maintenance

    OpenAIRE

    Rajiv D. Banker; Sandra A. Slaughter

    1997-01-01

    Software maintenance is a major concern for organizations. Productivity gains in software maintenance can enable redeployment of Information Systems resources to other activities. Thus, it is important to understand how software maintenance productivity can be improved. In this study, we investigate the relationship between project size and software maintenance productivity. We explore scale economies in software maintenance by examining a number of software enhancement projects at a large fi...

  5. Software Reliability Issues Concerning Large and Safety Critical Software Systems

    Science.gov (United States)

    Kamel, Khaled; Brown, Barbara

    1996-01-01

    This research was undertaken to provide NASA with a survey of state-of-the-art techniques used in industry and academia to provide safe, reliable, and maintainable software to drive large systems. Such systems must match the complexity and strict safety requirements of NASA's shuttle system. In particular, the Launch Processing System (LPS) is being considered for replacement. The LPS is responsible for monitoring and commanding the shuttle during test, repair, and launch phases. NASA built this system in the 1970s using mostly hardware techniques to provide for increased reliability, but it did so often using custom-built equipment, which has not been able to keep up with current technologies. This report surveys the major techniques used in industry and academia to ensure reliability in large and critical computer systems.

  6. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods and algorithms developed will, however, be of wider interest.
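
    One of the building blocks named above, a preconditioned iterative solver, is sketched below in serial form with a Jacobi preconditioner; the distributed, element-by-element and domain-decomposition machinery that is the subject of the program is deliberately left out.

```python
# Serial sketch of a Jacobi-preconditioned conjugate-gradient solve. Parallel
# versions distribute the matrix-vector and dot products across subdomains.
import numpy as np

def pcg_jacobi(A, b, tol=1e-10, max_iter=1000):
    M_inv = 1.0 / np.diag(A)                 # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test problem (1D Laplacian).
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg_jacobi(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```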

  7. Automating large-scale reactor systems

    International Nuclear Information System (INIS)

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig

  8. PLANNING QUALITY ASSURANCE PROCESSES IN A LARGE SCALE GEOGRAPHICALLY SPREAD HYBRID SOFTWARE DEVELOPMENT PROJECT

    Directory of Open Access Journals (Sweden)

    Святослав Аркадійович МУРАВЕЦЬКИЙ

    2016-02-01

    Full Text Available Key aspects of operational activities in large-scale, geographically distributed software development projects are discussed, and the required structure of QA processes in such projects is examined. Up-to-date methods for integrating quality assurance processes into software development processes are presented. Existing groups of software development methodologies are reviewed, including sequential, agile, and PRINCE2-based approaches, with a condensed overview of the quality assurance processes in each group. Common challenges that sequential and agile models face in a large, geographically distributed hybrid software development project are reviewed, and recommendations are given for tackling those challenges. Conclusions are drawn about the choice of the most suitable methodology and its application to the particular project.

  9. SIMON: Remote collaboration system based on large scale simulation

    International Nuclear Information System (INIS)

    Sugawara, Akihiro; Kishimoto, Yasuaki

    2003-01-01

    The development of the SIMON (SImulation MONitoring) system is described. SIMON aims to investigate many physical phenomena of tokamak-type nuclear fusion plasma by simulation, and to exchange information and carry out joint research with scientists around the world over the internet. The characteristics of SIMON are as follows: 1) reduced simulation load through a trigger sending method, 2) visualization of simulation results and a hierarchical structure of analysis, 3) a reduced number of licenses by using the command line when the software is used, 4) improved support for network access to simulation data output through the use of HTML (Hyper Text Markup Language), 5) avoidance of complex built-in work in the client part, and 6) small and portable software. The visualization method for large scale simulation, the remote collaboration system based on HTML, the trigger sending method, the hierarchical analysis method, the introduction into a three-dimensional electromagnetic transport code, and the technologies of the SIMON system are explained. (S.Y.)

  10. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continues to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu

  11. Fatigue Analysis of Large-scale Wind turbine

    Directory of Open Access Journals (Sweden)

    Zhu Yongli

    2017-01-01

    Full Text Available The paper studies fatigue damage of the top flange of a large-scale wind turbine generator. It establishes a finite element model of the top flange connection system with the finite element analysis software MSC.Marc/Mentat, analyzes its fatigue strain, simulates the flange fatigue load conditions with the Bladed software, acquires the flange fatigue load spectrum with the rain-flow counting method, and finally performs the fatigue analysis of the top flange with the fatigue analysis software MSC.Fatigue and Palmgren-Miner linear cumulative damage theory. The results provide new insight into flange fatigue analysis of large-scale wind turbine generators and have practical engineering value.
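
    The final step, Palmgren-Miner linear cumulative damage, can be written in a few lines once a rainflow-counted load spectrum and an S-N curve are available; the Basquin-type curve constants and the spectrum below are made-up illustrations, not the turbine's certified data.

```python
# Palmgren-Miner linear cumulative damage, as used in the flange fatigue check.
# The S-N curve constants and the counted spectrum are illustrative only.
def cycles_to_failure(stress_range_mpa, sn_intercept=1.0e12, sn_slope=3.0):
    """Basquin-type S-N curve: N = C / S^m."""
    return sn_intercept / stress_range_mpa ** sn_slope

# (stress range [MPa], counted cycles) pairs from a rainflow count of the load history.
spectrum = [(40.0, 2.0e6), (80.0, 3.0e5), (120.0, 4.0e4), (160.0, 5.0e3)]

damage = sum(n / cycles_to_failure(s) for s, n in spectrum)
print(f"Miner damage sum D = {damage:.3f} "
      f"({'OK' if damage < 1.0 else 'fatigue failure predicted'})")
```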

  12. Defining Execution Viewpoints for a Large and Complex Software-Intensive System

    OpenAIRE

    Callo Arias, Trosky B.; America, Pierre; Avgeriou, Paris

    2009-01-01

    An execution view is an important asset for developing large and complex systems. An execution view helps practitioners to describe, analyze, and communicate what a software system does at runtime and how it does it. In this paper, we present an approach to define execution viewpoints for an existing large and complex software-intensive system. This definition approach enables the customization and extension of a set of predefined viewpoints to address the requirements of a specific developme...

  13. Algorithm 873: LSTRS: MATLAB Software for Large-Scale Trust-Region Subproblems and Regularization

    DEFF Research Database (Denmark)

    Rojas Larrazabal, Marielba de la Caridad; Santos, Sandra A.; Sorensen, Danny C.

    2008-01-01

    A MATLAB 6.0 implementation of the LSTRS method is presented. LSTRS was described in Rojas, M., Santos, S.A., and Sorensen, D.C., A new matrix-free method for the large-scale trust-region subproblem, SIAM J. Optim., 11(3):611-646, 2000. LSTRS is designed for large-scale quadratic problems with one...... at each step. LSTRS relies on matrix-vector products only and has low and fixed storage requirements, features that make it suitable for large-scale computations. In the MATLAB implementation, the Hessian matrix of the quadratic objective function can be specified either explicitly, or in the form...... of a matrix-vector multiplication routine. Therefore, the implementation preserves the matrix-free nature of the method. A description of the LSTRS method and of the MATLAB software, version 1.2, is presented. Comparisons with other techniques and applications of the method are also included. A guide...
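
    LSTRS itself is MATLAB software; the sketch below only illustrates the matrix-free idea in Python, using a SciPy trust-region solver that likewise accepts the Hessian as a Hessian-vector-product routine so the matrix is never formed explicitly.

```python
# Matrix-free optimization sketch: supply only a Hessian-vector product.
# This is not LSTRS; it merely shows the same interface idea with SciPy.
import numpy as np
from scipy.optimize import minimize

n = 200
diag = np.linspace(1.0, 100.0, n)        # implicit (diagonal) Hessian
g = np.ones(n)

def fun(x):
    return 0.5 * x @ (diag * x) + g @ x

def grad(x):
    return diag * x + g

def hessp(x, p):
    # Only the action of the Hessian on p is needed; H is never assembled.
    return diag * p

res = minimize(fun, np.zeros(n), jac=grad, hessp=hessp, method="trust-ncg")
print("converged:", res.success, " ||grad|| =", np.linalg.norm(grad(res.x)))
```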

  14. Software in windows for staple compounding system of microcomputer nuclear mass scale

    International Nuclear Information System (INIS)

    Wang Yanting; Zhang Yongming; Wang Yu; Jin Dongping

    1998-01-01

    The software developed in Windows for the staple compounding system of a microcomputer nuclear mass scale is described. The staple compounding system is briefly introduced, and the software structure and its implementation method are given.

  15. Cloud-enabled large-scale land surface model simulations with the NASA Land Information System

    Science.gov (United States)

    Duffy, D.; Vaughan, G.; Clark, M. P.; Peters-Lidard, C. D.; Nijssen, B.; Nearing, G. S.; Rheingrover, S.; Kumar, S.; Geiger, J. V.

    2017-12-01

    Developed by the Hydrological Sciences Laboratory at NASA Goddard Space Flight Center (GSFC), the Land Information System (LIS) is a high-performance software framework for terrestrial hydrology modeling and data assimilation. LIS provides the ability to integrate satellite and ground-based observational products and advanced modeling algorithms to extract land surface states and fluxes. Through a partnership with the National Center for Atmospheric Research (NCAR) and the University of Washington, the LIS model is currently being extended to include the Structure for Unifying Multiple Modeling Alternatives (SUMMA). With the addition of SUMMA in LIS, meaningful simulations containing a large multi-model ensemble will be enabled and can provide advanced probabilistic continental-domain modeling capabilities at spatial scales relevant for water managers. The resulting LIS/SUMMA application framework is difficult for non-experts to install due to the large number of dependencies on specific versions of operating systems, libraries, and compilers. This has created a significant barrier to entry for domain scientists that are interested in using the software on their own systems or in the cloud. In addition, the requirement to support multiple run time environments across the LIS community has created a significant burden on the NASA team. To overcome these challenges, LIS/SUMMA has been deployed using Linux containers, which allow an entire software package along with all dependencies to be installed within a working runtime environment, and Kubernetes, which orchestrates the deployment of a cluster of containers. Within a cloud environment, users can now easily create a cluster of virtual machines and run large-scale LIS/SUMMA simulations. Installations that used to take weeks or months can now be performed in minutes. This presentation will discuss the steps required to create a cloud-enabled large-scale simulation, present examples of its use, and

  16. Study of multi-functional precision optical measuring system for large scale equipment

    Science.gov (United States)

    Jiang, Wei; Lao, Dabao; Zhou, Weihu; Zhang, Wenying; Jiang, Xingjian; Wang, Yongxi

    2017-10-01

    The effective application of high performance measurement technology can greatly improve large-scale equipment manufacturing capability. The measurement of geometric parameters such as size, attitude and position therefore requires a measurement system with high precision, multiple functions, portability and other characteristics. However, existing measuring instruments, such as the laser tracker, total station and photogrammetry system, mostly offer a single function, require station moving and have other shortcomings. A laser tracker needs to work with a cooperative target and can hardly meet the requirements of measurement in extreme environments. A total station is mainly used for outdoor surveying and mapping and hardly achieves the accuracy demanded in industrial measurement. A photogrammetry system can achieve wide-range multi-point measurement, but the measuring range is limited and the station needs to be moved repeatedly. This paper presents a non-contact opto-electronic measuring instrument that can work both by scanning the measurement path and by tracking and measuring a cooperative target. The system is based on several key technologies, such as absolute distance measurement, two-dimensional angle measurement, automatic target recognition and accurate aiming, precision control, assembly of complex mechanical systems and multi-functional 3D visualization software. Among them, the absolute distance measurement module ensures measurement with high accuracy, and the two-dimensional angle measuring module provides precision angle measurement. The system is suitable for non-contact measurement of large-scale equipment; it can ensure the quality and performance of large-scale equipment throughout the manufacturing process and improve the manufacturing ability of large-scale and high-end equipment.
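
    The geometric core of combining absolute distance measurement with two-dimensional angle measurement is a spherical-to-Cartesian conversion, sketched below; instrument calibration terms (axis offsets, encoder corrections) are deliberately omitted and the numbers are illustrative.

```python
# Convert one (distance, azimuth, elevation) observation to Cartesian coordinates.
# Calibration corrections of a real instrument are not modeled here.
import math

def polar_to_cartesian(distance_m, azimuth_deg, elevation_deg):
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance_m * math.cos(el) * math.cos(az)
    y = distance_m * math.cos(el) * math.sin(az)
    z = distance_m * math.sin(el)
    return x, y, z

# A target 25.000 m away at 30 deg azimuth and 5 deg elevation.
print([round(c, 4) for c in polar_to_cartesian(25.0, 30.0, 5.0)])
```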

  17. Fires in large scale ventilation systems

    International Nuclear Information System (INIS)

    Gregory, W.S.; Martin, R.A.; White, B.W.; Nichols, B.D.; Smith, P.R.; Leslie, I.H.; Fenton, D.L.; Gunaji, M.V.; Blythe, J.P.

    1991-01-01

    This paper summarizes the experience gained simulating fires in large scale ventilation systems patterned after ventilation systems found in nuclear fuel cycle facilities. The series of experiments discussed included: (1) combustion aerosol loading of 0.61x0.61 m HEPA filters with the combustion products of two organic fuels, polystyrene and polymethylemethacrylate; (2) gas dynamic and heat transport through a large scale ventilation system consisting of a 0.61x0.61 m duct 90 m in length, with dampers, HEPA filters, blowers, etc.; (3) gas dynamic and simultaneous transport of heat and solid particulate (consisting of glass beads with a mean aerodynamic diameter of 10μ) through the large scale ventilation system; and (4) the transport of heat and soot, generated by kerosene pool fires, through the large scale ventilation system. The FIRAC computer code, designed to predict fire-induced transients in nuclear fuel cycle facility ventilation systems, was used to predict the results of experiments (2) through (4). In general, the results of the predictions were satisfactory. The code predictions for the gas dynamics, heat transport, and particulate transport and deposition were within 10% of the experimentally measured values. However, the code was less successful in predicting the amount of soot generation from kerosene pool fires, probably due to the fire module of the code being a one-dimensional zone model. The experiments revealed a complicated three-dimensional combustion pattern within the fire room of the ventilation system. Further refinement of the fire module within FIRAC is needed. (orig.)

  18. Large-Scale Wireless Temperature Monitoring System for Liquefied Petroleum Gas Storage Tanks

    Directory of Open Access Journals (Sweden)

    Guangwen Fan

    2015-09-01

    Full Text Available Temperature distribution is a critical indicator of the health condition for Liquefied Petroleum Gas (LPG) storage tanks. In this paper, we present a large-scale wireless temperature monitoring system to evaluate the safety of LPG storage tanks. The system includes wireless sensor networks, high temperature fiber-optic sensors, and monitoring software. Finally, a case study on real-world LPG storage tanks proves the feasibility of the system. The unique features of wireless transmission, automatic data acquisition and management, local and remote access make the developed system a good alternative for temperature monitoring of LPG storage tanks in practical applications.

  19. Large-Scale Wireless Temperature Monitoring System for Liquefied Petroleum Gas Storage Tanks.

    Science.gov (United States)

    Fan, Guangwen; Shen, Yu; Hao, Xiaowei; Yuan, Zongming; Zhou, Zhi

    2015-09-18

    Temperature distribution is a critical indicator of the health condition for Liquefied Petroleum Gas (LPG) storage tanks. In this paper, we present a large-scale wireless temperature monitoring system to evaluate the safety of LPG storage tanks. The system includes wireless sensor networks, high temperature fiber-optic sensors, and monitoring software. Finally, a case study on real-world LPG storage tanks proves the feasibility of the system. The unique features of wireless transmission, automatic data acquisition and management, local and remote access make the developed system a good alternative for temperature monitoring of LPG storage tanks in practical applications.

  20. The Software Reliability of Large Scale Integration Circuit and Very Large Scale Integration Circuit

    OpenAIRE

    Artem Ganiyev; Jan Vitasek

    2010-01-01

    This article describes a method for evaluating the faultless functioning of large scale integration (LSI) and very large scale integration (VLSI) circuits. The article gives a comparative analysis of the factors that determine the faultlessness of integrated circuits, an analysis of existing methods, and a model for evaluating the faultless functioning of LSI and VLSI circuits. The main part describes a proposed algorithm and program for the analysis of the fault rate in LSI and VLSI circuits.

  1. Challenges in Managing Trustworthy Large-scale Digital Science

    Science.gov (United States)

    Evans, B. J. K.

    2017-12-01

    The increased use of large-scale international digital science has opened a number of challenges for managing, handling, using and preserving scientific information. The large volumes of information are driven by three main categories - model outputs, including coupled models and ensembles, data products that have been processed to a level of usability, and increasingly heuristically driven data analysis. These data products are increasingly the ones that are usable by the broad communities, and far in excess of the raw instrument data outputs. The data, software and workflows are then shared and replicated to allow broad use at an international scale, which places further demands on infrastructure to support how the information is managed reliably across distributed resources. Users necessarily rely on these underlying "black boxes" so that they are productive in producing new scientific outcomes. The software for these systems depends on computational infrastructure, interconnected software systems, and information capture systems. This ranges from the fundamentals of the reliability of the compute hardware, system software stacks and libraries, to the model software. Due to these complexities and the capacity of the infrastructure, there is an increased emphasis on transparency of the approach and robustness of the methods over full reproducibility. Furthermore, with large volume data management, it is increasingly difficult to store the historical versions of all model and derived data. Instead, the emphasis is on the ability to access the updated products and the confidence that previous outcomes are still relevant and can be updated for the new information. We will discuss these challenges and some of the approaches underway that are being used to address these issues.

  2. Staghorn: An Automated Large-Scale Distributed System Analysis Platform

    Energy Technology Data Exchange (ETDEWEB)

    Gabert, Kasimir [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Burns, Ian [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Elliott, Steven [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Kallaher, Jenna [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Vail, Adam [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]

    2016-09-01

    Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.

  3. Optimization of large-scale heterogeneous system-of-systems models.

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  4. Policy Driven Development: Flexible Policy Insertion for Large Scale Systems.

    Science.gov (United States)

    Demchak, Barry; Krüger, Ingolf

    2012-07-01

    The success of a software system depends critically on how well it reflects and adapts to stakeholder requirements. Traditional development methods often frustrate stakeholders by creating long latencies between requirement articulation and system deployment, especially in large scale systems. One source of latency is the maintenance of policy decisions encoded directly into system workflows at development time, including those involving access control and feature set selection. We created the Policy Driven Development (PDD) methodology to address these development latencies by enabling the flexible injection of decision points into existing workflows at runtime, thus enabling policy composition that integrates requirements furnished by multiple, oblivious stakeholder groups. Using PDD, we designed and implemented a production cyberinfrastructure that demonstrates policy and workflow injection that quickly implements stakeholder requirements, including features not contemplated in the original system design. PDD provides a path to quickly and cost effectively evolve such applications over a long lifetime.
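
    A minimal sketch of the underlying idea, a workflow step that consults policies registered at runtime, is shown below; all names are hypothetical, and the sketch is not the API of the cyberinfrastructure described in the paper.

```python
# Runtime decision point: a workflow step consults whatever policies have been
# registered, so stakeholder rules can be added after deployment without
# editing the workflow itself. All names here are hypothetical.
from typing import Callable, Dict, List

_policies: Dict[str, List[Callable[[dict], bool]]] = {}

def register_policy(decision_point: str, policy: Callable[[dict], bool]) -> None:
    _policies.setdefault(decision_point, []).append(policy)

def decide(decision_point: str, context: dict) -> bool:
    """A request proceeds only if every injected policy allows it."""
    return all(policy(context) for policy in _policies.get(decision_point, []))

# Workflow code written without knowledge of future policies:
def submit_job(user: str, dataset: str) -> str:
    if not decide("job.submit", {"user": user, "dataset": dataset}):
        return "rejected by policy"
    return f"job submitted for {user} on {dataset}"

# Policies injected later by an oblivious stakeholder group:
register_policy("job.submit", lambda ctx: ctx["user"] != "guest")        # access control
register_policy("job.submit", lambda ctx: not ctx["dataset"].startswith("embargoed/"))

print(submit_job("alice", "public/survey"))      # job submitted ...
print(submit_job("guest", "public/survey"))      # rejected by policy
```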

  5. LongLine: Visual Analytics System for Large-scale Audit Logs

    Directory of Open Access Journals (Sweden)

    Seunghoon Yoo

    2018-03-01

    Full Text Available Audit logs are different from other software logs in that they record the most primitive events (i.e., system calls in modern operating systems). Audit logs contain a detailed trace of an operating system, and thus have received great attention from security experts and system administrators. However, the complexity and size of audit logs, which increase in real time, have hindered analysts from understanding and analyzing them. In this paper, we present a novel visual analytics system, LongLine, which enables interactive visual analyses of large-scale audit logs. LongLine lowers the interpretation barrier of audit logs by employing human-understandable representations (e.g., file paths and commands) instead of abstract indicators of operating systems (e.g., file descriptors), as well as revealing the temporal patterns of the logs in a multi-scale fashion with meaningful granularity of time in mind (e.g., hourly, daily, and weekly). LongLine also streamlines comparative analysis between interesting subsets of logs, which is essential in detecting anomalous behaviors of systems. In addition, LongLine allows analysts to monitor the system state in a streaming fashion, keeping the latency between log creation and visualization less than one minute. Finally, we evaluate our system through a case study and a scenario analysis with security experts.
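
    The multi-scale temporal views rest on rolling syscall-level events up into coarser time buckets. The sketch below shows hourly and daily roll-ups per command; the event tuples and field layout are assumed for illustration.

```python
# Multi-scale temporal binning: syscall-level audit events rolled up into
# hourly and daily counts per command. Event layout is assumed, not LongLine's.
from collections import Counter
from datetime import datetime

events = [
    ("2018-03-02 09:15:04", "execve", "/usr/bin/ssh"),
    ("2018-03-02 09:15:05", "open",   "/usr/bin/ssh"),
    ("2018-03-02 10:47:21", "execve", "/usr/bin/scp"),
    ("2018-03-03 02:03:11", "execve", "/usr/bin/ssh"),
]

def rollup(events, fmt):
    """Count events per (time bucket, command); fmt picks the granularity."""
    counts = Counter()
    for ts, syscall, command in events:
        bucket = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").strftime(fmt)
        counts[(bucket, command)] += 1
    return counts

print(rollup(events, "%Y-%m-%d %H:00"))   # hourly view
print(rollup(events, "%Y-%m-%d"))         # daily view
```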

  6. Tool Support for Parametric Analysis of Large Software Simulation Systems

    Science.gov (United States)

    Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony

    2008-01-01

    The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space, and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. Additional test-cases, automatically generated from models (e.g., UML, Simulink, Stateflow) improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
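    The n-factor combinatorial variation mentioned above can be pictured with a small, hedged Python sketch that greedily selects test cases covering every pairwise (2-factor) combination of parameter values; the parameter names and values are invented, and the tool's actual generator is certainly more elaborate.

        # Illustrative sketch (not the tool's actual generator) of 2-factor (pairwise)
        # combinatorial test-case selection. Parameter names and values are invented.
        from itertools import combinations, product
        import random

        parameters = {
            "thruster_gain": [0.8, 1.0, 1.2],
            "sensor_noise":  ["low", "nominal", "high"],
            "mass_offset":   [-5.0, 0.0, 5.0],
        }
        names = list(parameters)

        # Every pair of parameters, and every pair of their values, must appear in some test case.
        uncovered = {
            ((a, va), (b, vb))
            for a, b in combinations(names, 2)
            for va, vb in product(parameters[a], parameters[b])
        }

        def pairs_covered(case):
            return {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)} & uncovered

        random.seed(0)
        test_cases = []
        while uncovered:
            # Greedily keep the random candidate that covers the most still-uncovered pairs.
            candidates = [{n: random.choice(parameters[n]) for n in names} for _ in range(50)]
            best = max(candidates, key=lambda c: len(pairs_covered(c)))
            gain = pairs_covered(best)
            if not gain:
                # Fall back: build a case directly from one uncovered pair to guarantee progress.
                (a, va), (b, vb) = next(iter(uncovered))
                best = {n: random.choice(parameters[n]) for n in names}
                best[a], best[b] = va, vb
                gain = pairs_covered(best)
            uncovered -= gain
            test_cases.append(best)

        print(f"{len(test_cases)} test cases cover all pairwise interactions")

    Pairwise coverage keeps the number of cases far below the full Cartesian product while still exercising every two-way interaction, which is the rationale for n-factor variation in the tool described above.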

  7. Status: Large-scale subatmospheric cryogenic systems

    International Nuclear Information System (INIS)

    Peterson, T.

    1989-01-01

    In the late 1960s and early 1970s an interest in testing and operating RF cavities at 1.8K motivated the development and construction of four large (300 Watt) 1.8K refrigeration systems. In the past decade, development of successful superconducting RF cavities and interest in obtaining higher magnetic fields with the improved Niobium-Titanium superconductors has once again created interest in large-scale 1.8K refrigeration systems. The L'Air Liquide plant for Tore Supra is a recently commissioned 300 Watt 1.8K system which incorporates new technology, cold compressors, to obtain the low vapor pressure for low temperature cooling. CEBAF proposes to use cold compressors to obtain 5KW at 2.0K. Magnetic refrigerators of 10 Watt capacity or higher at 1.8K are now being developed. The state of the art of large-scale refrigeration in the range under 4K will be reviewed. 28 refs., 4 figs., 7 tabs.

  8. Needs, opportunities, and options for large scale systems research

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26-27, 1984 in Pittsburgh with nine panel members and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  9. Rucio - The next generation of large scale distributed system for ATLAS Data Management

    Science.gov (United States)

    Garonne, V.; Vigne, R.; Stewart, G.; Barisits, M.; Beermann, T.; Lassnig, M.; Serfon, C.; Goossens, L.; Nairz, A.; Atlas Collaboration

    2014-06-01

    Rucio is the next-generation Distributed Data Management (DDM) system benefiting from recent advances in cloud and "Big Data" computing to address HEP experiments scaling requirements. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 140 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio will deal with these issues by relying on a conceptual data model and new technology to ensure system scalability, address new user requirements and employ a new automation framework to reduce operational overheads. We present the key concepts of Rucio, including its data organization/representation and a model of how to manage central group and user activities. The Rucio design, and the technology it employs, is described, specifically looking at its RESTful architecture and the various software components it uses. We also show the performance of the system.

  10. Defining Execution Viewpoints for a Large and Complex Software-Intensive System

    NARCIS (Netherlands)

    Callo Arias, Trosky B.; America, Pierre; Avgeriou, Paris

    2009-01-01

    An execution view is an important asset for developing large and complex systems. An execution view helps practitioners to describe, analyze, and communicate what a software system does at runtime and how it does it. In this paper, we present an approach to define execution viewpoints for an

  11. An integrated system for large scale scanning of nuclear emulsions

    Energy Technology Data Exchange (ETDEWEB)

    Bozza, Cristiano, E-mail: kryss@sa.infn.it [University of Salerno and INFN, via Ponte Don Melillo, Fisciano 84084 (Italy); D’Ambrosio, Nicola [Laboratori Nazionali del Gran Sasso, S.S. 17 BIS km 18.910, Assergi (AQ) 67010 (Italy); De Lellis, Giovanni [University of Napoli and INFN, Complesso Universitario di Monte Sant'Angelo, via Cintia Ed. G, Napoli 80126 (Italy); De Serio, Marilisa [University of Bari and INFN, via E. Orabona 4, Bari 70125 (Italy); Di Capua, Francesco [INFN Napoli, Complesso Universitario di Monte Sant'Angelo, via Cintia Ed. G, Napoli 80126 (Italy); Di Crescenzo, Antonia [University of Napoli and INFN, Complesso Universitario di Monte Sant'Angelo, via Cintia Ed. G, Napoli 80126 (Italy); Di Ferdinando, Donato [INFN Bologna, viale B. Pichat 6/2, Bologna 40127 (Italy); Di Marco, Natalia [Laboratori Nazionali del Gran Sasso, S.S. 17 BIS km 18.910, Assergi (AQ) 67010 (Italy); Esposito, Luigi Salvatore [Laboratori Nazionali del Gran Sasso, now at CERN, Geneva (Switzerland); Fini, Rosa Anna [INFN Bari, via E. Orabona 4, Bari 70125 (Italy); Giacomelli, Giorgio [University of Bologna and INFN, viale B. Pichat 6/2, Bologna 40127 (Italy); Grella, Giuseppe [University of Salerno and INFN, via Ponte Don Melillo, Fisciano 84084 (Italy); Ieva, Michela [University of Bari and INFN, via E. Orabona 4, Bari 70125 (Italy); Kose, Umut [INFN Padova, via Marzolo 8, Padova (PD) 35131 (Italy); Longhin, Andrea; Mauri, Nicoletta [INFN Laboratori Nazionali di Frascati, via E. Fermi 40, Frascati (RM) 00044 (Italy); Medinaceli, Eduardo [University of Padova and INFN, via Marzolo 8, Padova (PD) 35131 (Italy); Monacelli, Piero [University of L'Aquila and INFN, via Vetoio Loc. Coppito, L'Aquila (AQ) 67100 (Italy); Muciaccia, Maria Teresa; Pastore, Alessandra [University of Bari and INFN, via E. Orabona 4, Bari 70125 (Italy); and others

    2013-03-01

    The European Scanning System, developed to analyse nuclear emulsions at high speed, has been completed with the development of a high-level software infrastructure to automate and support large-scale emulsion scanning. In one year, an average installation is capable of performing data-taking and online analysis on a total surface ranging from a few m² to tens of m², acquiring many billions of tracks, corresponding to several TB. This paper focuses on the procedures that have been implemented and on their impact on physics measurements. The system proved robust, reliable, fault-tolerant and user-friendly, and seldom needs assistance. A dedicated relational database system is the backbone of the whole infrastructure, storing the data themselves rather than only catalogues of data files, as is common practice, which makes it a unique case among high-energy physics DAQ systems. The logical organisation of the system is described and a summary is given of the physics measurements that are readily available by automated processing.

  12. An integrated system for large scale scanning of nuclear emulsions

    International Nuclear Information System (INIS)

    Bozza, Cristiano; D’Ambrosio, Nicola; De Lellis, Giovanni; De Serio, Marilisa; Di Capua, Francesco; Di Crescenzo, Antonia; Di Ferdinando, Donato; Di Marco, Natalia; Esposito, Luigi Salvatore; Fini, Rosa Anna; Giacomelli, Giorgio; Grella, Giuseppe; Ieva, Michela; Kose, Umut; Longhin, Andrea; Mauri, Nicoletta; Medinaceli, Eduardo; Monacelli, Piero; Muciaccia, Maria Teresa; Pastore, Alessandra

    2013-01-01

    The European Scanning System, developed to analyse nuclear emulsions at high speed, has been completed with the development of a high-level software infrastructure to automate and support large-scale emulsion scanning. In one year, an average installation is capable of performing data-taking and online analysis on a total surface ranging from a few m² to tens of m², acquiring many billions of tracks, corresponding to several TB. This paper focuses on the procedures that have been implemented and on their impact on physics measurements. The system proved robust, reliable, fault-tolerant and user-friendly, and seldom needs assistance. A dedicated relational database system is the backbone of the whole infrastructure, storing the data themselves rather than only catalogues of data files, as is common practice, which makes it a unique case among high-energy physics DAQ systems. The logical organisation of the system is described and a summary is given of the physics measurements that are readily available by automated processing.

  13. Contributions to large scale and performance tests of the ATLAS online software

    International Nuclear Information System (INIS)

    Badescu, E.; Caprini, M.

    2003-01-01

    One of the sub-systems of the Trigger/DAQ system of the future ATLAS experiment is the Online Software system. It encompasses the functionality needed to configure, control and monitor the DAQ. Its architecture is based on a component structure described in the ATLAS Trigger/DAQ technical proposal. Online Software is responsible for control, supervision and internal communication, excluding the event data flow. For the final ATLAS experiment in 2006 it is expected that it will have to control up to 1000 processors. The core components are the run control, process manager, configuration database, inter process communication, message reporting system and information exchange system. The auxiliary components, namely the resource manager, online bookkeeper and the integrated graphical user interface, were in use for tests. All the components are unit tested for functionality, fault tolerance, performance and scalability. Extended functionality tests are performed at CERN and remote institutes before each official release. The test objective was the verification of the scalability of the system to a configuration containing a large number of nodes. The aim was to study the interaction between the components, to identify critical areas and to investigate the variation and optimization of online system parameters. The timing of the data acquisition transition phases was recorded and analysed. The information on all processes and their relationships, the run control hierarchy in the online system as well as startup and shutdown dependencies are defined in the configuration database data file. Timing measurements were performed for the transitions shown in the paper and defined as follows: Setup: start online server infrastructure; Close: remove online infrastructure; Boot: start all supervised processes; Shutdown: stop all supervised processes; Cold start: start the supervised processes and go to the Running state; Cold stop: reverse of the cold start phase; Luke warm start

  14. Large-scale Rectangular Ruler Automated Verification Device

    Science.gov (United States)

    Chen, Hao; Chang, Luping; Xing, Minjian; Xie, Xie

    2018-03-01

    This paper introduces a large-scale rectangular ruler automated verification device, which consists of a photoelectric autocollimator, a self-designed mechanical drive car and an automatic data acquisition system. The mechanical design of the device covers the optical axis, the drive unit, the fixture and the wheels. The control system design covers both hardware and software: the hardware is based on a single-chip microcontroller, and the software handles the photoelectric autocollimator and the automatic data acquisition process. The device can acquire vertical measurement data automatically. The reliability of the device is verified by experimental comparison, and the results meet the requirements of the right-angle test procedure.

  15. CVSgrab : Mining the History of Large Software Projects

    NARCIS (Netherlands)

    Voinea, S.L.; Telea, A.

    2006-01-01

    Many software projects use Software Configuration Management systems to support their development process. Such systems accumulate in time large amounts of information useful for process accounting and auditing. We study how software developers can get insight in this information in order to

  16. Equipment of visualization environment of a large-scale structural analysis system. Visualization using AVS/Express of an ADVENTURE system

    International Nuclear Information System (INIS)

    Miyazaki, Mikiya

    2004-02-01

    Visualization of data is performed in many research fields, and many special-purpose visualization packages exist today, but such packages typically interface with only a small number of solvers. In many simulations, data conversion between the analysis code and the visualization software is therefore required in practice. This report describes the setup of a data visualization environment in which AVS/Express was installed in response to many requests from users of the large-scale structural analysis system provided as an ITBL community software. This environment makes it possible to use the ITBL visualization server as a visualization device after computation on the ITBL computer. Moreover, wider use within the community is expected by merging it into the ITBL/AVS environment in the future. (author)

  17. Large-scale Intelligent Transportation Systems simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large scale problems. A novel feature of our design is that vehicles will be represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.

  18. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    Science.gov (United States)

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), which is the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software

  19. Improving Software Systems By Flow Control Analysis

    Directory of Open Access Journals (Sweden)

    Piotr Poznanski

    2012-01-01

    Using agile methods during the implementation of a system that meets mission-critical requirements can be a real challenge. A change in a system built of dozens or even hundreds of specialized devices with embedded software requires the cooperation of a large group of engineers. This article presents a solution that supports parallel work of groups of system analysts and software developers. Applying formal rules to requirements written in natural language enables formal analysis of artifacts that form a bridge between software and system requirements. The formalism and textual form of the requirements allowed the automatic generation of a message flow graph for the (sub)system, called the “big-picture-model”. Flow diagram analysis helped to avoid a large number of defects whose repair cost in extreme cases could undermine the legitimacy of agile methods in projects of this scale. Retrospectively, a reduction of technical debt was observed. Continuous analysis of the “big-picture model” improves the control of the quality parameters of the software architecture. The article also tries to explain why a commercial platform based on the UML modeling language may not be sufficient in projects of this complexity.

  20. A fast approach to generate large-scale topographic maps based on new Chinese vehicle-borne Lidar system

    International Nuclear Information System (INIS)

    Youmei, Han; Bogang, Yang

    2014-01-01

    Large-scale topographic maps are important basic information for city and regional planning and management. Traditional large-scale mapping methods are mostly based on manual surveying and photogrammetry. Manual surveying is inefficient and limited by the environment, while photogrammetric methods (such as low-altitude aerial mapping) are an economical and effective way to map wide, regular areas but do not work well for small areas because of the high cost in manpower and resources. In recent years, vehicle-borne LIDAR technology has developed rapidly, and its application in surveying and mapping has become a new topic. The main objective of this investigation is to explore the potential of vehicle-borne LIDAR technology for the fast production of large-scale topographic maps based on a new Chinese vehicle-borne LIDAR system. After the field data capture, the maps can be produced in the office from the LIDAR point cloud using software that we programmed ourselves. In addition, the detailed process and an accuracy analysis are presented for an actual case. The results show that this new technology provides a fast method to generate large-scale topographic maps that is highly efficient and accurate compared to traditional methods.

  1. Balancing modern Power System with large scale of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela

    2014-01-01

    Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from the intermittent wind in large proportion may impact the control of the power system balance and thus cause deviations in the power system frequency in small or islanded power systems, or in tie-line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns the secure and stable grid operation. To ensure stable power system operation, the evolving power system has to be analysed with improved analytical tools and techniques. This paper proposes techniques for the active power balance control in future power systems with the large scale wind power integration, where the power balancing model provides the hour-ahead dispatch plan with reduced planning horizon and the real time

  2. RT-Syn: A real-time software system generator

    Science.gov (United States)

    Setliff, Dorothy E.

    1992-01-01

    This paper presents research into providing highly reusable and maintainable components by using automatic software synthesis techniques. This proposal uses domain knowledge combined with automatic software synthesis techniques to engineer large-scale mission-critical real-time software. The hypothesis centers on a software synthesis architecture that specifically incorporates application-specific (in this case real-time) knowledge. This architecture synthesizes complex system software to meet a behavioral specification and external interaction design constraints. Some examples of these external constraints are communication protocols, precisions, timing, and space limitations. The incorporation of application-specific knowledge facilitates the generation of mathematical software metrics which are used to narrow the design space, thereby making software synthesis tractable. Success has the potential to dramatically reduce mission-critical system life-cycle costs not only by reducing development time, but, more importantly, by facilitating maintenance, modifications, and extensions of complex mission-critical software systems, which currently dominate life cycle costs.

  3. The Node Monitoring Component of a Scalable Systems Software Environment

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Samuel James [Iowa State Univ., Ames, IA (United States)

    2006-01-01

    This research describes Fountain, a suite of programs used to monitor the resources of a cluster. A cluster is a collection of individual computers that are connected via a high speed communication network. They are traditionally used by users who desire more resources, such as processing power and memory, than any single computer can provide. A common drawback to effectively utilizing such a large-scale system is the management infrastructure, which often does not scale well as the system grows. Large-scale parallel systems provide new research challenges in the area of systems software, the programs or tools that manage the system from boot-up to running a parallel job. The approach presented in this thesis utilizes a collection of separate components that communicate with each other to achieve a common goal. While systems software comprises a broad array of components, this thesis focuses on the design choices for a node monitoring component. We will describe Fountain, an implementation of the Scalable Systems Software (SSS) node monitor specification. It is targeted at aggregate node monitoring for clusters, focusing on both scalability and fault tolerance as its design goals. It leverages widely used technologies such as XML and HTTP to present an interface to other components in the SSS environment.
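    Purely as a hedged illustration of the XML-over-HTTP reporting style mentioned above (not Fountain's actual protocol or message schema), a node monitor could post its metrics roughly like this; the aggregator URL and element names are placeholders.

        # Hypothetical node-metric report as XML over HTTP (Unix-only load average).
        import os
        import platform
        import time
        import urllib.request
        import xml.etree.ElementTree as ET

        AGGREGATOR_URL = "http://aggregator.example:8080/report"  # placeholder endpoint

        def collect_metrics() -> ET.Element:
            load1, load5, load15 = os.getloadavg()  # available on Unix-like systems
            node = ET.Element("node", name=platform.node(), timestamp=str(int(time.time())))
            ET.SubElement(node, "loadavg", one=f"{load1:.2f}", five=f"{load5:.2f}",
                          fifteen=f"{load15:.2f}")
            return node

        def report_once() -> None:
            payload = ET.tostring(collect_metrics())
            req = urllib.request.Request(AGGREGATOR_URL, data=payload,
                                         headers={"Content-Type": "application/xml"})
            try:
                urllib.request.urlopen(req, timeout=5)
            except OSError:
                pass  # aggregator unreachable; a real monitor would buffer and retry

        if __name__ == "__main__":
            report_once()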

  4. A Top-Down Approach to Construct Execution Views of a Large Software-Intensive System

    NARCIS (Netherlands)

    Callo Arias, T.B.; America, P.H.M.; Avgeriou, P.

    2011-01-01

    This paper presents a top-down approach to construct execution views of a large and complex software-intensive system. Execution views describe what the software does at runtime and how it does it. The presented approach represents a reverse architecting solution that follows a set of pre-defined

  5. EON: software for long time simulations of atomic scale systems

    Science.gov (United States)

    Chill, Samuel T.; Welborn, Matthew; Terrell, Rye; Zhang, Liang; Berthet, Jean-Claude; Pedersen, Andreas; Jónsson, Hannes; Henkelman, Graeme

    2014-07-01

    The EON software is designed for simulations of the state-to-state evolution of atomic scale systems over timescales greatly exceeding that of direct classical dynamics. States are defined as collections of atomic configurations from which a minimization of the potential energy gives the same inherent structure. The time evolution is assumed to be governed by rare events, where transitions between states are uncorrelated and infrequent compared with the timescale of atomic vibrations. Several methods for calculating the state-to-state evolution have been implemented in EON, including parallel replica dynamics, hyperdynamics and adaptive kinetic Monte Carlo. Global optimization methods, including simulated annealing, basin hopping and minima hopping are also implemented. The software has a client/server architecture where the computationally intensive evaluations of the interatomic interactions are calculated on the client-side and the state-to-state evolution is managed by the server. The client supports optimization for different computer architectures to maximize computational efficiency. The server is written in Python so that developers have access to the high-level functionality without delving into the computationally intensive components. Communication between the server and clients is abstracted so that calculations can be deployed on a single machine, clusters using a queuing system, large parallel computers using a message passing interface, or within a distributed computing environment. A generic interface to the evaluation of the interatomic interactions is defined so that empirical potentials, such as in LAMMPS, and density functional theory as implemented in VASP and GPAW can be used interchangeably. Examples are given to demonstrate the range of systems that can be modeled, including surface diffusion and island ripening of adsorbed atoms on metal surfaces, molecular diffusion on the surface of ice and global structural optimization of nanoparticles.
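    As a minimal sketch of the rare-event bookkeeping that methods such as adaptive kinetic Monte Carlo rely on, the following Python fragment selects the next transition with probability proportional to its rate and advances the clock by an exponentially distributed waiting time; the rates here are invented, whereas in EON they would come from the computationally intensive client-side calculations described above.

        # Rejection-free kinetic Monte Carlo step with illustrative (invented) rates.
        import math
        import random

        def kmc_step(rates, rng=random.random):
            total = sum(rates.values())
            # Select an event with probability proportional to its rate.
            threshold = rng() * total
            acc = 0.0
            for event, rate in rates.items():
                acc += rate
                if acc >= threshold:
                    chosen = event
                    break
            dt = -math.log(rng()) / total  # exponentially distributed waiting time
            return chosen, dt

        rates = {"hop_left": 2.0e3, "hop_right": 2.0e3, "exchange": 5.0e1}  # events per second
        t = 0.0
        for _ in range(5):
            event, dt = kmc_step(rates)
            t += dt
            print(f"t = {t:.3e} s: {event}")

    Because each step advances time by the inverse of the total escape rate, the simulated clock can cover timescales far beyond direct classical dynamics, which is the motivation stated in the abstract.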

  6. Stability and Control of Large-Scale Dynamical Systems A Vector Dissipative Systems Approach

    CERN Document Server

    Haddad, Wassim M

    2011-01-01

    Modern complex large-scale dynamical systems exist in virtually every aspect of science and engineering, and are associated with a wide variety of physical, technological, environmental, and social phenomena, including aerospace, power, communications, and network systems, to name just a few. This book develops a general stability analysis and control design framework for nonlinear large-scale interconnected dynamical systems, and presents the most complete treatment on vector Lyapunov function methods, vector dissipativity theory, and decentralized control architectures. Large-scale dynami

  7. Software for event oriented processing on multiprocessor systems

    International Nuclear Information System (INIS)

    Fischler, M.; Areti, H.; Biel, J.; Bracker, S.; Case, G.; Gaines, I.; Husby, D.; Nash, T.

    1984-08-01

    Computing intensive problems that require the processing of numerous essentially independent events are natural customers for large scale multi-microprocessor systems. This paper describes the software required to support users with such problems in a multiprocessor environment. It is based on experience with and development work aimed at processing very large amounts of high energy physics data

  8. Stability of large scale interconnected dynamical systems

    International Nuclear Information System (INIS)

    Akpan, E.P.

    1993-07-01

    Large scale systems modelled by a system of ordinary differential equations are considered and necessary and sufficient conditions are obtained for the uniform asymptotic connective stability of the systems using the method of cone-valued Lyapunov functions. It is shown that this model significantly improves the existing models. (author). 9 refs

  9. The use of production management techniques in the construction of large scale physics detectors

    CERN Document Server

    Bazan, A; Estrella, F; Kovács, Z; Le Flour, T; Le Goff, J M; Lieunard, S; McClatchey, R; Murray, S; Varga, L Z; Vialle, J P; Zsenei, M

    1999-01-01

    The construction process of detectors for the Large Hadron Collider (LHC) experiments is large scale, heavily constrained by resource availability and evolves with time. As a consequence, changes in detector component design need to be tracked and quickly reflected in the construction process. With similar problems in industry engineers employ so-called Product Data Management (PDM) systems to control access to documented versions of designs and managers employ so- called Workflow Management software (WfMS) to coordinate production work processes. However, PDM and WfMS software are not generally integrated in industry. The scale of LHC experiments, like CMS, demands that industrial production techniques be applied in detector construction. This paper outlines the major functions and applications of the CRISTAL system (Cooperating Repositories and an information System for Tracking Assembly Lifecycles) in use in CMS which successfully integrates PDM and WfMS techniques in managing large scale physics detector ...

  10. Rucio – The next generation of large scale distributed system for ATLAS data management

    International Nuclear Information System (INIS)

    Garonne, V; Vigne, R; Stewart, G; Barisits, M; Eermann, T B; Lassnig, M; Serfon, C; Goossens, L; Nairz, A

    2014-01-01

    Rucio is the next-generation Distributed Data Management (DDM) system benefiting from recent advances in cloud and 'Big Data' computing to address HEP experiments scaling requirements. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 140 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio will deal with these issues by relying on a conceptual data model and new technology to ensure system scalability, address new user requirements and employ a new automation framework to reduce operational overheads. We present the key concepts of Rucio, including its data organization/representation and a model of how to manage central group and user activities. The Rucio design, and the technology it employs, is described, specifically looking at its RESTful architecture and the various software components it uses. We also show the performance of the system.

  11. Validation of the process control system of an automated large scale manufacturing plant.

    Science.gov (United States)

    Neuhaus, H; Kremers, H; Karrer, T; Traut, R H

    1998-02-01

    The validation procedure for the process control system of a plant for the large scale production of human albumin from plasma fractions is described. A validation master plan is developed, defining the system and elements to be validated, the interfaces with other systems with the validation limits, a general validation concept and supporting documentation. Based on this master plan, the validation protocols are developed. For the validation, the system is subdivided into a field level, which is the equipment part, and an automation level. The automation level is further subdivided into sections according to the different software modules. Based on a risk categorization of the modules, the qualification activities are defined. The test scripts for the different qualification levels (installation, operational and performance qualification) are developed according to a previously performed risk analysis.

  12. Research on a Small Signal Stability Region Boundary Model of the Interconnected Power System with Large-Scale Wind Power

    Directory of Open Access Journals (Sweden)

    Wenying Liu

    2015-03-01

    For the interconnected power system with large-scale wind power, small signal stability has become the bottleneck restricting the sending-out of wind power as well as the security and stability of the whole power system. Around this issue, this paper establishes a small signal stability region boundary model of the interconnected power system with large-scale wind power based on catastrophe theory, providing a new method for analyzing small signal stability. Firstly, we analyzed the typical characteristics and the mathematical model of the interconnected power system with wind power and pointed out that conventional methods cannot directly identify the topological properties of small signal stability region boundaries. Secondly, to address this problem we adopted catastrophe theory and established a small signal stability region boundary model of the interconnected power system with large-scale wind power in two-dimensional power injection space, and extended it to multiple dimensions to obtain the boundary model in multidimensional power injection space. Thirdly, we qualitatively analyzed the changes in the topological properties of the small signal stability region boundary caused by large-scale wind power integration. Finally, we built simulation models with the DIgSILENT/PowerFactory software, and the simulation results verified the correctness and effectiveness of the proposed model.
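    For readers unfamiliar with the criterion that such a boundary encloses, the short Python sketch below checks small-signal stability of an arbitrary, purely illustrative linearized state matrix (not a model from the paper) by inspecting eigenvalue real parts and damping ratios.

        # Small-signal stability check on an illustrative linearized state matrix.
        import numpy as np

        A = np.array([[-0.5,  8.0],
                      [-8.0, -0.4]])  # invented example, not the paper's system model

        eigs = np.linalg.eigvals(A)
        stable = np.all(eigs.real < 0)  # stable iff every eigenvalue has negative real part
        for lam in eigs:
            zeta = -lam.real / abs(lam)  # damping ratio of the corresponding mode
            print(f"lambda = {lam.real:.3f} {lam.imag:+.3f}j, damping ratio = {zeta:.3f}")
        print("small-signal stable:", bool(stable))

    The boundary model in the paper characterizes, in power-injection space, where operating points cross from the region satisfying this criterion to the region violating it.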

  13. Large Scale GW Calculations on the Cori System

    Science.gov (United States)

    Deslippe, Jack; Del Ben, Mauro; da Jornada, Felipe; Canning, Andrew; Louie, Steven

    The NERSC Cori system, powered by 9000+ Intel Xeon-Phi processors, represents one of the largest HPC systems for open-science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node level and system-scale optimizations. We highlight multiple large scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program.

  14. Software development and maintenance: An approach for a large accelerator control system

    International Nuclear Information System (INIS)

    Casalegno, L.; Orsini, L.; Sicard, C.H.

    1990-01-01

    Maintenance costs presently form a large part of the total life-cycle cost of a software system. In case of large systems, while the costs of eliminating bugs, fixing analysis and design errors and introducing updates must be taken into account, the coherence of the system as a whole must be maintained while its parts are evolving independently. The need to devise and supply tools to aid programmers in housekeeping and updating has been strongly felt in the case of the LEP preinjector control system. A set of utilities has been implemented to create a safe interface between the programmers and the files containing the control software. Through this interface consistent naming schemes, common compiling and object-building procedures can be enforced, so that development and maintenance staff need not be concerned with the details of executable code generation. Procedures have been built to verify the consistency, generate maintenance diagnostics and automatically update object and executable files, taking into account multiple releases and versions. The tools and the techniques reported in this paper are of general use in the UNIX environment and have already been adopted for other projects. (orig.)

  15. Virtual Systems Pharmacology (ViSP) software for mechanistic system-level model simulations

    Directory of Open Access Journals (Sweden)

    Sergey Ermakov

    2014-10-01

    Multiple software programs are available for designing and running large-scale system-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs and so on. Therefore it is desirable to have a single platform that allows setting up and running large-scale simulations for models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time the full model specification is preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user’s particular needs, and the back-end database has been implemented to store and manage all aspects of the systems, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients.
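    The execution pattern described above, a self-contained model executable driven purely by input parameters, might be exercised roughly as in the following hedged sketch; the binary name, its command-line flags and the virtual-patient fields are assumptions for illustration and are not part of ViSP.

        # Hedged sketch: drive a compiled, parameter-only model executable over virtual patients.
        import json
        import subprocess
        from concurrent.futures import ThreadPoolExecutor

        MODEL_BINARY = "./diabetes_model"  # hypothetical compiled model executable

        virtual_patients = [
            {"body_weight_kg": 82, "baseline_hba1c": 8.1, "metformin_mg": 1000},
            {"body_weight_kg": 95, "baseline_hba1c": 9.0, "metformin_mg": 2000},
        ]

        def run_one(patient: dict) -> str:
            # Every model parameter is passed as an input argument to the executable.
            args = [MODEL_BINARY] + [f"--{k}={v}" for k, v in patient.items()]
            result = subprocess.run(args, capture_output=True, text=True, check=True)
            return result.stdout

        if __name__ == "__main__":
            with ThreadPoolExecutor(max_workers=4) as pool:
                for out in pool.map(run_one, virtual_patients):
                    print(json.dumps({"raw_output": out.strip()}))

    The design choice highlighted in the abstract is precisely this decoupling: once the model is compiled to an executable, the simulation platform never needs the original modeling tool.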

  16. VESPA: Very large-scale Evolutionary and Selective Pressure Analyses

    Directory of Open Access Journals (Sweden)

    Andrew E. Webb

    2017-06-01

    Background: Large-scale molecular evolutionary analyses of protein coding sequences require a number of preparatory inter-related steps, from finding gene families to generating alignments and phylogenetic trees and assessing selective pressure variation. Each phase of these analyses can represent significant challenges, particularly when working with entire proteomes (all protein coding sequences in a genome) from a large number of species. Methods: We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a UNIX environment. Results: We have benchmarked VESPA and our results show that the method is consistent, performs well on both large-scale and smaller-scale datasets, and produces results in line with previously published datasets. Discussion: Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/VESPA.

  17. Software for microcircuit systems

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1978-10-01

    Modern Large Scale Integration (LSI) microcircuits are meant to be programmed in order to control the function that they perform. The basics of microprogramming and new microcircuits have already been discussed. In this course, the methods of developing software for these microcircuits are explored. This generally requires a package of support software in order to assemble the microprogram, and also some amount of support software to test the microprograms and to test the microprogrammed circuit itself. 15 figures, 2 tables.

  18. Large-scale computing with Quantum Espresso

    International Nuclear Information System (INIS)

    Giannozzi, P.; Cavazzoni, C.

    2009-01-01

    This paper gives a short introduction to Quantum Espresso, a distribution of software for atomistic simulations in condensed-matter physics, chemical physics and materials science, and to its usage in large-scale parallel computing.

  19. Large-Scale Systems Control Design via LMI Optimization

    Czech Academy of Sciences Publication Activity Database

    Rehák, Branislav

    2015-01-01

    Roč. 44, č. 3 (2015), s. 247-253 ISSN 1392-124X Institutional support: RVO:67985556 Keywords : Combinatorial linear matrix inequalities * large-scale system * decentralized control Subject RIV: BC - Control Systems Theory Impact factor: 0.633, year: 2015

  20. A Chain Perspective on Large-scale Number Systems

    NARCIS (Netherlands)

    Grijpink, J.H.A.M.

    2012-01-01

    As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena

  1. Large-scale visualization projects for teaching software engineering.

    Science.gov (United States)

    Müller, Christoph; Reina, Guido; Burch, Michael; Weiskopf, Daniel

    2012-01-01

    The University of Stuttgart's software engineering major complements the traditional computer science major with more practice-oriented education. Two-semester software projects in various application areas offered by the university's different computer science institutes are a successful building block in the curriculum. With this realistic, complex project setting, students experience the practice of software engineering, including software development processes, technologies, and soft skills. In particular, visualization-based projects are popular with students. Such projects offer them the opportunity to gain profound knowledge that would hardly be possible with only regular lectures and homework assignments.

  2. Watchdog - a workflow management system for the distributed analysis of large-scale experimental data.

    Science.gov (United States)

    Kluge, Michael; Friedel, Caroline C

    2018-03-13

    The development of high-throughput experimental technologies, such as next-generation sequencing, has led to new challenges for handling, analyzing and integrating the resulting large and diverse datasets. Bioinformatical analysis of these data commonly requires a number of mutually dependent steps applied to numerous samples for multiple conditions and replicates. To support these analyses, a number of workflow management systems (WMSs) have been developed to allow automated execution of corresponding analysis workflows. Major advantages of WMSs are the easy reproducibility of results as well as the reusability of workflows or their components. In this article, we present Watchdog, a WMS for the automated analysis of large-scale experimental data. Main features include straightforward processing of replicate data, support for distributed computer systems, customizable error detection and manual intervention into workflow execution. Watchdog is implemented in Java and thus platform-independent and allows easy sharing of workflows and corresponding program modules. It provides a graphical user interface (GUI) for workflow construction using pre-defined modules as well as a helper script for creating new module definitions. Execution of workflows is possible using either the GUI or a command-line interface and a web-interface is provided for monitoring the execution status and intervening in case of errors. To illustrate its potential on a real-life example, a comprehensive workflow and modules for the analysis of RNA-seq experiments were implemented and are provided with the software in addition to simple test examples. Watchdog is a powerful and flexible WMS for the analysis of large-scale high-throughput experiments. We believe it will greatly benefit both users with and without programming skills who want to develop and apply bioinformatical workflows with reasonable overhead. The software, example workflows and a comprehensive documentation are freely
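    At its core, a WMS of this kind must run mutually dependent steps in dependency order, once per sample. The generic Python sketch below illustrates just that skeleton; the module names, dependencies and samples are invented, and it is far simpler than Watchdog's XML-based module system.

        # Generic dependency-ordered execution of workflow modules over samples (illustrative only).
        from graphlib import TopologicalSorter  # standard library, Python 3.9+

        workflow = {                      # module -> set of modules it depends on
            "trim_reads":  set(),
            "align":       {"trim_reads"},
            "count_reads": {"align"},
            "diff_expr":   {"count_reads"},
        }
        samples = ["cond1_rep1", "cond1_rep2", "cond2_rep1"]

        def run_module(module: str, sample: str) -> None:
            # Placeholder for launching the real tool, locally or on a cluster.
            print(f"[{sample}] running {module}")

        for module in TopologicalSorter(workflow).static_order():
            for sample in samples:
                run_module(module, sample)

    A production WMS adds what this sketch omits: per-module error detection, restart from failed steps, distributed execution and monitoring, which are exactly the features the abstract lists.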

  3. Software for microcircuit systems

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1978-01-01

    Modern Large Scale Integration (LSI) microcircuits are meant to be programmed in order to control the function that they perform. In the previous paper the author has already discussed the basics of microprogramming and has studied in some detail two types of new microcircuits. In this paper, methods of developing software for these microcircuits are explored. This generally requires a package of support software in order to assemble the microprogram, and also some amount of support software to test the microprograms and to test the microprogrammed circuit itself. (Auth.)

  4. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Wan, Lipeng [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wang, Feiyi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Oral, H. Sarp [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cao, Qing [Univ. of Tennessee, Knoxville, TN (United States)

    2014-11-01

    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as exponential failure rates) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.
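    As a hedged, drastically simplified illustration of the kind of what-if question such a simulator answers, the Python sketch below estimates, by naive Monte Carlo with exponential failure and repair times, the probability that a disk group exceeds its tolerated number of concurrent failures within a mission time; all parameters are invented and the model is far simpler than the framework described above.

        # Naive Monte Carlo estimate of data-loss probability for a redundant disk group.
        import random

        def group_data_loss(n_disks=10, tolerated_failures=2, mttf_h=1.0e6,
                            mttr_h=24.0, mission_h=5 * 8760, rng=random.Random(1)):
            t, failed = 0.0, 0
            while t < mission_h:
                fail_rate = (n_disks - failed) / mttf_h    # exponential per-disk failures
                repair_rate = failed / mttr_h              # exponential repairs
                total = fail_rate + repair_rate
                t += rng.expovariate(total)                # time to the next event
                if rng.random() < fail_rate / total:
                    failed += 1
                    if failed > tolerated_failures:
                        return True                        # more failures than redundancy covers
                else:
                    failed -= 1
            return False

        trials = 20000
        losses = sum(group_data_loss() for _ in range(trials))
        print(f"estimated P(data loss within 5 years) ~ {losses / trials:.4%}")

    The point of a full end-to-end framework is that such component-level models are composed with interconnections and failure propagation, rather than being evaluated in isolation as here.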

  5. Highly Scalable Trip Grouping for Large Scale Collective Transportation Systems

    DEFF Research Database (Denmark)

    Gidofalvi, Gyozo; Pedersen, Torben Bach; Risch, Tore

    2008-01-01

    Transportation-related problems, like road congestion, parking, and pollution, are increasing in most cities. In order to reduce traffic, recent work has proposed methods for vehicle sharing, for example for sharing cabs by grouping "closeby" cab requests and thus minimizing transportation cost and utilizing cab space. However, the methods published so far do not scale to large data volumes, which is necessary to facilitate large-scale collective transportation systems, e.g., ride-sharing systems for large cities. This paper presents highly scalable trip grouping algorithms, which generalize previous
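    The core idea of grouping "closeby" requests can be sketched with a simple greedy procedure, shown below in Python; the distance approximation, thresholds and data layout are invented, and the published algorithms are considerably more sophisticated and engineered to scale.

        # Greedy grouping of cab requests that are close in space and time (illustrative only).
        import math

        def grouped_trips(requests, max_dist_km=1.0, max_wait_min=10, capacity=4):
            groups = []
            for req in sorted(requests, key=lambda r: r["t_min"]):
                for g in groups:
                    anchor = g[0]
                    # Crude planar approximation: ~111 km per degree of latitude/longitude.
                    d = math.dist((req["lat"], req["lon"]), (anchor["lat"], anchor["lon"])) * 111
                    if (len(g) < capacity and d <= max_dist_km
                            and req["t_min"] - anchor["t_min"] <= max_wait_min):
                        g.append(req)
                        break
                else:
                    groups.append([req])   # no compatible group found; start a new one
            return groups

        requests = [
            {"id": 1, "lat": 59.334, "lon": 18.063, "t_min": 0},
            {"id": 2, "lat": 59.335, "lon": 18.065, "t_min": 4},
            {"id": 3, "lat": 59.360, "lon": 18.100, "t_min": 5},
        ]
        for g in grouped_trips(requests):
            print([r["id"] for r in g])

    Scanning every existing group for every new request is quadratic in the worst case, which is exactly the kind of cost that scalable grouping algorithms must avoid on city-sized request streams.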

  6. The use of production management techniques in the construction of large scale physics detectors

    International Nuclear Information System (INIS)

    Bazan, A.; Chevenier, G.; Estrella, F.

    1999-01-01

    The construction process of detectors for the Large Hadron Collider (LHC) experiments is large scale, heavily constrained by resource availability and evolves with time. As a consequence, changes in detector component design need to be tracked and quickly reflected in the construction process. With similar problems in industry, engineers employ so-called Product Data Management (PDM) systems to control access to documented versions of designs and managers employ so-called Workflow Management Software (WfMS) to coordinate production work processes. However, PDM and WfMS software are not generally integrated in industry. The scale of LHC experiments, like CMS, demands that industrial production techniques be applied in detector construction. This paper outlines the major functions and applications of the CRISTAL system (Cooperating Repositories and an Information System for Tracking Assembly Lifecycles) in use in CMS which successfully integrates PDM and WfMS techniques in managing large scale physics detector construction. This is the first time industrial production techniques have been deployed to this extent in detector construction.

  7. Software Engineering Support of the Third Round of Scientific Grand Challenge Investigations: Earth System Modeling Software Framework Survey

    Science.gov (United States)

    Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn; Zukor, Dorothy (Technical Monitor)

    2002-01-01

    One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document surveys numerous software frameworks for potential use in Earth science modeling. Several frameworks are evaluated in depth, including Parallel Object-Oriented Methods and Applications (POOMA), Cactus (from the relativistic physics community), Overture, Goddard Earth Modeling System (GEMS), the National Center for Atmospheric Research Flux Coupler, and UCLA/UCB Distributed Data Broker (DDB). Frameworks evaluated in less detail include ROOT, Parallel Application Workspace (PAWS), and Advanced Large-Scale Integrated Computational Environment (ALICE). A host of other frameworks and related tools are referenced in this context. The frameworks are evaluated individually and also compared with each other.

  8. Methods for Large-Scale Nonlinear Optimization.

    Science.gov (United States)

    1980-05-01

    STANFORD, CALIFORNIA 94305 METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright ... a typical iteration can be partitioned so that ... where B is an m x m basis matrix. This partition effectively divides the variables into three classes ... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library

  9. A review of large-scale solar heating systems in Europe

    International Nuclear Information System (INIS)

    Fisch, M.N.; Guigas, M.; Dalenback, J.O.

    1998-01-01

    Large-scale solar applications benefit from the effect of scale. Compared to small solar domestic hot water (DHW) systems for single-family houses, the solar heat cost can be cut by at least a third. The most interesting projects for replacing fossil fuels and reducing CO2 emissions are solar systems with seasonal storage in combination with gas or biomass boilers. In the framework of the EU-APAS project Large-scale Solar Heating Systems, thirteen existing plants in six European countries have been evaluated. The yearly solar gains of the systems are between 300 and 550 kWh per m² of collector area. The investment cost of solar plants with short-term storage varies from 300 up to 600 ECU per m². Systems with seasonal storage show investment costs twice as high. Results of studies concerning the market potential for solar heating plants, taking new collector concepts and industrial production into account, are presented. Site-specific studies and predesign of large-scale solar heating plants in six European countries for housing developments show a 50% cost reduction compared to existing projects. The cost-benefit ratio for the planned systems with long-term storage is between 0.7 and 1.5 ECU per kWh per year. (author)

  10. Report of the Workshop on Petascale Systems Integration for Large Scale Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Kramer, William T.C.; Walter, Howard; New, Gary; Engle, Tom; Pennington, Rob; Comes, Brad; Bland, Buddy; Tomlison, Bob; Kasdorf, Jim; Skinner, David; Regimbal, Kevin

    2007-10-01

    There are significant issues regarding Large Scale System integration that are not being addressed in other forums such as current research portfolios or vendor user groups. Unfortunately, the issues in the area of large-scale system integration often fall into a netherworld: not research, not facilities, not procurement, not operations, not user services. Taken together, these issues along with the impact of sub-optimal integration technology mean the time required to deploy, integrate and stabilize a large-scale system may consume up to 20 percent of the useful life of such systems. Improving the state of the art for large-scale systems integration has the potential to increase the scientific productivity of these systems. Sites have significant expertise, but there are no easy ways to leverage this expertise among them. Many issues inhibit the sharing of information, including available time and effort, as well as issues with sharing proprietary information. Vendors also benefit in the long run from the solutions to issues detected during site testing and integration. There is a great deal of enthusiasm for making large-scale system integration a full-fledged partner along with the other major thrusts supported by funding agencies in the definition, design, and use of petascale systems. Integration technology and issues should have a full 'seat at the table' as petascale and exascale initiatives and programs are planned. The workshop attendees identified a wide range of issues and suggested paths forward. Pursuing these with funding opportunities and innovation offers the opportunity to dramatically improve the state of large-scale system integration.

  11. Design techniques for large scale linear measurement systems

    International Nuclear Information System (INIS)

    Candy, J.V.

    1979-03-01

    Techniques to design measurement schemes for systems modeled by large scale linear time invariant systems, i.e., physical systems modeled by a large number (> 5) of ordinary differential equations, are described. The techniques are based on transforming the physical system model to a coordinate system facilitating the design and then transforming back to the original coordinates. An example of a three-stage, four-species, extraction column used in the reprocessing of spent nuclear fuel elements is presented. The basic ideas are briefly discussed in the case of noisy measurements. An example using a plutonium nitrate storage vessel (reprocessing) with measurement uncertainty is also presented

  12. A new modification of summary-based analysis method for large software system testing

    Directory of Open Access Journals (Sweden)

    A. V. Sidorin

    2015-01-01

    Full Text Available As automated testing tools become common practice, thorough computer-aided testing of large software systems, including inter-component interfaces, is required. To achieve good coverage, one should overcome the scalability problems of different methods of analysis. These problems arise from the impossibility of analyzing all execution paths. The objective of this research is to build a method for inter-procedural analysis whose efficiency enables us to analyse large software systems (such as the Android OS codebase) as a whole in a reasonable time (no more than 4 hours). This article reviews existing methods of software analysis for detecting potential defects. It focuses on the symbolic execution method since it is widely used both in static analysis of source code and in hybrid analysis of object files and intermediate representation (concolic testing). The method of symbolic execution involves separating the set of input data values into equivalence classes while choosing an execution path. The paper also considers the advantages of this method and its shortcomings. One of the main scalability problems is related to inter-procedural analysis. Analysis time grows rapidly if an inlining method is used for inter-procedural analysis. Therefore, this work proposes a summary-based analysis method to solve scalability problems. Clang Static Analyzer, an open source static analyzer (a part of the LLVM project), has been chosen as a target system. It allows us to compare the performance of inlining-based and summary-based inter-procedural analysis. A mathematical model for preliminary estimations is described in order to identify possible factors of performance improvement.
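
    The trade-off between inlining and summary-based inter-procedural analysis described above can be illustrated with a toy example. The sketch below is ours, not code from Clang Static Analyzer: each function of a tiny straight-line IR is analysed exactly once over an interval-style domain and condensed into a cached affine summary that call sites reuse, whereas inlining would re-analyse the callee at every call.

    ```python
    # Toy illustration (not Clang Static Analyzer): summary-based inter-procedural
    # analysis over an interval domain. Each function body is analysed exactly once
    # and condensed into an affine summary (m, c) meaning "input x maps to m*x + c";
    # call sites reuse the cached summary instead of re-analysing the callee.
    from functools import lru_cache

    # Toy IR: each function transforms a single integer through a list of ops.
    FUNCTIONS = {
        "scale":    [("mul", 3)],
        "shift":    [("add", 10)],
        "pipeline": [("call", "scale"), ("call", "shift"), ("add", 1)],
    }

    @lru_cache(maxsize=None)
    def summary(name):
        """Analyse `name` once and return its affine summary (m, c)."""
        m, c = 1, 0                          # identity map
        for kind, arg in FUNCTIONS[name]:
            if kind == "add":
                c += arg
            elif kind == "mul":              # positive factors only, to keep it simple
                m, c = m * arg, c * arg
            elif kind == "call":             # plug in the callee's cached summary
                cm, cc = summary(arg)
                m, c = cm * m, cm * c + cc
            else:
                raise ValueError(kind)
        return m, c

    def apply_summary(name, interval):
        lo, hi = interval
        m, c = summary(name)
        return (m * lo + c, m * hi + c)

    if __name__ == "__main__":
        print(apply_summary("pipeline", (0, 5)))   # -> (11, 26)
    ```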

  13. Large Scale Self-Organizing Information Distribution System

    National Research Council Canada - National Science Library

    Low, Steven

    2005-01-01

    This project investigates issues in "large-scale" networks. Here "large-scale" refers to networks with a large number of high-capacity nodes and transmission links, and shared by a large number of users...

  14. Modeling and Coordinated Control Strategy of Large Scale Grid-Connected Wind/Photovoltaic/Energy Storage Hybrid Energy Conversion System

    Directory of Open Access Journals (Sweden)

    Lingguo Kong

    2015-01-01

    Full Text Available An AC-linked large scale wind/photovoltaic (PV)/energy storage (ES) hybrid energy conversion system for grid-connected application was proposed in this paper. Wind energy conversion system (WECS) and PV generation system are the primary power sources of the hybrid system. The ES system, including battery and fuel cell (FC), is used as a backup and a power regulation unit to ensure continuous power supply and to take care of the intermittent nature of wind and photovoltaic resources. Static synchronous compensator (STATCOM) is employed to support the AC-linked bus voltage and improve low voltage ride through (LVRT) capability of the proposed system. An overall power coordinated control strategy is designed to manage real-power and reactive-power flows among the different energy sources, the storage unit, and the STATCOM system in the hybrid system. A simulation case study carried out on the Western System Coordinating Council (WSCC) 3-machine 9-bus test system for the large scale hybrid energy conversion system has been developed using the DIgSILENT/Power Factory software platform. The hybrid system performance under different scenarios has been verified by simulation studies using practical load demand profiles and real weather data.
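
    The paper's coordinated control strategy itself is not reproduced here. The sketch below only illustrates the kind of real-power balance rule such a coordination layer applies at each control step: renewable surplus charges the battery, a deficit is covered first by the battery and then by the fuel cell, and the remainder is exchanged with the grid. All names, ratings and numbers are illustrative assumptions, not values from the study.

    ```python
    # Illustrative sketch only (not the paper's controller): one step of a simple
    # real-power coordination rule for a wind/PV/battery/fuel-cell hybrid plant.
    from dataclasses import dataclass

    @dataclass
    class DispatchStep:
        battery_kw: float    # > 0 discharging, < 0 charging
        fuel_cell_kw: float  # >= 0
        grid_kw: float       # > 0 export to grid, < 0 import

    def coordinate(wind_kw, pv_kw, load_kw, batt_max_kw=500.0, fc_max_kw=300.0):
        """One control step of a simple power-balance rule (illustrative only)."""
        surplus = wind_kw + pv_kw - load_kw
        if surplus >= 0:
            charge = min(surplus, batt_max_kw)                   # absorb surplus first
            return DispatchStep(-charge, 0.0, surplus - charge)  # export the rest
        deficit = -surplus
        batt = min(deficit, batt_max_kw)                         # battery covers first
        fc = min(deficit - batt, fc_max_kw)                      # then the fuel cell
        return DispatchStep(batt, fc, -(deficit - batt - fc))    # import any remainder

    if __name__ == "__main__":
        # 1,000 kW of renewables against a 1,900 kW load: battery at its 500 kW
        # limit, fuel cell at 300 kW, and the remaining 100 kW imported from the grid.
        print(coordinate(wind_kw=800.0, pv_kw=200.0, load_kw=1900.0))
    ```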

  15. Large-Scale 3D Printing: The Way Forward

    Science.gov (United States)

    Jassmi, Hamad Al; Najjar, Fady Al; Ismail Mourad, Abdel-Hamid

    2018-03-01

    Research on small-scale 3D printing has rapidly evolved, where numerous industrial products have been tested and successfully applied. Nonetheless, research on large-scale 3D printing, directed to large-scale applications such as construction and automotive manufacturing, still demands a great deal of effort. Large-scale 3D printing is considered an interdisciplinary topic and requires establishing a blended knowledge base from numerous research fields including structural engineering, materials science, mechatronics, software engineering, artificial intelligence and architectural engineering. This review article summarizes key topics of relevance to new research trends on large-scale 3D printing, particularly pertaining to (1) technological solutions of additive construction (i.e. the 3D printers themselves), (2) materials science challenges, and (3) new design opportunities.

  16. Large Scale Landslide Database System Established for the Reservoirs in Southern Taiwan

    Science.gov (United States)

    Tsai, Tsai-Tsung; Tsai, Kuang-Jung; Shieh, Chjeng-Lun

    2017-04-01

    Typhoon Morakot, which seriously attacked southern Taiwan, awakened public awareness of large scale landslide disasters. Large scale landslide disasters produce large quantities of sediment, which negatively affect the operating functions of reservoirs. In order to reduce the risk of these disasters within the study area, the establishment of a database for hazard mitigation / disaster prevention is necessary. Real-time data and numerous archives of engineering data, environmental information, photos, and video not only help people make appropriate decisions, but their processing and value-adding are also a major concern. The study tried to define some basic data formats / standards from the various types of data collected about these reservoirs and then provide a management platform based on these formats / standards. Meanwhile, in order to satisfy practicality and convenience, the large scale landslide disaster database system is built with both information-providing and information-receiving capabilities, so that users can work with it on different types of devices. IT technology progresses extremely quickly, and even the most modern system may become outdated at any time. In order to provide long-term service, the system reserves the possibility of user-defined data formats / standards and user-defined system structures. The system established by this study is based on the HTML5 standard language and uses responsive web design technology, which allows users to easily handle and further develop this large scale landslide disaster database system.

  17. Large Scale Emerging Properties from Non Hamiltonian Complex Systems

    Directory of Open Access Journals (Sweden)

    Marco Bianucci

    2017-06-01

    Full Text Available The concept of “large scale” depends obviously on the phenomenon we are interested in. For example, in the field of the foundation of Thermodynamics from microscopic dynamics, the spatial and time large scales are of the order of fractions of millimetres and microseconds, respectively, or less, and are defined in relation to the spatial and time scales of the microscopic systems. In large-scale oceanography or global climate dynamics problems the scales of interest are of the order of thousands of kilometres in space and many years in time, and are compared to the local and daily/monthly scales of atmosphere and ocean dynamics. In all these cases a Zwanzig projection approach is, at least in principle, an effective tool to obtain a class of universal smooth “large scale” dynamics for the few degrees of freedom of interest, starting from the complex dynamics of the whole (usually many-degrees-of-freedom) system. The projection approach leads to a very complex calculus with differential operators, which is drastically simplified when the basic dynamics of the system of interest is Hamiltonian, as happens in Foundation of Thermodynamics problems. However, in geophysical Fluid Dynamics, Biology, and in most physical problems the building-block fundamental equations of motion have a non-Hamiltonian structure. Thus, to continue to apply the useful projection approach also in these cases, we exploit the generalization of the Hamiltonian formalism given by the Lie algebra of dissipative differential operators. In this way, we are able to analytically deal with the series of differential operators stemming from the projection approach applied to these general cases. Then we apply this formalism to obtain some relevant results concerning the statistical properties of the El Niño Southern Oscillation (ENSO).
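
    For readers unfamiliar with the projection approach invoked above, a standard form of the Zwanzig (Mori-Zwanzig) identity is sketched below. The notation is generic rather than the paper's: L is a Liouville-type generator, P the projector onto the variables of interest, and Q = 1 - P; the paper's contribution is precisely to extend this machinery to non-Hamiltonian generators via a Lie algebra of dissipative differential operators.

    \[
    \frac{d}{dt}\, e^{t\mathcal{L}} A
      = e^{t\mathcal{L}} P\mathcal{L} A
      + \int_0^{t} e^{(t-s)\mathcal{L}}\, P\mathcal{L}\, e^{s Q\mathcal{L}} Q\mathcal{L} A \,\mathrm{d}s
      + e^{t Q\mathcal{L}} Q\mathcal{L} A ,
    \]

    where the three terms are usually read as the projected large-scale drift, the memory contribution, and the fluctuating (noise) term, respectively.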

  18. Engineering management of large scale systems

    Science.gov (United States)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long-range perspective. Long-range planning has great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of the management of large-scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  19. The composing technique of fast and large scale nuclear data acquisition and control system with single chip microcomputers and PC computers

    International Nuclear Information System (INIS)

    Xu Zurun; Wu Shiying; Liu Haitao; Yao Yangsen; Wang Yingguan; Yang Chaowen

    1998-01-01

    The technique of employing single-chip microcomputers and PC computers to compose a fast and large-scale nuclear data acquisition and control system was discussed in detail. The optimum composition mode of this kind of system, the acquisition and control circuit unit based on single-chip microcomputers, the real-time communication methods, and the software composition under Windows 3.2 were also described. One-, two- and three-dimensional spectra measured by this system were demonstrated

  20. The composing technique of fast and large scale nuclear data acquisition and control system with single chip microcomputers and PC computers

    International Nuclear Information System (INIS)

    Xu Zurun; Wu Shiying; Liu Haitao; Yao Yangsen; Wang Yingguan; Yang Chaowen

    1997-01-01

    The technique of employing single-chip microcomputers and PC computers to compose a fast and large-scale nuclear data acquisition and control system was discussed in detail. The optimum composition mode of this kind of system, the acquisition and control circuit unit based on single-chip microcomputers, the real-time communication methods, and the software composition under Windows 3.2 were also described. One-, two- and three-dimensional spectra measured by this system were demonstrated

  1. Application of neural networks to software quality modeling of a very large telecommunications system.

    Science.gov (United States)

    Khoshgoftaar, T M; Allen, E B; Hudepohl, J P; Aud, S J

    1997-01-01

    Society relies on telecommunications to such an extent that telecommunications software must have high reliability. Enhanced measurement for early risk assessment of latent defects (EMERALD) is a joint project of Nortel and Bell Canada for improving the reliability of telecommunications software products. This paper reports a case study of neural-network modeling techniques developed for the EMERALD system. The resulting neural network is currently in the prototype testing phase at Nortel. Neural-network models can be used to identify fault-prone modules for extra attention early in development, and thus reduce the risk of operational problems with those modules. We modeled a subset of modules representing over seven million lines of code from a very large telecommunications software system. The set consisted of those modules reused with changes from the previous release. The dependent variable was membership in the class of fault-prone modules. The independent variables were principal components of nine measures of software design attributes. We compared the neural-network model with a nonparametric discriminant model and found the neural-network model had better predictive accuracy.
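
    The EMERALD models are proprietary and not reproduced here; the fragment below is only a hedged scikit-learn illustration of the general recipe the abstract describes, namely feeding principal components of nine software design measures to a small neural-network classifier of fault-prone modules. The data are synthetic stand-ins, not the Nortel measurements.

    ```python
    # Hedged sketch (not the EMERALD model): principal components of module design
    # metrics feeding a small neural-network classifier of fault-prone modules.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_modules, n_metrics = 1000, 9            # nine design measures, as in the study
    X = rng.normal(size=(n_modules, n_metrics))
    # Synthetic ground truth: modules with large combined metric values are fault-prone.
    y = (X.sum(axis=1) + rng.normal(scale=1.0, size=n_modules) > 2.0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = make_pipeline(
        StandardScaler(),
        PCA(n_components=5),                  # principal components of the metrics
        MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
    )
    model.fit(X_train, y_train)
    print("holdout accuracy:", model.score(X_test, y_test))
    ```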

  2. Dynamic Reactive Power Compensation of Large Scale Wind Integrated Power System

    DEFF Research Database (Denmark)

    Rather, Zakir Hussain; Chen, Zhe; Thøgersen, Paul

    2015-01-01

    Due to the progressive displacement of conventional power plants by wind turbines, the dynamic security of large scale wind integrated power systems gets significantly compromised. In this paper we first highlight the importance of dynamic reactive power support/voltage security in large scale wind integrated power systems with least presence of conventional power plants. Then we propose a mixed integer dynamic optimization based method for optimal dynamic reactive power allocation in large scale wind integrated power systems. One of the important aspects of the proposed methodology is that unlike... wind turbines, especially wind farms with additional grid support functionalities like dynamic support (e.g. dynamic reactive power support etc.), and ii) refurbishment of existing conventional central power plants to synchronous condensers could be one of the efficient, reliable and cost-effective options...

  3. Distributed and hierarchical control techniques for large-scale power plant systems

    International Nuclear Information System (INIS)

    Raju, G.V.S.; Kisner, R.A.

    1985-08-01

    In large-scale systems, integrated and coordinated control functions are required to maximize plant availability, to allow maneuverability through various power levels, and to meet externally imposed regulatory limitations. Nuclear power plants are large-scale systems. Prime subsystems are those that contribute directly to the behavior of the plant's ultimate output. The prime subsystems in a nuclear power plant include reactor, primary and intermediate heat transport, steam generator, turbine generator, and feedwater system. This paper describes and discusses the continuous-variable control system developed to supervise prime plant subsystems for optimal control and coordination

  4. Engineering large-scale agent-based systems with consensus

    Science.gov (United States)

    Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.

    1994-01-01

    The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge based agents (KBA) which engage in a collaborative problem solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefits of this approach are that requirements are traceable into design components and code thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.

  5. Security System Software

    Science.gov (United States)

    1993-01-01

    C Language Integrated Production System (CLIPS), a NASA-developed expert systems program, has enabled a security systems manufacturer to design a new generation of hardware. C.CURESystem 1 Plus, manufactured by Software House, is a software-based system that is used with a variety of access control hardware at installations around the world. Users can manage large amounts of information, solve unique security problems and control entry and time scheduling. CLIPS acts as an information management tool when accessed by C.CURESystem 1 Plus. It asks questions about the hardware and, when given the answers, recommends possible quick solutions that can be applied by non-expert persons.

  6. Software architecture for the ORNL large-coil test facility data system

    International Nuclear Information System (INIS)

    Blair, E.T.; Baylor, L.R.

    1986-01-01

    The VAX-based data-acquisition system for the International Fusion Superconducting Magnet Test Facility (IFSMTF) at Oak Ridge National Laboratory (ORNL) is a second-generation system that evolved from a PDP-11/60-based system used during the initial phase of facility testing. The VAX-based software represents a layered implementation that provides integrated access to all of the data sources within the system, decoupling end-user data retrieval from various front-end data sources through a combination of software architecture and instrumentation data bases. Independent VAX processes manage the various front-end data sources, each being responsible for controlling, monitoring, acquiring, and disposing data and control parameters for access from the data retrieval software. This paper describes the software architecture and the functionality incorporated into the various layers of the data system

  7. Software architecture for the ORNL large coil test facility data system

    International Nuclear Information System (INIS)

    Blair, E.T.; Baylor, L.R.

    1986-01-01

    The VAX-based data acquisition system for the International Fusion Superconducting Magnet Test Facility (IFSMTF) at Oak Ridge National Laboratory (ORNL) is a second-generation system that evolved from a PDP-11/60-based system used during the initial phase of facility testing. The VAX-based software represents a layered implementation that provides integrated access to all of the data sources within the system, decoupling end-user data retrieval from various front-end data sources through a combination of software architecture and instrumentation data bases. Independent VAX processes manage the various front-end data sources, each being responsible for controlling, monitoring, acquiring and disposing data and control parameters for access from the data retrieval software. This paper describes the software architecture and the functionality incorporated into the various layers of the data system

  8. Testing, development and demonstration of large scale solar district heating systems

    DEFF Research Database (Denmark)

    Furbo, Simon; Fan, Jianhua; Perers, Bengt

    2015-01-01

    In 2013-2014 the project “Testing, development and demonstration of large scale solar district heating systems” was carried out within the Sino-Danish Renewable Energy Development Programme, the so-called RED programme jointly developed by the Chinese and Danish governments. In the project, Danish know-how on solar heating plants and solar heating test technology has been transferred from Denmark to China, large solar heating systems have been promoted in China, test capabilities for solar collectors and large-scale solar heating systems have been improved in China, and Danish-Chinese cooperation...

  9. A new system of labour management in African large-scale agriculture?

    DEFF Research Database (Denmark)

    Gibbon, Peter; Riisgaard, Lone

    2014-01-01

    This paper applies a convention theory (CT) approach to the analysis of labour management systems in African large-scale farming. The reconstruction of previous analyses of high-value crop production on large-scale farms in Africa in terms of CT suggests that, since 1980–95, labour management has...

  10. Potential for large-scale solar collector system to offset carbon-based heating in the Ontario greenhouse sector

    Science.gov (United States)

    Semple, Lucas M.; Carriveau, Rupp; Ting, David S.-K.

    2018-04-01

    In the Ontario greenhouse sector the misalignment of available solar radiation during the summer months and large heating demand during the winter months makes solar thermal collector systems an unviable option without some form of seasonal energy storage. Information obtained from Ontario greenhouse operators has shown that over 20% of annual natural gas usage occurs during the summer months for greenhouse pre-heating prior to sunrise. A transient model of the greenhouse microclimate and indoor conditioning systems is carried out using TRNSYS software and validated with actual natural gas usage data. A large-scale solar thermal collector system is then incorporated and found to reduce the annual heating energy demand by approximately 35%. The inclusion of the collector system correlates to a reduction of about 120 tonnes of CO2 equivalent emissions per acre of greenhouse per year. System payback period is discussed considering the benefits of a future Ontario carbon tax.

  11. Glass badge dosimetry system for large scale personal monitoring

    International Nuclear Information System (INIS)

    Norimichi Juto

    2002-01-01

    Glass Badge using silver-activated phosphate glass dosemeter was specially developed for large-scale personal monitoring. Dosimetry systems such as an automatic reader and a dose equivalent calculation algorithm were developed at the same time to achieve reasonable personal monitoring. In large-scale personal monitoring, both precision of dosimetry and confidence in handling large amounts of personal data become very important. The silver-activated phosphate glass dosemeter has basically excellent characteristics for dosimetry, such as homogeneous and stable sensitivity, negligible fading and so on. Glass Badge was designed to measure photons in the 10 keV - 10 MeV range, beta particles in the 300 keV - 3 MeV range, and neutrons in the 0.025 eV - 15 MeV range by an included SSNTD. The developed Glass Badge dosimetry system has not only these basic characteristics but also many features to keep good precision for dosimetry and data handling. In this presentation, features of Glass Badge dosimetry systems and examples of practical personal monitoring systems will be presented. (Author)

  12. Parallel Index and Query for Large Scale Data Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat,; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie

    2011-07-18

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize the underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to the processing of a massive 50TB dataset generated by a large scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
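
    FastBit and FastQuery themselves are not shown here; the toy fragment below only illustrates the core idea behind the bitmap-style indexing such tools build on: precomputing one boolean mask per value bin so that a range query over a large array reduces to a few cheap bitwise operations plus a small refinement step. The data and bin layout are invented for illustration.

    ```python
    # Toy bitmap-index sketch (not FastBit/FastQuery): one boolean mask per value
    # bin lets range queries be answered with bitwise OR instead of rescanning data.
    import numpy as np

    def build_bitmap_index(values, bin_edges):
        bins = np.digitize(values, bin_edges)             # bin id per element
        return {b: (bins == b) for b in np.unique(bins)}  # one bitmask per bin

    def range_query(index, bin_edges, lo, hi):
        """Return a boolean mask of elements whose bin overlaps [lo, hi)."""
        wanted_lo, wanted_hi = np.digitize([lo, hi], bin_edges)
        mask = np.zeros(next(iter(index.values())).shape, dtype=bool)
        for b, bitmap in index.items():
            if wanted_lo <= b <= wanted_hi:
                mask |= bitmap
        return mask

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        energy = rng.uniform(0.0, 10.0, size=1_000_000)   # stand-in particle data
        edges = np.linspace(0.0, 10.0, 101)               # 100 equal-width bins
        index = build_bitmap_index(energy, edges)
        candidates = range_query(index, edges, 7.5, 9.0)  # coarse candidate set
        hits = energy[candidates]
        hits = hits[(hits >= 7.5) & (hits < 9.0)]         # exact refinement
        print(len(hits))
    ```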

  13. Security and VO management capabilities in a large-scale Grid operating system

    OpenAIRE

    Aziz, Benjamin; Sporea, Ioana

    2014-01-01

    This paper presents a number of security and VO management capabilities in a large-scale distributed Grid operating system. The capabilities formed the basis of the design and implementation of a number of security and VO management services in the system. The main aim of the paper is to provide some idea of the various functionality cases that need to be considered when designing similar large-scale systems in the future.

  14. Implementation of a large-scale hospital information infrastructure for multi-unit health-care services.

    Science.gov (United States)

    Yoo, Sun K; Kim, Dong Keun; Kim, Jung C; Park, Youn Jung; Chang, Byung Chul

    2008-01-01

    With the increase in demand for high quality medical services, the need for an innovative hospital information system has become essential. An improved system has been implemented in all hospital units of the Yonsei University Health System. Interoperability between multi-units required appropriate hardware infrastructure and software architecture. This large-scale hospital information system encompassed PACS (Picture Archiving and Communications Systems), EMR (Electronic Medical Records) and ERP (Enterprise Resource Planning). It involved two tertiary hospitals and 50 community hospitals. The monthly data production rate by the integrated hospital information system is about 1.8 TByte and the total quantity of data produced so far is about 60 TByte. Large scale information exchange and sharing will be particularly useful for telemedicine applications.

  15. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  16. Software architecture for the ORNL large coil test facility data system

    International Nuclear Information System (INIS)

    Blair, E.T.; Baylor, L.R.

    1986-01-01

    The VAX-based data acquisition system for the International Fusion Superconducting Magnet Test Facility (IFSMTF) at Oak Ridge National Laboratory (ORNL) is a second-generation system that evolved from a PDP-11/60-based system used during the initial phase of facility testing. The VAX-based software represents a layered implementation that provides integrated access to all of the data sources within the system, decoupling end-user data retrieval from various front-end data sources through a combination of software architecture and instrumentation data bases. Independent VAX processes manage the various front-end data sources, each being responsible for controlling, monitoring, acquiring, and disposing data and control parameters for access from the data retrieval software

  17. Implementing Large Projects in Software Engineering Courses

    Science.gov (United States)

    Coppit, David

    2006-01-01

    In software engineering education, large projects are widely recognized as a useful way of exposing students to the real-world difficulties of team software development. But large projects are difficult to put into practice. First, educators rarely have additional time to manage software projects. Second, classrooms have inherent limitations that…

  18. Economic viability of large-scale fusion systems

    Energy Technology Data Exchange (ETDEWEB)

    Helsley, Charles E., E-mail: cehelsley@fusionpowercorporation.com; Burke, Robert J.

    2014-01-01

    A typical modern power generation facility has a capacity of about 1 GWe (Gigawatt electric) per unit. This works well for fossil fuel plants and for most fission facilities for it is large enough to support the sophisticated generation infrastructure but still small enough to be accommodated by most utility grid systems. The size of potential fusion power systems may demand a different viewpoint. The compression and heating of the fusion fuel for ignition requires a large driver, even if it is necessary for only a few microseconds or nanoseconds per energy pulse. The economics of large systems, that can effectively use more of the driver capacity, need to be examined. The assumptions used in this model are specific for the Fusion Power Corporation (FPC) SPRFD process but could be generalized for any system. We assume that the accelerator is the most expensive element of the facility and estimate its cost to be $20 billion. Ignition chambers and fuel handling facilities are projected to cost $1.5 billion each with up to 10 to be serviced by one accelerator. At first this seems expensive but that impression has to be tempered by the energy output that is equal to 35 conventional nuclear plants. This means the cost per kWh is actually low. Using the above assumptions and industry data for generators and heat exchange systems, we conclude that a fully utilized fusion system will produce marketable energy at roughly one half the cost of our current means of generating an equivalent amount of energy from conventional fossil fuel and/or fission systems. Even fractionally utilized systems, i.e. systems used at 25% of capacity, can be cost effective in many cases. In conclusion, SPRFD systems can be scaled to a size and configuration that can be economically viable and very competitive in today's energy market. Electricity will be a significant element in the product mix but synthetic fuels and water may also need to be incorporated to make the large system
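
    The abstract's capital figures can be combined into a rough back-of-envelope check. The arithmetic below only restates numbers quoted above (a $20 billion accelerator, ten $1.5 billion chambers, and output equated to 35 conventional plants of roughly 1 GWe each); the closing comparison with fission capital costs is an assumption for context, not a figure from the paper.

    ```python
    # Back-of-envelope check of the capital figures quoted in the abstract.
    accelerator_cost_b = 20.0          # $ billion, from the abstract
    chambers = 10                      # chambers served by one accelerator
    chamber_cost_b = 1.5               # $ billion each, from the abstract
    equivalent_plants = 35             # "equal to 35 conventional nuclear plants"
    plant_size_gwe = 1.0               # typical unit size cited in the abstract

    total_capital_b = accelerator_cost_b + chambers * chamber_cost_b
    capacity_gwe = equivalent_plants * plant_size_gwe
    print(f"total capital: ${total_capital_b:.0f}B for ~{capacity_gwe:.0f} GWe")
    print(f"=> roughly ${total_capital_b / capacity_gwe:.1f}B per GWe of capacity")
    # Assumed comparison (not from the paper): new fission plants are commonly
    # quoted at several $B per GWe, which is the sense in which the per-kWh cost
    # of a fully utilized fusion system "is actually low".
    ```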

  19. Economic viability of large-scale fusion systems

    International Nuclear Information System (INIS)

    Helsley, Charles E.; Burke, Robert J.

    2014-01-01

    A typical modern power generation facility has a capacity of about 1 GWe (Gigawatt electric) per unit. This works well for fossil fuel plants and for most fission facilities for it is large enough to support the sophisticated generation infrastructure but still small enough to be accommodated by most utility grid systems. The size of potential fusion power systems may demand a different viewpoint. The compression and heating of the fusion fuel for ignition requires a large driver, even if it is necessary for only a few microseconds or nanoseconds per energy pulse. The economics of large systems, that can effectively use more of the driver capacity, need to be examined. The assumptions used in this model are specific for the Fusion Power Corporation (FPC) SPRFD process but could be generalized for any system. We assume that the accelerator is the most expensive element of the facility and estimate its cost to be $20 billion. Ignition chambers and fuel handling facilities are projected to cost $1.5 billion each with up to 10 to be serviced by one accelerator. At first this seems expensive but that impression has to be tempered by the energy output that is equal to 35 conventional nuclear plants. This means the cost per kWh is actually low. Using the above assumptions and industry data for generators and heat exchange systems, we conclude that a fully utilized fusion system will produce marketable energy at roughly one half the cost of our current means of generating an equivalent amount of energy from conventional fossil fuel and/or fission systems. Even fractionally utilized systems, i.e. systems used at 25% of capacity, can be cost effective in many cases. In conclusion, SPRFD systems can be scaled to a size and configuration that can be economically viable and very competitive in today's energy market. Electricity will be a significant element in the product mix but synthetic fuels and water may also need to be incorporated to make the large system economically

  20. Large-scale digitizer system, analog converters

    International Nuclear Information System (INIS)

    Althaus, R.F.; Lee, K.L.; Kirsten, F.A.; Wagner, L.J.

    1976-10-01

    Analog to digital converter circuits that are based on the sharing of common resources, including those which are critical to the linearity and stability of the individual channels, are described. Simplicity of circuit composition is valued over other more costly approaches. These are intended to be applied in a large-scale processing and digitizing system for use with high-energy physics detectors such as drift-chambers or phototube-scintillator arrays. Signal distribution techniques are of paramount importance in maintaining adequate signal-to-noise ratio. Noise in both amplitude and time-jitter senses is held sufficiently low so that conversions with 10-bit charge resolution and 12-bit time resolution are achieved

  1. Innovation Initiatives in Large Software Companies

    DEFF Research Database (Denmark)

    Edison, Henry; Wang, Xiaofeng; Jabangwe, Ronald

    2018-01-01

    Context: To keep the competitive advantage and adapt to changes in the market and technology, companies need to innovate in an organised, purposeful and systematic manner. However, due to their size and complexity, large companies tend to focus on the structure in maintaining their business, which can potentially lower their agility to innovate. Objective: The aims of this study are to provide an overview of the current research on innovation initiatives and to identify the challenges of implementing those initiatives in the context of large software companies. Method: The investigation... empirical studies on innovation initiatives in the context of large software companies. A total of 7 studies are conducted in the context of large software companies, which reported 5 types of initiatives: intrapreneurship, bootlegging, internal venture, spin-off and crowdsourcing. Our study offers three...

  2. Simple Crosscutting Concerns Are Not So Simple : Analysing Variability in Large-Scale Idioms-Based Implementations

    NARCIS (Netherlands)

    Bruntink, M.; Van Deursen, A.; d’Hondt, M.; Tourwé, T.

    2007-01-01

    This paper describes a method for studying idioms-based implementations of crosscutting concerns, and our experiences with it in the context of a real-world, large-scale embedded software system. In particular, we analyse a seemingly simple concern, tracing, and show that it exhibits significant

  3. Workshop Report on Additive Manufacturing for Large-Scale Metal Components - Development and Deployment of Metal Big-Area-Additive-Manufacturing (Large-Scale Metals AM) System

    Energy Technology Data Exchange (ETDEWEB)

    Babu, Sudarsanam Suresh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Manufacturing Demonstration Facility; Love, Lonnie J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Manufacturing Demonstration Facility; Peter, William H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Manufacturing Demonstration Facility; Dehoff, Ryan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Manufacturing Demonstration Facility

    2016-05-01

    Additive manufacturing (AM) is considered an emerging technology that is expected to transform the way industry can make low-volume, high value complex structures. This disruptive technology promises to replace legacy manufacturing methods for the fabrication of existing components in addition to bringing new innovation for new components with increased functional and mechanical properties. This report outlines the outcome of a workshop on large-scale metal additive manufacturing held at Oak Ridge National Laboratory (ORNL) on March 11, 2016. The charter for the workshop was outlined by the Department of Energy (DOE) Advanced Manufacturing Office program manager. The status and impact of the Big Area Additive Manufacturing (BAAM) for polymer matrix composites was presented as the background motivation for the workshop. Following this, the extension of the underlying technology to low-cost metals was proposed with the following goals: (i) high deposition rates (approaching 100 lbs/h); (ii) low cost (<$10/lb) for steel, iron, aluminum, nickel, as well as higher-cost titanium; (iii) large components (major axis greater than 6 ft) and (iv) compliance with property requirements. The above concept was discussed in depth by representatives from different industrial sectors including welding, metal fabrication machinery, energy, construction, aerospace and heavy manufacturing. In addition, DOE’s newly launched High Performance Computing for Manufacturing (HPC4MFG) program was reviewed. This program will apply thermo-mechanical models to elucidate deeper understanding of the interactions between design, process, and materials during additive manufacturing. Following these presentations, all the attendees took part in a brainstorming session where everyone identified the top 10 challenges in large-scale metal AM from their own perspective. The feedback was analyzed and grouped into different categories including (i) CAD to PART software, (ii) selection of energy source, (iii

  4. Distributed system for large-scale remote research

    International Nuclear Information System (INIS)

    Ueshima, Yutaka

    2002-01-01

    In advanced photon research, large-scale simulations and high-resolution observations are powerful tools. In numerical and real experiments, a real-time visualization and steering system is considered a promising method of data analysis. This approach is valid for typical one-off analyses or low-cost experiments and simulations. In research on an unknown problem, however, the output data must be analyzed many times because a conclusive analysis is difficult to reach in a single pass. Consequently, output data should be filed so that they can be referenced and analyzed at any time. To support such research, automatic functions are needed for transporting data files from the data generator to data storage, analyzing data, tracking the history of data handling, and so on. The supporting system will be a functionally distributed system. (author)

  5. Improving Large-scale Storage System Performance via Topology-aware and Balanced Data Placement

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Feiyi [ORNL; Oral, H Sarp [ORNL; Vazhkudai, Sudharshan S [ORNL

    2014-01-01

    With the advent of big data, the I/O subsystems of large-scale compute clusters are becoming a center of focus, with more applications putting greater demands on end-to-end I/O performance. These subsystems are often complex in design. They comprise multiple hardware and software layers to cope with the increasing capacity, capability and scalability requirements of data-intensive applications. The shared nature of storage resources and the intrinsic interactions across these layers make it a great challenge to realize user-level, end-to-end performance gains. We propose a topology-aware resource load balancing strategy to improve per-application I/O performance. We demonstrate the effectiveness of our algorithm on an extreme-scale compute cluster, Titan, at the Oak Ridge Leadership Computing Facility (OLCF). Our experiments with both synthetic benchmarks and a real-world application show that, even under congestion, our proposed algorithm can improve large-scale application I/O performance significantly, resulting in both the reduction of application run times and higher resolution simulation runs.
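
    The OLCF algorithm itself is not reproduced here; the sketch below only conveys the flavour of topology-aware, balanced placement: each new file stripe is assigned to the storage target with the lowest combined score of current load and network distance from the writing client. Target names, weights, loads and distances are invented for illustration.

    ```python
    # Illustrative sketch (not the OLCF/Titan implementation): pick storage targets
    # for a new file by balancing current target load against network distance.
    from dataclasses import dataclass

    @dataclass
    class Target:
        name: str
        hops_from_client: int        # crude topology distance
        pending_io: float = 0.0      # current load estimate

        def score(self, w_load=1.0, w_topo=0.5):
            return w_load * self.pending_io + w_topo * self.hops_from_client

    def place_stripes(targets, n_stripes, stripe_cost=1.0):
        """Greedy, load- and topology-aware selection of one target per stripe."""
        chosen = []
        for _ in range(n_stripes):
            best = min(targets, key=lambda t: t.score())
            best.pending_io += stripe_cost       # account for the new stripe
            chosen.append(best.name)
        return chosen

    if __name__ == "__main__":
        targets = [Target("ost-a", hops_from_client=1, pending_io=3.0),
                   Target("ost-b", hops_from_client=2, pending_io=0.5),
                   Target("ost-c", hops_from_client=5, pending_io=0.0)]
        print(place_stripes(targets, n_stripes=4))
    ```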

  6. Data management strategies for multinational large-scale systems biology projects.

    Science.gov (United States)

    Wruck, Wasco; Peuker, Martin; Regenbrecht, Christian R A

    2014-01-01

    Good accessibility of publicly funded research data is essential to secure an open scientific system and eventually becomes mandatory [Wellcome Trust will Penalise Scientists Who Don't Embrace Open Access. The Guardian 2012]. With the use of high-throughput methods in many research areas from physics to systems biology, large data collections are increasingly important as raw material for research. Here, we present strategies worked out by international and national institutions targeting open access to publicly funded research data via incentives or obligations to share data. Funding organizations such as the British Wellcome Trust have therefore developed data sharing policies and request commitment to data management and sharing in grant applications. Increased citation rates are a strong argument for sharing publication data. Pre-publication sharing might be rewarded by a data citation credit system via digital object identifiers (DOIs), which have initially been in use for data objects. Besides policies and incentives, good practice in data management is indispensable. However, appropriate systems for data management of large-scale projects, for example in systems biology, are hard to find. Here, we give an overview of a selection of open-source data management systems that have been employed successfully in large-scale projects.

  7. Constructing large scale SCI-based processing systems by switch elements

    International Nuclear Information System (INIS)

    Wu, B.; Kristiansen, E.; Skaali, B.; Bogaerts, A.; Divia, R.; Mueller, H.

    1993-05-01

    The goal of this paper is to study some of the design criteria for the switch elements to form the interconnection of large scale SCI-based processing systems. The approved IEEE standard 1596 makes it possible to couple up to 64K nodes together. In order to connect thousands of nodes to construct large scale SCI-based processing systems, one has to interconnect these nodes by switch elements to form different topologies. A summary of the requirements and key points of interconnection networks and switches is presented. Two models of the SCI switch elements are proposed. The authors investigate several examples of systems constructed for 4-switches with simulations and the results are analyzed. Some issues and enhancements are discussed to provide the ideas behind the switch design that can improve performance and reduce latency. 29 refs., 11 figs., 3 tabs

  8. Automated Bug Assignment: Ensemble-based Machine Learning in Large Scale Industrial Contexts

    OpenAIRE

    Jonsson, Leif; Borg, Markus; Broman, David; Sandahl, Kristian; Eldh, Sigrid; Runeson, Per

    2016-01-01

    Bug report assignment is an important part of software maintenance. In particular, incorrect assignments of bug reports to development teams can be very expensive in large software development projects. Several studies propose automating bug assignment techniques using machine learning in open source software contexts, but no study exists for large-scale proprietary projects in industry. The goal of this study is to evaluate automated bug assignment techniques that are based on machine learni...
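
    The study's exact ensemble (stacked generalization over industrial bug-tracking data) is not reproduced; the fragment below is only a minimal scikit-learn illustration of ensemble-based assignment of bug-report text to teams, with invented reports and team labels.

    ```python
    # Minimal illustration (not the study's system): an ensemble classifier that
    # routes bug-report text to development teams.
    from sklearn.ensemble import VotingClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline
    from sklearn.tree import DecisionTreeClassifier

    reports = [
        "segfault in network driver when link flaps",
        "UI button misaligned on settings page",
        "database connection pool exhausted under load",
        "crash in packet scheduler after reconnect",
        "wrong colour theme applied after login",
        "slow query on orders table during checkout",
    ]
    teams = ["kernel", "frontend", "backend", "kernel", "frontend", "backend"]

    ensemble = VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("nb", MultinomialNB()),
                    ("dt", DecisionTreeClassifier(random_state=0))],
        voting="hard",
    )
    model = make_pipeline(TfidfVectorizer(), ensemble)
    model.fit(reports, teams)
    print(model.predict(["null pointer dereference in wifi driver"]))
    ```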

  9. Next generation hyper-scale software and hardware systems for big data analytics

    CERN Multimedia

    CERN. Geneva

    2013-01-01

    Building on foundational technologies such as many-core systems, non-volatile memories and photonic interconnects, we describe some current technologies and future research to create real-time, big data analytics IT infrastructure. We will also briefly describe some of our biologically-inspired software and hardware architectures for creating radically new hyper-scale cognitive computing systems. About the speaker: Rich Friedrich is the director of Strategic Innovation and Research Services (SIRS) at HP Labs. In this strategic role, he is responsible for research investments in nano-technology, exascale computing, cyber security, information management, cloud computing, immersive interaction, sustainability, social computing and commercial digital printing. Rich's philosophy is to fuse strategy and inspiration to create compelling capabilities for next generation information devices, systems and services. Using essential insights gained from the metaphysics of innovation, he effectively leads ...

  10. Design and development of virtual TXP control system software

    International Nuclear Information System (INIS)

    Wang Yunwei; Leng Shan; Liu Zhisheng; Wang Qiang; Shang Yanxia

    2008-01-01

    Taking the distributed control system (DCS) of Siemens TELEPERM-XP (TXP) as the simulation object, a Virtual TXP (VTXP) control system based on a Virtual DCS with high fidelity and reliability was designed and developed on the Windows platform. In the process of development, the methods of object-oriented modeling and modular program design were adopted, and the C++ language together with technologies such as multithreading, ActiveX controls and socket network communication was used to realize wide-range dynamic simulation and to recreate the functions of the hardware and software of the real TXP. This paper puts emphasis on the design and realization of the Control server and the Communication server. The development of the Virtual TXP control system software is of great value to the construction of simulation systems and to the design, commissioning, verification and maintenance of control systems in large-scale power plants, nuclear power plants and combined cycle power plants. (authors)

  11. Parameter and State Estimation of Large-Scale Complex Systems Using Python Tools

    Directory of Open Access Journals (Sweden)

    M. Anushka S. Perera

    2015-07-01

    Full Text Available This paper discusses topics related to automating the parameter, disturbance and state estimation analysis of large-scale complex nonlinear dynamic systems using free programming tools. For large-scale complex systems, before implementing any state estimator, the system should be analyzed for structural observability, and the structural observability analysis can be automated using Modelica and Python. As a result of the structural observability analysis, the system may be decomposed into subsystems where some of them may be observable --- with respect to parameters, disturbances, and states --- while some may not. The state estimation process is carried out for those observable subsystems, and the optimum number of additional measurements is prescribed for unobservable subsystems to make them observable. In this paper, an industrial case study is considered: the copper production process at Glencore Nikkelverk, Kristiansand, Norway. The copper production process is a large-scale complex system. It is shown how to implement various state estimators, in Python, to estimate parameters and disturbances, in addition to states, based on available measurements.
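
    The paper's Modelica/Python toolchain is not reproduced here. As a small stand-in for the observability step it describes, the snippet below runs the standard numeric rank test on the observability matrix of a toy linear system; a structural analysis would work on the sparsity pattern instead, but the intent is the same.

    ```python
    # Small illustration (not the paper's toolchain): a numeric observability check
    # for a linear system x' = A x, y = C x, via the rank of the observability matrix.
    import numpy as np

    def observability_matrix(A, C):
        n = A.shape[0]
        blocks = [C @ np.linalg.matrix_power(A, k) for k in range(n)]
        return np.vstack(blocks)

    A = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [-1.0, -2.0, -3.0]])
    C = np.array([[1.0, 0.0, 0.0]])        # measure only the first state

    O = observability_matrix(A, C)
    rank = np.linalg.matrix_rank(O)
    print("observable" if rank == A.shape[0] else f"unobservable (rank {rank})")
    ```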

  12. A convex optimization approach for solving large scale linear systems

    Directory of Open Access Journals (Sweden)

    Debora Cores

    2017-01-01

    Full Text Available The well-known Conjugate Gradient (CG) method minimizes a strictly convex quadratic function for solving a large-scale linear system of equations when the coefficient matrix is symmetric and positive definite. In this work we present and analyze a non-quadratic convex function for solving any large-scale linear system of equations regardless of the characteristics of the coefficient matrix. For finding the global minimizers of this new convex function, any low-cost iterative optimization technique could be applied. In particular, we propose to use the low-cost, globally convergent Spectral Projected Gradient (SPG) method, which allows us to extend this optimization approach to solving consistent square and rectangular linear systems, as well as linear feasibility problems, with and without convex constraints and with and without preconditioning strategies. Our numerical results indicate that the new scheme outperforms state-of-the-art iterative techniques for solving linear systems when the symmetric part of the coefficient matrix is indefinite, and also for solving linear feasibility problems.
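
    The authors' specific non-quadratic convex function is not given in the abstract, so the sketch below falls back to a familiar instance of the same idea: minimizing the convex function f(x) = 0.5*||Ax - b||^2 with spectral (Barzilai-Borwein) gradient steps, which requires no symmetry or positive definiteness of A. It illustrates the optimization viewpoint, not the paper's exact scheme.

    ```python
    # Hedged sketch (not the paper's exact scheme): solve A x = b in the least-squares
    # sense by minimizing f(x) = 0.5*||A x - b||^2 with spectral (Barzilai-Borwein) steps.
    import numpy as np

    def spectral_gradient_solve(A, b, tol=1e-10, max_iter=5000):
        x = np.zeros(A.shape[1])
        g = A.T @ (A @ x - b)                  # gradient of f
        step = 1.0 / max(np.linalg.norm(g), 1e-12)
        for _ in range(max_iter):
            x_new = x - step * g
            g_new = A.T @ (A @ x_new - b)
            s, y = x_new - x, g_new - g
            denom = s @ y
            step = (s @ s) / denom if abs(denom) > 1e-16 else 1.0  # BB1 step length
            x, g = x_new, g_new
            if np.linalg.norm(g) < tol:
                break
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A = rng.normal(size=(120, 80))         # rectangular, non-symmetric system
        b = A @ rng.normal(size=80)
        x = spectral_gradient_solve(A, b)
        print("residual:", np.linalg.norm(A @ x - b))
    ```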

  13. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large-scale model and data base system is presented. Based on experience in operating and developing a large-scale computerized system, the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified, then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large-scale models and data bases

  14. Large scale cluster computing workshop

    International Nuclear Information System (INIS)

    Dane Skow; Alan Silverman

    2002-01-01

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of a scale of 1000s of processors and be used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and by implication to identify areas where some investment of money or effort is likely to be needed. (2) To compare and record experiences gained with such tools. (3) To produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP. (4) To identify and connect groups with similar interests within HENP and the larger clustering community

  15. Self-* and Adaptive Mechanisms for Large Scale Distributed Systems

    Science.gov (United States)

    Fragopoulou, P.; Mastroianni, C.; Montero, R.; Andrjezak, A.; Kondo, D.

    Large-scale distributed computing systems and infrastructure, such as Grids, P2P systems and desktop Grid platforms, are decentralized, pervasive, and composed of a large number of autonomous entities. The complexity of these systems is such that human administration is nearly impossible and centralized or hierarchical control is highly inefficient. These systems need to run on highly dynamic environments, where content, network topologies and workloads are continuously changing. Moreover, they are characterized by the high degree of volatility of their components and the need to provide efficient service management and to handle efficiently large amounts of data. This paper describes some of the areas for which adaptation emerges as a key feature, namely, the management of computational Grids, the self-management of desktop Grid platforms and the monitoring and healing of complex applications. It also elaborates on the use of bio-inspired algorithms to achieve self-management. Related future trends and challenges are described.

  16. The impact of continuous integration on other software development practices: a large-scale empirical study

    NARCIS (Netherlands)

    Zhao, Y.; Serebrenik, A.; Zhou, Y.; Filkov, V.; Vasilescu, B.N.

    2017-01-01

    Continuous Integration (CI) has become a disruptive innovation in software development: with proper tool support and adoption, positive effects have been demonstrated for pull request throughput and scaling up of project sizes. As any other innovation, adopting CI implies adapting existing practices

  17. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    Science.gov (United States)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use Agent-Based Models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper employ either computational algorithms or procedure implementations developed in Matlab to simulate agent-based models, and they are run on clusters that provide high-performance parallel computation. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  18. Small-scale fixed wing airplane software verification flight test

    Science.gov (United States)

    Miller, Natasha R.

    The increased demand for micro Unmanned Air Vehicles (UAV), driven by military requirements, commercial use, and academia, is creating a need for the ability to quickly and accurately conduct low Reynolds Number aircraft design. There exist several open source software programs that are free or inexpensive that can be used for large-scale aircraft design, but few software programs target the realm of low Reynolds Number flight. XFLR5 is an open source, free-to-download software program that attempts to take into consideration viscous effects that occur at low Reynolds Number in airfoil design, 3D wing design, and 3D airplane design. An off-the-shelf, remote control airplane was used as a test bed to model in XFLR5 and then compared to flight test collected data. Flight test focused on the stability modes of the 3D plane, specifically the phugoid mode. Design and execution of the flight tests were accomplished for the RC airplane using methodology from full-scale military airplane test procedures. Results from flight test were not conclusive in determining the accuracy of the XFLR5 software program. There were several sources of uncertainty that did not allow for a full analysis of the flight test results. An off-the-shelf drone autopilot was used as a data collection device for flight testing. The precision and accuracy of the autopilot are unknown. Potential future work should investigate flight test methods for small-scale UAV flight.

  19. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. The objectives were to improve the performance and reduce the costs of a large-scale solar heating system. As a result of the project the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the design stage. (orig.)

  20. Participatory Design and the Challenges of Large-Scale Systems

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten

    2008-01-01

    With its 10th biannual anniversary conference, Participatory Design (PD) is leaving its teens and must now be considered ready to join the adult world. In this article we encourage the PD community to think big: PD should engage in large-scale information-systems development and opt for a PD...

  1. Participatory Design of Large-Scale Information Systems

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten

    2008-01-01

    In this article we discuss how to engage in large-scale information systems development by applying a participatory design (PD) approach that acknowledges the unique situated work practices conducted by the domain experts of modern organizations. We reconstruct the iterative prototyping approach into a PD process model that (1) emphasizes PD experiments as transcending traditional prototyping by evaluating fully integrated systems exposed to real work practices; (2) incorporates improvisational change management including anticipated, emergent, and opportunity-based change; and (3) extends initial design and development into a sustained and ongoing stepwise implementation that constitutes an overall technology-driven organizational change. The process model is presented through a large-scale PD experiment in the Danish healthcare sector. We reflect on our experiences from this experiment...

  2. Secure File Allocation and Caching in Large-scale Distributed Systems

    DEFF Research Database (Denmark)

    Di Mauro, Alessio; Mei, Alessandro; Jajodia, Sushil

    2012-01-01

    In this paper, we present a file allocation and caching scheme that guarantees high assurance, availability, and load balancing in a large-scale distributed file system that can support dynamic updates of authorization policies. The scheme uses fragmentation and replication to store files with high security requirements in a system composed of a majority of low-security servers. We develop mechanisms to fragment files, to allocate them onto multiple servers, and to cache them as close as possible to their readers while preserving the security requirements of the files, providing load balancing, and reducing the delay of read operations. The system offers a trade-off between performance and security that is dynamically tunable according to the current level of threat. We validate our mechanisms with extensive simulations in an Internet-like network.

  3. Software Tools For Large Scale Interactive Hydrodynamic Modeling

    NARCIS (Netherlands)

    Donchyts, G.; Baart, F.; van Dam, A; Jagers, B; van der Pijl, S.; Piasecki, M.

    2014-01-01

    Developing easy-to-use software that combines components for simultaneous visualization, simulation and interaction is a great challenge, mainly because it involves a number of disciplines, such as computational fluid dynamics, computer graphics, and high-performance computing. One of the main

  4. Financing a large-scale picture archival and communication system.

    Science.gov (United States)

    Goldszal, Alberto F; Bleshman, Michael H; Bryan, R Nick

    2004-01-01

    An attempt to finance a large-scale multi-hospital picture archival and communication system (PACS) solely based on cost savings from current film operations is reported. A modified Request for Proposal described the technical requirements, PACS architecture, and performance targets. The Request for Proposal was complemented by a set of desired financial goals-the main one being the ability to use film savings to pay for the implementation and operation of the PACS. Financing of the enterprise-wide PACS was completed through an operating lease agreement including all PACS equipment, implementation, service, and support for an 8-year term, much like a complete outsourcing. Equipment refreshes, both hardware and software, are included. Our agreement also linked the management of the digital imaging operation (PACS) and the traditional film printing, shifting the operational risks of continued printing and costs related to implementation delays to the PACS vendor. An additional optimization step provided the elimination of the negative film budget variances in the beginning of the project when PACS costs tend to be higher than film and film-related expenses. An enterprise-wide PACS has been adopted to achieve clinical workflow improvements and cost savings. PACS financing was solely based on film savings, which included the entire digital solution (PACS) and any residual film printing. These goals were achieved with simultaneous elimination of any over-budget scenarios providing a non-negative cash flow in each year of an 8-year term.

  5. Experimental research control software system

    International Nuclear Information System (INIS)

    Cohn, I A; Kovalenko, A G; Vystavkin, A N

    2014-01-01

    A software system intended for the automation of small-scale research has been developed. The software allows one to control equipment and to acquire and process data by means of simple scripts. The main purpose of the development is to make experiment automation easier, thus significantly reducing the effort needed to automate an experimental setup. In particular, minimal programming skills are required, and supervisors have no trouble reviewing the scripts. Interactions between scripts and equipment are managed automatically, allowing multiple scripts to run simultaneously. Unlike well-known commercial data-acquisition software systems, control is performed through an imperative scripting language. This approach eases the implementation of complex control and data-acquisition algorithms. A modular interface library handles interaction with external interfaces; the most widely used interfaces are already implemented, and a simple framework allows new software and hardware interfaces to be implemented quickly. While the software is under continuous development, with new features being implemented, it is already used in our laboratory for the automation of helium-3 cryostat control and data acquisition. The software is open source and distributed under the GNU Public License.

  6. Experimental research control software system

    Science.gov (United States)

    Cohn, I. A.; Kovalenko, A. G.; Vystavkin, A. N.

    2014-05-01

    A software system intended for the automation of small-scale research has been developed. The software allows one to control equipment and to acquire and process data by means of simple scripts. The main purpose of the development is to make experiment automation easier, thus significantly reducing the effort needed to automate an experimental setup. In particular, minimal programming skills are required, and supervisors have no trouble reviewing the scripts. Interactions between scripts and equipment are managed automatically, allowing multiple scripts to run simultaneously. Unlike well-known commercial data-acquisition software systems, control is performed through an imperative scripting language. This approach eases the implementation of complex control and data-acquisition algorithms. A modular interface library handles interaction with external interfaces; the most widely used interfaces are already implemented, and a simple framework allows new software and hardware interfaces to be implemented quickly. While the software is under continuous development, with new features being implemented, it is already used in our laboratory for the automation of helium-3 cryostat control and data acquisition. The software is open source and distributed under the GNU Public License.
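    As a minimal sketch of the script-driven control style described above (every class, method, and instrument name here is hypothetical and is not taken from the authors' software), a small registry of driver objects can stand in for the modular interface library, while the user script stays a short imperative program:

```python
# Minimal sketch of a script-driven control layer (hypothetical names, not the authors' code).
import time


class InstrumentRegistry:
    """Maps symbolic instrument names to driver objects (the 'modular interface library')."""

    def __init__(self):
        self._drivers = {}

    def register(self, name, driver):
        self._drivers[name] = driver

    def get(self, name):
        return self._drivers[name]


class FakeThermometer:
    """Stand-in driver; a real one would talk to GPIB/serial/Ethernet hardware."""

    def read_kelvin(self):
        return 0.3  # pretend we are reading a He-3 stage temperature


def measurement_script(registry, n_points=5, period_s=1.0):
    """An imperative user script: loop, read, print -- no framework knowledge needed."""
    thermometer = registry.get("he3_stage")
    for _ in range(n_points):
        print("T =", thermometer.read_kelvin(), "K")
        time.sleep(period_s)


if __name__ == "__main__":
    reg = InstrumentRegistry()
    reg.register("he3_stage", FakeThermometer())
    measurement_script(reg, n_points=3, period_s=0.1)
```

    In this style, supporting a new instrument only means registering another driver object; the user scripts themselves do not change.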

  7. Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Willcox, Karen [MIT; Marzouk, Youssef [MIT

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to
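    The "reduce then sample" idea can be illustrated with a toy Metropolis sampler in which the expensive forward model is swapped for a cheap surrogate inside the likelihood. The sketch below is only a schematic of that concept on an invented one-parameter problem; it is not the SAGUARO software, and the model, prior, and noise level are all assumptions made for illustration.

```python
# Toy illustration of "reduce then sample": Metropolis sampling with a cheap surrogate
# forward model standing in for an expensive simulation (all numbers are invented).
import numpy as np

rng = np.random.default_rng(0)

def full_model(theta):
    # Imagine this is an expensive PDE solve; here it is just a nonlinear map.
    return np.sin(theta) + 0.1 * theta**3

def reduced_model(theta):
    # Cheap surrogate (e.g., a projection-based reduced-order model); here a cubic fit.
    return theta - theta**3 / 6.0 + 0.1 * theta**3

y_obs = full_model(0.8) + 0.05 * rng.standard_normal()
sigma = 0.05

def log_posterior(theta, forward):
    misfit = (y_obs - forward(theta)) / sigma
    return -0.5 * misfit**2 - 0.5 * theta**2      # Gaussian likelihood + N(0,1) prior

def metropolis(forward, n_steps=5000, step=0.3):
    theta, logp = 0.0, log_posterior(0.0, forward)
    samples = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal()
        logp_prop = log_posterior(prop, forward)
        if np.log(rng.random()) < logp_prop - logp:
            theta, logp = prop, logp_prop
        samples.append(theta)
    return np.array(samples)

# When the surrogate is accurate over the prior range, sampling against it gives
# nearly the same posterior mean at a fraction of the forward-model cost.
print(metropolis(reduced_model).mean(), metropolis(full_model).mean())
```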

  8. The linac control system for the large-scale synchrotron radiation facility (SPring-8)

    Energy Technology Data Exchange (ETDEWEB)

    Sakaki, Hironao; Yoshikawa, Hiroshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Itoh, Yuichi [Atomic Energy General Services Corporation, Tokai, Ibaraki (Japan); Terashima, Yasushi [Information Technology System Co., Ltd. (ITECS), Tokyo (Japan)

    2000-09-01

    The linac for the large-scale synchrotron radiation facility (SPring-8) has been in operation since August 1996 and has handled user requests without any major trouble. This report presents the development policy, the details, and the operation of the linac control system. It also describes how these experiences can be applied to the control system of the large-scale proton accelerators to be developed in the High Intensity Proton Accelerator Project. (author)

  9. An Axiomatic Analysis Approach for Large-Scale Disaster-Tolerant Systems Modeling

    Directory of Open Access Journals (Sweden)

    Theodore W. Manikas

    2011-02-01

    Disaster tolerance in computing and communications systems refers to the ability to maintain a degree of functionality throughout the occurrence of a disaster. We accomplish the incorporation of disaster tolerance within a system by simulating various threats to the system operation and identifying areas for system redesign. Unfortunately, extremely large systems are not amenable to comprehensive simulation studies due to the large computational complexity requirements. To address this limitation, an axiomatic approach that decomposes a large-scale system into smaller subsystems is developed that allows the subsystems to be independently modeled. This approach is implemented using a data communications network system example. The results indicate that the decomposition approach produces simulation responses that are similar to the full system approach, but with greatly reduced simulation time.

  10. Comparing direct and iterative equation solvers in a large structural analysis software system

    Science.gov (United States)

    Poole, E. L.

    1991-01-01

    Two direct Choleski equation solvers and two iterative preconditioned conjugate gradient (PCG) equation solvers used in a large structural analysis software system are described. The two direct solvers are implementations of the Choleski method for variable-band matrix storage and sparse matrix storage. The two iterative PCG solvers include the Jacobi conjugate gradient method and an incomplete Choleski conjugate gradient method. The performance of the direct and iterative solvers is compared by solving several representative structural analysis problems. Some key factors affecting the performance of the iterative solvers relative to the direct solvers are identified.
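    As a minimal illustration of the two solver families compared in the paper, the sketch below solves a small symmetric positive-definite system both with a dense Cholesky factorization and with a hand-written Jacobi-preconditioned conjugate gradient loop. It is a generic NumPy example, not the structural analysis software itself.

```python
# Direct (Cholesky) vs. iterative (Jacobi-preconditioned CG) solution of A x = b.
import numpy as np

rng = np.random.default_rng(1)
n = 200
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)          # symmetric positive definite test matrix
b = rng.standard_normal(n)

# Direct solver: factor A = L L^T, then perform two triangular solves.
L = np.linalg.cholesky(A)
x_direct = np.linalg.solve(L.T, np.linalg.solve(L, b))

# Iterative solver: conjugate gradients with a Jacobi (diagonal) preconditioner.
def jacobi_pcg(A, b, tol=1e-10, max_iter=1000):
    M_inv = 1.0 / np.diag(A)         # Jacobi preconditioner: inverse of diag(A)
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

x_iter = jacobi_pcg(A, b)
print(np.linalg.norm(x_direct - x_iter))   # the two solutions agree to ~1e-10
```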

  11. Political consultation and large-scale research

    International Nuclear Information System (INIS)

    Bechmann, G.; Folkers, H.

    1977-01-01

    Large-scale research and policy consulting occupy an intermediary position between sociological sub-systems. While large-scale research coordinates science, policy, and production, policy consulting coordinates science, policy, and the political sphere. In this position, large-scale research and policy consulting lack the institutional guarantees and the rational background that are characteristic of their sociological environment. Large-scale research can neither produce innovative goods with regard to profitability, nor can it hope for full recognition by the basic-research-oriented scientific community. Policy consulting possesses neither the political system's assigned competence to make decisions, nor can it, at least in the present situation, be judged successfully by the critical standards of the established social sciences. This intermediary position of large-scale research and policy consulting supports, on three points, the thesis that it constitutes a new form of institutionalization of science: (1) external control, (2) the organizational form, and (3) the theoretical conception of large-scale research and policy consulting. (orig.) [de]

  12. Large-scale stochasticity in Hamiltonian systems

    International Nuclear Information System (INIS)

    Escande, D.F.

    1982-01-01

    Large scale stochasticity (L.S.S.) in Hamiltonian systems is defined on the paradigm Hamiltonian H(v,x,t) = v^2/2 - M cos x - P cos k(x-t), which describes the motion of one particle in two electrostatic waves. A renormalization transformation T_r is described which acts as a microscope that focusses on a given KAM (Kolmogorov-Arnold-Moser) torus in phase space. Though approximate, T_r yields the threshold of L.S.S. in H with an error of 5-10%. The universal behaviour of KAM tori is predicted: for instance the scale invariance of KAM tori and the critical exponent of the Lyapunov exponent of Cantori. The Fourier expansion of KAM tori is computed and several conjectures by L. Kadanoff and S. Shenker are proved. Chirikov's standard mapping for stochastic layers is derived in a simpler way and the width of the layers is computed. A simpler renormalization scheme for these layers is defined. A Mathieu equation describing the stability of a discrete family of cycles is derived. When combined with T_r, it allows one to prove the link between KAM tori and nearby cycles, conjectured by J. Greene, and, in particular, to compute the mean residue of a torus. The fractal diagrams defined by G. Schmidt are computed. A sketch of a methodology for computing the L.S.S. threshold in any two-degree-of-freedom Hamiltonian system is given. (Auth.)
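    For reference, the paradigm Hamiltonian and the Chirikov standard map mentioned above can be written as:

```latex
% Paradigm Hamiltonian: one particle in two electrostatic waves
H(v, x, t) \;=\; \frac{v^{2}}{2} \;-\; M \cos x \;-\; P \cos\!\bigl(k\,(x - t)\bigr)

% Chirikov standard map, used to describe the stochastic layers
\begin{aligned}
  p_{n+1}      &= p_{n} + K \sin \theta_{n},\\
  \theta_{n+1} &= \theta_{n} + p_{n+1} \pmod{2\pi}
\end{aligned}
```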

  13. Guide to verification and validation of the SCALE-4 radiation shielding software

    Energy Technology Data Exchange (ETDEWEB)

    Broadhead, B.L.; Emmett, M.B.; Tang, J.S.

    1996-12-01

    Whenever a decision is made to newly install the SCALE radiation shielding software on a computer system, the user should run a set of verification and validation (V&V) test cases to demonstrate that the software is properly installed and functioning correctly. This report is intended to serve as a guide for this V&V in that it specifies test cases to run and gives expected results. The report describes the V&V that has been performed for the radiation shielding software in a version of SCALE-4. This report provides documentation of sample problems which are recommended for use in the V&V of the SCALE-4 system for all releases. The results reported in this document are from the SCALE-4.2P version which was run on an IBM RS/6000 work-station. These results verify that the SCALE-4 radiation shielding software has been correctly installed and is functioning properly. A set of problems for use by other shielding codes (e.g., MCNP, TWOTRAN, MORSE) performing similar V&V are discussed. A validation has been performed for XSDRNPM and MORSE-SGC6 utilizing SASI and SAS4 shielding sequences and the SCALE 27-18 group (27N-18COUPLE) cross-section library for typical nuclear reactor spent fuel sources and a variety of transport package geometries. The experimental models used for the validation were taken from two previous applications of the SASI and SAS4 methods.

  14. Guide to verification and validation of the SCALE-4 radiation shielding software

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Emmett, M.B.; Tang, J.S.

    1996-12-01

    Whenever a decision is made to newly install the SCALE radiation shielding software on a computer system, the user should run a set of verification and validation (V&V) test cases to demonstrate that the software is properly installed and functioning correctly. This report is intended to serve as a guide for this V&V in that it specifies test cases to run and gives expected results. The report describes the V&V that has been performed for the radiation shielding software in a version of SCALE-4. This report provides documentation of sample problems which are recommended for use in the V&V of the SCALE-4 system for all releases. The results reported in this document are from the SCALE-4.2P version which was run on an IBM RS/6000 workstation. These results verify that the SCALE-4 radiation shielding software has been correctly installed and is functioning properly. A set of problems for use by other shielding codes (e.g., MCNP, TWOTRAN, MORSE) performing similar V&V are discussed. A validation has been performed for XSDRNPM and MORSE-SGC6 utilizing SASI and SAS4 shielding sequences and the SCALE 27-18 group (27N-18COUPLE) cross-section library for typical nuclear reactor spent fuel sources and a variety of transport package geometries. The experimental models used for the validation were taken from two previous applications of the SASI and SAS4 methods.

  15. Decentralized Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2013-01-01

    ... problem is formulated as a centralized large-scale optimization problem but is then decomposed into smaller subproblems that are solved locally by each unit connected to an aggregator. For large-scale systems the method is faster than solving the full problem and can be distributed to include an arbitrary...

  16. Experimental performance evaluation of software defined networking (SDN) based data communication networks for large scale flexi-grid optical networks.

    Science.gov (United States)

    Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo

    2014-04-21

    Software defined networking (SDN) has become the focus in the current information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. The optical transport network is also regarded as an important application scenario for SDN, which is adopted as the enabling technology of data communication networks (DCN) instead of generalized multi-protocol label switching (GMPLS). However, the practical performance of SDN-based DCN for large-scale optical networks, which is very important for technology selection in future optical network deployment, has not been evaluated up to now. In this paper we have built a large-scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN-based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time have been demonstrated under various network environments, such as with different traffic loads and different DCN bandwidths. The demonstration in this work can be taken as a proof of concept for future network deployment.

  17. Large-scale gene function analysis with the PANTHER classification system.

    Science.gov (United States)

    Mi, Huaiyu; Muruganujan, Anushya; Casagrande, John T; Thomas, Paul D

    2013-08-01

    The PANTHER (protein annotation through evolutionary relationship) classification system (http://www.pantherdb.org/) is a comprehensive system that combines gene function, ontology, pathways and statistical analysis tools that enable biologists to analyze large-scale, genome-wide data from sequencing, proteomics or gene expression experiments. The system is built with 82 complete genomes organized into gene families and subfamilies, and their evolutionary relationships are captured in phylogenetic trees, multiple sequence alignments and statistical models (hidden Markov models or HMMs). Genes are classified according to their function in several different ways: families and subfamilies are annotated with ontology terms (Gene Ontology (GO) and PANTHER protein class), and sequences are assigned to PANTHER pathways. The PANTHER website includes a suite of tools that enable users to browse and query gene functions, and to analyze large-scale experimental data with a number of statistical tests. It is widely used by bench scientists, bioinformaticians, computer scientists and systems biologists. In the 2013 release of PANTHER (v.8.0), in addition to an update of the data content, we redesigned the website interface to improve both user experience and the system's analytical capability. This protocol provides a detailed description of how to analyze genome-wide experimental data with the PANTHER classification system.
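    The statistical core of an overrepresentation analysis of the kind offered by such tools is a 2x2 contingency test. The sketch below applies Fisher's exact test from SciPy to invented counts; it is a generic illustration of the statistic, not a call into PANTHER's own tools or API.

```python
# Generic gene-set overrepresentation test (illustrative counts, not PANTHER output).
from scipy.stats import fisher_exact

n_list = 500            # genes in the user's experimental list
n_genome = 20000        # genes in the reference genome
k_list = 40             # list genes annotated with some category (e.g., one GO term)
k_genome = 600          # genome genes annotated with that category

table = [
    [k_list, n_list - k_list],                                        # in list: annotated / not
    [k_genome - k_list, (n_genome - n_list) - (k_genome - k_list)],   # rest of genome
]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
fold = (k_list / n_list) / (k_genome / n_genome)
print(f"fold enrichment ~ {fold:.2f}, p = {p_value:.2e}")
```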

  18. Studies of Sub-Synchronous Oscillations in Large-Scale Wind Farm Integrated System

    Science.gov (United States)

    Yue, Liu; Hang, Mend

    2018-01-01

    With the rapid development and construction of large-scale wind farms and their grid-connected operation, series-compensated AC transmission of wind power is gradually becoming the main way to deliver wind power and to improve its availability and grid stability; however, the integration of wind farms changes the sub-synchronous oscillation (SSO) damping characteristics of the synchronous generator system. To address the SSO problems caused by the integration of large-scale wind farms, this paper focuses on wind farms based on the doubly fed induction generator (DFIG) and summarizes the SSO mechanisms in large-scale wind power integrated systems with series compensation, which can be classified into three types: sub-synchronous control interaction (SSCI), sub-synchronous torsional interaction (SSTI), and sub-synchronous resonance (SSR). SSO modelling and analysis methods are then categorized and compared according to their applicable areas. Furthermore, the paper summarizes the suppression measures taken in actual SSO projects, grouped by their control objectives. Finally, research prospects in this field are explored.

  19. Tri-track: free software for large-scale particle tracking.

    Science.gov (United States)

    Vallotton, Pascal; Olivier, Sandra

    2013-04-01

    The ability to correctly track objects in time-lapse sequences is important in many applications of microscopy. Individual object motions typically display a level of dynamic regularity reflecting the existence of an underlying physics or biology. Best results are obtained when this local information is exploited. Additionally, if the particle number is known to be approximately constant, a large number of tracking scenarios may be rejected on the basis that they are not compatible with a known maximum particle velocity. This represents information of a global nature, which should ideally be exploited too. Some time ago, we devised an efficient algorithm that exploited both types of information. The tracking task was reduced to a max-flow min-cost problem instance through a novel graph structure that comprised vertices representing objects from three consecutive image frames. The algorithm is explained here for the first time. A user-friendly implementation is provided, and the specific relaxation mechanism responsible for the method's effectiveness is uncovered. The software is particularly competitive for complex dynamics such as dense antiparallel flows, or in situations where object displacements are considerable. As an application, we characterize a remarkable vortex structure formed by bacteria engaged in interstitial motility.
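    A stripped-down, two-frame version of the linking problem can be posed as an assignment with a maximum-displacement gate. The sketch below, using SciPy's linear_sum_assignment on invented coordinates, only illustrates the gating and global-matching idea; Tri-track itself builds a three-frame max-flow min-cost graph, which this example does not reproduce.

```python
# Two-frame particle linking with a maximum-displacement gate (illustration only;
# Tri-track itself solves a three-frame max-flow min-cost problem).
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(2)
frame_a = rng.uniform(0, 100, size=(6, 2))           # particle positions at time t
frame_b = frame_a + rng.normal(0, 2.0, size=(6, 2))  # positions at time t+1 (small motion)

v_max = 10.0                                  # maximum plausible displacement per frame
cost = cdist(frame_a, frame_b)                # pairwise Euclidean distances
cost[cost > v_max] = 1e6                      # forbid links that exceed the velocity gate

rows, cols = linear_sum_assignment(cost)      # globally optimal one-to-one matching
links = [(i, j) for i, j in zip(rows, cols) if cost[i, j] < 1e6]
print(links)
```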

  20. Hierarchical hybrid control of manipulators: Artificial intelligence in large scale integrated circuits

    Science.gov (United States)

    Greene, P. H.

    1972-01-01

    Both in practical engineering and in control of muscular systems, low level subsystems automatically provide crude approximations to the proper response. Through low level tuning of these approximations, the proper response variant can emerge from standardized high level commands. Such systems are expressly suited to emerging large scale integrated circuit technology. A computer, using symbolic descriptions of subsystem responses, can select and shape responses of low level digital or analog microcircuits. A mathematical theory that reveals significant informational units in this style of control and software for realizing such information structures are formulated.

  1. Off-line software for large experimental setups

    International Nuclear Information System (INIS)

    Bruyant, F.

    1983-07-01

    The purpose of this report is to emphasize the importance of off-line software for large experimental setups in High Energy Physics. Simple notions of program structuring, data structuring and software organization are discussed in the context of the software developed for the European Hybrid Spectrometer. (author)

  2. Thermal System Analysis and Optimization of Large-Scale Compressed Air Energy Storage (CAES)

    Directory of Open Access Journals (Sweden)

    Zhongguang Fu

    2015-08-01

    As an important solution to issues regarding peak load and renewable energy resources on grids, large-scale compressed air energy storage (CAES) power generation technology has recently become a popular research topic in the area of large-scale industrial energy storage. At present, the combination of high-expansion-ratio turbines with advanced gas turbine technology is an important breakthrough in energy storage technology. In this study, a new gas turbine power generation system is coupled with current CAES technology, and the thermodynamic cycle is optimized by calculating the parameters of the thermodynamic system. Results show that the thermal efficiency of the new system increases by at least 5% over that of the existing system.

  3. Human visual system automatically represents large-scale sequential regularities.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-03-04

    Our brain recordings reveal that large-scale sequential regularities defined across non-adjacent stimuli can be automatically represented in visual sensory memory. To show that, we adopted an auditory paradigm developed by Sussman, E., Ritter, W., and Vaughan, H. G. Jr. (1998). Predictability of stimulus deviance and the mismatch negativity. NeuroReport, 9, 4167-4170, Sussman, E., and Gumenyuk, V. (2005). Organization of sequential sounds in auditory memory. NeuroReport, 16, 1519-1523 to the visual domain by presenting task-irrelevant infrequent luminance-deviant stimuli (D, 20%) inserted among task-irrelevant frequent stimuli being of standard luminance (S, 80%) in randomized (randomized condition, SSSDSSSSSDSSSSD...) and fixed manners (fixed condition, SSSSDSSSSDSSSSD...). Comparing the visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in human visual sensory system, revealed that visual MMN elicited by deviant stimuli was reduced in the fixed compared to the randomized condition. Thus, the large-scale sequential regularity being present in the fixed condition (SSSSD) must have been represented in visual sensory memory. Interestingly, this effect did not occur in conditions with stimulus-onset asynchronies (SOAs) of 480 and 800 ms but was confined to the 160-ms SOA condition supporting the hypothesis that large-scale regularity extraction was based on perceptual grouping of the five successive stimuli defining the regularity. 2010 Elsevier B.V. All rights reserved.

  4. Magnetic Properties of Large-Scale Nanostructured Graphene Systems

    DEFF Research Database (Denmark)

    Gregersen, Søren Schou

    The on-going progress in two-dimensional (2D) materials and nanostructure fabrication motivates the study of altered and combined materials. Graphene, the most studied material of the 2D family, displays unique electronic and spintronic properties. Exceptionally high electron mobilities, which surpass those in conventional materials such as silicon, make graphene a very interesting material for high-speed electronics. Simultaneously, long spin-diffusion lengths and spin lifetimes make graphene an eligible spin-transport channel. In this thesis, we explore fundamental features of nanostructured graphene systems using large-scale modeling techniques. Graphene perforations, or antidots, have received substantial interest in the prospect of opening large band gaps in the otherwise gapless graphene. Motivated by recent improvements of fabrication processes, such as forming graphene antidots and layer...

  5. Rucio - The next generation large scale distributed system for ATLAS Data Management

    CERN Document Server

    Beermann, T; The ATLAS collaboration; Lassnig, M; Barisits, M; Vigne, R; Serfon, C; Stewart, G A; Goossens, L; Nairz, A; Molfetas, A

    2014-01-01

    Rucio is the next-generation Distributed Data Management (DDM) system benefiting from recent advances in cloud and "Big Data" computing to address the ATLAS experiment scaling requirements. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 150 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio will deal with these issues by relying on new technologies to ensure system scalability, address new user requirements and employ a new automation framework to reduce operational overheads.

  6. Incipient multiple fault diagnosis in real time with applications to large-scale systems

    International Nuclear Information System (INIS)

    Chung, H.Y.; Bien, Z.; Park, J.H.; Seon, P.H.

    1994-01-01

    By using a modified signed directed graph (SDG) together with distributed artificial neural networks and a knowledge-based system, a method of incipient multi-fault diagnosis is presented for large-scale physical systems with complex pipes and instrumentation such as valves, actuators, sensors, and controllers. The proposed method is designed so as to (1) make real-time incipient fault diagnosis possible for large-scale systems, (2) perform the fault diagnosis not only in the steady-state case but also in the transient case by using a concept of fault propagation time, which is newly adopted in the SDG model, (3) provide highly reliable diagnosis results and an explanation capability for the diagnosed faults, as in an expert system, and (4) diagnose pipe damage such as leaks, breaks, or throttling. The method is applied to the diagnosis of a pressurizer in the Kori Nuclear Power Plant (NPP) unit 2 in Korea under a transient condition, and the reported results show satisfactory performance for the incipient multi-fault diagnosis of such a large-scale system in a real-time manner.

  7. Remote collaboration system based on large scale simulation

    International Nuclear Information System (INIS)

    Kishimoto, Yasuaki; Sugahara, Akihiro; Li, J.Q.

    2008-01-01

    Large-scale simulation using super-computers, which generally requires long CPU time and produces a large amount of data, has been extensively studied as a third pillar in various advanced science fields, in parallel to theory and experiment. Such simulations are expected to lead to new scientific discoveries through the elucidation of complex phenomena that can hardly be identified by conventional theoretical and experimental approaches alone. In order to assist such large simulation studies, in which many collaborators working at geographically different places participate and contribute, we have developed a unique remote collaboration system, referred to as SIMON (simulation monitoring system), which is based on client-server control and introduces the idea of update processing, in contrast to the widely used post-processing. As a key ingredient, we have developed a trigger method, which transmits requests for update processing from the simulation (client) running on a super-computer to a workstation (server); that is, the simulation running on the super-computer actively controls the timing of the update processing. The server, having received requests from the ongoing simulation (such as data transfer, data analyses, and visualizations), starts the corresponding operations during the simulation. The server makes the latest results available to web browsers, so that collaborators can monitor the results at any place and time in the world. By applying the system to a specific simulation project on laser-matter interaction, we have confirmed that the system works well and plays an important role as a collaboration platform on which many collaborators work with one another.

  8. Final Report: Enabling Exascale Hardware and Software Design through Scalable System Virtualization

    Energy Technology Data Exchange (ETDEWEB)

    Bridges, Patrick G.

    2015-02-01

    In this grant, we enhanced the Palacios virtual machine monitor to increase its scalability and suitability for addressing exascale system software design issues. This included a wide range of research on core Palacios features, large-scale system emulation, fault injection, performance monitoring, and VMM extensibility. The research resulted in a large number of high-impact publications in well-known venues, supported a number of students, and led to the graduation of two Ph.D. students and one M.S. student. In addition, our enhanced version of the Palacios virtual machine monitor has been adopted as a core element of the Hobbes operating system under active DOE-funded research and development.

  9. Output Control Technologies for a Large-scale PV System Considering Impacts on a Power Grid

    Science.gov (United States)

    Kuwayama, Akira

    The mega-solar demonstration project named “Verification of Grid Stabilization with Large-scale PV Power Generation systems” was completed in March 2011 at Wakkanai, the northernmost city of Japan. The major objectives of this project were to evaluate the adverse impacts of large-scale PV power generation systems connected to the power grid and to develop output control technologies with an integrated battery storage system. This paper describes the outline and results of the project. The results show the effectiveness of the battery storage system and of the proposed output control methods for a large-scale PV system in ensuring stable operation of power grids. NEDO, the New Energy and Industrial Technology Development Organization of Japan, conducted this project, and HEPCO, Hokkaido Electric Power Co., Inc., managed the overall project.

  10. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems

    OpenAIRE

    Abadi, Martín; Agarwal, Ashish; Barham, Paul; Brevdo, Eugene; Chen, Zhifeng; Citro, Craig; Corrado, Greg S.; Davis, Andy; Dean, Jeffrey; Devin, Matthieu; Ghemawat, Sanjay; Goodfellow, Ian; Harp, Andrew; Irving, Geoffrey; Isard, Michael

    2016-01-01

    TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. The system is flexible and can be used to express a wide variety of algo...
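    A minimal sketch of the dataflow-graph and device-placement idea described above, using the current TensorFlow 2.x Python API (the device string is an assumption and depends on the hardware available):

```python
# Minimal TensorFlow example: a traced dataflow graph with explicit device placement.
import tensorflow as tf

@tf.function                      # traces the Python function into a dataflow graph
def affine(x, w, b):
    return tf.matmul(x, w) + b

with tf.device("/CPU:0"):         # use "/GPU:0" (or a TPU strategy) when available
    x = tf.random.normal([4, 3])
    w = tf.random.normal([3, 2])
    b = tf.zeros([2])
    y = affine(x, w, b)

print(y.shape)                    # (4, 2)
```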

  11. Software application for quality control protocol of mammography systems

    International Nuclear Information System (INIS)

    Kjosevski, Vladimir; Gershan, Vesna; Ginovska, Margarita; Spasevska, Hristina

    2010-01-01

    Since the quality control of the technological process of a mammography system involves testing a large number of parameters, there is a clear need to use information technology for gathering, processing, and storing all of the parameters produced by this process. The main goal of this software application is to facilitate and automate the gathering, processing, storing, and presentation of data related to the qualification of the physical and technical parameters measured during quality control of the mammography system. The software application, along with its user interface and database, has been built with Microsoft Access 2003, part of the Microsoft Office 2003 package, which was chosen as the development platform because it is the office application most commonly used by computer users in the country. This is important because it provides end users with a familiar working environment, without the need for additional training or upgrading of their computer skills. Most importantly, the software application is easy to use, calculates the required parameters quickly, and provides an excellent way to store and display the results. The solution could also be scaled up so that many different users could use it at the same time over the Internet. Given its many advantages, it is highly recommended that this system be implemented as soon as possible in the quality control process of mammography systems. (Author)

  12. Large scale gas chromatographic demonstration system for hydrogen isotope separation

    International Nuclear Information System (INIS)

    Cheh, C.H.

    1988-01-01

    A large-scale demonstration system was designed for a throughput of 3 mol/day of an equimolar mixture of H, D, and T. The demonstration system was assembled and an experimental program carried out. This project was funded by Kernforschungszentrum Karlsruhe, Canadian Fusion Fuel Technology Projects and Ontario Hydro Research Division. Several major design innovations were successfully implemented in the demonstration system and are discussed in detail. Many experiments were carried out in the demonstration system to study its performance in separating hydrogen isotopes at high throughput. Various temperature programming schemes were tested, heart-cutting operation was evaluated, and very large (up to 138 NL/injection) samples were separated in the system. The results of the experiments showed that the specially designed column performed well as a chromatographic column and that good separation could be achieved even when a 138 NL sample was injected.

  13. Backup flexibility classes in emerging large-scale renewable electricity systems

    International Nuclear Information System (INIS)

    Schlachtberger, D.P.; Becker, S.; Schramm, S.; Greiner, M.

    2016-01-01

    Highlights: • Flexible backup demand in a European wind and solar based power system is modelled. • Three flexibility classes are defined based on production and consumption timescales. • Seasonal backup capacities are shown to be only used below 50% renewable penetration. • Large-scale transmission between countries can reduce fast flexible capacities. - Abstract: High shares of intermittent renewable power generation in a European electricity system will require flexible backup power generation on the dominant diurnal, synoptic, and seasonal weather timescales. The same three timescales are already covered by today’s dispatchable electricity generation facilities, which are able to follow the typical load variations on the intra-day, intra-week, and seasonal timescales. This work aims to quantify the changing demand for those three backup flexibility classes in emerging large-scale electricity systems, as they transform from low to high shares of variable renewable power generation. A weather-driven modelling is used, which aggregates eight years of wind and solar power generation data as well as load data over Germany and Europe, and splits the backup system required to cover the residual load into three flexibility classes distinguished by their respective maximum rates of change of power output. This modelling shows that the slowly flexible backup system is dominant at low renewable shares, but its optimized capacity decreases and drops close to zero once the average renewable power generation exceeds 50% of the mean load. The medium flexible backup capacities increase for modest renewable shares, peak at around a 40% renewable share, and then continuously decrease to almost zero once the average renewable power generation becomes larger than 100% of the mean load. The dispatch capacity of the highly flexible backup system becomes dominant for renewable shares beyond 50%, and reaches its maximum around a 70% renewable share. For renewable shares
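    The timescale separation behind the three flexibility classes can be mimicked with a simple moving-average decomposition of a residual load series, as sketched below. The data are synthetic and the window lengths are arbitrary choices for illustration; the paper itself defines the classes by maximum rates of change of power output, which this sketch only approximates.

```python
# Illustrative split of a residual load series into slow / medium / fast components
# via nested moving averages (synthetic hourly data; windows chosen for illustration).
import numpy as np

def moving_average(x, window):
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

hours = np.arange(8 * 365 * 24)                                # eight years of hourly steps
residual_load = (np.sin(2 * np.pi * hours / (365 * 24))        # seasonal component
                 + 0.5 * np.sin(2 * np.pi * hours / (7 * 24))  # synoptic component
                 + 0.3 * np.sin(2 * np.pi * hours / 24)        # diurnal component
                 + 0.1 * np.random.default_rng(3).standard_normal(hours.size))

slow = moving_average(residual_load, 30 * 24)          # seasonal timescale (~one month)
medium = moving_average(residual_load - slow, 36)      # synoptic / intra-week timescale
fast = residual_load - slow - medium                   # diurnal and shorter timescales

for name, part in [("slow", slow), ("medium", medium), ("fast", fast)]:
    print(name, "share of variance:", round(part.var() / residual_load.var(), 2))
```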

  14. Design of an omnidirectional single-point photodetector for large-scale spatial coordinate measurement

    Science.gov (United States)

    Xie, Hongbo; Mao, Chensheng; Ren, Yongjie; Zhu, Jigui; Wang, Chao; Yang, Lei

    2017-10-01

    In high-precision, large-scale coordinate measurement, one commonly used approach to determine the coordinates of a target point is to utilize the spatial trigonometric relationships between multiple laser transmitter stations and the target point. A light-receiving device at the target point is the key element in large-scale coordinate measurement systems. To ensure high-resolution and highly sensitive spatial coordinate measurement, a high-performance and miniaturized omnidirectional single-point photodetector (OSPD) is greatly desired. We report a design of an OSPD using an aspheric lens, which achieves an enhanced reception angle of -5 deg to 45 deg in the vertical and 360 deg in the horizontal. As the heart of our OSPD, the aspheric lens is designed in a geometric model and optimized with LightTools software, which enables the reflection of a wide-angle incident light beam onto the single-point photodiode. The performance of the home-made OSPD is characterized at working distances from 1 to 13 m and further analyzed using the developed geometric model. The experimental and analytic results verify that our device is highly suitable for large-scale coordinate metrology. The developed device also holds great potential in various applications such as omnidirectional vision sensors, indoor global positioning systems, and optical wireless communication systems.

  15. Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System

    Science.gov (United States)

    He, Qing; Li, Hong

    The belt conveyor is one of the most important devices for transporting bulk solid material over long distances. Dynamic analysis is the key to deciding whether a design is technically rational, safe and reliable in operation, and economically feasible. It is therefore very important to study dynamic properties, improve efficiency and productivity, and guarantee safe, reliable and stable conveyor operation. The dynamic research on, and applications of, large-scale belt conveyors are discussed, and the main research topics and the state of the art of dynamic research on belt conveyors are analyzed. Future work should focus on dynamic analysis, modeling and simulation of the main components and of the whole system, as well as nonlinear modeling, simulation and vibration analysis of large-scale conveyor systems.

  16. Large-Scale Traveling Weather Systems in Mars’ Southern Extratropics

    Science.gov (United States)

    Hollingsworth, Jeffery L.; Kahre, Melinda A.

    2017-10-01

    Between late fall and early spring, Mars’ middle- and high-latitude atmosphere supports strong mean equator-to-pole temperature contrasts and an accompanying mean westerly polar vortex. Observations from both the MGS Thermal Emission Spectrometer (TES) and the MRO Mars Climate Sounder (MCS) indicate that a mean baroclinicity-barotropicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). Such extratropical weather disturbances are critical components of the global circulation as they serve as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of such traveling extratropical synoptic disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively-lifted and radiatively-active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer occurs). Compared to the northern-hemisphere counterparts, the southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are investigated, in addition to large-scale up-slope/down-slope flows and the diurnal cycle. A southern storm zone in late winter and early spring presents in the western hemisphere via orographic influences from the Tharsis highlands, and the Argyre and Hellas impact basins. Geographically localized transient-wave activity diagnostics are constructed that illuminate dynamical differences amongst the simulations and these are presented.

  17. Large-Scale Traveling Weather Systems in Mars Southern Extratropics

    Science.gov (United States)

    Hollingsworth, Jeffery L.; Kahre, Melinda A.

    2017-01-01

    Between late fall and early spring, Mars' middle- and high-latitude atmosphere supports strong mean equator-to-pole temperature contrasts and an accompanying mean westerly polar vortex. Observations from both the MGS Thermal Emission Spectrometer (TES) and the MRO Mars Climate Sounder (MCS) indicate that a mean baroclinicity-barotropicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). Such extratropical weather disturbances are critical components of the global circulation as they serve as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of such traveling extratropical synoptic disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively-lifted and radiatively-active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer occurs). Compared to the northern-hemisphere counterparts, the southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are investigated, in addition to large-scale up-slope/down-slope flows and the diurnal cycle. A southern storm zone in late winter and early spring presents in the western hemisphere via orographic influences from the Tharsis highlands, and the Argyre and Hellas impact basins. Geographically localized transient-wave activity diagnostics are constructed that illuminate dynamical differences amongst the simulations and these are presented.

  18. Large scale electrolysers

    International Nuclear Information System (INIS)

    B Bello; M Junker

    2006-01-01

    Hydrogen production by water electrolysis represents nearly 4% of world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, so the installation of large-scale hydrogen production plants will be needed. In this context, the development of low-cost, large-scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and centre of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large-scale electrolysers to produce hydrogen in the future. The different electrolysis technologies were compared, and a state of the art of the currently available electrolysis modules was drawn up. A review of the large-scale electrolysis plants installed around the world was also carried out, and the main projects related to large-scale electrolysis were listed. The economics of large-scale electrolysers are discussed, and the influence of energy prices on the cost of hydrogen produced by large-scale electrolysis is evaluated. (authors)

  19. eXascale PRogramming Environment and System Software (XPRESS)

    Energy Technology Data Exchange (ETDEWEB)

    Chapman, Barbara [Univ. of Houston, TX (United States); Gabriel, Edgar [Univ. of Houston, TX (United States)

    2015-11-30

    Exascale systems, with a thousand times the compute capacity of today’s leading edge petascale computers, are expected to emerge during the next decade. Their software systems will need to facilitate the exploitation of exceptional amounts of concurrency in applications, and ensure that jobs continue to run despite the occurrence of system failures and other kinds of hard and soft errors. Adapting computations at runtime to cope with changes in the execution environment, as well as to improve power and performance characteristics, is likely to become the norm. As a result, considerable innovation is required to develop system support to meet the needs of future computing platforms. The XPRESS project aims to develop and prototype a revolutionary software system for extreme-scale computing for both exascale and strong-scaled problems. The XPRESS collaborative research project will advance the state-of-the-art in high performance computing and enable exascale computing for current and future DOE mission-critical applications and supporting systems. The goals of the XPRESS research project are to: A. enable exascale performance capability for DOE applications, both current and future, B. develop and deliver a practical computing system software X-stack, OpenX, for future practical DOE exascale computing systems, and C. provide programming methods and environments for effective means of expressing application and system software for portable exascale system execution.

  20. A Classification Framework for Large-Scale Face Recognition Systems

    OpenAIRE

    Zhou, Ziheng; Deravi, Farzin

    2009-01-01

    This paper presents a generic classification framework for large-scale face recognition systems. Within the framework, a data sampling strategy is proposed to tackle the data imbalance when image pairs are sampled from thousands of face images for preparing a training dataset. A modified kernel Fisher discriminant classifier is proposed to make it computationally feasible to train the kernel-based classification method using tens of thousands of training samples. The framework is tested in an...

  1. TensorFlow: A system for large-scale machine learning

    OpenAIRE

    Abadi, Martín; Barham, Paul; Chen, Jianmin; Chen, Zhifeng; Davis, Andy; Dean, Jeffrey; Devin, Matthieu; Ghemawat, Sanjay; Irving, Geoffrey; Isard, Michael; Kudlur, Manjunath; Levenberg, Josh; Monga, Rajat; Moore, Sherry; Murray, Derek G.

    2016-01-01

    TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. TensorFlow uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. It maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom designed ASICs known as Tensor Processing Units (TPUs). This architecture gives flexib...

  2. Modeling the impact of large-scale energy conversion systems on global climate

    International Nuclear Information System (INIS)

    Williams, J.

    There are three energy options which could satisfy a projected energy requirement of about 30 TW: the solar, nuclear and (to a lesser extent) coal options. Climate models can be used to assess the impact of large-scale deployment of these options. The impact of waste heat has been assessed using energy balance models and general circulation models (GCMs). Results suggest that the impacts are significant when the heat input is very high, and studies of more realistic scenarios are required. Energy balance models, radiative-convective models and a GCM have been used to study the impact of doubling the atmospheric CO2 concentration. State-of-the-art models estimate a surface temperature increase of 1.5-3.0 °C with large amplification near the poles, but much uncertainty remains. Very few model studies have been made of the impact of particles on global climate; more information on the characteristics of particle input is required. The impact of large-scale deployment of solar energy conversion systems has received little attention, but model studies suggest that large-scale changes in surface characteristics associated with such systems (surface heat balance, roughness and hydrological characteristics, and ocean surface temperature) could have significant global climatic effects. (Auth.)

  3. Achieving Agility and Stability in Large-Scale Software Development

    Science.gov (United States)

    2013-01-16

    A temporary team is assigned to prepare layers and frameworks (e.g., presentation layer, domain layer, data access layer) for future feature teams. [Presentation slides; Software Engineering Institute, Carnegie Mellon University.]

  4. A system for automatic evaluation of simulation software

    Science.gov (United States)

    Ryan, J. P.; Hodges, B. C.

    1976-01-01

    Within the field of computer software, simulation and verification are complementary processes. Simulation methods can be used to verify software by performing variable range analysis. More general verification procedures, such as those described in this paper, can be implicitly viewed as attempts at modeling the end-product software. From the standpoint of software requirements methodology, each component of the verification system has some element of simulation in it. Conversely, general verification procedures can be used to analyze simulation software. A dynamic analyzer is described which can be used to obtain properly scaled variables for an analog simulation, which is first digitally simulated. In a similar way, it is thought that the other system components, and indeed the whole system itself, have the potential of being used effectively in a simulation environment.

  5. The method of measurement and synchronization control for large-scale complex loading system

    International Nuclear Information System (INIS)

    Liao Min; Li Pengyuan; Hou Binglin; Chi Chengfang; Zhang Bo

    2012-01-01

    With the development of modern industrial technology, measurement and control systems have become widely used in high-precision, complex industrial control equipment and large-tonnage loading devices. A measurement and control system is often used to analyze the distribution of stress and displacement under a complex bearing load or in the complex mechanical structure itself. In the ITER GS mock-up with 5 flexible plates, for each load combination it is necessary to detect and measure potential slippage between the central flexible plate and the neighboring spacers, as well as potential slippage between each pre-stressing bar and its neighboring plate. The measurement and control system consists of seven sets of EDC controllers and boards, a computer system, a 16-channel quasi-dynamic strain gauge, 25 sets of displacement sensors, and 7 sets of load and displacement sensors in the cylinders. This paper demonstrates the principles and methods by which the EDC220 digital controller achieves synchronization control, and the R&D process of the multi-channel loading control and measurement software. (authors)

  6. GEnomes Management Application (GEM.app): a new software tool for large-scale collaborative genome analysis.

    Science.gov (United States)

    Gonzalez, Michael A; Lebrigio, Rafael F Acosta; Van Booven, Derek; Ulloa, Rick H; Powell, Eric; Speziani, Fiorella; Tekin, Mustafa; Schüle, Rebecca; Züchner, Stephan

    2013-06-01

    Novel genes are now identified at a rapid pace for many Mendelian disorders, and increasingly, for genetically complex phenotypes. However, new challenges have also become evident: (1) effectively managing larger exome and/or genome datasets, especially for smaller labs; (2) direct hands-on analysis and contextual interpretation of variant data in large genomic datasets; and (3) many small and medium-sized clinical and research-based investigative teams around the world are generating data that, if combined and shared, will significantly increase the opportunities for the entire community to identify new genes. To address these challenges, we have developed GEnomes Management Application (GEM.app), a software tool to annotate, manage, visualize, and analyze large genomic datasets (https://genomics.med.miami.edu/). GEM.app currently contains ∼1,600 whole exomes from 50 different phenotypes studied by 40 principal investigators from 15 different countries. The focus of GEM.app is on user-friendly analysis for nonbioinformaticians to make next-generation sequencing data directly accessible. Yet, GEM.app provides powerful and flexible filter options, including single family filtering, across family/phenotype queries, nested filtering, and evaluation of segregation in families. In addition, the system is fast, obtaining results within 4 sec across ∼1,200 exomes. We believe that this system will further enhance identification of genetic causes of human disease. © 2013 Wiley Periodicals, Inc.

  7. Optimization of large-scale industrial systems : an emerging method

    Energy Technology Data Exchange (ETDEWEB)

    Hammache, A.; Aube, F.; Benali, M.; Cantave, R. [Natural Resources Canada, Varennes, PQ (Canada). CANMET Energy Technology Centre

    2006-07-01

    This paper reviewed optimization methods of large-scale industrial production systems and presented a novel systematic multi-objective and multi-scale optimization methodology. The methodology was based on a combined local optimality search with global optimality determination, and advanced system decomposition and constraint handling. The proposed method focused on the simultaneous optimization of the energy, economy and ecology aspects of industrial systems (E{sup 3}-ISO). The aim of the methodology was to provide guidelines for decision-making strategies. The approach was based on evolutionary algorithms (EA) with specifications including hybridization of global optimality determination with a local optimality search; a self-adaptive algorithm to account for the dynamic changes of operating parameters and design variables occurring during the optimization process; interactive optimization; advanced constraint handling and decomposition strategy; and object-oriented programming and parallelization techniques. Flowcharts of the working principles of the basic EA were presented. It was concluded that the EA uses a novel decomposition and constraint handling technique to enhance the Pareto solution search procedure for multi-objective problems. 6 refs., 9 figs.

  8. Software Defined Optics and Networking for Large Scale Data Centers

    DEFF Research Database (Denmark)

    Mehmeri, Victor; Andrus, Bogdan-Mihai; Tafur Monroy, Idelfonso

    Big data imposes correlations of large amounts of information between numerous systems and databases. This leads to large, dynamically changing flows and traffic patterns between clusters and server racks that result in a decrease of the quality of transmission and degraded application performance... Highly interconnected topologies combined with flexible, on-demand network configuration can become a solution to the ever-increasing dynamic traffic...

  9. Versatile synchronized real-time MEG hardware controller for large-scale fast data acquisition

    Science.gov (United States)

    Sun, Limin; Han, Menglai; Pratt, Kevin; Paulson, Douglas; Dinh, Christoph; Esch, Lorenz; Okada, Yoshio; Hämäläinen, Matti

    2017-05-01

    Versatile controllers for accurate, fast, and real-time synchronized acquisition of large-scale data are useful in many areas of science, engineering, and technology. Here, we describe the development of controller software based on a technique called a queued state machine for controlling the data acquisition (DAQ) hardware, continuously acquiring a large amount of data synchronized across a large number of channels (>400) at a fast rate (up to 20 kHz/channel) in real time, and interfacing with applications for real-time data analysis and display of electrophysiological data. This DAQ controller was developed specifically for a 384-channel pediatric whole-head magnetoencephalography (MEG) system, but its architecture is useful for wide applications. This controller, running in a LabVIEW environment, interfaces with microprocessors in the MEG sensor electronics to control their real-time operation. It also interfaces with real-time MEG analysis software via transmission control protocol/internet protocol, to control the synchronous acquisition and transfer of the data in real time from >400 channels to acquisition and analysis workstations. The successful implementation of this controller for an MEG system with a large number of channels demonstrates the feasibility of employing the present architecture in several other applications.
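
    To make the queued state machine pattern concrete, the sketch below shows a minimal Python stand-in: commands are enqueued and dispatched in order by a single loop. The state names, handlers and parameters are hypothetical illustrations, not the LabVIEW implementation of the MEG controller.

```python
# Minimal sketch of a queued state machine for DAQ control (a Python stand-in
# for the LabVIEW pattern described above). States and handlers are hypothetical.
from queue import Queue, Empty

class QueuedStateMachine:
    def __init__(self):
        self.queue = Queue()
        self.running = True
        self.handlers = {
            "CONFIGURE": self.configure,
            "ACQUIRE": self.acquire,
            "STOP": self.stop,
        }

    def enqueue(self, state, payload=None):
        self.queue.put((state, payload))

    def run(self):
        # Dispatch queued states in order; an empty queue ends this sketch.
        while self.running:
            try:
                state, payload = self.queue.get(timeout=0.1)
            except Empty:
                break
            self.handlers[state](payload)

    def configure(self, payload):
        print(f"configuring {payload} channels")

    def acquire(self, payload):
        print(f"acquiring {payload} samples per channel")

    def stop(self, _):
        print("stopping acquisition")
        self.running = False

if __name__ == "__main__":
    sm = QueuedStateMachine()
    sm.enqueue("CONFIGURE", 400)   # e.g. >400 channels
    sm.enqueue("ACQUIRE", 20000)   # e.g. 20 kHz/channel for 1 s
    sm.enqueue("STOP")
    sm.run()
```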

  10. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    Science.gov (United States)

    Maly, K.

    1998-01-01

    Monitoring is an essential process to observe and improve the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during their execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and could be distributed in various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve the application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system, which is a large-scale distributed system for collaborative distance learning. The filtering mechanism represents an intrinsic component integrated
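
    The subscription-based event filtering idea can be sketched as follows; the event fields, predicates and callback API are assumed for illustration and are not the architecture's actual interfaces.

```python
# Minimal sketch of predicate-based event filtering for a distributed monitor.
# Event fields and subscription semantics are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Event:
    source: str      # component that generated the event
    kind: str        # e.g. "cpu", "latency", "error"
    value: float

class FilteringMonitor:
    def __init__(self):
        self.subscriptions = []   # list of (predicate, callback) pairs

    def subscribe(self, predicate, callback):
        self.subscriptions.append((predicate, callback))

    def publish(self, event):
        # Only events matching a subscription are forwarded, reducing traffic.
        for predicate, callback in self.subscriptions:
            if predicate(event):
                callback(event)

monitor = FilteringMonitor()
monitor.subscribe(lambda e: e.kind == "latency" and e.value > 200.0,
                  lambda e: print(f"high latency on {e.source}: {e.value} ms"))
monitor.publish(Event("node-17", "latency", 250.0))   # forwarded
monitor.publish(Event("node-17", "cpu", 0.4))         # filtered out
```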

  11. Large-scale computer networks and the future of legal knowledge-based systems

    NARCIS (Netherlands)

    Leenes, R.E.; Svensson, Jorgen S.; Hage, J.C.; Bench-Capon, T.J.M.; Cohen, M.J.; van den Herik, H.J.

    1995-01-01

    In this paper we investigate the relation between legal knowledge-based systems and large-scale computer networks such as the Internet. On the one hand, researchers of legal knowledge-based systems have claimed huge possibilities, but despite the efforts over the last twenty years, the number of

  12. A Nuclear Scale System Based on LabVIEW

    International Nuclear Information System (INIS)

    Liu Shixing; Gu Qindong

    2009-01-01

    Nuclear mass scales measure the weight of materials, which absorb and attenuate nuclear radiation as a low-energy γ-ray beam passes through them; they are non-contact continuous measurement devices with a simple structure and reliable operation. LabVIEW, a graphical programming language, is standard software for data acquisition and instrument control. Based on the measuring principle of the nuclear mass scale system, monitoring software for the nuclear scale system is designed in the LabVIEW programming environment. The software architecture is mainly composed of three basic modules: the monitoring software, databases and Web services. It achieves measurement data acquisition, status monitoring, and data management and has networking functions. (authors)
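
    The measuring principle can be illustrated with a short sketch based on the Beer-Lambert attenuation law; the attenuation coefficient and count rates below are made-up example values, not calibration data for a real nuclear scale.

```python
# Illustrative sketch of the attenuation principle behind a nuclear mass scale:
# for a gamma beam, I = I0 * exp(-mu_m * sigma), where mu_m is the mass
# attenuation coefficient (cm^2/g) and sigma the area density (g/cm^2).
# The coefficient and count rates are invented example values.
import math

def area_density(i0_counts, i_counts, mu_m):
    """Infer mass per unit area (g/cm^2) from measured count rates."""
    return math.log(i0_counts / i_counts) / mu_m

MU_M = 0.077          # assumed mass attenuation coefficient, cm^2/g
BELT_WIDTH_CM = 80.0  # assumed belt width covered by the beam

sigma = area_density(i0_counts=12000.0, i_counts=9500.0, mu_m=MU_M)
print(f"area density: {sigma:.2f} g/cm^2")
print(f"mass per cm of belt travel: {sigma * BELT_WIDTH_CM:.1f} g")
```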

  13. System Dynamics Simulation of Large-Scale Generation System for Designing Wind Power Policy in China

    Directory of Open Access Journals (Sweden)

    Linna Hou

    2015-01-01

    Full Text Available This paper focuses on the impacts of renewable energy policy on a large-scale power generation system, including thermal power, hydropower, and wind power generation. As one of the most important clean energy sources, wind energy has developed rapidly around the world. But in recent years there has been a serious waste of wind power equipment and investment in China, leading to many problems in the industry, from wind power planning to grid integration. One way of overcoming the difficulty is to analyze the influence of wind power policy on a generation system. This paper builds a system dynamics (SD) model of energy generation to simulate the results of wind energy generation policies based on a complex system, and a scenario analysis method is used to compare the effectiveness and efficiency of these policies. The case study shows that the combinations of lower portfolio goal and higher benchmark price and those of higher portfolio goal and lower benchmark price have large differences in both effectiveness and efficiency. On the other hand, the combinations of uniformly lower or higher portfolio goal and benchmark price have similar efficiency, but different effectiveness. Finally, an optimal policy combination can be chosen on the basis of policy analysis in the large-scale power system.
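
    The following toy stock-and-flow sketch illustrates the kind of feedback an SD policy simulation captures: a wind-capacity stock grows through an investment flow that responds to a portfolio goal and a benchmark price. All parameters and the model structure are invented for illustration and do not reproduce the paper's SD model.

```python
# Toy stock-and-flow sketch of a system dynamics policy simulation.
# Parameters are illustrative only; they do not reproduce the paper's model.
def simulate(portfolio_goal, benchmark_price, years=15):
    wind_gw, total_gw = 75.0, 1400.0      # initial stocks (GW), illustrative
    history = []
    for _ in range(years):
        share_gap = portfolio_goal - wind_gw / total_gw
        price_incentive = max(benchmark_price - 0.40, 0.0)   # margin above assumed cost
        investment = 1000.0 * max(share_gap, 0.0) * price_incentive  # GW/year flow
        wind_gw += investment
        total_gw += investment + 30.0     # other generation also grows each year
        history.append(wind_gw / total_gw)
    return history

for goal, price in [(0.15, 0.55), (0.25, 0.45)]:
    final_share = simulate(goal, price)[-1]
    print(f"goal={goal:.0%}, price={price} -> wind share after 15 y: {final_share:.1%}")
```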

  14. Sensemaking in a Value Based Context for Large Scale Complex Engineered Systems

    Science.gov (United States)

    Sikkandar Basha, Nazareen

    The design and the development of Large-Scale Complex Engineered Systems (LSCES) requires the involvement of multiple teams and numerous levels of the organization, and interactions with large numbers of people and interdisciplinary departments. Traditionally, requirements-driven Systems Engineering (SE) is used in the design and development of these LSCES. The requirements are used to capture the preferences of the stakeholder for the LSCES. Due to the complexity of the system, multiple levels of interactions are required to elicit the requirements of the system within the organization. Since LSCES involves people and interactions between the teams and interdisciplinary departments, it should be socio-technical in nature. The elicitation of the requirements of most large-scale system projects is subject to creep in time and cost due to the uncertainty and ambiguity of requirements during the design and development. In an organization structure, the cost and time overrun can occur at any level and iterate back and forth, thus increasing the cost and time. To avoid such creep, past research has shown that rigorous approaches such as value-based design can be used to control it. But before the rigorous approaches can be used, the decision maker should have a proper understanding of requirements creep and the state of the system when the creep occurs. Sensemaking is used to understand the state of the system when the creep occurs and to provide guidance to the decision maker. This research proposes the use of the Cynefin framework, a sensemaking framework, in the design and development of LSCES. It can aid in understanding the system and in decision making to minimize the value gap due to requirements creep by eliminating the ambiguity that occurs during design and development. A sample hierarchical organization is used to demonstrate the state of the system at the occurrence of requirements creep in terms of cost and time using the Cynefin framework. These

  15. Implicit solvers for large-scale nonlinear problems

    International Nuclear Information System (INIS)

    Keyes, David E; Reynolds, Daniel R; Woodward, Carol S

    2006-01-01

    Computational scientists are grappling with increasingly complex, multi-rate applications that couple such physical phenomena as fluid dynamics, electromagnetics, radiation transport, chemical and nuclear reactions, and wave and material propagation in inhomogeneous media. Parallel computers with large storage capacities are paving the way for high-resolution simulations of coupled problems; however, hardware improvements alone will not prove enough to enable simulations based on brute-force algorithmic approaches. To accurately capture nonlinear couplings between dynamically relevant phenomena, often while stepping over rapid adjustments to quasi-equilibria, simulation scientists are increasingly turning to implicit formulations that require a discrete nonlinear system to be solved for each time step or steady state solution. Recent advances in iterative methods have made fully implicit formulations a viable option for solution of these large-scale problems. In this paper, we overview one of the most effective iterative methods, Newton-Krylov, for nonlinear systems and point to software packages with its implementation. We illustrate the method with an example from magnetically confined plasma fusion and briefly survey other areas in which implicit methods have bestowed important advantages, such as allowing high-order temporal integration and providing a pathway to sensitivity analyses and optimization. Lastly, we overview algorithm extensions under development motivated by current SciDAC applications
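
    As a minimal illustration of the Newton-Krylov approach, the sketch below solves a small discretized nonlinear boundary-value problem with SciPy's newton_krylov; it is not one of the SciDAC software packages referred to in the abstract.

```python
# Newton-Krylov solution of a 1-D nonlinear boundary-value problem,
# u'' = exp(u), u(0) = u(1) = 0, discretized on a uniform grid.
import numpy as np
from scipy.optimize import newton_krylov

N = 100
h = 1.0 / (N + 1)

def residual(u):
    # Second-difference operator with homogeneous Dirichlet boundaries.
    d2u = np.zeros_like(u)
    d2u[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
    d2u[0] = (u[1] - 2 * u[0]) / h**2
    d2u[-1] = (u[-2] - 2 * u[-1]) / h**2
    return d2u - np.exp(u)

u0 = np.zeros(N)                       # initial guess
sol = newton_krylov(residual, u0, method="lgmres", verbose=False)
print("max |residual| at solution:", np.max(np.abs(residual(sol))))
```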

  16. Implementation of a Large Scale Control System for a High-Energy Physics Detector: The CMS Silicon Strip Tracker

    CERN Document Server

    Masetti, Lorenzo; Fischer, Peter

    2011-01-01

    Control systems for modern High-Energy Physics (HEP) detectors are large distributed software systems managing a significant data volume and implementing complex operational procedures. The control software for the LHC experiments at CERN is built on top of a commercial software used in industrial automation. However, HEP specific requirements call for extended functionalities. This thesis focuses on the design and implementation of the control system for the CMS Silicon Strip Tracker but presents some general strategies that have been applied in other contexts. Specific design solutions are developed to ensure acceptable response times and to provide the operator with an effective summary of the status of the devices. Detector safety is guaranteed by proper configuration of independent hardware systems. A software protection mechanism is used to avoid the widespread intervention of the hardware safety and to inhibit dangerous commands. A wizard approach allows non expert operators to recover error situations...

  17. A Dynamic Optimization Strategy for the Operation of Large Scale Seawater Reverses Osmosis System

    Directory of Open Access Journals (Sweden)

    Aipeng Jiang

    2014-01-01

    Full Text Available In this work, an efficient strategy was proposed for the solution of the dynamic model of a SWRO system. Since the dynamic model is formulated as a set of differential-algebraic equations, simultaneous strategies based on collocation on finite elements were used to transform the DAOP into a large-scale nonlinear programming problem named Opt2. Then, simulation of the RO process and storage tanks was carried out element by element and step by step with fixed control variables. All the obtained values of these variables were then used as the initial values for the optimal solution of the SWRO system. Finally, in order to accelerate the computation and at the same time keep sufficient accuracy in the solution of Opt2, a simple but efficient finite element refinement rule was used to reduce the scale of Opt2. The proposed strategy was applied to a large-scale SWRO system with 8 RO plants and 4 storage tanks as a case study. Computing results show that the proposed strategy is quite effective for optimal operation of the large-scale SWRO system; the optimization problem can be successfully solved within tens of iterations and several minutes when the load and other operating parameters fluctuate.

  18. The Convergence of High Performance Computing and Large Scale Data Analytics

    Science.gov (United States)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.
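
    The idea of a relational spatiotemporal index layered over storage-proximal data can be sketched as follows; the table schema, variable name and paths are hypothetical.

```python
# Sketch of a relational spatiotemporal index over data chunks stored in a
# distributed file system: each row maps a (variable, time range, bounding box)
# chunk to its storage location so queries resolve quickly to file locations.
# Schema and values are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE chunk_index (
        variable TEXT, t_start TEXT, t_end TEXT,
        lat_min REAL, lat_max REAL, lon_min REAL, lon_max REAL,
        hdfs_path TEXT, byte_offset INTEGER
    )""")
conn.execute("INSERT INTO chunk_index VALUES "
             "('T2M', '2015-07-01', '2015-07-31', 30, 40, -100, -90,"
             " '/merra/T2M/2015/07.nc4', 0)")
conn.commit()

# Query: which chunks intersect a region and date of interest?
rows = conn.execute("""
    SELECT hdfs_path, byte_offset FROM chunk_index
    WHERE variable = 'T2M'
      AND t_start <= '2015-07-15' AND t_end >= '2015-07-15'
      AND lat_max >= 32 AND lat_min <= 38
      AND lon_max >= -98 AND lon_min <= -92
""").fetchall()
print(rows)
```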

  19. Large-scale production of lentiviral vector in a closed system hollow fiber bioreactor

    Directory of Open Access Journals (Sweden)

    Jonathan Sheu

    Full Text Available Lentiviral vectors are widely used in the field of gene therapy as an effective method for permanent gene delivery. While current methods of producing small-scale vector batches for research purposes depend largely on culture flasks, the emergence and popularity of lentiviral vectors in translational, preclinical and clinical research has demanded their production on a much larger scale, a task that can be difficult to manage with the numbers of producer cell culture flasks required for large volumes of vector. To generate a large-scale, partially closed-system method for the manufacturing of clinical grade lentiviral vector suitable for the generation of induced pluripotent stem cells (iPSCs), we developed a method employing a hollow fiber bioreactor traditionally used for cell expansion. We have demonstrated the growth, transfection, and vector-producing capability of 293T producer cells in this system. Vector particle RNA titers after subsequent vector concentration yielded values comparable to lentiviral iPSC induction vector batches produced using traditional culture methods in 225 cm2 flasks (T225s) and in 10-layer cell factories (CF10s), while yielding a volume nearly 145 times larger than the yield from a T225 flask and nearly three times larger than the yield from a CF10. Employing a closed-system hollow fiber bioreactor for vector production offers the possibility of manufacturing large quantities of gene therapy vector while minimizing reagent usage, equipment footprint, and open-system manipulation.

  20. Distributed large-scale dimensional metrology new insights

    CERN Document Server

    Franceschini, Fiorenzo; Maisano, Domenico

    2011-01-01

    Focuses on the latest insights into and challenges of distributed large scale dimensional metrology Enables practitioners to study distributed large scale dimensional metrology independently Includes specific examples of the development of new system prototypes

  1. Software engineering principles applied to large healthcare information systems--a case report.

    Science.gov (United States)

    Nardon, Fabiane Bizinella; de A Moura, Lincoln

    2007-01-01

    São Paulo is the largest city in Brazil and one of the largest cities in the world. In 2004, the São Paulo City Department of Health decided to implement a Healthcare Information System to support managing healthcare services and provide an ambulatory health record. The resulting information system is one of the largest public healthcare information systems ever built, with more than 2 million lines of code. Although statistics show that most software projects fail, and the risks for the São Paulo initiative were enormous, the information system was completed on time and on budget. In this paper, we discuss the software engineering principles adopted that allowed the project's goals to be accomplished, hoping that sharing the experience of this project will help other healthcare information system initiatives to succeed.

  2. A large scale software system for simulation and design optimization of mechanical systems

    Science.gov (United States)

    Dopker, Bernhard; Haug, Edward J.

    1989-01-01

    The concept of an advanced integrated, networked simulation and design system is outlined. Such an advanced system can be developed utilizing existing codes without compromising the integrity and functionality of the system. An example has been used to demonstrate the applicability of the concept of the integrated system outlined here. The development of an integrated system can be done incrementally. Initial capabilities can be developed and implemented without having a detailed design of the global system. Only a conceptual global system must exist. For a fully integrated, user friendly design system, further research is needed in the areas of engineering data bases, distributed data bases, and advanced user interface design.

  3. Simulation software support (S3) system a software testing and debugging tool

    International Nuclear Information System (INIS)

    Burgess, D.C.; Mahjouri, F.S.

    1990-01-01

    The largest percentage of technical effort in the software development process is accounted for by debugging and testing. It is not unusual for a software development organization to spend over 50% of the total project effort on testing. In the extreme, testing of human-rated software (e.g., nuclear reactor monitoring, training simulators) can cost three to five times as much as all other software engineering steps combined. The Simulation Software Support (S3) System, developed by the Link-Miles Simulation Corporation, is ideally suited for real-time simulation applications which involve a large database with models programmed in FORTRAN. This paper will focus on the testing elements of the S3 system. In this paper, system support software utilities are provided which enable the loading and execution of modules in the development environment. These elements include the Linking/Loader (LLD) for dynamically linking program modules and loading them into memory, and the interactive executive (IEXEC) for controlling the execution of the modules. Features of the Interactive Symbolic Debugger (SD) and the Real Time Executive (RTEXEC) to support unit and integrated testing will also be explored.

  4. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  5. Multi-Scale Three-Dimensional Variational Data Assimilation System for Coastal Ocean Prediction

    Science.gov (United States)

    Li, Zhijin; Chao, Yi; Li, P. Peggy

    2012-01-01

    A multi-scale three-dimensional variational data assimilation system (MS-3DVAR) has been formulated and the associated software system has been developed for improving high-resolution coastal ocean prediction. This system helps improve coastal ocean prediction skill, and has been used in support of operational coastal ocean forecasting systems and field experiments. The system has been developed to improve the capability of data assimilation for assimilating, simultaneously and effectively, sparse vertical profiles and high-resolution remote sensing surface measurements into coastal ocean models, as well as constraining model biases. In this system, the cost function is decomposed into two separate units for the large- and small-scale components, respectively. As such, data assimilation is implemented sequentially from large to small scales, the background error covariance is constructed to be scale-dependent, and a scale-dependent dynamic balance is incorporated. This scheme then allows effective constraining large scales and model bias through assimilating sparse vertical profiles, and small scales through assimilating high-resolution surface measurements. This MS-3DVAR enhances the capability of the traditional 3DVAR for assimilating highly heterogeneously distributed observations, such as along-track satellite altimetry data, and particularly maximizing the extraction of information from limited numbers of vertical profile observations.
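
    Schematically, the two-step decomposition can be written in standard incremental 3DVAR notation as below; the symbols are assumed here for illustration, and the paper's exact operators, covariances and balance constraints differ.

```latex
% Schematic two-step cost functions in standard incremental 3DVAR notation
% (assumed here; not the paper's exact formulation).
% Large-scale step, with scale-dependent background covariance B_L and innovation d:
J_L(\delta x_L) = \tfrac12\,\delta x_L^{\mathsf T} B_L^{-1}\,\delta x_L
  + \tfrac12\,(H\,\delta x_L - d)^{\mathsf T} R^{-1} (H\,\delta x_L - d)
% Small-scale step, assimilating the residual innovation d_S = d - H\,\delta x_L^{a}:
J_S(\delta x_S) = \tfrac12\,\delta x_S^{\mathsf T} B_S^{-1}\,\delta x_S
  + \tfrac12\,(H\,\delta x_S - d_S)^{\mathsf T} R^{-1} (H\,\delta x_S - d_S)
```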

  6. Finite-Time Stability of Large-Scale Systems with Interval Time-Varying Delay in Interconnection

    Directory of Open Access Journals (Sweden)

    T. La-inchua

    2017-01-01

    Full Text Available We investigate finite-time stability of a class of nonlinear large-scale systems with interval time-varying delays in interconnection. Time-delay functions are continuous but not necessarily differentiable. Based on Lyapunov stability theory and new integral bounding technique, finite-time stability of large-scale systems with interval time-varying delays in interconnection is derived. The finite-time stability criteria are delays-dependent and are given in terms of linear matrix inequalities which can be solved by various available algorithms. Numerical examples are given to illustrate effectiveness of the proposed method.
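
    For reference, one common form of the finite-time stability notion underlying such criteria is sketched below in assumed notation (often stated with a weighting matrix); the paper's delay-dependent conditions build on a more general version of this definition.

```latex
% Finite-time stability w.r.t. (c_1, c_2, T), with 0 < c_1 < c_2 and delay bound \tau:
% bounded initial conditions imply a bounded state on the finite horizon [0, T].
\sup_{-\tau \le s \le 0} x^{\mathsf T}(s)\,x(s) \le c_1
\;\Longrightarrow\;
x^{\mathsf T}(t)\,x(t) < c_2, \qquad \forall\, t \in [0, T].
```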

  7. Automated software configuration in the MONSOON system

    Science.gov (United States)

    Daly, Philip N.; Buchholz, Nick C.; Moore, Peter C.

    2004-09-01

    MONSOON is the next generation OUV-IR controller project being developed at NOAO. The design is flexible, emphasizing code re-use, maintainability and scalability as key factors. The software needs to support widely divergent detector systems ranging from multi-chip mosaics (for LSST, QUOTA, ODI and NEWFIRM) down to large single or multi-detector laboratory development systems. In order for this flexibility to be effective and safe, the software must be able to configure itself to the requirements of the attached detector system at startup. The basic building block of all MONSOON systems is the PAN-DHE pair which make up a single data acquisition node. In this paper we discuss the software solutions used in the automatic PAN configuration system.

  8. Particle physics and polyedra proximity calculation for hazard simulations in large-scale industrial plants

    Science.gov (United States)

    Plebe, Alice; Grasso, Giorgio

    2016-12-01

    This paper describes a system developed for the simulation of flames inside an open-source 3D computer graphics package, Blender, with the aim of analyzing, in virtual reality, scenarios of hazards in large-scale industrial plants. The advantages of Blender are its ability to render the very complex structure of large industrial plants at high resolution, and its embedded physics engine based on smoothed particle hydrodynamics. This particle system is used to evolve a simulated fire. The interaction of this fire with the components of the plant is computed using polyhedron separation distances, adopting a Voronoi-based strategy that optimizes the number of feature distance computations. Results on a real oil and gas refining plant are presented.

  9. Hierarchical, decentralized control system for large-scale smart-structures

    International Nuclear Information System (INIS)

    Algermissen, Stephan; Fröhlich, Tim; Monner, Hans Peter

    2014-01-01

    Active control of sound and vibration has gained much attention in all kinds of industries in the past decade. Future prospects for maximizing airline passenger comfort are especially promising. The objectives of recent research projects in this area are the reduction of noise transmission through thin walled structures such as fuselages, linings or interior elements. Besides different external noise sources, such as the turbulent boundary layer, rotor or jet noise, the actuator and sensor placement as well as different control concepts are addressed. Mostly, the work is focused on a single panel or section of the fuselage, neglecting the fact that for effective noise reduction the entire fuselage has to be taken into account. Nevertheless, extending the scope of an active system from a single panel to the entire fuselage increases the effort for control hardware dramatically. This paper presents a control concept for large structures using distributed control nodes. Each node has the capability to execute a vibration or noise controller for a specific part or section of the fuselage. For maintenance, controller tuning or performance measurement, all nodes are connected to a host computer via Universal Serial Bus (USB). This topology allows a partitioning and distributing of tasks. The nodes execute the low-level control functions. High-level tasks like maintenance, system identification and control synthesis are operated by the host using streamed data from the nodes. By choosing low-price nodes, a very cost effective way of implementing an active system for large structures is realized. Besides the system identification and controller synthesis on the host computer, a detailed view on the hardware and software concept for the nodes is given. Finally, the results of an experimental test of a system running a robust vibration controller at an active panel demonstrator are shown. (paper)

  10. Research and development of system to utilize photovoltaic energy. Study on large-scale PV power supply system; Taiyoko hatsuden riyo system no kenkyu kaihatsu. Taiyo energy kyokyu system no chosa kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    Tatsuta, M [New Energy and Industrial Technology Development Organization, Tokyo (Japan)

    1994-12-01

    This paper reports the results of a fiscal 1994 study on large-scale PV power supply systems. (1) On the optimization of large-scale systems, a conceptual design of a model system was carried out, assuming a large-scale integrated PV power generation system in a desert area. As a result, a pair of 250 kW generation systems was designed as the minimum constituent power unit. Its frame and construction method were designed considering weather conditions in inland China. (2) On the optimization of large-scale transmission systems, the following options for transmitting PV-generated power were studied: AC aerial transmission, DC aerial transmission, superconducting transmission, hydrogen gas pipelines, and LH2 tanker transport. (3) On the influence of large-scale systems, it was estimated that emission control can be expected by substituting PV power generation for coal-fired power generation, that no significant negative influence on the natural environment is anticipated, and that a favorable economic effect on the social environment is expected. 4 tabs.

  11. Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems

    Science.gov (United States)

    Koch, Patrick Nathan

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration to facilitate concurrent system and subsystem design exploration, for the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) Hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts, and allowing integration of subproblems for system synthesis, (2) Statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration, and (3) Noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method developed and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.

  12. Solving large scale unit dilemma in electricity system by applying commutative law

    Science.gov (United States)

    Legino, Supriadi; Arianto, Rakhmat

    2018-03-01

    The conventional system, pooling resources from large centralized power plants interconnected as a network, provides many advantages compared to isolated systems, including optimized efficiency and reliability. However, such large plants need huge capital. In addition, more problems have emerged to hinder the construction of big power plants as well as their associated transmission lines. By applying the commutative law of mathematics, ab = ba for all a, b ∈ ℝ, the problems associated with the conventional system depicted above can be reduced. The idea of having small units but many power plants, namely “Listrik Kerakyatan,” abbreviated as LK, provides both social and environmental benefits that can be capitalized on under proper assumptions. This study compares the costs and benefits of LK to those of the conventional system, using a simulation method to show that LK offers an alternative solution to many problems associated with the large system. The commutative law of algebra can be used as a simple mathematical model to analyze whether the LK system, as eco-friendly distributed generation, can be applied to solve various problems associated with a large-scale conventional system. The simulation results show that LK provides more value if its plants operate for less than 11 hours as peaker or load-following power plants to improve the load curve balance of the power system. The results also indicate that the investment cost of LK plants should be optimized in order to minimize the plant investment cost. This study indicates that the economies-of-scale principle does not always apply, particularly if the share of intangible costs and benefits is relatively high.

  13. Adaptation of a software development methodology to the implementation of a large-scale data acquisition and control system. [for Deep Space Network

    Science.gov (United States)

    Madrid, G. A.; Westmoreland, P. T.

    1983-01-01

    A progress report is presented on a program to upgrade the existing NASA Deep Space Network in terms of a redesigned computer-controlled data acquisition system for channelling tracking, telemetry, and command data between a California-based control center and three signal processing centers in Australia, California, and Spain. The methodology for the improvements is oriented towards single subsystem development with consideration for a multi-system and multi-subsystem network of operational software. Details of the existing hardware configurations and data transmission links are provided. The program methodology includes data flow design, interface design and coordination, incremental capability availability, increased inter-subsystem developmental synthesis and testing, system and network level synthesis and testing, and system verification and validation. The software has been implemented thus far to a 65 percent completion level, and the methodology being used to effect the changes, which will permit enhanced tracking and communication with spacecraft, has been concluded to feature effective techniques.

  14. Compensating active power imbalances in power system with large-scale wind power penetration

    DEFF Research Database (Denmark)

    Basit, Abdul; Hansen, Anca Daniela; Altin, Müfit

    2016-01-01

    Large-scale wind power penetration can affect the supply continuity in the power system. This is a matter of high priority to investigate, as more regulating reserves and specified control strategies for generation control are required in the future power system with even more high wind power penetrat...

  15. Assessment of present and future large-scale semiconductor detector systems

    International Nuclear Information System (INIS)

    Spieler, H.G.; Haller, E.E.

    1984-11-01

    The performance of large-scale semiconductor detector systems is assessed with respect to their theoretical potential and to the practical limitations imposed by processing techniques, readout electronics and radiation damage. In addition to devices which detect reaction products directly, the analysis includes photodetectors for scintillator arrays. Beyond present technology we also examine currently evolving structures and techniques which show potential for producing practical devices in the foreseeable future

  16. Next Generation Cloud-based Science Data Systems and Their Implications on Data and Software Stewardship, Preservation, and Provenance

    Science.gov (United States)

    Hua, H.; Manipon, G.; Starch, M.

    2017-12-01

    NASA's upcoming missions are expected to generate data volumes at least an order of magnitude larger than current missions. A significant increase in data processing, data rates, data volumes, and long-term data archive capabilities is needed. Consequently, new challenges are emerging that impact traditional data and software management approaches. At large scales, next generation science data systems are exploring the move onto cloud computing paradigms to support these increased needs. New implications such as costs, data movement, collocation of data systems & archives, and moving processing closer to the data, may result in changes to the stewardship, preservation, and provenance of science data and software. With more science data systems being on-boarded onto cloud computing facilities, we can expect more Earth science data records to be both generated and kept in the cloud. But at large scales, the cost of processing and storing global data may impact architectural and system designs. Data systems will trade the cost of keeping data in the cloud against data life-cycle approaches that move "colder" data back to traditional on-premise facilities. How will this impact data citation and processing software stewardship? What are the impacts of cloud-based on-demand processing and its effect on reproducibility and provenance? Similarly, with more science processing software being moved onto cloud, virtual machine, and container-based approaches, more opportunities arise for improved stewardship and preservation. But will the science community trust data reprocessed years or decades later? We will also explore emerging questions of the stewardship of the science data system software that is generating the science data records both during and after the life of the mission.

  17. Creating Large Scale Database Servers

    International Nuclear Information System (INIS)

    Becla, Jacek

    2001-01-01

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region

  18. Creating Large Scale Database Servers

    Energy Technology Data Exchange (ETDEWEB)

    Becla, Jacek

    2001-12-14

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region.

  19. Improved decomposition–coordination and discrete differential dynamic programming for optimization of large-scale hydropower system

    International Nuclear Information System (INIS)

    Li, Chunlong; Zhou, Jianzhong; Ouyang, Shuo; Ding, Xiaoling; Chen, Lu

    2014-01-01

    Highlights: • Optimization of large-scale hydropower system in the Yangtze River basin. • Improved decomposition–coordination and discrete differential dynamic programming. • Generating initial solution randomly to reduce generation time. • Proposing relative coefficient for more power generation. • Proposing adaptive bias corridor technology to enhance convergence speed. - Abstract: With the construction of major hydro plants, more and more large-scale hydropower systems are taking shape gradually, which brings up a challenge to optimize these systems. Optimization of large-scale hydropower system (OLHS), which is to determine water discharges or water levels of overall hydro plants for maximizing total power generation when subjecting to lots of constrains, is a high dimensional, nonlinear and coupling complex problem. In order to solve the OLHS problem effectively, an improved decomposition–coordination and discrete differential dynamic programming (IDC–DDDP) method is proposed in this paper. A strategy that initial solution is generated randomly is adopted to reduce generation time. Meanwhile, a relative coefficient based on maximum output capacity is proposed for more power generation. Moreover, an adaptive bias corridor technology is proposed to enhance convergence speed. The proposed method is applied to long-term optimal dispatches of large-scale hydropower system (LHS) in the Yangtze River basin. Compared to other methods, IDC–DDDP has competitive performances in not only total power generation but also convergence speed, which provides a new method to solve the OLHS problem
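
    As background for the dynamic-programming component, the sketch below shows a plain backward recursion over a discretized storage grid for a single toy reservoir; inflows, the power function and the grids are invented, and the paper's IDC-DDDP adds decomposition-coordination across plants and corridor refinement on top of such a recursion.

```python
# Backward dynamic-programming sketch for a single toy reservoir: choose
# end-of-stage storage levels from a discrete grid to maximize total output.
# All numbers and the power function are invented for illustration.
import numpy as np

storages = np.linspace(50.0, 100.0, 11)       # feasible storage grid (10^6 m3)
inflows = [20.0, 35.0, 15.0, 10.0]            # stage inflows (10^6 m3)

def stage_power(s_begin, s_end, inflow):
    release = s_begin + inflow - s_end
    if release < 0:                           # infeasible transition
        return -np.inf
    head = 0.5 * (s_begin + s_end) / 10.0     # crude head proxy
    return 9.81 * 0.85 * release * head / 100 # toy power-production value

n_stage = len(inflows)
value = np.zeros((n_stage + 1, len(storages)))        # value-to-go table
policy = np.zeros((n_stage, len(storages)), dtype=int)

for t in range(n_stage - 1, -1, -1):
    for i, s in enumerate(storages):
        gains = [stage_power(s, s2, inflows[t]) + value[t + 1, j]
                 for j, s2 in enumerate(storages)]
        policy[t, i] = int(np.argmax(gains))
        value[t, i] = max(gains)

i = 5                                          # start from the middle storage level
print("optimal total (toy units): %.1f" % value[0, i])
for t in range(n_stage):
    i = policy[t, i]
    print(f"stage {t}: end storage {storages[i]:.0f}")
```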

  20. Economic Model Predictive Control for Large-Scale and Distributed Energy Systems

    DEFF Research Database (Denmark)

    Standardi, Laura

    In this thesis, we consider control strategies for large and distributed energy systems that are important for the implementation of smart grid technologies. An electrical grid has to ensure reliability and avoid long-term interruptions in the power supply. Moreover, the share of Renewable Energy Sources (RESs) in the smart grids is increasing. These energy sources bring uncertainty to the production due to their fluctuations. Hence, smart grids need suitable control systems that are able to continuously balance power production and consumption. We apply the Economic Model Predictive Control (EMPC) strategy to optimise the economic performances of the energy systems and to balance the power production and consumption. In the case of large-scale energy systems, the electrical grid connects a high number of power units. Because of this, the related control problem involves a high number of variables...

  1. Automated Cryocooler Monitor and Control System Software

    Science.gov (United States)

    Britchcliffe, Michael J.; Conroy, Bruce L.; Anderson, Paul E.; Wilson, Ahmad

    2011-01-01

    This software is used in an automated cryogenic control system developed to monitor and control the operation of small-scale cryocoolers. The system was designed to automate the cryogenically cooled low-noise amplifier system described in "Automated Cryocooler Monitor and Control System" (NPO-47246), NASA Tech Briefs, Vol. 35, No. 5 (May 2011), page 7a. The software contains algorithms necessary to convert non-linear output voltages from the cryogenic diode-type thermometers and vacuum pressure and helium pressure sensors, to temperature and pressure units. The control function algorithms use the monitor data to control the cooler power, vacuum solenoid, vacuum pump, and electrical warm-up heaters. The control algorithms are based on a rule-based system that activates the required device based on the operating mode. The external interface is Web-based. It acts as a Web server, providing pages for monitor, control, and configuration. No client software from the external user is required.
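
    The kind of non-linear sensor conversion the monitor algorithms perform can be illustrated by interpolating a calibration table; the voltage-temperature pairs below are placeholders, not a real diode calibration curve.

```python
# Illustrative conversion of a cryogenic diode thermometer voltage to
# temperature by interpolating a calibration table. Calibration points are
# made-up placeholders, not an actual sensor curve.
import numpy as np

# (voltage V, temperature K) calibration pairs; voltage falls as T rises
CAL_V = np.array([1.62, 1.35, 1.10, 1.02, 0.95, 0.55])
CAL_T = np.array([4.2, 20.0, 50.0, 77.0, 100.0, 300.0])

def diode_temperature(voltage):
    # np.interp needs increasing x, so interpolate on the reversed arrays
    return float(np.interp(voltage, CAL_V[::-1], CAL_T[::-1]))

print(f"{diode_temperature(1.05):.1f} K")   # a reading between the 50 K and 77 K points
```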

  2. Algorithm 896: LSA: Algorithms for Large-Scale Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

    Roč. 36, č. 3 (2009), 16-1-16-29 ISSN 0098-3500 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords: algorithms * design * large-scale optimization * large-scale nonsmooth optimization * large-scale nonlinear least squares * large-scale nonlinear minimax * large-scale systems of nonlinear equations * sparse problems * partially separable problems * limited-memory methods * discrete Newton methods * quasi-Newton methods * primal interior-point methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.904, year: 2009

  3. Energy System Analysis of Large-Scale Integration of Wind Power

    International Nuclear Information System (INIS)

    Lund, Henrik

    2003-11-01

    The paper presents the results of two research projects conducted by Aalborg University and financed by the Danish Energy Research Programme. Both projects include the development of models and system analysis with focus on large-scale integration of wind power into different energy systems. Market reactions and the ability to exploit exchange on the international market for electricity by locating exports in hours of high prices are included in the analyses. This paper focuses on results which are valid for energy systems in general. The paper presents the ability of different energy systems and regulation strategies to integrate wind power. The ability is expressed by three factors: one is the degree of excess electricity production caused by fluctuations in wind and CHP heat demands; another is the ability to utilise wind power to reduce CO2 emissions in the system; and the third is the ability to benefit from the exchange of electricity on the market. Energy systems and regulation strategies are analysed in the range of a wind power input from 0 to 100% of the electricity demand. Based on the Danish energy system, in which 50 per cent of the electricity demand is produced in CHP, a number of future energy systems with CO2 reduction potentials are analysed, i.e. systems with more CHP, systems using electricity for transportation (battery or hydrogen vehicles) and systems with fuel-cell technologies. For the present and such potential future energy systems, different regulation strategies have been analysed, i.e. the inclusion of small CHP plants into the regulation task of electricity balancing and grid stability, and investments in electric heating, heat pumps and heat storage capacity. Also the potential of energy management has been analysed. The results of the analyses make it possible to compare short-term and long-term potentials of different strategies of large-scale integration of wind power.

  4. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ghattas, Omar [The University of Texas at Austin

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to- observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  5. Model Predictive Control for Flexible Power Consumption of Large-Scale Refrigeration Systems

    DEFF Research Database (Denmark)

    Shafiei, Seyed Ehsan; Stoustrup, Jakob; Rasmussen, Henrik

    2014-01-01

    A model predictive control (MPC) scheme is introduced to directly control the electrical power consumption of large-scale refrigeration systems. Deviation of the consumption from its baseline corresponds to the storing and delivering of thermal energy. By virtue of such correspondence...
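
    A minimal linear sketch of the idea, shifting consumption to cheap hours while keeping a cold-store temperature within bounds, is shown below; the plant model, coefficients and price profile are illustrative assumptions rather than the paper's refrigeration model.

```python
# Sketch of shifting refrigeration power to cheap hours with a linear program
# over one prediction horizon (a receding-horizon MPC would re-solve this each
# step). Plant model, coefficients and prices are assumed toy values.
import numpy as np
from scipy.optimize import linprog

N, dt = 24, 1.0                                         # 24 one-hour steps
price = np.array([0.2] * 7 + [0.5] * 10 + [0.3] * 7)    # EUR/kWh (assumed)
COP, C = 2.5, 40.0                                      # cooling COP, thermal capacity (kWh/K)
Q_load = 15.0                                           # heat load into the cold store (kW)
T0, T_min, T_max, P_max = -20.0, -24.0, -18.0, 25.0

# Temperature model: T[k+1] = T0 + (dt/C)*(k+1)*Q_load - (COP*dt/C)*sum_{j<=k} P[j]
gain = COP * dt / C
heat = Q_load * dt / C * np.arange(1, N + 1)
cumsum = np.tril(np.ones((N, N)))                       # cumulative-sum operator

A_ub = np.vstack([-gain * cumsum,                       # enforce T[k+1] <= T_max
                  +gain * cumsum])                      # enforce T[k+1] >= T_min
b_ub = np.concatenate([T_max - T0 - heat,
                       T0 - T_min + heat])

res = linprog(c=price * dt, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0.0, P_max)] * N, method="highs")
P = res.x                                               # optimal power profile (kW)
T = T0 + heat - gain * (cumsum @ P)
print(f"cost: {price * dt @ P:.2f} EUR, temperature range: {T.min():.1f} .. {T.max():.1f} C")
```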

  6. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    Science.gov (United States)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and to use them for running large-scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications, and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models, and run them on volunteer computers. Users can easily enable their websites so that visitors can volunteer their computer resources to contribute to running advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds without installing any software. The framework distributes the model simulation to thousands of nodes as units of small spatial and computational size. A relational database system is utilized for managing data connections and queues for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large-scale hydrological simulations and model runs in an open and integrated environment.
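
    The server-side task queue behind such a platform can be sketched with a relational table of work units that are leased to volunteer browsers and marked done when results return; the schema and functions below are hypothetical.

```python
# Server-side sketch of a volunteer-computing task queue backed by a
# relational table. Schema, payloads and results are hypothetical.
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE tasks (
    id INTEGER PRIMARY KEY, payload TEXT,
    status TEXT DEFAULT 'pending',          -- pending / leased / done
    leased_at REAL, result TEXT)""")
db.executemany("INSERT INTO tasks (payload) VALUES (?)",
               [(f"subbasin-{i}",) for i in range(5)])
db.commit()

def lease_task():
    """Hand the next pending work unit to a volunteer browser."""
    row = db.execute("SELECT id, payload FROM tasks "
                     "WHERE status = 'pending' LIMIT 1").fetchone()
    if row:
        db.execute("UPDATE tasks SET status = 'leased', leased_at = ? WHERE id = ?",
                   (time.time(), row[0]))
        db.commit()
    return row

def submit_result(task_id, result):
    db.execute("UPDATE tasks SET status = 'done', result = ? WHERE id = ?",
               (result, task_id))
    db.commit()

task = lease_task()
submit_result(task[0], "peak discharge = 132 m3/s")   # made-up model output
print(db.execute("SELECT status, COUNT(*) FROM tasks GROUP BY status").fetchall())
```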

  7. The Ragnarok Architectural Software Configuration Management Model

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    1999-01-01

    The architecture is the fundamental framework for designing and implementing large-scale software, and the ability to trace and control its evolution is essential. However, many traditional software configuration management tools view 'software' merely as a set of files, not as an architecture... This introduces an unfortunate impedance mismatch between the design domain (architecture level) and the configuration management domain (file level). This paper presents a software configuration management model that allows tight version control and configuration management of the architecture of a software system...

  8. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    Science.gov (United States)

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

    The increasing resolution of models in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation outputs that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the

  9. Emerging large-scale solar heating applications

    International Nuclear Information System (INIS)

    Wong, W.P.; McClung, J.L.

    2009-01-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  10. Emerging large-scale solar heating applications

    Energy Technology Data Exchange (ETDEWEB)

    Wong, W.P.; McClung, J.L. [Science Applications International Corporation (SAIC Canada), Ottawa, Ontario (Canada)

    2009-07-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  11. Large-scale visualization system for grid environment

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2007-01-01

    The Center for Computational Science and E-systems of the Japan Atomic Energy Agency (CCSE/JAEA) has been conducting R&D on distributed computing (grid computing) environments: Seamless Thinking Aid (STA), Information Technology Based Laboratory (ITBL) and Atomic Energy Grid InfraStructure (AEGIS). In this R&D, we have developed visualization technology suitable for distributed computing environments. As one of the visualization tools, we have developed the Parallel Support Toolkit (PST), which can execute the visualization process in parallel on a computer. Now, we have improved PST so that it can be executed simultaneously on multiple heterogeneous computers using the Seamless Thinking Aid Message Passing Interface (STAMPI). STAMPI, which we developed in this R&D, is an MPI library executable in a heterogeneous computing environment. This improvement realizes the visualization of extremely large-scale data and enables more efficient visualization processes in a distributed computing environment. (author)

  12. The Utility of Open Source Software in Military Systems

    National Research Council Canada - National Science Library

    Esperon, Agustin I; Munoz, Jose P; Tanneau, Jean M

    2005-01-01

    .... The companies involved were THALES and GMV. The MILOS project aimed to demonstrate benefits of Open Source Software in large software based military systems, by casting off constraints inherent to traditional proprietary COTS and by taking...

  13. Tradeoffs between quality-of-control and quality-of-service in large-scale nonlinear networked control systems

    NARCIS (Netherlands)

    Borgers, D. P.; Geiselhart, R.; Heemels, W. P. M. H.

    2017-01-01

    In this paper we study input-to-state stability (ISS) of large-scale networked control systems (NCSs) in which sensors, controllers and actuators are connected via multiple (local) communication networks which operate asynchronously and independently of each other. We model the large-scale NCS as an

  14. Linux software for large topology optimization problems

    DEFF Research Database (Denmark)

    evolving product which allows a parallel solution of the PDE, it lacks the important feature that the matrix-generation part of the computation is localized to each processor. This is well known to be critical for obtaining a useful speedup on a Linux cluster, and it motivates the search for a COMSOL-like package for large topology optimization problems. One candidate for such software, developed for Linux by Sandia National Laboratories in the USA, is the Sundance system. Sundance also uses a symbolic representation of the PDE, and a scalable numerical solution is achieved by employing the underlying Trilinos...

  15. NASA Data Acquisition System Software Development for Rocket Propulsion Test Facilities

    Science.gov (United States)

    Herbert, Phillip W., Sr.; Elliot, Alex C.; Graves, Andrew R.

    2015-01-01

    Current NASA propulsion test facilities include Stennis Space Center in Mississippi, Marshall Space Flight Center in Alabama, Plum Brook Station in Ohio, and White Sands Test Facility in New Mexico. Within and across these centers, a diverse set of data acquisition systems exist with different hardware and software platforms. The NASA Data Acquisition System (NDAS) is a software suite designed to operate and control many critical aspects of rocket engine testing. The software suite combines real-time data visualization, data recording to a variety of formats, short-term and long-term acquisition system calibration capabilities, test stand configuration control, and a variety of data post-processing capabilities. Additionally, data stream conversion functions exist to translate test facility data streams to and from downstream systems, including engine customer systems. The primary design goals for NDAS are flexibility, extensibility, and modularity. Providing a common user interface for a variety of hardware platforms helps drive consistency and error reduction during testing. In addition, with an understanding that test facilities have different requirements and setups, the software is designed to be modular. One engine program may require real-time displays and data recording; others may require more complex data stream conversion, measurement filtering, or test stand configuration management. The NDAS suite allows test facilities to choose which components to use based on their specific needs. The NDAS code is primarily written in LabVIEW, a graphical, data-flow driven language. Although LabVIEW is a general-purpose programming language, large-scale software development in the language is relatively rare compared to more commonly used languages. The NDAS software suite also makes extensive use of a new, advanced development framework called the Actor Framework. The Actor Framework provides a level of code reuse and extensibility that has previously been difficult

  16. Solution approach for a large scale personnel transport system for a large company in Latin America

    Energy Technology Data Exchange (ETDEWEB)

    Garzón-Garnica, Eduardo-Arturo; Caballero-Morales, Santiago-Omar; Martínez-Flores, José-Luis

    2017-07-01

    The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico where many routes and vehicles are currently used to service 525 points. The routing system proposed can be applied to many cities in the Latin-American region. Design/methodology/approach: This system was modelled as a VRP model considering the use of real-world transit times, and the fact that routes start at the farthest point from the destination center. Experiments were performed on different sized sets of service points. As the size of the instances was increased, the performance of the heuristic method was assessed in comparison with the results of an exact algorithm, with the results of both remaining very close. When the size of the instance was full-scale and the exact algorithm took too much time to solve the problem, the heuristic algorithm provided a feasible solution. Supported by the validation with smaller-scale instances, where the difference between both solutions was close to 6%, the full-scale solution obtained with the heuristic algorithm was considered to be within that same range. Findings: The proposed modelling and solving method provided a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban distribution of the cities in Latin America is unique compared to other regions in the world. The general layout of the large cities in this region includes a small town center, usually antique, and a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. The use of a heuristic VRP combined with the results of an exact VRP allowed an improved routing plan specific to the requirements of the region to be obtained.

  17. Solution approach for a large scale personnel transport system for a large company in Latin America

    International Nuclear Information System (INIS)

    Garzón-Garnica, Eduardo-Arturo; Caballero-Morales, Santiago-Omar; Martínez-Flores, José-Luis

    2017-01-01

    The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico where many routes and vehicles are currently used to service 525 points. The routing system proposed can be applied to many cities in the Latin-American region. Design/methodology/approach: This system was modelled as a VRP model considering the use of real-world transit times, and the fact that routes start at the farthest point from the destination center. Experiments were performed on different sized sets of service points. As the size of the instances was increased, the performance of the heuristic method was assessed in comparison with the results of an exact algorithm, with the results of both remaining very close. When the size of the instance was full-scale and the exact algorithm took too much time to solve the problem, the heuristic algorithm provided a feasible solution. Supported by the validation with smaller-scale instances, where the difference between both solutions was close to 6%, the full-scale solution obtained with the heuristic algorithm was considered to be within that same range. Findings: The proposed modelling and solving method provided a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban distribution of the cities in Latin America is unique compared to other regions in the world. The general layout of the large cities in this region includes a small town center, usually antique, and a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. The use of a heuristic VRP combined with the results of an exact VRP allowed an improved routing plan specific to the requirements of the region to be obtained.

  18. Solution approach for a large scale personnel transport system for a large company in Latin America

    Directory of Open Access Journals (Sweden)

    Eduardo-Arturo Garzón-Garnica

    2017-10-01

    Full Text Available Purpose: The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico where many routes and vehicles are currently used to service 525 points. The routing system proposed can be applied to many cities in the Latin-American region. Design/methodology/approach: This system was modelled as a VRP model considering the use of real-world transit times, and the fact that routes start at the farthest point from the destination center. Experiments were performed on different sized sets of service points. As the size of the instances was increased, the performance of the heuristic method was assessed in comparison with the results of an exact algorithm, with the results of both remaining very close. When the size of the instance was full-scale and the exact algorithm took too much time to solve the problem, the heuristic algorithm provided a feasible solution. Supported by the validation with smaller-scale instances, where the difference between both solutions was close to 6%, the full-scale solution obtained with the heuristic algorithm was considered to be within that same range. Findings: The proposed modelling and solving method provided a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban distribution of the cities in Latin America is unique compared to other regions in the world. The general layout of the large cities in this region includes a small town center, usually antique, and a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. The use of a heuristic VRP combined with the results of an exact VRP allowed an improved routing plan specific to the requirements of the region to be obtained.
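
    For illustration only, the sketch below shows a greedy nearest-neighbour construction heuristic for a capacitated routing problem of the kind discussed in the three records above; it is not the authors' algorithm, and the depot, service points, demands and vehicle capacity are hypothetical.

      # Greedy construction heuristic: always visit the nearest unserved point
      # that still fits in the vehicle; return to the depot when nothing fits.
      import math

      def nearest_neighbor_routes(depot, points, demands, capacity):
          unserved = set(points)
          routes = []
          while unserved:
              route, load, current = [], 0.0, depot
              while True:
                  feasible = [p for p in unserved if load + demands[p] <= capacity]
                  if not feasible:
                      break
                  nxt = min(feasible, key=lambda p: math.dist(current, p))
                  route.append(nxt)
                  load += demands[nxt]
                  unserved.remove(nxt)
                  current = nxt
              routes.append(route)
          return routes

      depot = (0.0, 0.0)
      points = [(2, 1), (5, 4), (1, 6), (7, 2), (3, 3)]
      demands = {p: 1.0 for p in points}          # hypothetical unit demands
      print(nearest_neighbor_routes(depot, points, demands, capacity=2.0))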

  19. Software system safety

    Science.gov (United States)

    Uber, James G.

    1988-01-01

    Software itself is not hazardous, but since software and hardware share common interfaces there is an opportunity for software to create hazards. Further, these software systems are complex, and proven methods for the design, analysis, and measurement of software safety are not yet available. Some past software failures, future NASA software trends, software engineering methods, and tools and techniques for various software safety analyses are reviewed. Recommendations to NASA are made based on this review.

  20. Ergatis: a web interface and scalable software system for bioinformatics workflows

    Science.gov (United States)

    Orvis, Joshua; Crabtree, Jonathan; Galens, Kevin; Gussman, Aaron; Inman, Jason M.; Lee, Eduardo; Nampally, Sreenath; Riley, David; Sundaram, Jaideep P.; Felix, Victor; Whitty, Brett; Mahurkar, Anup; Wortman, Jennifer; White, Owen; Angiuoli, Samuel V.

    2010-01-01

    Motivation: The growth of sequence data has been accompanied by an increasing need to analyze data on distributed computer clusters. The use of these systems for routine analysis requires scalable and robust software for data management of large datasets. Software is also needed to simplify data management and make large-scale bioinformatics analysis accessible and reproducible to a wide class of target users. Results: We have developed a workflow management system named Ergatis that enables users to build, execute and monitor pipelines for computational analysis of genomics data. Ergatis contains preconfigured components and template pipelines for a number of common bioinformatics tasks such as prokaryotic genome annotation and genome comparisons. Outputs from many of these components can be loaded into a Chado relational database. Ergatis was designed to be accessible to a broad class of users and provides a user-friendly, web-based interface. Ergatis supports high-throughput batch processing on distributed compute clusters and has been used for data management in a number of genome annotation and comparative genomics projects. Availability: Ergatis is an open-source project and is freely available at http://ergatis.sourceforge.net Contact: jorvis@users.sourceforge.net PMID:20413634

  1. A new practice-driven approach to develop software in a cyber-physical system environment

    Science.gov (United States)

    Jiang, Yiping; Chen, C. L. Philip; Duan, Junwei

    2016-02-01

    Cyber-physical systems (CPS) are an emerging area which cannot work efficiently without proper software handling of the data and business logic. Software and middleware are the soul of the CPS. The software development of CPS is a critical issue because of its complexity in a large-scale realistic system. Furthermore, the object-oriented approach (OOA) is often used to develop CPS software, and it needs some improvements according to the characteristics of CPS. To develop software in a CPS environment, a new systematic approach is proposed in this paper. It comes from practice and has evolved within software companies. It consists of (A) requirement analysis in an event-oriented way, (B) architecture design in a data-oriented way, (C) detailed design and coding in an object-oriented way and (D) testing in an event-oriented way. It is a new approach based on OOA; the difference when compared with OOA is that the proposed approach has different emphases and measures in every stage. It is more in accord with the characteristics of event-driven CPS. In CPS software development, one should focus on the events more than on the functions or objects. A case study of a smart home system is designed to reveal the effectiveness of the approach. It shows that the approach is also easy to apply in practice owing to some simplifications. The running result illustrates the validity of this approach.

  2. A 3D Sphere Culture System Containing Functional Polymers for Large-Scale Human Pluripotent Stem Cell Production

    Directory of Open Access Journals (Sweden)

    Tomomi G. Otsuji

    2014-05-01

    Full Text Available Utilizing human pluripotent stem cells (hPSCs in cell-based therapy and drug discovery requires large-scale cell production. However, scaling up conventional adherent cultures presents challenges of maintaining a uniform high quality at low cost. In this regard, suspension cultures are a viable alternative, because they are scalable and do not require adhesion surfaces. 3D culture systems such as bioreactors can be exploited for large-scale production. However, the limitations of current suspension culture methods include spontaneous fusion between cell aggregates and suboptimal passaging methods by dissociation and reaggregation. 3D culture systems that dynamically stir carrier beads or cell aggregates should be refined to reduce shearing forces that damage hPSCs. Here, we report a simple 3D sphere culture system that incorporates mechanical passaging and functional polymers. This setup resolves major problems associated with suspension culture methods and dynamic stirring systems and may be optimal for applications involving large-scale hPSC production.

  3. Event management for large scale event-driven digital hardware spiking neural networks.

    Science.gov (United States)

    Caron, Louis-Charles; D'Haene, Michiel; Mailhot, Frédéric; Schrauwen, Benjamin; Rouat, Jean

    2013-09-01

    The interest in brain-like computation has led to the design of a plethora of innovative neuromorphic systems. Individually, spiking neural networks (SNNs), event-driven simulation and digital hardware neuromorphic systems get a lot of attention. Despite the popularity of event-driven SNNs in software, very few digital hardware architectures are found. This is because existing hardware solutions for event management scale badly with the number of events. This paper introduces the structured heap queue, a pipelined digital hardware data structure, and demonstrates its suitability for event management. The structured heap queue scales gracefully with the number of events, allowing the efficient implementation of large scale digital hardware event-driven SNNs. The scaling is linear for memory, logarithmic for logic resources and constant for processing time. The use of the structured heap queue is demonstrated on a field-programmable gate array (FPGA) with an image segmentation experiment and a SNN of 65,536 neurons and 513,184 synapses. Events can be processed at the rate of 1 every 7 clock cycles and a 406×158 pixel image is segmented in 200 ms. Copyright © 2013 Elsevier Ltd. All rights reserved.
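
    The following software sketch plays the same role as the hardware structured heap queue described above, a priority queue that always releases the earliest pending spike event, but it does not reproduce the FPGA design; the toy network and synaptic delays are hypothetical.

      # Min-heap event queue for event-driven spiking-network simulation.
      import heapq

      class EventQueue:
          """Min-heap keyed on event time: pop returns the earliest event."""
          def __init__(self):
              self._heap = []
              self._counter = 0          # tie-breaker for equal timestamps

          def push(self, time, neuron_id):
              heapq.heappush(self._heap, (time, self._counter, neuron_id))
              self._counter += 1

          def pop(self):
              time, _, neuron_id = heapq.heappop(self._heap)
              return time, neuron_id

          def __bool__(self):
              return bool(self._heap)

      # Toy usage: process spikes in time order, scheduling delayed synaptic events.
      queue = EventQueue()
      queue.push(0.0, 1)
      synapses = {1: [(2, 0.7)], 2: []}      # neuron -> [(target, delay_ms)]
      while queue:
          t, nid = queue.pop()
          print(f"spike from neuron {nid} at t={t:.1f} ms")
          for target, delay in synapses.get(nid, []):
              queue.push(t + delay, target)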

  4. On distributed wavefront reconstruction for large-scale adaptive optics systems.

    Science.gov (United States)

    de Visser, Cornelis C; Brunner, Elisabeth; Verhaegen, Michel

    2016-05-01

    The distributed-spline-based aberration reconstruction (D-SABRE) method is proposed for distributed wavefront reconstruction with applications to large-scale adaptive optics systems. D-SABRE decomposes the wavefront sensor domain into any number of partitions and solves a local wavefront reconstruction problem on each partition using multivariate splines. D-SABRE accuracy is within 1% of a global approach with a speedup that scales quadratically with the number of partitions. The D-SABRE is compared to the distributed cumulative reconstruction (CuRe-D) method in open-loop and closed-loop simulations using the YAO adaptive optics simulation tool. D-SABRE accuracy exceeds CuRe-D for low levels of decomposition, and D-SABRE proved to be more robust to variations in the loop gain.

  5. Understanding water delivery performance in a large-scale irrigation system in Peru

    NARCIS (Netherlands)

    Vos, J.M.C.

    2005-01-01

    During a two-year field study the performance of the water delivery was evaluated in a large-scale irrigation system on the north coast of Peru. Flow measurements were carried out along the main canals, along two secondary canals, and in two tertiary blocks in the Chancay-Lambayeque irrigation

  6. PASSIM – an open source software system for managing information in biomedical studies

    Directory of Open Access Journals (Sweden)

    Neogi Sudeshna

    2007-02-01

    Full Text Available Abstract Background One of the crucial aspects of day-to-day laboratory information management is the collection, storage and retrieval of information about research subjects and biomedical samples. An efficient link between sample data and experiment results is absolutely imperative for a successful outcome of a biomedical study. Currently available software solutions are largely limited to large-scale, expensive commercial Laboratory Information Management Systems (LIMS). Acquiring such a LIMS can indeed bring laboratory information management to a higher level, but often implies a considerable investment of time, effort and funds, which are not always available. There is a clear need for lightweight open source systems for patient and sample information management. Results We present a web-based tool for submission, management and retrieval of sample and research subject data. The system secures confidentiality by separating anonymized sample information from individuals' records. It is simple and generic, and can be customised for various biomedical studies. Information can be both entered and accessed using the same web interface. User groups and their privileges can be defined. The system is open-source and is supplied with an on-line tutorial and necessary documentation. It has proven to be successful in a large international collaborative project. Conclusion The presented system closes the gap between the need for and the availability of lightweight software solutions for managing information in biomedical studies involving human research subjects.

  7. Large scale continuous integration and delivery : Making great software better and faster

    NARCIS (Netherlands)

    Stahl, Daniel

    2017-01-01

    Since the inception of continuous integration, and later continuous delivery, the methods of producing software in the industry have changed dramatically over the last two decades. Automated, rapid and frequent compilation, integration, testing, analysis, packaging and delivery of new software

  8. Time-Efficient Cloning Attacks Identification in Large-Scale RFID Systems

    Directory of Open Access Journals (Sweden)

    Ju-min Zhao

    2017-01-01

    Full Text Available Radio Frequency Identification (RFID) is an emerging technology for the electronic labeling of objects for the purpose of automatically identifying, categorizing, locating, and tracking the objects. However, in their current form RFID systems are susceptible to cloning attacks, which seriously threaten RFID applications but are hard to prevent. Existing protocols aim at detecting whether there are cloning attacks in single-reader RFID systems. In this paper, we investigate cloning attack identification in the multireader scenario and first propose a time-efficient protocol, called the time-efficient Cloning Attacks Identification Protocol (CAIP), to identify all cloned tags in multireader RFID systems. We evaluate the performance of CAIP through extensive simulations. The results show that CAIP can identify all the cloned tags in large-scale RFID systems fairly fast with the required accuracy.

  9. Large-scale theoretical calculations in molecular science - design of a large computer system for molecular science and necessary conditions for future computers

    Energy Technology Data Exchange (ETDEWEB)

    Kashiwagi, H [Institute for Molecular Science, Okazaki, Aichi (Japan)

    1982-06-01

    A large computer system was designed and established for molecular science under the leadership of molecular scientists. Features of the computer system are an automated operation system and an open self-service system. Large-scale theoretical calculations have been performed to solve many problems in molecular science, using the computer system. Necessary conditions for future computers are discussed on the basis of this experience.

  10. Large-scale theoretical calculations in molecular science - design of a large computer system for molecular science and necessary conditions for future computers

    International Nuclear Information System (INIS)

    Kashiwagi, H.

    1982-01-01

    A large computer system was designed and established for molecular science under the leadership of molecular scientists. Features of the computer system are an automated operation system and an open self-service system. Large-scale theoretical calculations have been performed to solve many problems in molecular science, using the computer system. Necessary conditions for future computers are discussed on the basis of this experience. (orig.)

  11. Rucio - The next generation of large scale distributed system for ATLAS Data Management

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Beermann, T; Goossens, L; Lassnig, M; Nairz, A; Stewart, GA; Vigne, V; Serfon, C

    2013-01-01

    Rucio is the next-generation Distributed Data Management (DDM) system benefiting from recent advances in cloud and "Big Data" computing to address the scaling requirements of HEP experiments. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 140 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio will address these issues by relying on a conceptual data model and new technology to ensure system scalability, address new user requirements and employ a new automation framework to reduce operational overheads. We present the key concepts of Rucio, including its data organization/representation and a model of how ATLAS central group and user activities will be managed. The Rucio design, and the technology it employs, is described...

  12. Rucio - The next generation of large scale distributed system for ATLAS Data Management

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Beermann, T; Goossens, L; Lassnig, M; Nairz, A; Stewart, GA; Vigne, V; Serfon, C

    2014-01-01

    Rucio is the next-generation Distributed Data Management (DDM) system benefiting from recent advances in cloud and ”Big Data” computing to address the scaling requirements of HEP experiments. Rucio is an evolution of the ATLAS DDM system Don Quijote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 140 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio will address these issues by relying on a conceptual data model and new technology to ensure system scalability, address new user requirements and employ a new automation framework to reduce operational overheads. We present the key concepts of Rucio, including its data organization/representation and a model of how ATLAS central group and user activities will be managed. The Rucio design, and the technology it employs, is descr...

  13. SMES-UPS for large-scaled SC magnet system of LHD

    International Nuclear Information System (INIS)

    Yamada, Shuichi; Mito, T.; Chikaraishi, H.; Nishimura, A.; Kojima, H.; Nakanishi, Y.; Uede, T.; Satow, T.; Motojima, O.

    2003-01-01

    The LHD is an SC experimental fusion device of the heliotron type. Eight sets of helium compressors with a total electric power of 3.5 MW are installed in the cryogenic system. Analytical studies of the SMES-UPS for the compressors under deep voltage sags are reported in this paper. The amplitude and frequency of the voltage decrease gradually owing to the regenerating effect of the induction motors. The SMES-UPS system proposed in this report has the following functions: (1) variable frequency control, (2) regulation by ACR and AVR, and (3) rapid isolation of the loads from, and synchronous reconnection to, the grid line. We have demonstrated that SMES is useful for the large-scale cryogenic system of the experimental fusion device

  14. Model of large scale man-machine systems with an application to vessel traffic control

    NARCIS (Netherlands)

    Wewerinke, P.H.; van der Ent, W.I.; ten Hove, D.

    1989-01-01

    Mathematical models are discussed to deal with complex large-scale man-machine systems such as vessel (air, road) traffic and process control systems. Only interrelationships between subsystems are assumed. Each subsystem is controlled by a corresponding human operator (HO). Because of the

  15. Software Engineering Infrastructure in a Large Virtual Campus

    Science.gov (United States)

    Cristobal, Jesus; Merino, Jorge; Navarro, Antonio; Peralta, Miguel; Roldan, Yolanda; Silveira, Rosa Maria

    2011-01-01

    Purpose: The design, construction and deployment of a large virtual campus are a complex issue. Present virtual campuses are made of several software applications that complement e-learning platforms. In order to develop and maintain such virtual campuses, a complex software engineering infrastructure is needed. This paper aims to analyse the…

  16. How to correct long-term system externality of large scale wind power development by a capacity mechanism?

    International Nuclear Information System (INIS)

    Cepeda, Mauricio; Finon, Dominique

    2013-04-01

    This paper deals with the practical problems related to long-term security of supply in electricity markets in the presence of large-scale wind power development. The success of renewable promotion schemes adds a new dimension to ensuring long-term security of supply. It necessitates designing second-best policies to prevent large-scale wind power development from distorting long-run equilibrium prices and investments in conventional generation, in particular in peaking units. We rely upon a long-term simulation model which simulates electricity market players' investment decisions in a market regime and incorporates large-scale wind power development either in the presence of subsidised wind production or in market-driven development. We test the use of capacity mechanisms to compensate for the long-term effects of large-scale wind power development on system reliability. The first finding is that capacity mechanisms can help to reduce the social cost of large-scale wind power development in terms of a decrease in the loss-of-load probability. The second finding is that, in a market-based wind power deployment without subsidy, wind generators are penalized for insufficient contribution to the long-term reliability of the system. (authors)

  17. The Effect of Superstar Software on Hardware Sales in System Markets

    OpenAIRE

    Binken, Jeroen; Stremersch, Stefan

    2008-01-01

    Systems are composed of complementary products (e.g., video game systems are composed of the video game console and video games). Prior literature on indirect network effects argues that, in system markets, sales of the primary product (often referred to as "hardware") largely depend on the availability of complementary products (often referred to as "software"). Mathematical and empirical analyses have almost exclusively operationalized software availability as software quantity....

  18. Development of large scale wind energy conversion system; Ogata furyoku hatsuden system no kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    Takita, M [New Energy and Industrial Technology Development Organization, Tokyo (Japan)

    1994-12-01

    Described herein are the results of the FY1994 research program for the development of a large-scale wind energy conversion system. The study on technological development of key components evaluates the performance of, and confirms the reliability and applicability of, hydraulic systems, centered on those equipped with variable pitch mechanisms and the electrohydraulic servo valves that control them. The study on the blade conducts fatigue and crack-propagation tests, which show that the blades developed have high strength. The study on the speed-increasing gear conducts load tests, confirming the effects of reducing vibration and noise by modification of the gear teeth. The study on the nacelle cover conducts vibration tests to confirm its vibration characteristics, and analyzes three-dimensional vibration by the finite element method. Some components for a 500 kW commercial wind mill are fabricated, including rotor heads, variable pitch mechanisms, speed-increasing gears, yaw systems, and hydraulic control systems. The others fabricated include a remote supervisory control system for maintenance, a system to integrate the wind mill into a power system, and electrical control devices in which site conditions, such as atmospheric temperature and lightning, are taken into consideration.

  19. A top-down approach to construct execution views of a large software-intensive system

    NARCIS (Netherlands)

    Callo Arias, Trosky B.; America, Pierre; Avgeriou, Paris

    This paper presents an approach to construct execution views, which are views that describe what the software of a software-intensive system does at runtime and how it does it. The approach represents an architecture reconstruction solution based on a metamodel, a set of viewpoints, and a dynamic

  20. Algorithm of search and track of static and moving large-scale objects

    Directory of Open Access Journals (Sweden)

    Kalyaev Anatoly

    2017-01-01

    Full Text Available We suggest an algorithm for processing a sequence of images for the search and tracking of static and moving large-scale objects. A possible software implementation of the algorithm, based on multithreaded CUDA processing, is suggested. An experimental analysis of the suggested algorithm implementation is performed.

  1. Prospects for large scale electricity storage in Denmark

    DEFF Research Database (Denmark)

    Krog Ekman, Claus; Jensen, Søren Højgaard

    2010-01-01

    In a future power system with additional wind power capacity there will be an increased need for large-scale power management as well as reliable balancing and reserve capabilities. Different technologies for large-scale electricity storage provide solutions to the different challenges arising w...

  2. A Web-based Multi-user Interactive Visualization System For Large-Scale Computing Using Google Web Toolkit Technology

    Science.gov (United States)

    Weiss, R. M.; McLane, J. C.; Yuen, D. A.; Wang, S.

    2009-12-01

    We have created a web-based, interactive system for multi-user collaborative visualization of large data sets (on the order of terabytes) that allows users in geographically disparate locations to simultaneously and collectively visualize large data sets over the Internet. By leveraging asynchronous JavaScript and XML (AJAX) web development paradigms via the Google Web Toolkit (http://code.google.com/webtoolkit/), we are able to provide remote, web-based users a web portal to LCSE's (http://www.lcse.umn.edu) large-scale interactive visualization system already in place at the University of Minnesota, which provides high-resolution visualizations on the order of 15 million pixels. In the current version of our software, we have implemented a new, highly extensible back-end framework built around HTTP "server push" technology to provide a rich collaborative environment and a smooth end-user experience. Furthermore, the web application is accessible via a variety of devices including netbooks, iPhones, and other web- and javascript-enabled cell phones. New features in the current version include the ability for (1) users to launch multiple visualizations, (2) a user to invite one or more other users to view their visualization in real-time (multiple observers), (3) users to delegate control aspects of the visualization to others (multiple controllers), and (4) users to engage in collaborative chat and instant messaging with other users within the user interface of the web application. We will explain choices made regarding implementation, overall system architecture and method of operation, and the benefits of an extensible, modular design. We will also discuss future goals, features, and our plans for increasing the scalability of the system, which includes a discussion of the benefits potentially afforded us by a migration of server-side components to the Google Application Engine (http://code.google.com/appengine/).

  3. Reliable Software Development for Machine Protection Systems

    CERN Document Server

    Anderson, D; Dragu, M; Fuchsberger, K; Garnier, JC; Gorzawski, AA; Koza, M; Krol, K; Misiowiec, K; Stamos, K; Zerlauth, M

    2014-01-01

    The controls software for the Large Hadron Collider (LHC) at CERN, with more than 150 million lines of code, resides amongst the largest known code bases in the world. Industry has been applying Agile software engineering techniques for more than two decades now, and the advantages of these techniques can no longer be ignored when managing the code base for large projects within the accelerator community. Furthermore, CERN is a particular environment due to the high personnel turnover and manpower limitations, where applying Agile processes can improve both codebase management and code quality. This paper presents the successful application of the Agile software development process Scrum for machine protection systems at CERN, the quality standards and infrastructure introduced together with the Agile process, as well as the challenges encountered in adapting it to the CERN environment.

  4. A Decentralized Multivariable Robust Adaptive Voltage and Speed Regulator for Large-Scale Power Systems

    Science.gov (United States)

    Okou, Francis A.; Akhrif, Ouassima; Dessaint, Louis A.; Bouchard, Derrick

    2013-05-01

    This paper introduces a decentralized multivariable robust adaptive voltage and frequency regulator to ensure the stability of large-scale interconnected generators. Interconnection parameters (i.e. load, line and transformer parameters) are assumed to be unknown. The proposed design approach requires the reformulation of conventional power system models into a multivariable model with generator terminal voltages as state variables, and excitation and turbine valve inputs as control signals. This model, while suitable for the application of modern control methods, introduces problems with regard to current design techniques for large-scale systems. Interconnection terms, which are treated as perturbations, do not meet the common matching condition assumption. A new adaptive method for a certain class of large-scale systems is therefore introduced that does not require the matching condition. The proposed controller consists of nonlinear inputs that cancel some nonlinearities of the model. Auxiliary controls with linear and nonlinear components are used to stabilize the system. They compensate for unknown parameters of the model by updating both the nonlinear component gains and excitation parameters. The adaptation algorithms involve the sigma-modification approach for auxiliary control gains, and the projection approach for excitation parameters to prevent estimation drift. The computation of the matrix gain of the controller's linear component requires the solution of an algebraic Riccati equation and helps to solve the perturbation-mismatching problem. A realistic power system is used to assess the proposed controller's performance. The results show that both stability and transient performance are considerably improved following a severe contingency.
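
    The gain computation mentioned above rests on solving an algebraic Riccati equation; the hedged sketch below shows how such a stabilizing state-feedback gain can be obtained with SciPy for a small hypothetical system, not for the paper's power-system model.

      # Solve the continuous-time algebraic Riccati equation and form the
      # corresponding state-feedback gain u = -K x.
      import numpy as np
      from scipy.linalg import solve_continuous_are

      A = np.array([[0.0, 1.0],
                    [-2.0, -3.0]])        # hypothetical state matrix
      B = np.array([[0.0],
                    [1.0]])               # hypothetical input matrix
      Q = np.eye(2)                        # state weighting
      R = np.array([[1.0]])                # input weighting

      P = solve_continuous_are(A, B, Q, R)  # A'P + PA - PB R^-1 B'P + Q = 0
      K = np.linalg.solve(R, B.T @ P)       # feedback gain
      print("Riccati solution P:\n", P)
      print("state-feedback gain K:", K)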

  5. Hardware-assisted software clock synchronization for homogeneous distributed systems

    Science.gov (United States)

    Ramanathan, P.; Kandlur, Dilip D.; Shin, Kang G.

    1990-01-01

    A clock synchronization scheme that strikes a balance between hardware and software solutions is proposed. The proposed scheme is a software algorithm that uses minimal additional hardware to achieve reasonably tight synchronization. Unlike other software solutions, the guaranteed worst-case skews can be made insensitive to the maximum variation of message transit delay in the system. The scheme is particularly suitable for large partially connected distributed systems with topologies that support simple point-to-point broadcast algorithms. Examples of such topologies include the hypercube and the mesh interconnection structures.
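
    As a rough illustration of the software side of such schemes (not the paper's hybrid algorithm), the sketch below estimates a clock offset from timestamped request/reply exchanges, assuming roughly symmetric transit delays, and applies the median of several estimates; all timestamps are hypothetical.

      # NTP-style offset estimate from one request/reply exchange, then a
      # median over several peers to bound the influence of a bad reading.
      import statistics

      def estimate_offset(t_send, t_recv_remote, t_reply_remote, t_recv_local):
          # Offset of the remote clock relative to the local one, assuming
          # symmetric transit delays on the two legs of the exchange.
          return ((t_recv_remote - t_send) + (t_reply_remote - t_recv_local)) / 2.0

      exchanges = [                       # hypothetical timestamps (seconds)
          (10.000, 10.012, 10.013, 10.004),
          (20.000, 20.015, 20.016, 20.006),
          (30.000, 30.009, 30.010, 30.002),
      ]
      offsets = [estimate_offset(*e) for e in exchanges]
      correction = statistics.median(offsets)
      print("per-exchange offsets:", offsets, "-> applied correction:", correction)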

  6. Software essentials design and construction

    CERN Document Server

    Dingle, Adair

    2014-01-01

    About the Cover: Although capacity may be a problem for a doghouse, other requirements are usually minimal. Unlike skyscrapers, doghouses are simple units. They do not require plumbing, electricity, fire alarms, elevators, or ventilation systems, and they do not need to be built to code or pass inspections. The range of complexity in software design is similar. Given available software tools and libraries, many of which are free, hobbyists can build small or short-lived computer apps. Yet design for software longevity, security, and efficiency can be intricate, as is the design of large-scale sy

  7. Review of the Educational Software Evaluation Forms and Scales

    Directory of Open Access Journals (Sweden)

    Ahmet ARSLAN

    2016-12-01

    Full Text Available The main purpose of this study is to review existing evaluation forms and scales that have been prepared for educational software evaluation. In addition to this purpose, the study aims to provide insight and guidance for future studies in this context. In total, forty-two studies including evaluation forms and scales have been taken into consideration. "Educational software evaluation", "Software evaluation" and "Educational software evaluation forms/scales" were searched as keywords in the "Education Resources Information Centre (ERIC)", "Marmara University e-Library", "National Thesis Center" and "Science Direct" databases. Twenty-nine of them met the review selection criteria and were evaluated. There is an increase in the number of evaluation tools between 2006 and 2010. However, it was noticed that there is not a sufficient number of evaluation tools targeting "educational games". It was concluded that reliability and validity studies are a very important part of developing educational software evaluation tools, and this is a matter that should be considered in future studies.

  8. A fiber-optic ice detection system for large-scale wind turbine blades

    Science.gov (United States)

    Kim, Dae-gil; Sampath, Umesh; Kim, Hyunjin; Song, Minho

    2017-09-01

    Icing causes substantial problems for the integrity of large-scale wind turbines. In this work, a fiber-optic sensor system for the detection of icing, using an arrayed waveguide grating, is presented. The sensor system detects Fresnel reflections from the ends of the fibers. The transition in Fresnel reflection due to icing gives peculiar intensity variations, which distinguish the ice, water, and air media on the wind turbine blades. From the experimental results, with the proposed sensor system, the formation of icing conditions and the thickness of the ice were identified successfully in real time.
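
    A minimal sketch of the classification step implied above: because ice, water and air present different refractive-index mismatches at the fiber end face, their Fresnel reflection levels differ, so simple nearest-reference thresholding can label the medium. The reference levels and tolerance below are illustrative placeholders, not values from the paper.

      # Label a normalized reflection reading by its closest reference level.
      def classify_medium(normalized_intensity, air_level=1.00, ice_level=0.70,
                          water_level=0.55, tolerance=0.08):
          references = {"air": air_level, "ice": ice_level, "water": water_level}
          medium, level = min(references.items(),
                              key=lambda kv: abs(kv[1] - normalized_intensity))
          return medium if abs(level - normalized_intensity) <= tolerance else "unknown"

      for reading in (0.99, 0.71, 0.53, 0.20):
          print(reading, "->", classify_medium(reading))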

  9. Daily quality assurance software for a satellite radiometer system

    Science.gov (United States)

    Keegstra, P. B.; Smoot, G. F.; Bennett, C. L.; Aymon, J.; Backus, C.; Deamici, G.; Hinshaw, G.; Jackson, P. D.; Kogut, A.; Lineweaver, C.

    1992-01-01

    Six Differential Microwave Radiometers (DMR) on COBE (Cosmic Background Explorer) measure the large-angular-scale isotropy of the cosmic microwave background (CMB) at 31.5, 53, and 90 GHz. Quality assurance software analyzes the daily telemetry from the spacecraft to ensure that the instrument is operating correctly and that the data are not corrupted. Quality assurance for DMR poses challenging requirements. The data are differential, so a single bad point can affect a large region of the sky, yet the CMB isotropy requires lengthy integration times (greater than 1 year) to limit potential CMB anisotropies. Celestial sources (with the exception of the moon) are not, in general, visible in the raw differential data. A 'quicklook' software system was developed that, in addition to basic plotting and limit-checking, implements a collection of data tests as well as long-term trending. Some of the key capabilities include the following: (1) stability analysis showing how well the data RMS averages down with increased data; (2) a Fourier analysis and autocorrelation routine to plot the power spectrum and confirm the presence of the 3 mK 'cosmic' dipole signal; (3) binning of the data against basic spacecraft quantities such as orbit angle; (4) long-term trending; and (5) dipole fits to confirm the spacecraft attitude azimuth angle.
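
    The sketch below reproduces two of the quicklook-style checks listed above, a power spectrum from an FFT and an autocorrelation of a differential time stream, on a hypothetical synthetic signal (a sinusoid plus noise standing in for the dipole modulation); it is not the COBE/DMR code.

      # FFT power spectrum and autocorrelation of a synthetic time stream.
      import numpy as np

      rng = np.random.default_rng(1)
      n, dt = 4096, 0.5                     # samples and sample spacing (s)
      t = np.arange(n) * dt
      signal = 3.0 * np.sin(2 * np.pi * 0.01 * t) + rng.standard_normal(n)

      # Power spectrum of the mean-subtracted stream.
      x = signal - signal.mean()
      power = np.abs(np.fft.rfft(x)) ** 2
      freqs = np.fft.rfftfreq(n, d=dt)
      print("strongest spectral line at %.4f Hz" % freqs[np.argmax(power[1:]) + 1])

      # Autocorrelation via the Wiener-Khinchin relation (inverse FFT of the
      # power spectrum of the zero-padded signal), normalized to lag 0.
      padded = np.concatenate([x, np.zeros(n)])
      acf = np.fft.irfft(np.abs(np.fft.rfft(padded)) ** 2)[:n]
      acf /= acf[0]
      print("autocorrelation at lag 1:", acf[1])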

  10. Dynamic state estimation techniques for large-scale electric power systems

    International Nuclear Information System (INIS)

    Rousseaux, P.; Pavella, M.

    1991-01-01

    This paper presents the use of dynamic-type state estimators for energy management in electric power systems. Various dynamic-type estimators have been developed, but have never been implemented. This is primarily because of the dimensionality problems posed by the conjunction of an extended Kalman filter with a large-scale power system. This paper focuses precisely on how to circumvent the high dimensionality, especially prohibitive in the filtering step, by using a decomposition-aggregation hierarchical scheme; to appropriately model the power system dynamics, the authors introduce new state variables in the prediction step and rely on a load forecasting method. The combination of these two techniques succeeds in solving the overall dynamic state estimation problem not only in a tractable and realistic way, but also in compliance with real-time computational requirements. Further improvements are also suggested, tied to the specifics of high-voltage electric transmission systems
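
    To make the filtering machinery referenced above concrete, here is a textbook extended Kalman filter predict/update step on a hypothetical one-state system; the decomposition-aggregation scheme and the power-system model of the paper are not reproduced.

      # One EKF cycle: x,P = state estimate/covariance; z = measurement;
      # f,h = nonlinear transition/measurement functions; F,H = their Jacobians.
      import numpy as np

      def ekf_step(x, P, z, f, F, h, H, Q, R):
          # Predict
          x_pred = f(x)
          P_pred = F(x) @ P @ F(x).T + Q
          # Update
          S = H(x_pred) @ P_pred @ H(x_pred).T + R
          K = P_pred @ H(x_pred).T @ np.linalg.inv(S)
          x_new = x_pred + K @ (z - h(x_pred))
          P_new = (np.eye(len(x)) - K @ H(x_pred)) @ P_pred
          return x_new, P_new

      # Hypothetical 1-state example: mildly nonlinear dynamics, linear measurement.
      f = lambda x: np.array([0.95 * x[0] + 0.1 * np.sin(x[0])])
      F = lambda x: np.array([[0.95 + 0.1 * np.cos(x[0])]])
      h = lambda x: x
      H = lambda x: np.eye(1)
      x, P = np.array([1.0]), np.eye(1)
      for z in ([1.1], [1.0], [0.9]):
          x, P = ekf_step(x, P, np.array(z), f, F, h, H,
                          Q=0.01 * np.eye(1), R=0.1 * np.eye(1))
      print("filtered state:", x, "covariance:", P)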

  11. Virtual design software for mechanical system dynamics using transfer matrix method of multibody system and its application

    Directory of Open Access Journals (Sweden)

    Hai-gen Yang

    2015-09-01

    Full Text Available Complex mechanical systems such as high-speed trains, multiple launch rocket systems, self-propelled artillery, and industrial robots are becoming increasingly larger in scale and more complicated in structure. Designing these products often requires repeated complex model design, multibody system dynamics calculation, and analysis of large amounts of data. In the past 20 years, the transfer matrix method of multibody system has been widely applied in engineering fields and welcomed at home and abroad for the following features: no global dynamic equations of the system, low orders of the involved system matrices, high computational efficiency, and ease of programming. In order to realize rapid and visual simulation for complex mechanical system virtual design using the transfer matrix method of multibody system, a virtual design software named MSTMMSim is designed and implemented. In MSTMMSim, the transfer matrix method of multibody system is used as the solver for dynamic modeling and calculation; Open CASCADE is used for solid geometry modeling. Various auxiliary analytical tools such as curve plotting and animation display are provided in the post-processor to analyze and process the simulation results. Two numerical examples are given to verify the validity and accuracy of the software, and a multiple launch rocket system engineering example is given at the end of this article to show that the software provides a powerful platform for complex mechanical system simulation and virtual design.
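
    The sketch below illustrates the basic idea behind transfer matrix methods, chaining element matrices so that the state at one end of a system follows from a product of matrices, using standard textbook 2x2 spring and lumped-mass elements; it is not the MSTMMSim formulation, and all stiffness, mass and frequency values are hypothetical.

      # Overall transfer matrix of a 1-D spring-mass chain at one frequency.
      import numpy as np

      def spring_matrix(k):
          # Relates (displacement, force) across a massless spring of stiffness k.
          return np.array([[1.0, 1.0 / k],
                           [0.0, 1.0]])

      def mass_matrix(m, omega):
          # Relates (displacement, force) across a lumped mass m at frequency omega.
          return np.array([[1.0, 0.0],
                           [-m * omega ** 2, 1.0]])

      def overall_transfer_matrix(elements):
          """Multiply element matrices from the first element to the last."""
          total = np.eye(2)
          for element in elements:
              total = element @ total
          return total

      omega = 3.0                                   # rad/s, hypothetical
      chain = [spring_matrix(100.0), mass_matrix(2.0, omega),
               spring_matrix(150.0), mass_matrix(1.0, omega)]
      print(overall_transfer_matrix(chain))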

  12. The Proposal of Scaling the Roles in Scrum of Scrums for Distributed Large Projects

    OpenAIRE

    Abeer M. AlMutairi; M. Rizwan Jameel Qureshi

    2015-01-01

    Scrum of scrums is an approach used to scale the traditional Scrum methodology to fit the development of complex and large projects. However, scaling the roles of scrum members brings new challenges, especially in distributed and large software projects. This paper describes in detail the role of each scrum member in scrum of scrums and proposes the use of a dedicated product owner for each team and the inclusion of a sub-backlog. The main goal of the proposed solution i...

  13. Distributed weighted least-squares estimation with fast convergence for large-scale systems.

    Science.gov (United States)

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performance of the proposed methods.
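
    For reference, the centralized weighted least-squares solution that the distributed iterations above converge to can be written in a few lines; the sketch below shows only that baseline, with a hypothetical measurement model, and does not reproduce the neighborhood communication scheme.

      # Centralized weighted least squares: min_x (z - Hx)' W (z - Hx), W = diag(w).
      import numpy as np

      def weighted_least_squares(H, z, w):
          W = np.diag(w)
          normal_matrix = H.T @ W @ H
          return np.linalg.solve(normal_matrix, H.T @ W @ z)

      H = np.array([[1.0, 0.0],
                    [1.0, 1.0],
                    [0.0, 2.0]])            # hypothetical measurement matrix
      x_true = np.array([2.0, -1.0])
      rng = np.random.default_rng(3)
      z = H @ x_true + 0.05 * rng.standard_normal(3)
      w = np.array([1.0, 4.0, 2.0])          # higher weight = more trusted sensor
      print("WLS estimate:", weighted_least_squares(H, z, w))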

  14. Large-scale multielectrode recording and stimulation of neural activity

    International Nuclear Information System (INIS)

    Sher, A.; Chichilnisky, E.J.; Dabrowski, W.; Grillo, A.A.; Grivich, M.; Gunning, D.; Hottowy, P.; Kachiguine, S.; Litke, A.M.; Mathieson, K.; Petrusca, D.

    2007-01-01

    Large circuits of neurons are employed by the brain to encode and process information. How this encoding and processing is carried out is one of the central questions in neuroscience. Since individual neurons communicate with each other through electrical signals (action potentials), the recording of neural activity with arrays of extracellular electrodes is uniquely suited for the investigation of this question. Such recordings provide the combination of the best spatial (individual neurons) and temporal (individual action-potentials) resolutions compared to other large-scale imaging methods. Electrical stimulation of neural activity in turn has two very important applications: it enhances our understanding of neural circuits by allowing active interactions with them, and it is a basis for a large variety of neural prosthetic devices. Until recently, the state-of-the-art in neural activity recording systems consisted of several dozen electrodes with inter-electrode spacing ranging from tens to hundreds of microns. Using silicon microstrip detector expertise acquired in the field of high-energy physics, we created a unique neural activity readout and stimulation framework that consists of high-density electrode arrays, multi-channel custom-designed integrated circuits, a data acquisition system, and data-processing software. Using this framework we developed a number of neural readout and stimulation systems: (1) a 512-electrode system for recording the simultaneous activity of as many as hundreds of neurons, (2) a 61-electrode system for electrical stimulation and readout of neural activity in retinas and brain-tissue slices, and (3) a system with telemetry capabilities for recording neural activity in the intact brain of awake, naturally behaving animals. We will report on these systems, their various applications to the field of neurobiology, and novel scientific results obtained with some of them. We will also outline future directions

  15. Framework for Small-Scale Experiments in Software Engineering: Guidance and Control Software Project: Software Engineering Case Study

    Science.gov (United States)

    Hayhurst, Kelly J.

    1998-01-01

    Software is becoming increasingly significant in today's critical avionics systems. To achieve safe, reliable software, government regulatory agencies such as the Federal Aviation Administration (FAA) and the Department of Defense mandate the use of certain software development methods. However, little scientific evidence exists to show a correlation between software development methods and product quality. Given this lack of evidence, a series of experiments has been conducted to understand why and how software fails. The Guidance and Control Software (GCS) project is the latest in this series. The GCS project is a case study of the Requirements and Technical Concepts for Aviation RTCA/DO-178B guidelines, Software Considerations in Airborne Systems and Equipment Certification. All civil transport airframe and equipment vendors are expected to comply with these guidelines in building systems to be certified by the FAA for use in commercial aircraft. For the case study, two implementations of a guidance and control application were developed to comply with the DO-178B guidelines for Level A (critical) software. The development included the requirements, design, coding, verification, configuration management, and quality assurance processes. This paper discusses the details of the GCS project and presents the results of the case study.

  16. Large-scale heat pumps in sustainable energy systems: System and project perspectives

    Directory of Open Access Journals (Sweden)

    Blarke Morten B.

    2007-01-01

    Full Text Available This paper shows that, in support of its ability to improve the overall economic cost-effectiveness and flexibility of the Danish energy system, the financially feasible integration of large-scale heat pumps (HP) with existing combined heat and power (CHP) plants is critically sensitive to the operational mode of the HP vis-à-vis the operational coefficient of performance, mainly given by the temperature level of the heat source. When ground source is used as the low-temperature heat source, heat production costs increase by about 10%, while partial use of condensed flue gases as the low-temperature heat source results in an 8% cost reduction. Furthermore, the analysis shows that when a large-scale HP is integrated with an existing CHP plant, the projected spot market situation in The Nordic Power Exchange (Nord Pool) towards 2025, which reflects a growing share of wind power and heat-supply-constrained power generation, further reduces the operational hours of the CHP unit over time, while increasing the operational hours of the HP unit. As a result, an HP unit at half the heat production capacity of the CHP unit, in combination with a heat-only boiler, represents a possibly financially feasible alternative to CHP operation, rather than a supplement to CHP unit operation. While such a revised operational strategy would have impacts on policies to promote co-generation, these results indicate that the integration of large-scale HP may jeopardize efforts to promote co-generation. Policy instruments should be designed to promote the integration of HP with less than half of the heating capacity of the CHP unit. It is also found that CHP-HP plant designs should allow for the utilization of heat recovered from the CHP unit's flue gases for both concurrent (CHP unit and HP unit) and independent (HP unit only) operation. For independent operation, the recovered heat must be stored.

  17. Integrated fringe projection 3D scanning system for large-scale metrology based on laser tracker

    Science.gov (United States)

    Du, Hui; Chen, Xiaobo; Zhou, Dan; Guo, Gen; Xi, Juntong

    2017-10-01

    Large-scale components are widespread in advanced manufacturing, and 3D profilometry plays a pivotal role in their quality control. This paper proposes a flexible, robust large-scale 3D scanning system that integrates a robot with a binocular structured-light scanner and a laser tracker. The measurement principle and construction of the integrated system are introduced, and a mathematical model is established for global data fusion. Subsequently, a flexible and robust method is introduced for establishing the end (tool) coordinate system; based on this method, a virtual robot model is constructed for hand-eye calibration, and the transformation matrix between the end coordinate system and the world coordinate system is solved. A validation experiment was carried out to verify the proposed algorithms. First, the hand-eye transformation matrix is solved; then a car-body rear is measured 16 times to verify the global data-fusion algorithm, and the 3D shape of the rear is reconstructed successfully.
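
    The global data fusion step amounts to chaining coordinate transforms: scanner points are mapped through the hand-eye transform into the robot end frame and then, via the end pose tracked by the laser tracker, into the world frame. The sketch below only illustrates that chaining under assumed transforms; it is not the paper's calibration method, and all names and values are hypothetical.

      # Illustrative sketch (not the paper's calibration method): chaining
      # homogeneous transforms to bring scanner points into the world frame.
      import numpy as np

      def hom(R, t):
          """Build a 4x4 homogeneous transform from a rotation and translation."""
          T = np.eye(4)
          T[:3, :3], T[:3, 3] = R, t
          return T

      def to_world(points_scanner, T_world_end, T_end_scanner):
          """p_world = T_world_end @ T_end_scanner @ p_scanner for Nx3 points;
          T_end_scanner is the (assumed) hand-eye calibration result and
          T_world_end is the end pose tracked by the laser tracker."""
          p = np.c_[points_scanner, np.ones(len(points_scanner))]   # homogeneous
          return (T_world_end @ T_end_scanner @ p.T).T[:, :3]

      # Hypothetical example: identity hand-eye, end frame shifted 1 m along x.
      pts = np.array([[0.0, 0.0, 0.5], [0.1, 0.0, 0.5]])
      T_we = hom(np.eye(3), np.array([1.0, 0.0, 0.0]))
      T_es = hom(np.eye(3), np.zeros(3))
      print(to_world(pts, T_we, T_es))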

  18. Application of simplified models to CO2 migration and immobilization in large-scale geological systems

    KAUST Repository

    Gasda, Sarah E.

    2012-07-01

    Long-term stabilization of injected carbon dioxide (CO2) is an essential component of risk management for geological carbon sequestration operations. However, migration and trapping phenomena are inherently complex, involving processes that act over multiple spatial and temporal scales. One example involves centimeter-scale density instabilities in the dissolved CO2 region leading to large-scale convective mixing that can be a significant driver for CO2 dissolution. Another example is the potentially important effect of capillary forces, in addition to buoyancy and viscous forces, on the evolution of mobile CO2. Local capillary effects lead to a capillary transition zone, or capillary fringe, where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as long-term residual and dissolution trapping. Computational models that can capture both large and small-scale effects are essential to predict the role of these processes on the long-term storage security of CO2 sequestration operations. Conventional modeling tools are unable to resolve sufficiently all of these relevant processes when modeling CO2 migration in large-scale geological systems. Herein, we present a vertically-integrated approach to CO2 modeling that employs upscaled representations of these subgrid processes. We apply the model to the Johansen formation, a prospective site for sequestration of Norwegian CO2 emissions, and explore the sensitivity of CO2 migration and trapping to subscale physics. Model results show the relative importance of different physical processes in large-scale simulations. The ability of models such as this to capture the relevant physical processes at large spatial and temporal scales is important for prediction and analysis of CO2 storage sites. © 2012 Elsevier Ltd.

  19. Fault Detection for Large-Scale Railway Maintenance Equipment Based on Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Junfu Yu

    2014-04-01

    Full Text Available Fault detection for large-scale railway maintenance equipment demands low cost, energy efficiency, and the collection of data from the equipment's function units. This paper proposes an energy-efficient, easily installed fault detection application using ZigBee wireless sensor networks; ZigBee is the most widely used protocol based on IEEE 802.15.4. The proposed application covers the full system, from hardware design using STM32F103 chips as processors to the software system. Fault detection is the basic part of the fault diagnosis system: wireless sensor nodes equipped with different kinds of sensors for the various function units communicate over ZigBee, collecting and sending basic working-status data to the home gateway, which then forwards the data to the fault diagnosis system.
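
    The reporting data flow described above - nodes packing working-status readings and forwarding them to a home gateway - can be sketched generically. The snippet below is purely illustrative: the real nodes run firmware on STM32F103 microcontrollers over ZigBee, whereas this Python/UDP version only mimics the loop; the gateway address, packet format and sensor values are hypothetical.

      # Purely illustrative: real nodes run firmware on STM32F103 MCUs over
      # ZigBee; this Python/UDP loop only mimics the reporting data flow.
      import socket, struct, time

      GATEWAY = ("127.0.0.1", 9000)      # stand-in for the home gateway address

      def read_status(unit_id):
          """Stand-in for reading a function unit's sensors."""
          return unit_id, 42.5, 0.031    # unit id, temperature [C], vibration [g]

      def send_status(sock, unit_id):
          uid, temp, vib = read_status(unit_id)
          sock.sendto(struct.pack("!Hff", uid, temp, vib), GATEWAY)

      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      for _ in range(3):                 # periodic working-status reports
          send_status(sock, unit_id=1)
          time.sleep(1.0)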

  20. Large-scale solar purchasing

    International Nuclear Information System (INIS)

    1999-01-01

    The principal objective of the project was to participate in the definition of a new IEA task concerning solar procurement (''the Task'') and to assess whether involvement in the task would be in the interest of the UK active solar heating industry. The project also aimed to assess the importance of large scale solar purchasing to UK active solar heating market development and to evaluate the level of interest in large scale solar purchasing amongst potential large scale purchasers (in particular housing associations and housing developers). A further aim of the project was to consider means of stimulating large scale active solar heating purchasing activity within the UK. (author)

  1. System support software for TSTA

    International Nuclear Information System (INIS)

    Claborn, G.W.; Mann, L.W.; Nielson, C.W.

    1987-01-01

    The software at the Tritium Systems Test Assembly (TSTA) is logically broken into two parts, the system support software and the subsystem software. The purpose of the system support software is to isolate the subsystem software from the physical hardware. In this sense the system support software forms the kernel of the software at TSTA. The kernel software performs several functions. It gathers data from CAMAC modules and makes that data available for subsystem processes. It services requests to send commands to CAMAC modules. It provides a system of logging functions and provides for a system-wide global program state that allows highly structured interaction between subsystem processes. The kernel's most visible function is to provide the Man-Machine Interface (MMI). The MMI gives the operators a window into the physical hardware and subsystem process state. Finally, the kernel provides a data archiving and compression function that allows archival data to be accessed and plotted. The kernel software as developed and implemented at TSTA is described.

  2. Representative elements: A step to large-scale fracture system simulation

    International Nuclear Information System (INIS)

    Clemo, T.M.

    1987-01-01

    Large-scale simulation of flow and transport in fractured media requires the development of a technique to represent the effect of a large number of fractures. Representative elements are used as a tool to model a subset of a fracture system as a single distributed entity. Representative elements are part of a modeling concept called dual permeability. Dual permeability modeling combines discrete fracture simulation of the most important fractures with the distributed modeling of the less important fractures of a fracture system. This study investigates the use of stochastic analysis to determine properties of representative elements. Given an assumption of fully developed laminar flow, the net fracture conductivities and hence flow velocities can be determined from descriptive statistics of fracture spacing, orientation, aperture, and extent. The distribution of physical characteristics about their mean leads to a distribution of the associated conductivities. The variance of hydraulic conductivity induces dispersion into the transport process. Simple fracture systems are treated to demonstrate the usefulness of stochastic analysis. Explicit equations for conductivity of an element are developed and the dispersion characteristics are shown. Explicit formulation of the hydraulic conductivity and transport dispersion reveals the dependence of these important characteristics on the parameters used to describe the fracture system. Understanding these dependencies will help to focus efforts to identify the characteristics of fracture systems. Simulations of stochastically generated fracture sets do not provide this explicit functional dependence on the fracture system parameters. 12 refs., 6 figs
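
    The paper develops explicit equations for element conductivity, which are not reproduced here. As a generic illustration of how descriptive statistics of fracture properties propagate into a distribution of element conductivity, the sketch below samples apertures from an assumed lognormal distribution and applies the parallel-plate ("cubic law") model for laminar flow; all parameter values are assumptions.

      # Generic Monte Carlo illustration (not the paper's explicit equations):
      # propagate an assumed aperture distribution into a distribution of
      # equivalent element conductivity via the parallel-plate "cubic law".
      import numpy as np

      RHO_G_OVER_MU = 9.81e6     # rho*g/mu for water, approximately [1/(m s)]

      def element_conductivity(apertures, spacing):
          """Equivalent hydraulic conductivity [m/s] of parallel fractures with
          the given apertures [m] crossing an element of the given spacing [m]."""
          return RHO_G_OVER_MU * np.sum(apertures**3) / (12.0 * spacing)

      rng = np.random.default_rng(1)
      apertures = rng.lognormal(mean=np.log(1e-4), sigma=0.5, size=(5000, 10))
      K = np.array([element_conductivity(a, spacing=1.0) for a in apertures])
      print(f"mean K = {K.mean():.3e} m/s, coefficient of variation = {K.std()/K.mean():.2f}")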

  3. Comparing the life cycle costs of using harvest residue as feedstock for small- and large-scale bioenergy systems (part II)

    International Nuclear Information System (INIS)

    Cleary, Julian; Wolf, Derek P.; Caspersen, John P.

    2015-01-01

    In part II of our two-part study, we estimate the nominal electricity generation and GHG (greenhouse gas) mitigation costs of using harvest residue from a hardwood forest in Ontario, Canada to fuel (1) a small-scale (250 kWe) combined heat and power wood chip gasification unit and (2) a large-scale (211 MWe) coal-fired generating station retrofitted to combust wood pellets. Under favorable operational and regulatory conditions, generation costs are similar: 14.1 and 14.9 cents per kWh (c/kWh) for the small- and large-scale facilities, respectively. However, GHG mitigation costs are considerably higher for the large-scale system: $159/tonne of CO2 eq., compared to $111 for the small-scale counterpart. Generation costs increase substantially under existing conditions, reaching: (1) 25.5 c/kWh for the small-scale system, due to a regulation mandating the continual presence of an operating engineer; and (2) 22.5 c/kWh for the large-scale system due to insufficient biomass supply, which reduces plant capacity factor from 34% to 8%. Limited inflation adjustment (50%) of feed-in tariff rates boosts these costs by 7% to 11%. Results indicate that policy generalizations based on scale require careful consideration of the range of operational/regulatory conditions in the jurisdiction of interest. Further, if GHG mitigation is prioritized, small-scale systems may be more cost-effective. - Highlights: • Generation costs for two forest bioenergy systems of different scales are estimated. • Nominal electricity costs are 14.1–28.3 cents/kWh for the small-scale plant. • Nominal electricity costs are 14.9–24.2 cents/kWh for the large-scale plant. • GHG mitigation costs from displacing coal and LPG are $111-$281/tonne of CO2 eq. • High sensitivity to cap. factor (large-scale) and labor requirements (small-scale)

  4. Trends in large-scale testing of reactor structures

    International Nuclear Information System (INIS)

    Blejwas, T.E.

    2003-01-01

    Large-scale tests of reactor structures have been conducted at Sandia National Laboratories since the late 1970s. This paper describes a number of different large-scale impact tests, pressurization tests of models of containment structures, and thermal-pressure tests of models of reactor pressure vessels. The advantages of large-scale testing are evident, but cost in particular limits its use. As computer models have grown in size (for example, in the number of degrees of freedom), the advent of computer graphics has made possible very realistic representations of results - results that may not accurately represent reality. A necessary condition for avoiding this pitfall is the validation of the analytical methods and underlying physical representations. Ironically, the immensely larger computer models sometimes increase the need for large-scale testing, because the modeling is applied to increasingly complex structural systems and/or more complex physical phenomena. Unfortunately, the cost of large-scale tests is a disadvantage that will likely severely limit similar testing in the future. International collaborations may provide the best mechanism for funding future programs with large-scale tests. (author)

  5. Large scale reflood test

    International Nuclear Information System (INIS)

    Hirano, Kemmei; Murao, Yoshio

    1980-01-01

    The large-scale reflood test, aimed at ensuring the safety of light water reactors, was started in fiscal 1976 under the special account act for power source development promotion measures, entrusted by the Science and Technology Agency. Thereafter, to establish the safety of PWRs in loss-of-coolant accidents by joint international efforts, the Japan-West Germany-U.S. research cooperation program was started in April, 1980, and the large-scale reflood test is now included in this program. It consists of two tests using a cylindrical core testing apparatus for examining the overall system effect and a plate core testing apparatus for testing individual effects. Each apparatus comprises mock-ups of the pressure vessel, primary loop, containment vessel and ECCS. The testing method, the test results and the research cooperation program are described. (J.P.N.)

  6. Evolutionary Hierarchical Multi-Criteria Metaheuristics for Scheduling in Large-Scale Grid Systems

    CERN Document Server

    Kołodziej, Joanna

    2012-01-01

    One of the most challenging issues in modelling today's large-scale computational systems is to effectively manage highly parametrised distributed environments such as computational grids, clouds, ad hoc networks and P2P networks. Next-generation computational grids must provide a wide range of services and high performance computing infrastructures. Various types of information and data processed in the large-scale dynamic grid environment may be incomplete, imprecise, and fragmented, which complicates the specification of proper evaluation criteria and which affects both the availability of resources and the final collective decisions of users. The complexity of grid architectures and grid management may also contribute towards higher energy consumption. All of these issues necessitate the development of intelligent resource management techniques, which are capable of capturing all of this complexity and optimising meaningful metrics for a wide range of grid applications.   This book covers hot topics in t...

  7. MZDASoft: a software architecture that enables large-scale comparison of protein expression levels over multiple samples based on liquid chromatography/tandem mass spectrometry.

    Science.gov (United States)

    Ghanat Bari, Mehrab; Ramirez, Nelson; Wang, Zhiwei; Zhang, Jianqiu Michelle

    2015-10-15

    Without accurate peak linking/alignment, only the expression levels of a small percentage of proteins can be compared across multiple samples in Liquid Chromatography/Mass Spectrometry/Tandem Mass Spectrometry (LC/MS/MS) due to the selective nature of tandem MS peptide identification. This greatly hampers biomedical research that aims at finding biomarkers for disease diagnosis, treatment, and the understanding of disease mechanisms. A recent algorithm, PeakLink, has allowed the accurate linking of LC/MS peaks without tandem MS identifications to their corresponding ones with identifications across multiple samples collected from different instruments, tissues and labs, which greatly enhanced the ability to compare proteins. However, PeakLink cannot be implemented practically for large numbers of samples based on existing software architectures, because it requires access to peak elution profiles from multiple LC/MS/MS samples simultaneously. We propose a new architecture based on parallel processing, which extracts LC/MS peak features and saves them in database files to enable the implementation of PeakLink for multiple samples. The software has been deployed in High-Performance Computing (HPC) environments. The core part of the software, MZDASoft Parallel Peak Extractor (PPE), can be downloaded with a user and developer's guide, and it can be run on HPC centers directly. The quantification applications, MZDASoft TandemQuant and MZDASoft PeakLink, are written in Matlab and compiled with the Matlab runtime compiler. A sample script that incorporates all necessary processing steps of MZDASoft for LC/MS/MS quantification in a parallel processing environment is available. The project webpage is http://compgenomics.utsa.edu/zgroup/MZDASoft. The proposed architecture enables the implementation of PeakLink for multiple samples. Significantly more (100%-500%) proteins can be compared over multiple samples with better quantification accuracy in test cases. MZDASoft
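
    The key architectural idea - extract peak features per sample in parallel and persist them, so the linking stage never needs every sample's elution profiles in memory at once - can be sketched as follows. This is not MZDASoft's actual interface or file format; the file names, functions and dummy feature record are hypothetical.

      # Hypothetical sketch of the architectural idea (not MZDASoft's actual
      # interfaces or file formats): extract peak features per sample in
      # parallel and persist them, so the later linking stage can stream the
      # feature files instead of holding every elution profile in memory.
      import json, multiprocessing as mp
      from pathlib import Path

      def extract_peak_features(sample_path):
          """Stand-in for the per-sample peak extractor stage."""
          return [{"mz": 445.12, "rt": 31.8, "intensity": 1.2e6}]   # dummy feature

      def process_sample(sample_path):
          features = extract_peak_features(sample_path)
          out = Path(sample_path).with_suffix(".features.json")
          out.write_text(json.dumps(features))
          return str(out)

      if __name__ == "__main__":
          samples = ["run01.mzML", "run02.mzML", "run03.mzML"]   # hypothetical
          with mp.Pool() as pool:
              feature_files = pool.map(process_sample, samples)
          print(feature_files)     # inputs for the cross-sample linking stage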

  8. LEMON - LHC Era Monitoring for Large-Scale Infrastructures

    International Nuclear Information System (INIS)

    Babik, Marian; Hook, Nicholas; Lansdale, Thomas Hector; Lenkes, Daniel; Siket, Miroslav; Waldron, Denis; Fedorko, Ivan

    2011-01-01

    At the present time computer centres are facing a massive rise in virtualization and cloud computing, as these solutions bring advantages to service providers and consolidate the computer centre resources. However, as a result the monitoring complexity is increasing. Computer centre management requires not only monitoring servers, network equipment and associated software but also collecting additional environment and facilities data (e.g. temperature, power consumption, cooling efficiency) to have a good overview of the infrastructure performance. The LHC Era Monitoring (Lemon) system addresses these requirements for a very large-scale infrastructure. The Lemon agent that collects data on every client and forwards the samples to the central measurement repository provides a flexible interface that allows rapid development of new sensors. The system can also report on behalf of remote devices such as switches and power supplies. Online and historical data can be visualized via a web-based interface or retrieved via command-line tools. The Lemon Alarm System component can be used for notifying the operator about error situations. In this article, an overview of Lemon monitoring is provided together with a description of the CERN LEMON production instance. No direct comparison is made with other monitoring tools.

  9. Medium/small-scale computers HITACHI M-620, M-630, and M-640 systems: the aim of development and characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Oshima, N; Saiki, Y; Sunaga, K [Hitachi, Ltd., Tokyo (Japan)

    1990-10-01

    The medium/small-scale HITACHI M-620, M-630, and M-640 computer systems are outlined. Each system features a configuration usable as a medium- or small-scale host computer in offices, connectivity with large-scale host computers, performance 5-50 times that of conventional office computers, easy operation and fast processing. On the hardware side, the one-board CPU and the small integrated cubicle structure containing the CPU board, a high-speed large-capacity magnetic disk storage device, and various kinds of controllers are illustrated. On the software side, the OS (VOS K), featuring virtual data space control (VDSA) and relational database (RDB) functions, EAGLE/4GL (effective approach to achieving high level software productivity/4th generation language), STEP (self training environmental support program) and the simple end-user language ACE3/E2 are outlined. 7 figs.

  10. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system

    Science.gov (United States)

    Jensen, Tue V.; Pinson, Pierre

    2017-11-01

    Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model with information on generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degrees of renewable penetration in a future European energy system. The spatial coverage, completeness and resolution of this dataset open the door to the evaluation, scaling analysis and replicability check of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation.

  11. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system.

    Science.gov (United States)

    Jensen, Tue V; Pinson, Pierre

    2017-11-28

    Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model with information on generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degrees of renewable penetration in a future European energy system. The spatial coverage, completeness and resolution of this dataset open the door to the evaluation, scaling analysis and replicability check of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation.

  12. The Ownership Structure Dilemma and its Implications on the Transition from Small-Scale to Large-Scale Electric Road Systems

    OpenAIRE

    BEDNARCIK ABDULHADI, EMMA; VITEZ, MARINA

    2016-01-01

    This master thesis is written on behalf of KTH Royal Institute of Technology and the Swedish National Road and Transport Research Institute (VTI). The study investigates how infrastructure ownership could affect the transition from small-scale to large-scale electric road systems (ERS) and how infrastructure ownership affects the foreseen future roles of the ERS stakeholders. The authors have used a qualitative research method, including a literature study within the areas of infrastructure t...

  13. Modeling the Hydrologic Effects of Large-Scale Green Infrastructure Projects with GIS

    Science.gov (United States)

    Bado, R. A.; Fekete, B. M.; Khanbilvardi, R.

    2015-12-01

    Impervious surfaces in urban areas generate excess runoff, which in turn causes flooding, combined sewer overflows, and degradation of adjacent surface waters. Municipal environmental protection agencies have shown a growing interest in mitigating these effects with 'green' infrastructure practices that partially restore the perviousness and water holding capacity of urban centers. Assessment of the performance of current and future green infrastructure projects is hindered by the lack of adequate hydrological modeling tools; conventional techniques fail to account for the complex flow pathways of urban environments, and detailed analyses are difficult to prepare for the very large domains in which green infrastructure projects are implemented. Currently, no standard toolset exists that can rapidly and conveniently predict runoff, consequent inundations, and sewer overflows at a city-wide scale. We demonstrate how streamlined modeling techniques can be used with open-source GIS software to efficiently model runoff in large urban catchments. Hydraulic parameters and flow paths through city blocks, roadways, and sewer drains are automatically generated from GIS layers, and ultimately urban flow simulations can be executed for a variety of rainfall conditions. With this methodology, users can understand the implications of large-scale land use changes and green/gray storm water retention systems on hydraulic loading, peak flow rates, and runoff volumes.
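
    The record does not state which runoff formulation is used; a common simple per-cell choice is the SCS curve-number method, sketched below on a small synthetic land-cover raster. The curve numbers and rainfall depth are assumptions for illustration only.

      # Illustrative only -- the record does not specify the runoff formulation.
      # A common simple per-cell choice is the SCS curve-number method, applied
      # here to a small synthetic land-cover raster (all values are assumptions).
      import numpy as np

      def scs_runoff(rain_mm, cn):
          """SCS curve-number runoff depth [mm] for a rainfall depth and CN grid."""
          s = 25400.0 / cn - 254.0       # potential retention [mm]
          ia = 0.2 * s                   # initial abstraction
          return np.where(rain_mm > ia, (rain_mm - ia) ** 2 / (rain_mm - ia + s), 0.0)

      # Hypothetical 3x3 block: 98 = impervious street, 74 = mixed lot, 61 = lawn.
      cn = np.array([[98.0, 98.0, 74.0],
                     [98.0, 61.0, 74.0],
                     [74.0, 61.0, 61.0]])
      print(scs_runoff(rain_mm=50.0, cn=cn).round(1))   # impervious cells shed most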

  14. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically realized as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this work, we introduce a discrete event-based simulation tool that models the data flow of the current ATLAS data acquisition system, with the main goal to be accurate with regard to the main operational characteristics. We measure buffer occupancy counting the number of elements in buffers, resource utilization measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error of simulation when comparing the results to a large amount of real-world ope...
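
    A minimal discrete-event simulation in the spirit described above - random arrivals into a buffer drained by one processing unit, with occupancy sampled over consecutive small time windows - can be written in a few lines. This sketch is not the ATLAS simulation tool; the rates and sampling period are arbitrary.

      # Minimal discrete-event sketch (not the ATLAS simulation tool): random
      # arrivals fill a buffer drained by one processing unit, and occupancy is
      # sampled over consecutive small time windows.  Rates are arbitrary.
      import heapq, random

      random.seed(0)
      ARRIVAL_RATE, SERVICE_RATE, T_END = 900.0, 1000.0, 10.0   # per s, per s, s

      events = [(random.expovariate(ARRIVAL_RATE), "arrival")]
      buffer_len, samples, t_sample = 0, [], 0.0
      while events:
          t, kind = heapq.heappop(events)
          if t > T_END:
              break
          while t_sample <= t:               # sample occupancy every 10 ms
              samples.append(buffer_len)
              t_sample += 0.01
          if kind == "arrival":
              buffer_len += 1
              heapq.heappush(events, (t + random.expovariate(ARRIVAL_RATE), "arrival"))
              if buffer_len == 1:            # processing unit was idle: start service
                  heapq.heappush(events, (t + random.expovariate(SERVICE_RATE), "done"))
          else:                              # one element leaves the buffer
              buffer_len -= 1
              if buffer_len > 0:
                  heapq.heappush(events, (t + random.expovariate(SERVICE_RATE), "done"))

      print(f"mean buffer occupancy ~ {sum(samples) / len(samples):.2f}")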

  15. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically implemented as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this paper, we introduce a discrete event-based simulation tool that models the dataflow of the current ATLAS data acquisition system, with the main goal to be accurate with regard to the main operational characteristics. We measure buffer occupancy counting the number of elements in buffers; resource utilization measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error in simulation when comparing the results to a large amount of real-world ...

  16. Assessing Programming Costs of Explicit Memory Localization on a Large Scale Shared Memory Multiprocessor

    Directory of Open Access Journals (Sweden)

    Silvio Picano

    1992-01-01

    Full Text Available We present detailed experimental work involving a commercially available large scale shared memory multiple instruction stream-multiple data stream (MIMD) parallel computer having a software-controlled cache coherence mechanism. To make effective use of such an architecture, the programmer is responsible for designing the program's structure to match the underlying multiprocessor's capabilities. We describe the techniques used to exploit our multiprocessor (the BBN TC2000) on a network simulation program, showing the resulting performance gains and the associated programming costs. We show that an efficient implementation relies heavily on the user's ability to explicitly manage the memory system.

  17. The architecture of a reliable software monitoring system for embedded software systems

    International Nuclear Information System (INIS)

    Munson, J.; Krings, A.; Hiromoto, R.

    2006-01-01

    We develop the notion of a measurement-based methodology for embedded software systems to ensure properties of reliability, survivability and security, not only under benign faults but under malicious and hazardous conditions as well. The driving force is the need to develop a dynamic run-time monitoring system for use in these embedded mission critical systems. These systems must run reliably, must be secure and they must fail gracefully. That is, they must continue operating in the face of departures from their nominal operating scenarios, the failure of one or more system components due to normal hardware and software faults, as well as malicious acts. To ensure the integrity of embedded software systems, the activity of these systems must be monitored as they operate. For each of these systems, it is possible to establish a very succinct representation of nominal system activity. Furthermore, it is possible to detect departures from the nominal operating scenario in a timely fashion. Such departures may be due to various circumstances, e.g., an assault from an outside agent, thus forcing the system to operate in an off-nominal environment for which it was neither tested nor certified, or a hardware/software component that has ceased to operate in a nominal fashion. A well-designed system will have the property of graceful degradation. It must continue to run even though some of the functionality may have been lost. This involves the intelligent re-mapping of system functions. Those functions that are impacted by the failure of a system component must be identified and isolated. Thus, a system must be designed so that its basic operations may be re-mapped onto system components still operational. That is, the mission objectives of the software must be reassessed in terms of the current operational capabilities of the software system. By integrating the mechanisms to support observation and detection directly into the design methodology, we propose to shift
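
    One minimal way to realize the "succinct representation of nominal activity plus timely departure detection" idea is to compare observed module-execution frequencies in a sliding window against a nominal profile. The sketch below is a generic illustration, not the authors' methodology; the module names, nominal profile and threshold are hypothetical.

      # Generic illustration, not the authors' methodology: compare observed
      # module-execution frequencies in a window against a nominal profile and
      # flag departures.  Module names, profile and threshold are hypothetical.
      from collections import Counter
      import math

      NOMINAL = {"read_sensor": 0.60, "filter": 0.30, "actuate": 0.09, "diagnose": 0.01}

      def departure_score(window):
          """Hellinger-style distance between observed and nominal frequencies."""
          total = sum(window.values()) or 1
          return math.sqrt(0.5 * sum(
              (math.sqrt(window.get(m, 0) / total) - math.sqrt(p)) ** 2
              for m, p in NOMINAL.items()))

      def monitor(events, threshold=0.25, window_size=100):
          window = Counter()
          for i, module in enumerate(events, 1):
              window[module] += 1
              if i % window_size == 0:
                  if departure_score(window) > threshold:
                      print(f"off-nominal activity detected at event {i}")
                  window.clear()

      # A nominal-looking window followed by a burst of unexpected diagnostics.
      trace = ["read_sensor"] * 60 + ["filter"] * 30 + ["actuate"] * 10
      trace += ["diagnose"] * 80 + ["read_sensor"] * 20
      monitor(trace)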

  18. Decentralised stabilising controllers for a class of large-scale linear ...

    Indian Academy of Sciences (India)

    subsystems resulting from a new aggregation-decomposition technique. The method has been illustrated through a numerical example of a large-scale linear system consisting of three subsystems each of the fourth order. Keywords. Decentralised stabilisation; large-scale linear systems; optimal feedback control; algebraic ...

  19. Towards a Database System for Large-scale Analytics on Strings

    KAUST Repository

    Sahli, Majed A.

    2015-07-23

    Recent technological advances are causing an explosion in the production of sequential data. Biological sequences, web logs and time series are represented as strings. Currently, strings are stored, managed and queried in an ad-hoc fashion because they lack a standardized data model and query language. String queries are computationally demanding, especially when strings are long and numerous. Existing approaches cannot handle the growing number of strings produced by environmental, healthcare, bioinformatic, and space applications. There is a trade-off between performing analytics efficiently and scaling to thousands of cores to finish in reasonable times. In this thesis, we introduce a data model that unifies the input and output representations of core string operations. We define a declarative query language for strings where operators can be pipelined to form complex queries. A rich set of core string operators is described to support string analytics. We then demonstrate a database system for string analytics based on our model and query language. In particular, we propose the use of a novel data structure augmented by efficient parallel computation to strike a balance between preprocessing overheads and query execution times. Next, we delve into repeated motifs extraction as a core string operation for large-scale string analytics. Motifs are frequent patterns used, for example, to identify biological functionality, periodic trends, or malicious activities. Statistical approaches are fast but inexact while combinatorial methods are sound but slow. We introduce ACME, a combinatorial repeated motifs extractor. We study the spatial and temporal locality of motif extraction and devise a cache-aware search space traversal technique. ACME is the only method that scales to gigabyte-long strings, handles large alphabets, and supports interesting motif types with minimal overhead. While ACME is cache-efficient, it is limited by being serial. We devise a lightweight
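
    For intuition, exact repeated-motif extraction at a fixed length can be expressed in a few lines, as in the naive sketch below; ACME's contribution is doing this at scale with suffix structures and cache-aware traversal, which the sketch deliberately omits.

      # Naive illustration of exact repeated-motif extraction at a fixed length.
      # ACME's contribution is doing this at scale with suffix structures and
      # cache-aware traversal, which this sketch deliberately omits.
      from collections import Counter

      def repeated_motifs(s, length, sigma):
          """Return {motif: count} for substrings of `length` seen >= sigma times."""
          counts = Counter(s[i:i + length] for i in range(len(s) - length + 1))
          return {m: c for m, c in counts.items() if c >= sigma}

      print(repeated_motifs("ACGTACGTTTACGTAC", length=4, sigma=3))   # {'ACGT': 3}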

  20. Large-scale networks in engineering and life sciences

    CERN Document Server

    Findeisen, Rolf; Flockerzi, Dietrich; Reichl, Udo; Sundmacher, Kai

    2014-01-01

    This edited volume provides insights into and tools for the modeling, analysis, optimization, and control of large-scale networks in the life sciences and in engineering. Large-scale systems are often the result of networked interactions between a large number of subsystems, and their analysis and control are becoming increasingly important. The chapters of this book present the basic concepts and theoretical foundations of network theory and discuss its applications in different scientific areas such as biochemical reactions, chemical production processes, systems biology, electrical circuits, and mobile agents. The aim is to identify common concepts, to understand the underlying mathematical ideas, and to inspire discussions across the borders of the various disciplines.  The book originates from the interdisciplinary summer school “Large Scale Networks in Engineering and Life Sciences” hosted by the International Max Planck Research School Magdeburg, September 26-30, 2011, and will therefore be of int...

  1. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system

    DEFF Research Database (Denmark)

    Jensen, Tue Vissing; Pinson, Pierre

    2017-01-01

    , we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model, as well as information for generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven...... to the evaluation, scaling analysis and replicability check of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation....

  2. A software system for oilfield facility investment minimization

    International Nuclear Information System (INIS)

    Ding, Z.X.; Startzman, R.A.

    1996-01-01

    Minimizing investment in oilfield development is an important subject that has attracted a considerable amount of industry attention. One method to reduce investment involves the optimal placement and selection of production facilities. Because of the large amount of capital used in this process, saving a small percent of the total investment may represent a large monetary value. The literature reports algorithms using mathematical programming techniques that were designed to solve the proposed problem in a globally optimal manner. Owing to the high computational complexity and the lack of user-friendly interfaces for data entry and results display, mathematical programming techniques have not been given enough attention in practice. This paper describes an interactive, graphical software system that provides a globally optimal solution to the problem of placement and selection of production facilities in oilfield development processes. This software system can be used as an investment minimization tool and a scenario-study simulator. The developed software system consists of five basic modules: (1) an interactive data-input unit, (2) a cost function generator, (3) an optimization unit, (4) a graphic-output display, and (5) a sensitivity-analysis unit.

  3. Impact of Data Placement on Resilience in Large-Scale Object Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Carns, Philip; Harms, Kevin; Jenkins, John; Mubarak, Misbah; Ross, Robert; Carothers, Christopher

    2016-05-02

    Distributed object storage architectures have become the de facto standard for high-performance storage in big data, cloud, and HPC computing. Object storage deployments using commodity hardware to reduce costs often employ object replication as a method to achieve data resilience. Repairing object replicas after failure is a daunting task for systems with thousands of servers and billions of objects, however, and it is increasingly difficult to evaluate such scenarios at scale on real-world systems. Resilience and availability are both compromised if objects are not repaired in a timely manner. In this work we leverage a high-fidelity discrete-event simulation model to investigate replica reconstruction on large-scale object storage systems with thousands of servers, billions of objects, and petabytes of data. We evaluate the behavior of CRUSH, a well-known object placement algorithm, and identify configuration scenarios in which aggregate rebuild performance is constrained by object placement policies. After determining the root cause of this bottleneck, we then propose enhancements to CRUSH and the usage policies atop it to enable scalable replica reconstruction. We use these methods to demonstrate a simulated aggregate rebuild rate of 410 GiB/s (within 5% of projected ideal linear scaling) on a 1,024-node commodity storage system. We also uncover an unexpected phenomenon in rebuild performance based on the characteristics of the data stored on the system.
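
    The basic property being simulated - every client can deterministically compute an object's replica set without a central lookup table - can be illustrated with rendezvous (highest-random-weight) hashing. Note this is not CRUSH itself, which is a weighted, hierarchical scheme; the sketch only conveys the placement idea.

      # Not CRUSH itself (which is a weighted, hierarchical scheme): rendezvous
      # (highest-random-weight) hashing illustrates the underlying property that
      # any client can compute an object's replica set without a central lookup.
      import hashlib

      def place_replicas(object_id, servers, n_replicas=3):
          """Rank servers by hash(object, server); the top n hold the replicas."""
          def score(server):
              return int(hashlib.sha256(f"{object_id}:{server}".encode()).hexdigest(), 16)
          return sorted(servers, key=score, reverse=True)[:n_replicas]

      servers = [f"osd{i:03d}" for i in range(16)]
      print(place_replicas("object-42", servers))
      # If one server fails, only objects that ranked it in their top-3 need to
      # be re-replicated; the placement of all other objects is unchanged.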

  4. Titius--Bode law and the possibility of recent large-scale evolution in the solar system

    International Nuclear Information System (INIS)

    Nieto, M.M.

    1974-01-01

    Although it is by no means clear that the Titius--Bode law of planetary distances is indeed a ''law'' (even though there are enticing indications), it is proposed that if one assumes that the law is a ''law'' and that the planets obey it, then this argues against recent large-scale evolution in the solar system. Put another way: one can believe in the Titius--Bode law or in recent large-scale evolution or in neither of them. But it appears difficult to believe in both of them

  5. A Systematic Review of Software Architecture Visualization Techniques

    NARCIS (Netherlands)

    Shahin, M.; Liang, P.; Ali Babar, M.

    2014-01-01

    Context Given the increased interest in using visualization techniques (VTs) to help communicate and understand software architecture (SA) of large scale complex systems, several VTs and tools have been reported to represent architectural elements (such as architecture design, architectural

  6. PC Software graphics tool for conceptual design of space/planetary electrical power systems

    Science.gov (United States)

    Truong, Long V.

    1995-01-01

    This paper describes the Decision Support System (DSS), a personal computer software graphics tool for designing conceptual space and/or planetary electrical power systems. By using the DSS, users can obtain desirable system design and operating parameters, such as system weight, electrical distribution efficiency, and bus power. With this tool, a large-scale specific power system was designed in a matter of days. It is an excellent tool to help designers make tradeoffs between system components, hardware architectures, and operation parameters in the early stages of the design cycle. The DSS is a user-friendly, menu-driven tool with online help and a custom graphical user interface. An example design and results are illustrated for a typical space power system with multiple types of power sources, frequencies, energy storage systems, and loads.

  7. Development of large-scale wind energy conversion system. Operational studies on a large-scale wind energy conversion system; Ogata furyoku hatsuden system no kaihatsu. Ogata furyoku hatsuden system no unten kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    Takita, M [New Energy and Industrial Technology Development Organization, Tokyo (Japan)

    1994-12-01

    Described herein are the results of the FY1994 research program for operational studies on a large-scale wind energy conversion system. A total of 8 domestic and foreign cases are studied for wind energy conversion cost, to clarify the causes of the higher cost of the Japanese system. The wind power systems studied include Japanese (5 units at Tappi Wind Park, the same type supplied by company M), US (California Wind Farm, 300 units) and UK (Wales Wind Farm, 103 units) systems. The investment costs are 639, 285 and 189 thousand yen/kW for the Japanese, US and UK systems, respectively. It is also revealed that the power plant itself and the assembly costs account for the majority (70 to 88%) of the total investment cost. The higher cost of the Japanese system results from the smaller number of units installed, and the power plant cost can be drastically reduced by mass production. Increasing size also reduces cost greatly.

  8. Dynamic model of frequency control in Danish power system with large scale integration of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Hansen, Anca Daniela; Sørensen, Poul Ejnar

    2013-01-01

    This work evaluates the impact of large-scale integration of wind power in future power systems when 50% of load demand can be met from wind power. The focus is on active power balance control, where the main source of power imbalance is an inaccurate wind speed forecast. In this study, a Danish...... power system model with large-scale wind power is developed and a case study for an inaccurate wind power forecast is investigated. The goal of this work is to develop an adequate power system model that depicts relevant dynamic features of the power plants and compensates for load-generation...... imbalances, caused by an inaccurate wind speed forecast, through appropriate control of the active power production from power plants....

  9. Advanced Connectivity Analysis (ACA): a Large Scale Functional Connectivity Data Mining Environment.

    Science.gov (United States)

    Chen, Rong; Nixon, Erika; Herskovits, Edward

    2016-04-01

    Using resting-state functional magnetic resonance imaging (rs-fMRI) to study functional connectivity is of great importance to understand normal development and function as well as a host of neurological and psychiatric disorders. Seed-based analysis is one of the most widely used rs-fMRI analysis methods. Here we describe a freely available large scale functional connectivity data mining software package called Advanced Connectivity Analysis (ACA). ACA enables large-scale seed-based analysis and brain-behavior analysis. It can seamlessly examine a large number of seed regions with minimal user input. ACA has a brain-behavior analysis component to delineate associations among imaging biomarkers and one or more behavioral variables. We demonstrate applications of ACA to rs-fMRI data sets from a study of autism.

  10. The analysis of MAI in large scale MIMO-CDMA system

    Science.gov (United States)

    Berceanu, Madalina-Georgiana; Voicu, Carmen; Halunga, Simona

    2016-12-01

    Recently, technological development has driven rapid growth in the data carried by cellular services, which in turn demands higher data rates and lower latency. To meet users' demands, a series of new data processing techniques has been brought into discussion. In this paper, we consider MIMO technology, which uses multiple antennas at the receiver and transmitter ends. To study the performance obtained by this technology, we propose a MIMO-CDMA system in which image transmission is used instead of random data transmission to benefit from a larger range of quality indicators. In the simulations we increase the number of antennas and observe how the performance of the system changes; based on that, we are able to compare a conventional MIMO system with a Large Scale MIMO system in terms of BER and the MSSIM index, a metric that compares the quality of the image before transmission with that of the received image.

  11. RISK MANAGEMENT IN A LARGE-SCALE NEW RAILWAY TRANSPORT SYSTEM PROJECT

    Directory of Open Access Journals (Sweden)

    Sunduck D. SUH, Ph.D., P.E.

    2000-01-01

    Full Text Available Risk management experiences of the Korean Seoul-Pusan high-speed railway (KTX) project since the planning stage are evaluated. One can clearly see the interplay of engineering and construction risks, financial risks and political risks in the development of the KTX project, which is a peculiarity of large-scale new railway system projects. A brief description of the evaluation methodology and an overview of the project are followed by detailed evaluations of key differences in risks between conventional and high-speed railway systems, social and political risks, engineering and construction risks, and financial risks. Risks involved in the system procurement process, such as proposal solicitation, evaluation, selection, and scope of solicitation, are separated out and evaluated in depth. Detailed events resulting from these issues are discussed along with their possible impact on system risk. Lessons learned and further possible refinements are also discussed.

  12. Operation Modeling of Power Systems Integrated with Large-Scale New Energy Power Sources

    Directory of Open Access Journals (Sweden)

    Hui Li

    2016-10-01

    Full Text Available In most current methods of probabilistic power system production simulation, the output characteristics of new energy power generation (NEPG) have not been comprehensively considered. In this paper, the power output characteristics of wind power generation and photovoltaic power generation are first analyzed based on statistical methods applied to their historical operating data. Then the characteristic indexes and the filtering principle of the NEPG historical output scenarios are introduced with the confidence level, and the calculation model of NEPG's credible capacity is proposed. Based on this, taking the minimum production costs or the best energy-saving and emission-reduction effect as the optimization objective, the power system operation model with large-scale integration of NEPG is established, considering the power balance, the electricity balance and the peak balance. In addition, the constraints of the operating characteristics of different power generation types, the maintenance schedule, the load reservation, the emergency reservation, the water abandonment and the transmitting capacity between different areas are also considered. With the proposed power system operation model, operation simulations are carried out based on the actual Northwest power grid of China, which resolve new energy power accommodation under different system operating conditions. The simulation results verify the validity of the proposed power system operation model for accommodation analysis of a power system penetrated with large-scale NEPG.
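
    At its core, an operation model of this kind dispatches conventional units to cover the load net of the credited new-energy output at minimum cost, subject to unit limits and a power-balance constraint. The toy linear program below illustrates only that core; all numbers are invented, and none of the paper's additional constraints (maintenance, reserves, water abandonment, inter-area transfer) are included.

      # Toy economic-dispatch sketch: cover the load net of credited new-energy
      # output at minimum cost, subject to unit limits and power balance.  All
      # numbers are invented; the paper's further constraints are omitted.
      import numpy as np
      from scipy.optimize import linprog

      load_mw, wind_mw = 1200.0, 300.0           # demand and credited NEPG output
      cost = np.array([20.0, 35.0, 55.0])        # $/MWh of three thermal units
      p_min = np.array([100.0, 80.0, 0.0])
      p_max = np.array([600.0, 500.0, 400.0])

      res = linprog(c=cost,
                    A_eq=np.ones((1, 3)), b_eq=[load_mw - wind_mw],   # power balance
                    bounds=list(zip(p_min, p_max)), method="highs")
      print("dispatch [MW]:", res.x.round(1), "cost [$/h]:", round(res.fun, 1))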

  13. Software engineering and automatic continuous verification of scientific software

    Science.gov (United States)

    Piggott, M. D.; Hill, J.; Farrell, P. E.; Kramer, S. C.; Wilson, C. R.; Ham, D.; Gorman, G. J.; Bond, T.

    2011-12-01

    Software engineering of scientific code is challenging for a number of reasons, including pressure to publish and a lack of awareness of the pitfalls of software engineering by scientists. The Applied Modelling and Computation Group at Imperial College is a diverse group of researchers that employ best-practice software engineering methods whilst developing open source scientific software. Our main code is Fluidity - a multi-purpose computational fluid dynamics (CFD) code that can be used for a wide range of scientific applications from earth-scale mantle convection, through basin-scale ocean dynamics, to laboratory-scale classic CFD problems, and is coupled to a number of other codes including nuclear radiation and solid modelling. Our software development infrastructure consists of a number of free tools that could be employed by any group that develops scientific code and has been developed over a number of years with many lessons learnt. A single code base is developed by over 30 people, for which we use bazaar for revision control, making good use of its strong branching and merging capabilities. Using features of Canonical's Launchpad platform, such as code review, blueprints for designing features and bug reporting, gives the group, partners and other Fluidity users an easy-to-use platform to collaborate and allows the induction of new members of the group into an environment where software development forms a central part of their work. The code repository is coupled to an automated test and verification system which performs over 20,000 tests, including unit tests, short regression tests, code verification and large parallel tests. Included in these tests are build tests on HPC systems, including local and UK National HPC services. The testing of code in this manner leads to a continuous verification process, not a discrete event performed once development has ceased. Much of the code verification is done via the "gold standard" of comparisons to analytical
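
    The "comparison to analytical solutions" style of verification test can be shown with a short, self-contained example. This is not taken from Fluidity's test suite; it merely illustrates the pattern of an automated test that integrates a model numerically and asserts agreement with the exact solution, so that it can run on every commit as part of continuous verification.

      # Not from Fluidity's test suite -- just the pattern of an automated test
      # that integrates a model numerically and asserts agreement with the
      # exact solution, so it can run on every commit.
      import math

      def integrate_decay(y0, k, t_end, dt):
          """Forward-Euler integration of dy/dt = -k*y."""
          y, t = y0, 0.0
          while t < t_end:
              y += dt * (-k * y)
              t += dt
          return y

      def test_exponential_decay():
          y_num = integrate_decay(y0=1.0, k=2.0, t_end=1.0, dt=1e-4)
          assert abs(y_num - math.exp(-2.0)) < 1e-3, "numerical solution drifted"

      if __name__ == "__main__":
          test_exponential_decay()
          print("analytical-comparison test passed")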

  14. Software for large scale tracking studies

    International Nuclear Information System (INIS)

    Niederer, J.

    1984-05-01

    Over the past few years, Brookhaven accelerator physicists have been adapting particle tracking programs in planning local storage rings, and lately for SSC reference designs. In addition, the Laboratory is actively considering upgrades to its AGS capabilities aimed at higher proton intensity, polarized proton beams, and heavy ion acceleration. Further activity concerns heavy ion transfer, a proposed booster, and most recently design studies for a heavy ion collider to join to this complex. Circumstances have thus encouraged a search for common features among design and modeling programs and their data, and the corresponding controls efforts among present and tentative machines. Using a version of PATRICIA with nonlinear forces as a vehicle, we have experimented with formal ways to describe accelerator lattice problems to computers as well as to speed up the calculations for large storage ring models. Code treated by straightforward reorganization has served for SSC explorations. The representation work has led to a relational data base centered program, LILA, which has desirable properties for dealing with the many thousands of rapidly changing variables in tracking and other model programs. 13 references

  15. Physics detector simulation facility system software description

    International Nuclear Information System (INIS)

    Allen, J.; Chang, C.; Estep, P.; Huang, J.; Liu, J.; Marquez, M.; Mestad, S.; Pan, J.; Traversat, B.

    1991-12-01

    Large and costly detectors will be constructed during the next few years to study the interactions produced by the SSC. Efficient, cost-effective designs for these detectors will require careful thought and planning. Because it is not possible to test fully a proposed design in a scaled-down version, the adequacy of a proposed design will be determined by a detailed computer model of the detectors. Physics and detector simulations will be performed on the computer model using high-powered computing system at the Physics Detector Simulation Facility (PDSF). The SSCL has particular computing requirements for high-energy physics (HEP) Monte Carlo calculations for the simulation of SSCL physics and detectors. The numerical calculations to be performed in each simulation are lengthy and detailed; they could require many more months per run on a VAX 11/780 computer and may produce several gigabytes of data per run. Consequently, a distributed computing environment of several networked high-speed computing engines is envisioned to meet these needs. These networked computers will form the basis of a centralized facility for SSCL physics and detector simulation work. Our computer planning groups have determined that the most efficient, cost-effective way to provide these high-performance computing resources at this time is with RISC-based UNIX workstations. The modeling and simulation application software that will run on the computing system is usually written by physicists in FORTRAN language and may need thousands of hours of supercomputing time. The system software is the ''glue'' which integrates the distributed workstations and allows them to be managed as a single entity. This report will address the computing strategy for the SSC

  16. Bringing Model Checking Closer to Practical Software Engineering

    CERN Document Server

    AUTHOR|(CDS)2079681; Templon, J A; Willemse, T.A.C.

    Software grows in size and complexity, making it increasingly challenging to ensure that it behaves correctly. This is especially true for distributed systems, where a multitude of components are running concurrently, making it difficult to anticipate all the possible behaviors emerging in the system as a whole. Certain design errors, such as deadlocks and race conditions, can often go unnoticed when testing is the only form of verification employed in the software engineering life-cycle. Even when bugs are detected in running software, revealing the root cause and reproducing the behavior can be time consuming (and even impossible), given the lack of control the engineer has over the execution of the concurrent components, as well as the number of possible scenarios that could have produced the problem. This is especially pronounced for large-scale distributed systems such as the Worldwide Large Hadron Collider Computing Grid. Formal verification methods offer more rigorous means of determining whether a system sat...

  17. GiA Roots: software for the high throughput analysis of plant root system architecture

    Science.gov (United States)

    2012-01-01

    Background Characterizing root system architecture (RSA) is essential to understanding the development and function of vascular plants. Identifying RSA-associated genes also represents an underexplored opportunity for crop improvement. Software tools are needed to accelerate the pace at which quantitative traits of RSA are estimated from images of root networks. Results We have developed GiA Roots (General Image Analysis of Roots), a semi-automated software tool designed specifically for the high-throughput analysis of root system images. GiA Roots includes user-assisted algorithms to distinguish root from background and a fully automated pipeline that extracts dozens of root system phenotypes. Quantitative information on each phenotype, along with intermediate steps for full reproducibility, is returned to the end-user for downstream analysis. GiA Roots has a GUI front end and a command-line interface for interweaving the software into large-scale workflows. GiA Roots can also be extended to estimate novel phenotypes specified by the end-user. Conclusions We demonstrate the use of GiA Roots on a set of 2393 images of rice roots representing 12 genotypes from the species Oryza sativa. We validate trait measurements against prior analyses of this image set that demonstrated that RSA traits are likely heritable and associated with genotypic differences. Moreover, we demonstrate that GiA Roots is extensible and an end-user can add functionality so that GiA Roots can estimate novel RSA traits. In summary, we show that the software can function as an efficient tool as part of a workflow to move from large numbers of root images to downstream analysis. PMID:22834569
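
    For readers unfamiliar with this kind of pipeline, the sketch below shows the generic root-versus-background segmentation and simple trait extraction that such tools automate, using scikit-image on a synthetic image. It is an illustration of the idea only, not GiA Roots' actual interface, and the pixel size is an assumed value.

```python
# Generic illustration of root/background segmentation and simple trait
# extraction (not the GiA Roots API).
import numpy as np
from skimage import filters, morphology

# Synthetic stand-in for a root scan: dark, thin structures on a light
# background (a real pipeline would load the scanned image instead).
rng = np.random.default_rng(0)
img = np.full((400, 400), 0.9) + 0.02 * rng.normal(size=(400, 400))
rows = np.arange(400)
cols = (200 + 60 * np.sin(rows / 40)).astype(int)
for dr in range(-2, 3):
    img[rows, np.clip(cols + dr, 0, 399)] = 0.2      # a "root" 5 px wide

thresh = filters.threshold_otsu(img)                  # user-adjustable in practice
root_mask = img < thresh                              # roots darker than background
root_mask = morphology.remove_small_objects(root_mask, min_size=64)

skeleton = morphology.skeletonize(root_mask)
pixel_size_mm = 0.1                                   # assumed scan resolution
print("network area (mm^2):", root_mask.sum() * pixel_size_mm ** 2)
print("approx. total root length (mm):", skeleton.sum() * pixel_size_mm)
```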

  18. Review of DC System Technologies for Large Scale Integration of Wind Energy Systems with Electricity Grids

    Directory of Open Access Journals (Sweden)

    Sheng Jie Shao

    2010-06-01

    Full Text Available The ever increasing development and availability of power electronic systems is the underpinning technology that enables large scale integration of wind generation plants with the electricity grid. As the size and power capacity of wind turbines continue to increase, so does the need to place these significantly large structures at off-shore locations. DC grids and associated power transmission technologies provide opportunities for cost reduction and for minimizing the impact on the electricity grid, as the bulk power is concentrated at a single point of entry. As a result, planning, optimization and impact can be studied and carefully controlled, minimizing the risk of the investment as well as power system stability issues. This paper discusses the key technologies associated with DC grids for offshore wind farm applications.

  19. Algorithms for large scale singular value analysis of spatially variant tomography systems

    International Nuclear Information System (INIS)

    Cao-Huu, Tuan; Brownell, G.; Lachiver, G.

    1996-01-01

    The problem of determining the eigenvalues of large matrices occurs often in the design and analysis of modern tomography systems. As there is an interest in solving systems containing an ever-increasing number of variables, current research effort is being made to create more robust solvers which do not depend on some special feature of the matrix for convergence (e.g. block circulant), and to improve the speed of already known and understood solvers so that solving even larger systems in a reasonable time becomes viable. Our standard techniques for singular value analysis are based on sparse matrix factorization and are not applicable when the input matrices are large because the algorithms cause too much fill. Fill refers to the increase of non-zero elements in the LU decomposition of the original matrix A (the system matrix). We have therefore developed iterative solutions in place of sparse direct methods. Data motion and preconditioning techniques are critical for performance. This conference paper describes our algorithmic approaches for large scale singular value analysis of spatially variant imaging systems, and in particular of PCR2, a cylindrical three-dimensional PET imager built at the Massachusetts General Hospital (MGH) in Boston. We also outline the desirable features of, and challenges for, the next generation of parallel machines for optimal performance of our solver.
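
    A minimal sketch of the iterative route argued for above: a few leading singular values of a large sparse system matrix are computed from matrix-vector products alone, with no fill-inducing factorization, here using SciPy's Lanczos-based svds. This illustrates the approach, not the PCR2 solver itself.

```python
# Leading singular values of a large, sparse system matrix via an
# iterative (Lanczos-type) method -- no LU factorization, hence no fill.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

rng = np.random.default_rng(0)
# Stand-in for a spatially variant tomographic system matrix A (rows =
# detector bins, columns = image voxels), ~0.1% non-zeros.
A = sp.random(20000, 8000, density=0.001, random_state=rng, format="csr")

# Ten largest singular values; only products with A and A.T are needed,
# so A stays sparse throughout the computation.
u, s, vt = svds(A, k=10)
print("largest singular values:", np.sort(s)[::-1])
```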

  20. Energy transfers in large-scale and small-scale dynamos

    Science.gov (United States)

    Samtaney, Ravi; Kumar, Rohit; Verma, Mahendra

    2015-11-01

    We present the energy transfers, mainly energy fluxes and shell-to-shell energy transfers, in small-scale dynamo (SSD) and large-scale dynamo (LSD) regimes using numerical simulations of MHD turbulence for Pm = 20 (SSD) and for Pm = 0.2 (LSD) on a 1024³ grid. For SSD, we demonstrate that the magnetic energy growth is caused by nonlocal energy transfers from the large-scale or forcing-scale velocity field to the small-scale magnetic field. The peak of these energy transfers moves towards lower wavenumbers as the dynamo evolves, which is the reason for the growth of the magnetic fields at the large scales. The energy transfers U2U (velocity to velocity) and B2B (magnetic to magnetic) are forward and local. For LSD, we show that the magnetic energy growth takes place via energy transfers from the large-scale velocity field to the large-scale magnetic field. We observe forward U2U and B2B energy fluxes, similar to SSD.

  1. Expanded Large-Scale Forcing Properties Derived from the Multiscale Data Assimilation System and Its Application to Single-Column Models

    Science.gov (United States)

    Feng, S.; Li, Z.; Liu, Y.; Lin, W.; Toto, T.; Vogelmann, A. M.; Fridlind, A. M.

    2013-12-01

    We present an approach to derive large-scale forcing that is used to drive single-column models (SCMs) and cloud resolving models (CRMs)/large eddy simulation (LES) for evaluating fast physics parameterizations in climate models. The forcing fields are derived by use of a newly developed multi-scale data assimilation (MS-DA) system. This DA system is developed on top of the NCEP Gridpoint Statistical Interpolation (GSI) System and is implemented in the Weather Research and Forecasting (WRF) model at a cloud resolving resolution of 2 km. This approach has been applied to the generation of large scale forcing for a set of Intensive Operation Periods (IOPs) over the Atmospheric Radiation Measurement (ARM) Climate Research Facility's Southern Great Plains (SGP) site. The dense ARM in-situ observations and high-resolution satellite data effectively constrain the WRF model. The evaluation shows that the derived forcing displays accuracies comparable to the existing continuous forcing product and, overall, a better dynamic consistency with observed cloud and precipitation. One important application of this approach is to derive large-scale hydrometeor forcing and multiscale forcing, which is not provided in the existing continuous forcing product. It is shown that the hydrometeor forcing poses an appreciable impact on cloud and precipitation fields in the single-column model simulations. The large-scale forcing exhibits a significant dependency on domain-size that represents SCM grid-sizes. Subgrid processes often contribute a significant component to the large-scale forcing, and this contribution is sensitive to the grid-size and cloud-regime.

  2. Impacts of large-scale offshore wind farm integration on power systems through VSC-HVDC

    DEFF Research Database (Denmark)

    Liu, Hongzhi; Chen, Zhe

    2013-01-01

    The potential of offshore wind energy has been commonly recognized and explored globally. Many countries have implemented and planned offshore wind farms to meet their increasing electricity demands and public environmental appeals, especially in Europe. With relatively less space limitation, an offshore wind farm could have a capacity rating of hundreds of MWs or even GWs, large enough to compete with conventional power plants. Thus the impacts of a large offshore wind farm on power system operation and security should be thoroughly studied and understood. This paper investigates the impacts of integrating a large-scale offshore wind farm into the transmission system of a power grid through a VSC-HVDC connection. The concerns are focused on steady-state voltage stability, dynamic voltage stability and transient angle stability. Simulation results based on an exemplary power system...

  3. Capabilities of the Large-Scale Sediment Transport Facility

    Science.gov (United States)

    2016-04-01

    ERDC/CHL CHETN-I-88 (April 2016; approved for public release, distribution unlimited). This technical note describes the Large-Scale Sediment Transport Facility (LSTF) and recent upgrades to its measurement systems, including pump flow meters, sediment trap weigh tanks, and beach-profiling lidar. The purpose of these upgrades was to increase... A detailed discussion of the original LSTF features and capabilities can be...

  4. An expert system based software sizing tool, phase 2

    Science.gov (United States)

    Friedlander, David

    1990-01-01

    A software tool was developed for predicting the size of a future computer program at an early stage in its development. The system is intended to enable a user who is not an expert in software engineering to estimate software size in lines of source code, based on the program's functional specifications, with an accuracy similar to that of an expert. The project was planned as a knowledge-based system, with a field prototype as the goal of Phase 2 and a commercial system planned for Phase 3. The researchers used techniques from Artificial Intelligence, knowledge from human experts, and existing software from NASA's COSMIC database. They devised a classification scheme for the software specifications and a small set of generic software components that represent complexity and apply to large classes of programs. The specifications are converted to generic components by a set of rules, and the generic components are input to a nonlinear sizing function which makes the final prediction. The system developed for this project predicted code sizes from the database with a bias factor of 1.06 and a fluctuation factor of 1.77, an accuracy similar to that of human experts but without their significant optimistic bias.
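
    A compact sketch of the two-stage idea: rules map specification attributes to counts of generic components, and a nonlinear function turns those counts into a size estimate. The rules, component names and coefficients below are invented for illustration and are not the knowledge base of the NASA tool.

```python
# Toy version of a rule-based sizing pipeline: specification features ->
# generic components -> nonlinear size estimate. All numbers are made up.
import math

# Stage 1: "rules" mapping specification attributes to generic components.
def classify(spec):
    components = {"io_handler": 0, "computation": 0, "ui_screen": 0}
    components["io_handler"] += len(spec.get("input_files", []))
    components["io_handler"] += len(spec.get("output_files", []))
    components["computation"] += len(spec.get("algorithms", []))
    components["ui_screen"] += len(spec.get("screens", []))
    return components

# Stage 2: nonlinear sizing function over component counts.
WEIGHTS = {"io_handler": 120, "computation": 400, "ui_screen": 250}

def estimate_sloc(components, complexity=1.0):
    base = sum(WEIGHTS[c] * n for c, n in components.items())
    # Superlinear growth with overall complexity, as in many sizing models.
    return base * complexity ** 1.2 + 50 * math.sqrt(base)

spec = {"input_files": ["telemetry"], "output_files": ["report"],
        "algorithms": ["orbit_propagation", "filtering"], "screens": ["status"]}
print(f"estimated size: {estimate_sloc(classify(spec), complexity=1.3):,.0f} SLOC")
```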

  5. WAMS Based Intelligent Operation and Control of Modern Power System with large Scale Renewable Energy Penetration

    DEFF Research Database (Denmark)

    Rather, Zakir Hussain

    ... security limits. Under such a scenario, progressive displacement of conventional generation by wind generation is expected to eventually lead to a complex power system with minimal presence of central power plants. Consequently the support from conventional power plants is expected to reach its all-time low... system voltage control responsibility from conventional power plants to wind turbines. With increased wind penetration and displaced conventional central power plants, dynamic voltage security has been identified as one of the challenging issues for large-scale wind integration. To address this dynamic voltage security issue, a WAMS-based systematic voltage control scheme for large-scale wind-integrated power systems has been proposed. Along with optimal reactive power compensation, the proposed scheme considers voltage support from wind farms (equipped with voltage support functionality) and refurbished...

  6. Development of Anti-Insect Microencapsulated Polypropylene Films Using a Large Scale Film Coating System.

    Science.gov (United States)

    Song, Ah Young; Choi, Ha Young; Lee, Eun Song; Han, Jaejoon; Min, Sea C

    2018-04-01

    Films containing microencapsulated cinnamon oil (CO) were developed using a large-scale production system to protect against the Indian meal moth (Plodia interpunctella). CO at concentrations of 0%, 0.8%, or 1.7% (w/w ink mixture) was microencapsulated with polyvinyl alcohol. The microencapsulated CO emulsion was mixed with ink (47% or 59%, w/w) and thinner (20% or 25%, w/w) and coated on polypropylene (PP) films. The PP film was then laminated with a low-density polyethylene (LDPE) film on the coated side. The film with microencapsulated CO at 1.7% repelled P. interpunctella most effectively. Microencapsulation did not negatively affect insect repelling activity. The release rate of cinnamaldehyde, an active repellent, was lower when CO was microencapsulated than in the absence of microencapsulation. Thermogravimetric analysis showed that microencapsulation prevented the volatilization of CO. The tensile strength, percentage elongation at break, elastic modulus, and water vapor permeability of the films indicated that microencapsulation did not affect the tensile and moisture barrier properties (P > 0.05). The results of this study suggest that effective films for the prevention of Indian meal moth invasion can be produced by the microencapsulation of CO using a large-scale film production system. Low-density polyethylene-laminated polypropylene films printed with ink incorporating microencapsulated cinnamon oil using a large-scale film production system effectively repelled Indian meal moth larvae. Without altering the tensile and moisture barrier properties of the film, microencapsulation resulted in the release of an active repellent for extended periods with a high thermal stability of cinnamon oil, enabling commercial film production at high temperatures. This anti-insect film system may have applications to other food-packaging films that use the same ink-printing platform. © 2018 Institute of Food Technologists®.

  7. Evolving a Simulation Model Product Line Software Architecture from Heterogeneous Model Representations

    National Research Council Canada - National Science Library

    Greaney, Kevin

    2003-01-01

    .... Many of these large-scale, software-intensive simulation systems were autonomously developed over time, and subject to varying degrees of funding, maintenance, and life-cycle management practices...

  8. Web tools for large-scale 3D biological images and atlases

    Directory of Open Access Journals (Sweden)

    Husz Zsolt L

    2012-06-01

    Full Text Available Abstract Background Large-scale volumetric biomedical image data of three or more dimensions are a significant challenge for distributed browsing and visualisation. Many images now exceed 10 GB, which for most users is too large to handle in terms of computer RAM and network bandwidth. This is aggravated when users need to access tens or hundreds of such images from an archive. Here we solve the problem for 2D section views through archive data by delivering compressed tiled images, enabling users to browse very large volume data in a standard web browser. The system provides an interactive visualisation for grey-level and colour 3D images, including multiple image layers and spatial-data overlay. Results The standard Internet Imaging Protocol (IIP) has been extended to enable arbitrary 2D sectioning of 3D data as well as multi-layered images and indexed overlays. The extended protocol is termed IIP3D, and we have implemented a matching server to deliver the protocol and a series of Ajax/Javascript client codes that will run in an Internet browser. We have tested the server software on a low-cost Linux-based server for image volumes up to 135 GB and 64 simultaneous users. The section views are delivered with response times independent of scale and orientation. The exemplar client provided multi-layer image views with user-controlled colour-filtering and overlays. Conclusions Interactive browsing of arbitrary sections through large biomedical-image volumes is made possible by use of an extended internet protocol and efficient server-based image tiling. The tools open the possibility of enabling fast access to large image archives without requiring whole-image download or client computers with very large memory configurations. The system was demonstrated using a range of medical and biomedical image data extending up to 135 GB for a single image volume.
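
    The core server-side operation, cutting a 2D section out of a large 3D volume and handing it back as small fixed-size tiles so the client never downloads the whole image, can be sketched as follows. This is a generic NumPy illustration (axis-aligned sections only), not the IIP3D protocol or the server described above.

```python
# Sketch of the server-side idea: take a 2D section through a 3D volume
# and return it as fixed-size tiles, so the client only ever fetches the
# small pieces it is currently viewing.
import numpy as np

TILE = 256

def section(volume, axis, index):
    """Axis-aligned 2D slice of a 3D grey-level volume."""
    return np.take(volume, index, axis=axis)

def tile(plane, row, col):
    """Return tile (row, col) of a 2D section, zero-padded at the edges."""
    out = np.zeros((TILE, TILE), dtype=plane.dtype)
    block = plane[row * TILE:(row + 1) * TILE, col * TILE:(col + 1) * TILE]
    out[:block.shape[0], :block.shape[1]] = block
    return out

# Stand-in volume; a real archive image would be memory-mapped from disk.
rng = np.random.default_rng(0)
volume = rng.integers(0, 255, size=(64, 1024, 1024), dtype=np.uint8)
plane = section(volume, axis=0, index=30)
t = tile(plane, row=2, col=3)
print(plane.shape, t.shape, t.dtype)
```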

  9. Large-scale production of Fischer-Tropsch diesel from biomass. Optimal gasification and gas cleaning systems

    International Nuclear Information System (INIS)

    Boerrigter, H.; Van der Drift, A.

    2004-12-01

    The paper is presented in the form of copies of overhead sheets. The contents concern definitions, an overview of Integrated biomass gasification and Fischer Tropsch (FT) systems (state-of-the-art, gas cleaning and biosyngas production, experimental demonstration and conclusions), some aspects of large-scale systems (motivation, biomass import) and an outlook

  10. Software Framework for Development of Web-GIS Systems for Analysis of Georeferenced Geophysical Data

    Science.gov (United States)

    Okladnikov, I.; Gordov, E. P.; Titov, A. G.

    2011-12-01

    Georeferenced datasets (meteorological databases, modeling and reanalysis results, remote sensing products, etc.) are currently actively used in numerous applications, including modeling, interpretation and forecast of climatic and ecosystem changes for various spatial and temporal scales. Due to the inherent heterogeneity of environmental datasets, as well as their size, which might constitute up to tens of terabytes for a single dataset, studies in the area of climate and environmental change at present require special software support. A dedicated software framework has been created for the rapid development of information-computational systems, based on Web-GIS technologies, that provide such support. The software framework consists of 3 basic parts: a computational kernel developed using ITTVIS Interactive Data Language (IDL), a set of PHP controllers run within a specialized web portal, and a JavaScript class library for development of typical components of web mapping application graphical user interfaces (GUIs) based on AJAX technology. The computational kernel comprises a number of modules for dataset access, mathematical and statistical data analysis, and visualization of results. The specialized web portal consists of the Apache web server, the OGC-standards-compliant GeoServer software, which is used as a base for presenting cartographical information over the Web, and a set of PHP controllers implementing the web mapping application logic and governing the computational kernel. The JavaScript library aimed at graphical user interface development is based on the GeoExt library, combining the ExtJS framework and OpenLayers software. Based on the software framework, an information-computational system for complex analysis of large georeferenced data archives was developed. Structured environmental datasets available for processing now include two editions of NCEP/NCAR Reanalysis, JMA/CRIEPI JRA-25 Reanalysis, ECMWF ERA-40 Reanalysis, ECMWF ERA Interim Reanalysis, MRI/JMA APHRODITE's Water Resources Project Reanalysis

  11. Large-Scale Structure and Hyperuniformity of Amorphous Ices

    Science.gov (United States)

    Martelli, Fausto; Torquato, Salvatore; Giovambattista, Nicolas; Car, Roberto

    2017-09-01

    We investigate the large-scale structure of amorphous ices and transitions between their different forms by quantifying their large-scale density fluctuations. Specifically, we simulate the isothermal compression of low-density amorphous ice (LDA) and hexagonal ice to produce high-density amorphous ice (HDA). Both HDA and LDA are nearly hyperuniform; i.e., they are characterized by an anomalous suppression of large-scale density fluctuations. By contrast, in correspondence with the nonequilibrium phase transitions to HDA, the presence of structural heterogeneities strongly suppresses the hyperuniformity and the system becomes hyposurficial (devoid of "surface-area fluctuations"). Our investigation challenges the largely accepted "frozen-liquid" picture, which views glasses as structurally arrested liquids. Beyond implications for water, our findings enrich our understanding of pressure-induced structural transformations in glasses.
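
    Hyperuniformity is commonly diagnosed through the small-wavevector behaviour of the structure factor, S(k) → 0 as k → 0. The sketch below estimates S(k) for a 3D point configuration in a periodic box; it illustrates the diagnostic itself, not the authors' simulation workflow, and the configuration is a random stand-in.

```python
# Estimate the structure factor S(k) of a 3D point configuration in a
# periodic box; nearly hyperuniform systems show S(k) -> 0 as k -> 0.
import numpy as np

rng = np.random.default_rng(1)
L, N = 20.0, 4000
positions = rng.uniform(0.0, L, size=(N, 3))   # stand-in configuration

# Smallest allowed wavevectors of the periodic box: k = (2*pi/L) * n
ns = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
               [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
ks = 2.0 * np.pi / L * ns

for k in ks:
    rho_k = np.exp(1j * positions @ k).sum()   # collective density mode
    S = np.abs(rho_k) ** 2 / N
    print(f"|k| = {np.linalg.norm(k):.3f}   S(k) = {S:.3f}")
# A Poisson (ideal-gas-like) configuration gives S(k) ~ 1 at all k; the
# amorphous ices in the study show a strong suppression at small k.
```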

  12. Software management of the LHC Detector Control Systems

    CERN Document Server

    Varela, F

    2007-01-01

    The control systems of each of the four Large Hadron Collider (LHC) experiments will contain of the order of 150 computers running the back-end applications. These applications will have to be maintained and eventually upgraded during the lifetime of the experiments, ~20 years. This paper presents the centralized software management strategy adopted by the Joint COntrols Project (JCOP) [1], which is based on a central database that holds the overall system configuration. The approach facilitates the integration of different parts of a control system and provides versioning of its various software components. The information stored in the configuration database can eventually be used to restore a computer in the event of failure.

  14. A central solar-industrial waste heat heating system with large scale borehole thermal storage

    NARCIS (Netherlands)

    Guo, F.; Yang, X.; Xu, L.; Torrens, I.; Hensen, J.L.M.

    2017-01-01

    In this paper, a new study of seasonal thermal storage is introduced. This study aims to maximize the utilization of renewable energy sources and industrial waste heat (IWH) for urban district heating systems in both heating and non-heating seasons through the use of large-scale seasonal thermal storage.

  15. The software development process in worldwide collaborations

    International Nuclear Information System (INIS)

    Amako, K.

    1998-01-01

    High energy physics experiments at future colliders are inevitably large scale international collaborations. In these experiments, software development has to be done by a large number of physicists, software engineers and computer scientists, dispersed all over the world. The major subject of this paper is to discuss various aspects of software development in this worldwide environment. These include software engineering and methodology, the software development process and management. (orig.)

  16. Comparison of Multi-Scale Digital Elevation Models for Defining Waterways and Catchments Over Large Areas

    Science.gov (United States)

    Harris, B.; McDougall, K.; Barry, M.

    2012-07-01

    Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for analysis of larger data sets and also facilitate a consistent tool for the creation and analysis of waterways over extensive areas. However, they are rarely developed over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines the definition of waterways and catchments over an area of approximately 25,000 km² to establish the optimal DEM scale required for waterway delineation over large regional projects. The comparative study analysed multi-scale DEMs over two test areas (the Wivenhoe catchment, 543 km², and a detailed 13 km² area within the Wivenhoe catchment), including various data types, scales, quality, and variable catchment input parameters. Historic and available DEM data were compared to high-resolution lidar-based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad-scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.
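
    The first step of delineating waterways from a DEM, assigning each cell a flow direction toward its steepest downslope neighbour (the standard D8 scheme), can be sketched in a few lines; the grid spacing and sample array below are arbitrary.

```python
# Minimal D8 flow-direction step used in DEM-based waterway delineation:
# each interior cell drains to its steepest downslope 8-neighbour.
import numpy as np

def d8_flow_direction(dem, cellsize=20.0):
    """Return per-cell (drow, dcol) toward the steepest downslope
    neighbour, or (0, 0) for pits/flats and border cells."""
    nrows, ncols = dem.shape
    directions = np.zeros((nrows, ncols, 2), dtype=int)
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                  (0, 1), (1, -1), (1, 0), (1, 1)]
    for r in range(1, nrows - 1):
        for c in range(1, ncols - 1):
            best, best_slope = (0, 0), 0.0
            for dr, dc in neighbours:
                dist = cellsize * (2 ** 0.5 if dr and dc else 1.0)
                slope = (dem[r, c] - dem[r + dr, c + dc]) / dist
                if slope > best_slope:
                    best, best_slope = (dr, dc), slope
            directions[r, c] = best
    return directions

dem = np.array([[50, 48, 47, 46],
                [49, 45, 44, 43],
                [48, 44, 40, 39],
                [47, 43, 38, 35]], dtype=float)
print(d8_flow_direction(dem)[1:-1, 1:-1])   # directions of the interior cells
```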

  17. A compact to revitalise large-scale irrigation systems: A ‘theory of change’ approach

    Directory of Open Access Journals (Sweden)

    Bruce A. Lankford

    2016-02-01

    Full Text Available In countries with transitional economies such as those found in South Asia, large-scale irrigation systems (LSIS) with a history of public ownership account for about 115 million ha (Mha) or approximately 45% of their total area under irrigation. In terms of the global area of irrigation (320 Mha for all countries), LSIS are estimated at 130 Mha or 40% of irrigated land. These systems can potentially deliver significant local, regional and global benefits in terms of food, water and energy security, employment, economic growth and ecosystem services. For example, primary crop production is conservatively valued at about US$355 billion. However, efforts to enhance these benefits and reform the sector have been costly and outcomes have been underwhelming and short-lived. We propose the application of a 'theory of change' (ToC) as a foundation for promoting transformational change in large-scale irrigation centred upon a 'global irrigation compact' that promotes new forms of leadership, partnership and ownership (LPO). The compact argues that LSIS can change by switching away from the current channelling of aid finances controlled by government irrigation agencies. Instead it is for irrigators, closely partnered by private, public and NGO advisory and regulatory services, to develop strong leadership models and to find new compensatory partnerships with cities and other river basin neighbours. The paper summarises key assumptions for change in the LSIS sector including the need to initially test this change via a handful of volunteer systems. Our other key purpose is to demonstrate a ToC template by which large-scale irrigation policy can be better elaborated and discussed.

  18. On-line transient stability assessment of large-scale power systems by using ball vector machines

    International Nuclear Information System (INIS)

    Mohammadi, M.; Gharehpetian, G.B.

    2010-01-01

    In this paper, a ball vector machine (BVM) has been used for on-line transient stability assessment of large-scale power systems. To classify the system transient security status, a BVM has been trained for all contingencies. The proposed BVM-based security assessment algorithm requires very little training time and space in comparison with artificial neural networks (ANN), support vector machines (SVM) and other machine-learning-based algorithms. In addition, the proposed algorithm has fewer support vectors (SVs) and is therefore faster than existing algorithms for on-line applications. One of the main issues in applying a machine learning method is feature selection. In this paper, a new Decision Tree (DT) based feature selection technique has been presented. The proposed BVM-based algorithm has been applied to the New England 39-bus power system. The simulation results show the effectiveness and stability of the proposed method for on-line transient stability assessment of large-scale power systems. The proposed feature selection algorithm has been compared with different feature selection algorithms, and the simulation results demonstrate its effectiveness.
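
    The two-stage workflow, decision-tree-based feature ranking followed by training a classifier on the reduced feature set, can be sketched with scikit-learn. A standard SVM stands in for the ball vector machine, which scikit-learn does not provide, and the data are synthetic.

```python
# Sketch of the assessment pipeline: rank candidate features with a
# decision tree, keep the most informative ones, then train a classifier
# labelling operating points as secure/insecure. Synthetic data; an SVM
# stands in for the ball vector machine.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 40))            # e.g. pre-fault bus measurements
y = (X[:, 3] + 0.5 * X[:, 17] - 0.3 * X[:, 29] > 0).astype(int)   # secure = 1

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: decision-tree feature ranking.
tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_tr, y_tr)
top = np.argsort(tree.feature_importances_)[::-1][:5]
print("selected features:", top)

# Stage 2: train the security classifier on the reduced feature set.
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr[:, top], y_tr)
print("test accuracy:", clf.score(X_te[:, top], y_te))
```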

  19. Energy efficiency supervision strategy selection of Chinese large-scale public buildings

    International Nuclear Information System (INIS)

    Jin Zhenxing; Wu Yong; Li Baizhan; Gao Yafeng

    2009-01-01

    This paper discusses energy consumption, building development and building energy consumption in China, and points out that energy efficiency management and maintenance of large-scale public buildings is the breakthrough point for building energy saving in China. Three obstacles - a lack of basic statistical data, a lack of a service market for building energy saving, and a lack of effective management measures - account for the necessity of energy efficiency supervision for large-scale public buildings. The paper then introduces the supervision aims, the supervision system and the roles of the five basic systems within it, and analyzes the working mechanism of the five basic systems. The energy efficiency supervision system for large-scale public buildings takes energy consumption statistics as its data basis, energy auditing as technical support, energy consumption quotas as a benchmark of energy saving, price increases beyond the quota as a price lever, and energy efficiency public notices as an amplifier. The supervision system promotes energy-efficient operation and maintenance of large-scale public buildings, and drives comprehensive building energy saving in China.

  20. Energy efficiency supervision strategy selection of Chinese large-scale public buildings

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Zhenxing; Li, Baizhan; Gao, Yafeng [The Faculty of Urban Construction and Environmental Engineering, Chongqing University, Chongqing (China); Key Laboratory of the Three Gorges Reservoir Region's Eco-Environment, Ministry of Education, Chongqing 400045 (China)]; Wu, Yong [The Department of Science and Technology, Ministry of Construction, Beijing 100835 (China)

    2009-06-15

    This paper discusses energy consumption, building development and building energy consumption in China, and points out that energy efficiency management and maintenance of large-scale public buildings is the breakthrough point for building energy saving in China. Three obstacles - a lack of basic statistical data, a lack of a service market for building energy saving, and a lack of effective management measures - account for the necessity of energy efficiency supervision for large-scale public buildings. The paper then introduces the supervision aims, the supervision system and the roles of the five basic systems within it, and analyzes the working mechanism of the five basic systems. The energy efficiency supervision system for large-scale public buildings takes energy consumption statistics as its data basis, energy auditing as technical support, energy consumption quotas as a benchmark of energy saving, price increases beyond the quota as a price lever, and energy efficiency public notices as an amplifier. The supervision system promotes energy-efficient operation and maintenance of large-scale public buildings, and drives comprehensive building energy saving in China. (author)

  1. Energy efficiency supervision strategy selection of Chinese large-scale public buildings

    Energy Technology Data Exchange (ETDEWEB)

    Jin Zhenxing [Faculty of Urban Construction and Environmental Engineering, Chongqing University, Chongqing (China); Key Laboratory of the Three Gorges Reservoir Region's Eco-Environment, Ministry of Education, Chongqing 400045 (China)], E-mail: jinzhenxing33@sina.com; Wu Yong [Department of Science and Technology, Ministry of Construction, Beijing 100835 (China); Li Baizhan; Gao Yafeng [Faculty of Urban Construction and Environmental Engineering, Chongqing University, Chongqing (China); Key Laboratory of the Three Gorges Reservoir Region's Eco-Environment, Ministry of Education, Chongqing 400045 (China)

    2009-06-15

    This paper discusses energy consumption, building development and building energy consumption in China, and points out that energy efficiency management and maintenance of large-scale public buildings is the breakthrough point for building energy saving in China. Three obstacles - a lack of basic statistical data, a lack of a service market for building energy saving, and a lack of effective management measures - account for the necessity of energy efficiency supervision for large-scale public buildings. The paper then introduces the supervision aims, the supervision system and the roles of the five basic systems within it, and analyzes the working mechanism of the five basic systems. The energy efficiency supervision system for large-scale public buildings takes energy consumption statistics as its data basis, energy auditing as technical support, energy consumption quotas as a benchmark of energy saving, price increases beyond the quota as a price lever, and energy efficiency public notices as an amplifier. The supervision system promotes energy-efficient operation and maintenance of large-scale public buildings, and drives comprehensive building energy saving in China.

  2. Recommendation systems in software engineering

    CERN Document Server

    Robillard, Martin P; Walker, Robert J; Zimmermann, Thomas

    2014-01-01

    With the growth of public and private data stores and the emergence of off-the-shelf data-mining technology, recommendation systems have emerged that specifically address the unique challenges of navigating and interpreting software engineering data. This book collects, structures and formalizes knowledge on recommendation systems in software engineering. It adopts a pragmatic approach with an explicit focus on system design, implementation, and evaluation. The book is divided into three parts: "Part I - Techniques" introduces basics for building recommenders in software engineering, including techniques for collecting and processing software engineering data, but also for presenting recommendations to users as part of their workflow. "Part II - Evaluation" summarizes methods and experimental designs for evaluating recommendations in software engineering. "Part III - Applications" describes needs, issues and solution concepts involved in entire recommendation systems for specific software engineering tasks, fo...

  3. A Novel Architecture of Large-scale Communication in IOT

    Science.gov (United States)

    Ma, Wubin; Deng, Su; Huang, Hongbin

    2018-03-01

    In recent years, many scholars have done a great deal of research on the development of the Internet of Things and networked physical systems. However, few have provided a detailed view of the large-scale communication architecture of the IOT. In fact, the non-uniform technology between IPv6 and access points has led to a lack of broad principles for large-scale communication architectures. Therefore, this paper presents the Uni-IPv6 Access and Information Exchange Method (UAIEM), a new architecture and algorithm that addresses large-scale communications in the IOT.

  4. Low-Complexity Transmit Antenna Selection and Beamforming for Large-Scale MIMO Communications

    Directory of Open Access Journals (Sweden)

    Kun Qian

    2014-01-01

    Full Text Available Transmit antenna selection plays an important role in large-scale multiple-input multiple-output (MIMO) communications, but optimal large-scale MIMO antenna selection is a technical challenge. Exhaustive search is often employed in antenna selection, but it cannot be efficiently implemented in large-scale MIMO communication systems due to its prohibitively high computational complexity. This paper proposes a low-complexity interactive multiple-parameter optimization method for joint transmit antenna selection and beamforming in large-scale MIMO communication systems. The objective is to jointly maximize the channel outage capacity and signal-to-noise ratio (SNR) performance and minimize the mean square error in transmit antenna selection and minimum variance distortionless response (MVDR) beamforming, without exhaustive search. The effectiveness of all the proposed methods is verified by extensive simulation results. It is shown that the required antenna selection processing time of the proposed method does not increase with the number of selected antennas, whereas the computational complexity of the conventional exhaustive search method increases significantly when large-scale antennas are employed in the system. This is particularly useful in antenna selection for large-scale MIMO communication systems.
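
    The MVDR beamformer referred to above has a closed-form weight vector, w = R⁻¹a / (aᴴR⁻¹a), where R is the received-signal covariance matrix and a the steering vector of the selected antennas. The numerical sketch below uses an idealized uniform linear array and is not the paper's joint selection-and-beamforming algorithm.

```python
# Minimum variance distortionless response (MVDR) weights for a subset
# of selected antennas: w = R^{-1} a / (a^H R^{-1} a).
import numpy as np

rng = np.random.default_rng(0)
M, snapshots = 8, 500                      # selected antennas, samples
theta = np.deg2rad(20.0)                   # desired direction of arrival

# Steering vector of a half-wavelength-spaced uniform linear array.
a = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

# Sample covariance from noisy snapshots (signal + interference + noise).
s = rng.normal(size=snapshots) * a[:, None]
interf = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(-40.0)))
x = s + 2.0 * rng.normal(size=snapshots) * interf[:, None] \
      + 0.1 * (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
R = x @ x.conj().T / snapshots

Rinv_a = np.linalg.solve(R, a)
w = Rinv_a / (a.conj() @ Rinv_a)           # distortionless toward theta
print("gain toward desired direction:", np.abs(w.conj() @ a))       # ~1
print("gain toward interferer      :", np.abs(w.conj() @ interf))   # << 1
```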

  5. Manufacturing test of large scale hollow capsule and long length cladding in the large scale oxide dispersion strengthened (ODS) martensitic steel

    International Nuclear Information System (INIS)

    Narita, Takeshi; Ukai, Shigeharu; Kaito, Takeji; Ohtsuka, Satoshi; Fujiwara, Masayuki

    2004-04-01

    The mass production capability of oxide dispersion strengthened (ODS) martensitic steel cladding (9Cr) has been evaluated in Phase II of the Feasibility Studies on Commercialized Fast Reactor Cycle System. The cost of manufacturing mother tubes (raw materials powder production, mechanical alloying (MA) by ball mill, canning, hot extrusion, and machining) is a dominant factor in the total cost of manufacturing ODS ferritic steel cladding. In this study, a large-scale 9Cr-ODS martensitic steel mother tube, made with a large-scale hollow capsule, and long-length claddings were manufactured, and the applicability of these processes was evaluated. The following results were obtained. (1) Manufacturing of the large-scale mother tube, with dimensions of 32 mm OD, 21 mm ID, and 2 m length, was successfully carried out using a large-scale hollow capsule. This mother tube has a high degree of dimensional accuracy. (2) The chemical composition and microstructure of the manufactured mother tube are similar to those of existing mother tubes manufactured with small-scale cans, and no remarkable difference between the bottom and top ends of the manufactured mother tube has been observed. (3) Long-length cladding was successfully manufactured from the large-scale mother tube made using a large-scale hollow capsule. (4) For reducing the manufacturing cost of ODS steel claddings, the process of manufacturing mother tubes using large-scale hollow capsules is promising. (author)

  6. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas; surveys varied subject areas and reports on individual results of research in the field; shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field.

  7. Workflow management in large distributed systems

    International Nuclear Information System (INIS)

    Legrand, I; Newman, H; Voicu, R; Dobre, C; Grigoras, C

    2011-01-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.

  8. Workflow management in large distributed systems

    Science.gov (United States)

    Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.

    2011-12-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.

  9. Managing Cultural Variation in Software Process Improvement

    DEFF Research Database (Denmark)

    Kræmmergaard, Pernille; Müller, Sune Dueholm; Mathiassen, Lars

    The scale and complexity of change in software process improvement (SPI) are considerable, and managerial attention to organizational culture during SPI can therefore potentially contribute to successful outcomes. However, we know little about the impact of variations in organizational subculture on SPI initiatives. Against this backdrop, we report on a large-scale SPI project in a Danish high-tech company, Terma. Two of its business units - Integrated Systems (ISY) and Airborne Systems (ASY) - followed similar approaches over a two-year period, but with quite different outcomes. While ISY reached CMMI level 2 as planned, ASY struggled to implement even modest improvements. To explain these differences, we analyzed the underlying organizational culture within ISY and ASY using two different methods for subculture assessment. The study demonstrates how variations in culture across software...

  10. Environmental aspects of large-scale wind-power systems in the UK

    Science.gov (United States)

    Robson, A.

    1984-11-01

    Environmental issues relating to the introduction of large, MW-scale wind turbines at land-based sites in the UK are discussed. Noise, television interference, hazards to bird life, and visual effects are considered. Areas of uncertainty are identified, but enough is known from experience elsewhere in the world to enable the first UK machines to be introduced in a safe and environmentally acceptable manner. Research to establish siting criteria more clearly and to significantly increase the potential wind-energy resource is mentioned. Studies of the comparative risk of energy systems are shown to be over-pessimistic for UK wind turbines.

  11. Pool fires in a large scale ventilation system

    International Nuclear Information System (INIS)

    Smith, P.R.; Leslie, I.H.; Gregory, W.S.; White, B.

    1991-01-01

    A series of pool fire experiments was carried out in the Large Scale Flow Facility of the Mechanical Engineering Department at New Mexico State University. The various experiments burned alcohol, hydraulic cutting oil, kerosene, and a mixture of kerosene and tributylphosphate. Gas temperature and wall temperature measurements as a function of time were made throughout the 23.3 m³ burn compartment and the ducts of the ventilation system. The mass of the smoke particulate deposited on the ventilation system's 0.61 m x 0.61 m high-efficiency particulate air filter for the hydraulic oil, kerosene, and kerosene-tributylphosphate mixture fires was measured using an in situ null balance. Significant increases in filter resistance were observed for all three fuels for burning time periods ranging from 10 to 30 minutes, and were found to be highly dependent upon initial ventilation system flow rate, fuel type, and flow configuration. The experimental results were compared to simulated results predicted by the Los Alamos National Laboratory FIRAC computer code. In general, the experimental and computed results were in reasonable agreement, despite the fact that the fire compartment for the experiments was an insulated steel tank with 0.32 cm walls, while the compartment model FIRIN of FIRAC assumes 0.31 m thick concrete walls. This difference in configuration apparently caused FIRAC to consistently underpredict the measured temperatures in the fire compartment. The predicted deposition of soot proved to be insensitive to ventilation system flow rate, but the measured values showed flow-rate dependence. However, predicted soot deposition was of the same order of magnitude as measured soot deposition.

  12. Iterative learning-based decentralized adaptive tracker for large-scale systems: a digital redesign approach.

    Science.gov (United States)

    Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua

    2011-07-01

    In this paper, a digital redesign methodology for an iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory which may not be represented by the analytic reference model initially. To overcome the interference among sub-systems and simplify the controller design, the proposed model reference decentralized adaptive control scheme first constructs a decoupled, well-designed reference model. Then, based on this model, the paper develops a digital decentralized adaptive tracker using optimal analog control and a prediction-based digital redesign technique for the sampled-data large-scale coupled system. In order to enhance the tracking performance of the digital tracker at specified sampling instants, we apply iterative learning control (ILC) to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has a robust closed-loop decoupling property but also possesses good tracking performance at both transient and steady state. Besides, evolutionary programming is applied to search for a good learning gain to speed up the learning process of the ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
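
    The core of iterative learning control is the trial-to-trial update u_{k+1}(t) = u_k(t) + γ·e_k(t+1), which reuses the tracking error of one repetition to improve the input for the next. Below is a minimal single-input sketch on a first-order sampled-data plant; the plant and gains are illustrative, not the paper's large-scale decentralized design.

```python
# Minimal iterative learning control (ILC) sketch: repeat a finite-length
# tracking task, updating the input from the previous trial's error,
#   u_{k+1}(t) = u_k(t) + gamma * e_k(t+1).
import numpy as np

a, b = 0.9, 0.5                 # sampled-data plant x(t+1) = a x(t) + b u(t)
T, trials, gamma = 50, 30, 1.2 / b
t = np.arange(T + 1)
y_ref = np.sin(2 * np.pi * t / T)          # desired output trajectory

u = np.zeros(T)
for k in range(trials):
    x = np.zeros(T + 1)
    for i in range(T):
        x[i + 1] = a * x[i] + b * u[i]     # simulate one trial
    e = y_ref - x                          # output equals state here
    u = u + gamma * e[1:]                  # learning update for next trial
    if k % 10 == 0 or k == trials - 1:
        print(f"trial {k:2d}: max |error| = {np.abs(e[1:]).max():.4f}")
```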

  13. The use of intelligent systems for risk management in software projects

    Directory of Open Access Journals (Sweden)

    Oksana A. Gushchina

    2017-06-01

    Full Text Available Introduction: The article identifies the main risks of a software project, examines the use of different types of intelligent systems in the risk management process for software projects, discusses the basic methods used for process estimation and forecasting in the field of software engineering, and identifies currently used expert system shells and software systems for the analysis and risk management of software projects. Materials and Methods: The author describes the peculiarities of risk management in the field of software engineering with the involvement of intelligent systems. Intelligent techniques allow the control task to be solved with expert precision without the involvement of human experts. Results: The results of this work are: identification of the key risks of a software project (tax, legal, financial and commercial risks, IT risks, personnel risks, and risks related to competitors, suppliers, marketing, demand and the market); investigation of the artificial intelligence currently applied to risk management of software projects, particularly expert systems and software tools for evaluation of process results; identification of the most popular expert system shells (Clips, G2 and Leonardo) and software products for the analysis of large databases (Orange, Weka, Rattle GUI, Apache Mahout, SCaViS, RapidMiner, Databionic ESOM Tools, ELKI, KNIME, Pandas and UIMA); and consideration of cluster, correlation, regression, factor and dispersion analysis methods for the estimation and prediction of software engineering processes. Discussion and Conclusions: The results show the feasibility of applying various intelligent systems in the risk management process. The analysis of risk evaluation methods and the tendency of their application in modern systems of intellectual analysis can serve as a starting point for creating a unified system of risk management for software projects of medium and high complexity with a

  14. Large-Scale Graph Processing Using Apache Giraph

    KAUST Repository

    Sakr, Sherif

    2017-01-07

    This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms.

  15. Large-Scale Graph Processing Using Apache Giraph

    KAUST Repository

    Sakr, Sherif; Orakzai, Faisal Moeen; Abdelaziz, Ibrahim; Khayyat, Zuhair

    2017-01-01

    This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms.

  16. MOST-visualization: software for producing automated textbook-style maps of genome-scale metabolic networks.

    Science.gov (United States)

    Kelley, James J; Maor, Shay; Kim, Min Kyung; Lane, Anatoliy; Lun, Desmond S

    2017-08-15

    Visualization of metabolites, reactions and pathways in genome-scale metabolic networks (GEMs) can assist in understanding cellular metabolism. Three attributes are desirable in software used for visualizing GEMs: (i) automation, since GEMs can be quite large; (ii) production of understandable maps that provide ease in identification of pathways, reactions and metabolites; and (iii) visualization of the entire network to show how pathways are interconnected. No software currently exists for visualizing GEMs that satisfies all three characteristics, but MOST-Visualization, an extension of the software package MOST (Metabolic Optimization and Simulation Tool), satisfies (i), and by using a pre-drawn overview map of metabolism based on the Roche map satisfies (ii) and comes close to satisfying (iii). MOST is distributed for free under the GNU General Public License. The software and full documentation are available at http://most.ccib.rutgers.edu/. dslun@rutgers.edu. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  17. Multibiodose radiation emergency triage categorization software.

    Science.gov (United States)

    Ainsbury, Elizabeth A; Barnard, Stephen; Barrios, Lleonard; Fattibene, Paola; de Gelder, Virginie; Gregoire, Eric; Lindholm, Carita; Lloyd, David; Nergaard, Inger; Rothkamm, Kai; Romm, Horst; Scherthan, Harry; Thierens, Hubert; Vandevoorde, Charlot; Woda, Clemens; Wojcik, Andrzej

    2014-07-01

    In this note, the authors describe the MULTIBIODOSE software, which has been created as part of the MULTIBIODOSE project. The software enables doses estimated by networks of laboratories, using up to five retrospective (biological and physical) assays, to be combined to give a single estimate of triage category for each individual potentially exposed to ionizing radiation in a large scale radiation accident or incident. The MULTIBIODOSE software has been created in Java. The usage of the software is based on the MULTIBIODOSE Guidance: the program creates a link to a single SQLite database for each incident, and the database is administered by the lead laboratory. The software has been tested with Java runtime environment 6 and 7 on a number of different Windows, Mac, and Linux systems, using data from a recent intercomparison exercise. The Java program MULTIBIODOSE_1.0.jar is freely available to download from http://www.multibiodose.eu/software or by contacting the software administrator: MULTIBIODOSE-software@gmx.com.
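
    One simple way to combine dose estimates from several assays into a single triage figure is an inverse-variance weighted mean, as sketched below; the assay values, weighting scheme and triage thresholds are illustrative only and are not taken from the MULTIBIODOSE Guidance or software.

```python
# Generic illustration of combining several assay dose estimates into one
# triage category; weights and thresholds are illustrative only.
import math

# (assay name, dose estimate in Gy, standard uncertainty in Gy)
estimates = [("dicentrics", 1.8, 0.4), ("gamma-H2AX", 2.3, 0.6), ("EPR", 1.6, 0.5)]

weights = [1.0 / sigma ** 2 for _, _, sigma in estimates]
dose = sum(w * d for w, (_, d, _) in zip(weights, estimates)) / sum(weights)
sigma = math.sqrt(1.0 / sum(weights))

def triage_category(d):
    if d < 1.0:
        return "low exposure - outpatient follow-up"
    if d < 2.0:
        return "medium exposure - further assessment"
    return "high exposure - hospitalisation likely"

print(f"combined dose: {dose:.2f} +/- {sigma:.2f} Gy -> {triage_category(dose)}")
```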

  18. Large-scale modelling of neuronal systems

    International Nuclear Information System (INIS)

    Castellani, G.; Verondini, E.; Giampieri, E.; Bersani, F.; Remondini, D.; Milanesi, L.; Zironi, I.

    2009-01-01

    The brain is, without any doubt, the most complex system of the human body. Its complexity is also due to the extremely high number of neurons, as well as the huge number of synapses connecting them. Each neuron is capable of performing complex tasks, like learning and memorizing a large class of patterns. The simulation of large neuronal systems is challenging for both technological and computational reasons, and can open new perspectives for the comprehension of brain functioning. A well-known and widely accepted model of bidirectional synaptic plasticity, the BCM model, is stated by a differential equation approach based on bistability and selectivity properties. We have modified the BCM model, extending it from a single-neuron to a whole-network model. This new model is capable of generating interesting network topologies starting from a small number of local parameters describing the interaction between incoming and outgoing links of each neuron. We have characterized this model in terms of complex network theory, showing how this learning rule can provide support for network generation.
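
    The BCM rule the authors extend can be stated compactly: the synaptic weight changes as dw/dt = x y (y - θ), with a sliding threshold θ tracking the recent average of y², which is what gives the rule its bistability and selectivity. Below is a minimal single-neuron sketch with arbitrary parameters, not the whole-network extension described in the record.

```python
# Single-neuron BCM plasticity: dw/dt ~ x * y * (y - theta), with a
# sliding threshold theta that tracks the running average of y^2.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, presentations = 10, 20000
lr_w, lr_theta = 0.005, 0.02            # theta should adapt faster than w

w = rng.uniform(0.1, 0.3, n_inputs)
theta = 1.0
# Two alternating input patterns; with suitable parameters the neuron
# becomes selective, responding more strongly to one than to the other.
patterns = [rng.uniform(0.0, 1.0, n_inputs), rng.uniform(0.0, 1.0, n_inputs)]

for step in range(presentations):
    x = patterns[step % 2]
    y = float(w @ x)                     # linear neuron
    w += lr_w * x * y * (y - theta)      # BCM weight update
    theta += lr_theta * (y * y - theta)  # sliding modification threshold

print("responses to the two patterns:",
      [round(float(w @ p), 3) for p in patterns])
```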

  19. Large-scale grid management

    International Nuclear Information System (INIS)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-01-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (Broad Band, Multi Utility, ...) and (2) bigger units with large networks and more customers. Research by SINTEF Energy Research shows so far that the approaches to large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  20. Status and Future Developments in Large Accelerator Control Systems

    International Nuclear Information System (INIS)

    Karen S. White

    2006-01-01

    Over the years, accelerator control systems have evolved from small hardwired systems to complex computer controlled systems with many types of graphical user interfaces and electronic data processing. Today's control systems often include multiple software layers, hundreds of distributed processors, and hundreds of thousands of lines of code. While it is clear that the next generation of accelerators will require much bigger control systems, they will also need better systems. Advances in technology will be needed to ensure the network bandwidth and CPU power can provide reasonable update rates and support the requisite timing systems. Beyond the scaling problem, next generation systems face additional challenges due to growing cyber security threats and the likelihood that some degree of remote development and operation will be required. With a large number of components, the need for high reliability increases and commercial solutions can play a key role towards this goal. Future control systems will operate more complex machines and need to present a well integrated, interoperable set of tools with a high degree of automation. Consistency of data presentation and exception handling will contribute to efficient operations. From the development perspective, engineers will need to provide integrated data management in the beginning of the project and build adaptive software components around a central data repository. This will make the system maintainable and ensure consistency throughout the inevitable changes during the machine lifetime. Additionally, such a large project will require professional project management and disciplined use of well-defined engineering processes. Distributed project teams will make the use of standards, formal requirements and design and configuration control vital. Success in building the control system of the future may hinge on how well we integrate commercial components and learn from best practices used in other industries

  1. Multi-Agent System Supporting Automated Large-Scale Photometric Computations

    Directory of Open Access Journals (Sweden)

    Adam Sȩdziwy

    2016-02-01

    Full Text Available The technologies related to green energy, smart cities and similar areas, dynamically developed in recent years, frequently face problems of a computational rather than a technological nature. One example is the ability to accurately predict weather conditions for PV farms or wind turbines. Another group of issues is related to the complexity of the computations required to obtain an optimal setup of a solution being designed. In this article, we present a case representing the latter group of problems, namely designing large-scale power-saving lighting installations. The term “large-scale” refers to an entire city area containing tens of thousands of luminaires. Although a simple power reduction for a single street, giving limited savings, is relatively easy, it becomes infeasible for tasks covering thousands of luminaires described by precise coordinates (instead of simplified layouts). To overcome this critical issue, we propose introducing a formal representation of the computing problem and applying a multi-agent system to perform design-related computations in parallel. An important measure introduced in the article to indicate optimization progress is entropy; it also allows optimization to be terminated when the solution is satisfactory. The article contains the results of real-life calculations made with the help of the presented approach.
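
    As a rough illustration of using entropy as a progress measure and stopping criterion, the sketch below tracks the Shannon entropy of a population of candidate settings during a toy random-search optimization; the objective, encoding and threshold are hypothetical and are not taken from the article.

    ```python
    import numpy as np

    # Toy random-search optimizer that stops when the entropy of the candidate
    # population drops below a threshold, i.e. when the search has converged on a
    # small set of settings. Objective and parameters are illustrative assumptions.
    rng = np.random.default_rng(0)

    def entropy(levels, n_bins=10):
        """Shannon entropy (in bits) of the distribution of candidate values."""
        hist, _ = np.histogram(levels, bins=n_bins, range=(0.0, 1.0))
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    objective = lambda x: (x - 0.3) ** 2          # hypothetical cost: best dimming level at 0.3
    pop = rng.uniform(0.0, 1.0, 200)              # candidate dimming levels in [0, 1]

    for it in range(100):
        # keep the better half, resample the rest around the survivors
        survivors = pop[np.argsort(objective(pop))][:100]
        pop = np.concatenate([survivors, survivors + rng.normal(0.0, 0.05, 100)]).clip(0.0, 1.0)
        h = entropy(pop)
        if h < 1.0:                               # low entropy: solution considered satisfactory
            break

    print(f"stopped at iteration {it}, entropy={h:.2f}, best={pop[np.argmin(objective(pop))]:.3f}")
    ```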

  2. Software quality assurance and software safety in the Biomed Control System

    International Nuclear Information System (INIS)

    Singh, R.P.; Chu, W.T.; Ludewigt, B.A.; Marks, K.M.; Nyman, M.A.; Renner, T.R.; Stradtner, R.

    1989-01-01

    The Biomed Control System is a hardware/software system used for the delivery, measurement and monitoring of heavy-ion beams in the patient treatment and biology experiment rooms in the Bevalac at the Lawrence Berkeley Laboratory (LBL). This paper describes some aspects of this system, including historical background and philosophy, configuration management, hardware features that facilitate software testing, software testing procedures, the release of new software, quality assurance, safety and operator monitoring. 3 refs

  3. Software systems for energy control in the English industry

    International Nuclear Information System (INIS)

    Bouma, J.W.J.

    1993-01-01

    Monitoring and targeting software systems have proved to be valuable tools for energy control, permitting energy savings of five to ten percent. The article reviews the systems presently available in England and illustrates how they are successfully used in practice in small (British Telecom) and medium-sized (Charles Wells Brewery) industrial applications. (A.S.)

  4. A Simple Instrumentation System for Large Structure Vibration Monitoring

    Directory of Open Access Journals (Sweden)

    Didik R. Santoso

    2010-12-01

    Full Text Available Traditional instrumentation systems used for monitoring vibration of large-scale infrastructure such as bridges, railways, and other structural buildings generally have a complex design. Making them simple would be very useful in terms of both low cost and easy maintenance. This paper describes how to develop such an instrumentation system. The system is built on a distributed network with a field bus topology, using a single-master multi-slave architecture. The master is a control unit, built on a PC equipped with an RS-485 interface. Each slave is a sensing unit, built by integrating a 3-axis vibration sensor with a microcontroller-based data acquisition system. The vibration sensor is designed around a MEMS accelerometer. The software covers two functions: hardware control and data processing. To verify the performance of the developed instrumentation system, several laboratory tests have been performed. The results show that the system has good performance.
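
    A single-master multi-slave field bus of this kind is typically driven by a polling loop on the master. The sketch below shows such a loop using the pySerial library; the port name, baud rate and request/response framing are purely hypothetical placeholders and do not describe the protocol of the system above.

    ```python
    import serial  # pySerial

    # Minimal sketch of a master polling loop on an RS-485 bus. Port, baud rate and
    # the one-byte-address / fixed-length-reply framing are hypothetical assumptions,
    # not the protocol of the system described above.
    PORT, BAUD = "/dev/ttyUSB0", 9600
    SLAVE_ADDRESSES = [0x01, 0x02, 0x03]
    REPLY_LENGTH = 7            # e.g. address byte + three 16-bit acceleration samples (assumed)

    def poll_slaves():
        with serial.Serial(PORT, BAUD, timeout=0.2) as bus:
            for addr in SLAVE_ADDRESSES:
                bus.write(bytes([addr]))            # request: just the slave address (assumed framing)
                reply = bus.read(REPLY_LENGTH)
                if len(reply) < REPLY_LENGTH:
                    print(f"slave 0x{addr:02X}: no/short reply")
                    continue
                # decode three signed 16-bit axis readings following the address byte
                x, y, z = (int.from_bytes(reply[i:i + 2], "big", signed=True) for i in (1, 3, 5))
                print(f"slave 0x{addr:02X}: x={x} y={y} z={z}")

    if __name__ == "__main__":
        poll_slaves()
    ```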

  5. Development and implementation of a 'Mental Health Finder' software tool within an electronic medical record system.

    Science.gov (United States)

    Swan, D; Hannigan, A; Higgins, S; McDonnell, R; Meagher, D; Cullen, W

    2017-02-01

    In Ireland, as in many other healthcare systems, mental health service provision is being reconfigured with a move toward more care in the community, and particularly primary care. Recording and surveillance systems for mental health information and activities in primary care are needed for service planning and quality improvement. We describe the development and initial implementation of a software tool ('mental health finder') within a widely used primary care electronic medical record system (EMR) in Ireland to enable large-scale data collection on the epidemiology and management of mental health and substance use problems among patients attending general practice. In collaboration with the Irish Primary Care Research Network (IPCRN), we developed the 'Mental Health Finder' as a software plug-in to a commonly used primary care EMR system to facilitate data collection on mental health diagnoses and pharmacological treatments among patients. The finder searches for and identifies patients based on diagnostic coding and/or prescribed medicines. It was initially implemented among a convenience sample of six GP practices. Prevalence of mental health and substance use problems across the six practices, as identified by the finder, was 9.4% (range 6.9-12.7%). 61.9% of identified patients were female; 25.8% were private patients. One-third (33.4%) of identified patients were prescribed more than one class of psychotropic medication. Of the patients identified by the finder, 89.9% were identifiable via prescribing data, 23.7% via diagnostic coding. The finder is a feasible and promising methodology for large-scale data collection on mental health problems in primary care.

  6. Rainbow: a tool for large-scale whole-genome sequencing data analysis using cloud computing.

    Science.gov (United States)

    Zhao, Shanrong; Prenger, Kurt; Smith, Lance; Messina, Thomas; Fan, Hongtao; Jaeger, Edward; Stephens, Susan

    2013-06-27

    Technical improvements have decreased sequencing costs and, as a result, the size and number of genomic datasets have increased rapidly. Because of the lower cost, large amounts of sequence data are now being produced by small to midsize research groups. Crossbow is a software tool that can detect single nucleotide polymorphisms (SNPs) in whole-genome sequencing (WGS) data from a single subject; however, Crossbow has a number of limitations when applied to multiple subjects from large-scale WGS projects. The data storage and CPU resources that are required for large-scale whole genome sequencing data analyses are too large for many core facilities and individual laboratories to provide. To help meet these challenges, we have developed Rainbow, a cloud-based software package that can assist in the automation of large-scale WGS data analyses. Here, we evaluated the performance of Rainbow by analyzing 44 different whole-genome-sequenced subjects. Rainbow has the capacity to process genomic data from more than 500 subjects in two weeks using cloud computing provided by the Amazon Web Service. The time includes the import and export of the data using Amazon Import/Export service. The average cost of processing a single sample in the cloud was less than 120 US dollars. Compared with Crossbow, the main improvements incorporated into Rainbow include the ability: (1) to handle BAM as well as FASTQ input files; (2) to split large sequence files for better load balance downstream; (3) to log the running metrics in data processing and monitoring multiple Amazon Elastic Compute Cloud (EC2) instances; and (4) to merge SOAPsnp outputs for multiple individuals into a single file to facilitate downstream genome-wide association studies. Rainbow is a scalable, cost-effective, and open-source tool for large-scale WGS data analysis. For human WGS data sequenced by either the Illumina HiSeq 2000 or HiSeq 2500 platforms, Rainbow can be used straight out of the box. Rainbow is available
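
    One of the Rainbow improvements listed above, splitting large sequence files for better downstream load balance, can be illustrated with a short standalone sketch; the chunk size and file name are arbitrary examples and this is not Rainbow's own code.

    ```python
    import gzip
    from itertools import islice

    # Minimal sketch of splitting a large FASTQ file into fixed-size chunks so that
    # downstream alignment jobs can be load-balanced. File name and chunk size are
    # arbitrary examples; this is not Rainbow's implementation.
    READS_PER_CHUNK = 1_000_000            # each FASTQ record is exactly 4 lines

    def split_fastq(path, reads_per_chunk=READS_PER_CHUNK):
        opener = gzip.open if path.endswith(".gz") else open
        with opener(path, "rt") as handle:
            chunk_idx = 0
            while True:
                lines = list(islice(handle, 4 * reads_per_chunk))
                if not lines:
                    break
                out_name = f"{path}.chunk{chunk_idx:04d}.fastq"
                with open(out_name, "w") as out:
                    out.writelines(lines)
                print(f"wrote {out_name} ({len(lines) // 4} reads)")
                chunk_idx += 1

    if __name__ == "__main__":
        split_fastq("sample_R1.fastq.gz")   # hypothetical input file
    ```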

  7. Comparison and Evaluation of Large-Scale and On-Site Recycling Systems for Food Waste via Life Cycle Cost Analysis

    Directory of Open Access Journals (Sweden)

    Kyoung Hee Lee

    2017-11-01

    Full Text Available The purpose of this study was to evaluate the cost-benefit of an on-site food waste recycling system using Life-Cycle Cost analysis and to compare it with a large-scale treatment system. For accurate evaluation, the cost-benefit analysis was conducted from the perspectives of both local governments and residents, and qualitative environmental improvement effects were quantified. For the local governments, the analysis showed that, when the large-scale treatment system was replaced with the on-site recycling system, there was a significant cost reduction from the initial stage, owing to reduced investment, maintenance, and food wastewater treatment costs. For the residents, it was found that the cost incurred from using the on-site recycling system was larger than the cost of using the large-scale treatment system, due to the cost of producing and installing the on-site treatment facilities at the initial stage. However, the analysis showed that with continuous benefits such as greenhouse gas emission reduction, compost utilization, and food wastewater reduction, a net cost reduction would be obtained after 6 years of operating the on-site recycling system. It is therefore recommended that local governments and residents consider introducing an on-site food waste recycling system if they are to replace an old treatment system or need to establish a new one.
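
    The break-even reasoning in such a Life-Cycle Cost comparison amounts to discounting annual cost differences against the extra up-front investment. The sketch below shows that calculation with entirely made-up numbers; the 6-year result quoted above comes from the study's own data, not from these values.

    ```python
    # Minimal sketch of a Life-Cycle Cost break-even calculation between an on-site
    # recycling system and a large-scale treatment system. All monetary values and
    # the discount rate are made-up illustrations, not figures from the study.
    EXTRA_CAPITAL = 100_000.0        # additional up-front cost of the on-site option
    ANNUAL_SAVING = 22_000.0         # yearly benefit (avoided treatment, compost value, ...)
    DISCOUNT_RATE = 0.04

    def breakeven_year(extra_capital, annual_saving, rate, horizon=20):
        npv = -extra_capital
        for year in range(1, horizon + 1):
            npv += annual_saving / (1.0 + rate) ** year   # discounted annual net benefit
            if npv >= 0.0:
                return year, npv
        return None, npv

    year, npv = breakeven_year(EXTRA_CAPITAL, ANNUAL_SAVING, DISCOUNT_RATE)
    print(f"break-even after {year} years (NPV at that point: {npv:,.0f})")
    ```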

  8. Analysis for Large Scale Integration of Electric Vehicles into Power Grids

    DEFF Research Database (Denmark)

    Hu, Weihao; Chen, Zhe; Wang, Xiaoru

    2011-01-01

    Electric Vehicles (EVs) provide a significant opportunity for reducing the consumption of fossil energy and the emission of carbon dioxide. With more and more electric vehicles integrated into the power systems, it becomes important to study the effects of EV integration on the power systems, especially the low- and middle-voltage networks. In the paper, the basic structure and characteristics of electric vehicles are introduced. The possible impacts of large-scale integration of electric vehicles on the power systems, especially the benefits for the integration of renewable energies, are discussed. Finally, research projects related to the large-scale integration of electric vehicles into the power systems are introduced, providing a reference for the large-scale integration of electric vehicles into power grids.

  9. Multi-Level Formation of Complex Software Systems

    Directory of Open Access Journals (Sweden)

    Hui Li

    2016-05-01

    Full Text Available We present a multi-level formation model for complex software systems. Previous works abstract software systems into software networks for further study, but usually investigate the software networks at the class level. In contrast to these works, our treatment of software systems as multi-level networks is more realistic. In particular, the software networks are organized into three levels of granularity, which represent the modularity and hierarchy in the formation process of real-world software systems. More importantly, simulations based on this model have generated more realistic structural properties of software networks, such as power-law degree distributions, clustering and modularization. On the basis of this model, how the structure of software systems affects software design principles is then explored, which could be helpful for understanding software evolution and software engineering practices.
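
    The structural properties mentioned above (heavy-tailed degrees, clustering, modular organization) can be checked on any generated software network with standard complex-network tooling. The sketch below builds a simple modular random graph with NetworkX as a crude stand-in for the paper's formation model, which it does not reproduce.

    ```python
    import networkx as nx

    # Minimal sketch: build a modular random graph as a crude stand-in for a
    # generated software network (this is NOT the paper's multi-level formation
    # model) and measure clustering and the degree distribution tail.
    sizes = [40, 40, 40, 40]                       # four "packages" of 40 classes each (assumed)
    G = nx.random_partition_graph(sizes, p_in=0.15, p_out=0.005, seed=42)

    print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
    print("average clustering:", round(nx.average_clustering(G), 3))

    degrees = sorted((d for _, d in G.degree()), reverse=True)
    print("top-10 degrees (heavy-tail check):", degrees[:10])

    # modularity of the planted partition, using the generator's block assignment
    blocks = [set(G.graph["partition"][i]) for i in range(len(sizes))]
    print("modularity of planted partition:", round(nx.algorithms.community.modularity(G, blocks), 3))
    ```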

  10. Large-Scale Transit Signal Priority Implementation

    OpenAIRE

    Lee, Kevin S.; Lozner, Bailey

    2018-01-01

    In 2016, the District Department of Transportation (DDOT) deployed Transit Signal Priority (TSP) at 195 intersections in highly urbanized areas of Washington, DC. In collaboration with a broader regional implementation, and in partnership with the Washington Metropolitan Area Transit Authority (WMATA), DDOT set out to apply a systems engineering–driven process to identify, design, test, and accept a large-scale TSP system. This presentation will highlight project successes and lessons learned.

  11. RESTRUCTURING OF THE LARGE-SCALE SPRINKLERS

    Directory of Open Access Journals (Sweden)

    Paweł Kozaczyk

    2016-09-01

    Full Text Available One of the best ways for agriculture to become independent of shortages of precipitation is irrigation. In the seventies and eighties of the last century a number of large-scale sprinkler systems were built in Wielkopolska. At the end of the 1970s, 67 sprinkler systems with a total area of 6400 ha were installed in the Poznan province. The average size of a sprinkler system reached 95 ha. In 1989 there were 98 sprinkler systems, covering more than 10 130 ha. The study was conducted on 7 large sprinkler systems with areas ranging from 230 to 520 hectares, in 1986÷1998. After the introduction of the market economy in the early 1990s and ownership changes in agriculture, large-scale sprinkler systems have undergone significant or total devastation. Land of the State Farms of the State Agricultural Property Agency has been leased or sold, and the new owners used the existing sprinklers to a very small extent. This involved a change in crop structure and demand structure and an increase in operating costs. There has also been a threefold increase in electricity prices. In practice, the operation of large-scale irrigation encountered all kinds of barriers and limitations: system design constraints, supply difficulties, and high levels of equipment failure, none of which encouraged rational use of the available sprinklers. A site inspection of the area was carried out to show the current status of the remaining irrigation infrastructure. The adopted scheme for the restructuring of Polish agriculture was not the best solution, causing massive destruction of the assets previously invested in the sprinkler systems.

  12. High performance in software development

    CERN Multimedia

    CERN. Geneva; Haapio, Petri; Liukkonen, Juha-Matti

    2015-01-01

    What are the ingredients of high-performing software? Software development, especially for large high-performance systems, is one of the most complex tasks mankind has ever tried. Technological change leads to huge opportunities but challenges our old ways of working. Processing large data sets, possibly in real time or with other tight computational constraints, requires an efficient solution architecture. Efficiency requirements span from distributed storage and the large-scale organization of computation and data down to the lowest level of processor and data bus behavior. Integrating performance behavior over these levels is especially important when the computation is resource-bounded, as it is in numerics: physical simulation, machine learning, estimation of statistical models, etc. For example, memory locality and utilization of vector processing are essential for harnessing the computing power of modern processor architectures due to the deep memory hierarchies of modern general-purpose computers. As a r...
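
    The point about memory locality can be demonstrated in a few lines: traversing a large row-major (C-ordered) array along its rows touches contiguous memory, while traversing it along columns strides through it, and the timing difference is easy to measure. The sketch below is a generic illustration, not material from the talk.

    ```python
    import time
    import numpy as np

    # Generic illustration of memory locality (not material from the talk): summing a
    # large C-ordered array row by row (contiguous access) versus column by column
    # (strided access). The array size is an arbitrary choice.
    a = np.random.default_rng(0).random((4000, 4000))   # row-major (C order) by default

    def time_it(label, fn):
        t0 = time.perf_counter()
        total = fn()
        print(f"{label}: {time.perf_counter() - t0:.3f} s (checksum {total:.1f})")

    # contiguous: each a[i, :] is one consecutive block of memory
    time_it("row-wise   ", lambda: sum(float(a[i, :].sum()) for i in range(a.shape[0])))
    # strided: each a[:, j] jumps 4000 * 8 bytes between consecutive elements
    time_it("column-wise", lambda: sum(float(a[:, j].sum()) for j in range(a.shape[1])))
    ```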

  13. Ethics of large-scale change

    OpenAIRE

    Arler, Finn

    2006-01-01

      The subject of this paper is long-term large-scale changes in human society. Some very significant examples of large-scale change are presented: human population growth, human appropriation of land and primary production, the human use of fossil fuels, and climate change. The question is posed, which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, th...

  14. Environmental aspects of large-scale wind-power systems in the UK

    Energy Technology Data Exchange (ETDEWEB)

    Robson, A

    1983-12-01

    Environmental issues relating to the introduction of large, MW-scale wind turbines at land-based sites in the U.K. are discussed. Areas of interest include noise, television interference, hazards to bird life and visual effects. A number of areas of uncertainty are identified, but enough is known from experience elsewhere in the world to enable the first U.K. machines to be introduced in a safe and environmentally acceptable manner. Research currently under way will serve to establish siting criteria more clearly, and could significantly increase the potential wind-energy resource. Certain studies of the comparative risk of energy systems are shown to be overpessimistic for U.K. wind turbines.

  15. COMPARISON OF MULTI-SCALE DIGITAL ELEVATION MODELS FOR DEFINING WATERWAYS AND CATCHMENTS OVER LARGE AREAS

    Directory of Open Access Journals (Sweden)

    B. Harris

    2012-07-01

    Full Text Available Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for analysis of larger data sets and also facilitate a consistent tool for the creation and analysis of waterways over extensive areas. However, they are rarely developed over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines the definition of waterways and catchments over an area of approximately 25,000 km² to establish the optimal DEM scale required for waterway delineation over large regional projects. The comparative study analysed multi-scale DEMs over two test areas (the Wivenhoe catchment, 543 km², and a detailed 13 km² area within the Wivenhoe catchment), including various data types, scales, quality, and variable catchment input parameters. Historic and available DEM data were compared to high-resolution Lidar-based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad-scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.
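
    Waterway delineation from a DEM usually boils down to computing flow directions and flow accumulation and then thresholding the accumulation grid. The sketch below implements a bare-bones D8 version of that idea on a toy pit-free grid; it is a generic illustration, not the workflow used in the study.

    ```python
    import numpy as np

    def d8_flow_accumulation(dem):
        """D8 flow accumulation for a pit-free DEM; edge cells may drain off-grid."""
        nrows, ncols = dem.shape
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
        dists = [np.hypot(dr, dc) for dr, dc in offsets]
        recv = -np.ones((nrows, ncols, 2), dtype=int)     # steepest-descent receiver per cell
        for r in range(nrows):
            for c in range(ncols):
                best_slope, best = 0.0, None
                for (dr, dc), d in zip(offsets, dists):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < nrows and 0 <= cc < ncols:
                        slope = (dem[r, c] - dem[rr, cc]) / d
                        if slope > best_slope:
                            best_slope, best = slope, (rr, cc)
                if best is not None:
                    recv[r, c] = best
        acc = np.ones((nrows, ncols))                     # each cell contributes its own area
        for idx in np.argsort(dem, axis=None)[::-1]:      # process from highest to lowest cell
            r, c = divmod(idx, ncols)
            rr, cc = recv[r, c]
            if rr >= 0:
                acc[rr, cc] += acc[r, c]
        return acc

    # toy example: a tilted plane funnels flow toward one corner
    dem = np.add.outer(np.arange(5, 0, -1), np.arange(5, 0, -1)).astype(float)
    acc = d8_flow_accumulation(dem)
    streams = acc > 3        # crude channel threshold, as in flow-accumulation stream delineation
    print(acc)
    print(streams.astype(int))
    ```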

  16. The Schroedinger-Poisson equations as the large-N limit of the Newtonian N-body system. Applications to the large scale dark matter dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Briscese, Fabio [Northumbria University, Department of Mathematics, Physics and Electrical Engineering, Newcastle upon Tyne (United Kingdom); Citta Universitaria, Istituto Nazionale di Alta Matematica Francesco Severi, Gruppo Nazionale di Fisica Matematica, Rome (Italy)

    2017-09-15

    In this paper it is argued how the dynamics of the classical Newtonian N-body system can be described in terms of the Schroedinger-Poisson equations in the large N limit. This result is based on the stochastic quantization introduced by Nelson, and on the Calogero conjecture. According to the Calogero conjecture, the emerging effective Planck constant is computed in terms of the parameters of the N-body system as ℎ ∝ M^{5/3} G^{1/2} (N/⟨ρ⟩)^{1/6}, where G is the gravitational constant, N and M are the number and the mass of the bodies, and ⟨ρ⟩ is their average density. The relevance of this result in the context of large-scale structure formation is discussed. In particular, this finding gives a further argument in support of the validity of the Schroedinger method as a numerical double of the N-body simulations of dark matter dynamics at large cosmological scales. (orig.)
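
    For reference, the Schroedinger-Poisson system referred to in the title couples a Schroedinger equation for a self-gravitating wave function to the Poisson equation for its own gravitational potential. One common textbook form is written below; normalization conventions vary between authors, so the factors shown are only one standard choice, not necessarily those of the paper.

    ```latex
    % One common form of the Schroedinger-Poisson system (normalization conventions
    % vary between authors and may differ from the paper's).
    \begin{align}
      i\hbar\,\partial_t \psi &= -\frac{\hbar^{2}}{2m}\,\nabla^{2}\psi + m\,\Phi\,\psi ,\\
      \nabla^{2}\Phi &= 4\pi G\,\rho , \qquad \rho = m N\,|\psi|^{2} .
    \end{align}
    ```

    Here the wave function is normalized so that mN|ψ|² plays the role of the mass density sourcing the potential Φ.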

  17. Unraveling The Connectome: Visualizing and Abstracting Large-Scale Connectomics Data

    KAUST Repository

    Al-Awami, Ali K.

    2017-01-01

    -user system seamlessly integrates a diverse set of tools. Our system provides support for the management, provenance, accountability, and auditing of large-scale segmentations. Finally, we present a novel architecture to render very large volumes interactively

  18. Optimal Siting and Sizing of Energy Storage System for Power Systems with Large-scale Wind Power Integration

    DEFF Research Database (Denmark)

    Zhao, Haoran; Wu, Qiuwei; Huang, Shaojun

    2015-01-01

    This paper proposes algorithms for optimal siting and sizing of Energy Storage System (ESS) for the operation planning of power systems with large-scale wind power integration. The ESS in this study aims to mitigate the wind power fluctuations during the interval between two rolling Economic Dispatches (EDs) in order to maintain generation-load balance. The charging and discharging of ESS is optimized considering operation cost of conventional generators, capital cost of ESS and transmission losses. The statistics from simulated system operations are then coupled to the planning process to determine the
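
    The mitigation idea described above, using storage to absorb wind deviations between two economic dispatch points, can be illustrated with a very small greedy charge/discharge sketch. The wind profile, storage limits and efficiency below are invented numbers, and this is a toy heuristic rather than the optimization formulated in the paper.

    ```python
    import numpy as np

    # Toy greedy heuristic (not the paper's optimization): use a battery to push the
    # net wind injection toward the value scheduled at the last economic dispatch.
    # All numbers (profile, capacity, power limit, efficiency) are invented.
    rng = np.random.default_rng(3)

    scheduled = 50.0                                   # MW committed at the last ED
    wind = scheduled + rng.normal(0.0, 8.0, 60)        # minute-by-minute wind output (MW)

    cap_mwh, p_max, eta, dt_h = 10.0, 5.0, 0.92, 1 / 60
    soc = cap_mwh / 2                                  # start half full

    net = []
    for w in wind:
        deviation = w - scheduled                      # surplus (+) or deficit (-) vs schedule
        if deviation > 0:                              # charge with the surplus
            p = min(deviation, p_max, (cap_mwh - soc) / (eta * dt_h))
            soc += eta * p * dt_h
            net.append(w - p)
        else:                                          # discharge to cover the deficit
            p = min(-deviation, p_max, soc * eta / dt_h)
            soc -= p * dt_h / eta
            net.append(w + p)

    print(f"std of raw wind:      {np.std(wind):.2f} MW")
    print(f"std of net injection: {np.std(net):.2f} MW")
    ```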

  19. Large-scale tests of aqueous scrubber systems for LMFBR vented containment

    International Nuclear Information System (INIS)

    McCormack, J.D.; Hilliard, R.K.; Postma, A.K.

    1980-01-01

    Six large-scale air cleaning tests performed in the Containment Systems Test Facility (CSTF) are described. The test conditions simulated those postulated for hypothetical accidents in an LMFBR involving containment venting to control hydrogen concentration and containment overpressure. Sodium aerosols were generated by continuously spraying sodium into air and adding steam and/or carbon dioxide to create the desired Na₂O₂, Na₂CO₃ or NaOH aerosol. Two air cleaning systems were tested: (a) spray quench chamber, eductor venturi scrubber and high-efficiency fibrous scrubber in series; and (b) the same except with the spray quench chamber eliminated. The gas flow rates ranged up to 0.8 m³/s (1700 acfm) at temperatures up to 313 °C (600 °F). Quantities of aerosol removed from the gas stream ranged up to 700 kg per test. The systems performed very satisfactorily, with overall aerosol mass removal efficiencies exceeding 99.9% in each test.

  20. Real-time graphic display system for ROSA-V Large Scale Test Facility

    International Nuclear Information System (INIS)

    Kondo, Masaya; Anoda, Yoshinari; Osaki, Hideki; Kukita, Yutaka; Takigawa, Yoshio.

    1993-11-01

    A real-time graphic display system was developed for the ROSA-V Large Scale Test Facility (LSTF) experiments simulating accident management measures for prevention of severe core damage in pressurized water reactors (PWRs). The system works on an IBM workstation (Power Station RS/6000 model 560) and accommodates 512 channels out of about 2500 total measurements in the LSTF. It has three major functions: (a) displaying the coolant inventory distribution in the facility primary and secondary systems; (b) displaying the measured quantities at desired locations in the facility; and (c) displaying the time histories of measured quantities. The coolant inventory distribution is derived from differential pressure measurements along vertical sections and gamma-ray densitometer measurements for horizontal legs. The color display indicates liquid subcooling calculated from pressure and temperature at individual locations. (author)
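
    The coolant inventory display described above relies on the standard conversion from a vertical differential pressure measurement to a collapsed liquid level. A minimal sketch of that conversion is given below; the densities, span height and example reading are illustrative assumptions rather than LSTF values.

    ```python
    # Minimal sketch of converting a vertical differential pressure measurement into a
    # collapsed liquid level fraction, as used for coolant inventory displays. Densities,
    # span height and the example reading are illustrative assumptions, not LSTF data.
    G = 9.81  # m/s^2

    def collapsed_level_fraction(dp_pa, span_m, rho_liquid, rho_gas):
        """Fraction of the vertical span occupied by liquid, from the measured dP."""
        full = rho_liquid * G * span_m        # dP if the span were all liquid
        empty = rho_gas * G * span_m          # dP if the span were all gas/steam
        frac = (dp_pa - empty) / (full - empty)
        return min(max(frac, 0.0), 1.0)       # clamp to [0, 1] against sensor noise

    # example: 3 m span, rough saturated water/steam densities at about 7 MPa
    frac = collapsed_level_fraction(dp_pa=15_000.0, span_m=3.0, rho_liquid=740.0, rho_gas=36.5)
    print(f"collapsed liquid level: {frac:.2f} of the 3 m span")
    ```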

  1. Software And Systems Engineering Risk Management

    Science.gov (United States)

    2010-04-01

    Software and Systems Engineering Risk Management. John Walz, VP Technical and Conferences Activities, IEEE Computer Society; Vice-Chair Planning, Software & Systems Engineering Standards Committee, IEEE Computer Society; US TAG to ISO TMB Risk Management Working Group, Systems and Software. RSKM standards timeline: 2004 COSO Enterprise RSKM Framework; 2006 ISO/IEC 16085 Risk Management Process; 2008 ISO/IEC 12207 Software Lifecycle Processes; 2009 ISO/IEC ...

  2. Towards Large-area Field-scale Operational Evapotranspiration for Water Use Mapping

    Science.gov (United States)

    Senay, G. B.; Friedrichs, M.; Morton, C.; Huntington, J. L.; Verdin, J.

    2017-12-01

    Field-scale evapotranspiration (ET) estimates are needed for improving surface and groundwater use and water budget studies. Ideally, field-scale ET estimates would be at regional to national levels and cover long time periods. As a result of the large data storage and computational requirements associated with processing field-scale satellite imagery such as Landsat, numerous challenges remain to develop operational ET estimates over large areas for detailed water use and availability studies. However, the combination of new science, data availability, and cloud computing technology is enabling unprecedented capabilities for ET mapping. To demonstrate this capability, we used Google's Earth Engine cloud computing platform to create nationwide annual ET estimates with 30-meter resolution Landsat (~16,000 images) and gridded weather data using the Operational Simplified Surface Energy Balance (SSEBop) model in support of the National Water Census, a USGS research program designed to build decision support capacity for water management agencies and other natural resource managers. By leveraging Google's Earth Engine Application Programming Interface (API) and developing software in a collaborative, open-platform environment, we rapidly advance from research towards applications for large-area field-scale ET mapping. Cloud computing of the Landsat image archive combined with other satellite, climate, and weather data is creating never-imagined opportunities for assessing ET model behavior and uncertainty, and ultimately providing the ability for more robust operational monitoring and assessment of water use at field scales.
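
    Large-area processing of this kind typically pushes the reduction to Earth Engine's servers and only pulls small summaries back to the client. The sketch below shows that pattern with the Earth Engine Python API; it is not the SSEBop implementation, and the dataset ID, band, region, dates and the need for an authenticated account are all assumptions.

    ```python
    # Minimal sketch of a large-area, server-side reduction with the Earth Engine
    # Python API. This is NOT the SSEBop implementation; dataset ID, band, region and
    # dates are illustrative assumptions, and an authenticated account is assumed.
    import ee

    ee.Initialize()                                                  # assumes prior ee.Authenticate()

    region = ee.Geometry.Rectangle([-120.0, 36.0, -119.0, 37.0])     # hypothetical study area
    landsat = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")          # assumed collection ID
               .filterDate("2020-01-01", "2020-12-31")
               .filterBounds(region)
               .select("ST_B10"))                                    # surface-temperature band (assumed)

    annual_mean = landsat.mean()                                     # server-side annual composite
    stats = annual_mean.reduceRegion(reducer=ee.Reducer.mean(),
                                     geometry=region, scale=30, maxPixels=1e9)
    print(stats.getInfo())                                           # only the small result comes back
    ```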

  3. Archiving Software Systems: Approaches to Preserve Computational Capabilities

    Science.gov (United States)

    King, T. A.

    2014-12-01

    A great deal of effort is made to preserve scientific data. Not only because data is knowledge, but it is often costly to acquire and is sometimes collected under unique circumstances. Another part of the science enterprise is the development of software to process and analyze the data. Developed software is also a large investment and worthy of preservation. However, the long term preservation of software presents some challenges. Software often requires a specific technology stack to operate. This can include software, operating systems and hardware dependencies. One past approach to preserve computational capabilities is to maintain ancient hardware long past its typical viability. On an archive horizon of 100 years, this is not feasible. Another approach to preserve computational capabilities is to archive source code. While this can preserve details of the implementation and algorithms, it may not be possible to reproduce the technology stack needed to compile and run the resulting applications. This future forward dilemma has a solution. Technology used to create clouds and process big data can also be used to archive and preserve computational capabilities. We explore how basic hardware, virtual machines, containers and appropriate metadata can be used to preserve computational capabilities and to archive functional software systems. In conjunction with data archives, this provides scientist with both the data and capability to reproduce the processing and analysis used to generate past scientific results.

  4. Analyzing the State of Static Analysis : A Large-Scale Evaluation in Open Source Software

    NARCIS (Netherlands)

    Beller, M.; Bholanath, R.; McIntosh, S.; Zaidman, A.E.

    2016-01-01

    The use of automatic static analysis has been a software engineering best practice for decades. However, we still do not know a lot about its use in real-world software projects: How prevalent is the use of Automated Static Analysis Tools (ASATs) such as FindBugs and JSHint? How do developers use

  5. Software metrics: Software quality metrics for distributed systems. [reliability engineering

    Science.gov (United States)

    Post, J. V.

    1981-01-01

    Software quality metrics was extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.

  6. Software simulation: a tool for enhancing control system design

    International Nuclear Information System (INIS)

    Sze, B.; Ridgway, G.H.

    2008-01-01

    The creation, implementation and management of engineering design tools are important to the quality and efficiency of any large engineering project. Some of the most complicated tools to develop are system simulators. The development and implementation of system simulators to support replacement fuel handling control systems is of particular interest to the Canadian nuclear industry given the current age of installations and the risk of obsolescence to many utilities. The use of such simulator tools has been known to significantly improve successful deployment of new software packages and maintenance-related software changes while reducing the time required for their overall development. Moreover, these simulation systems can also serve as operator training stations and provide a virtual environment for site engineers to test operational changes before they are uploaded to the actual system. (author)

  7. Study of a large scale neutron measurement channel

    International Nuclear Information System (INIS)

    Amarouayache, Anissa; Ben Hadid, Hayet.

    1982-12-01

    A large-scale measurement channel allows the signal coming from a single neutron sensor to be processed in three different operating modes: pulse, fluctuation and current. The study described in this note comprises three parts: - A theoretical study of the large-scale channel is given together with a brief description, and the results obtained so far in that domain are presented. - The fluctuation mode is studied thoroughly and the improvements to be made are defined. A linear fluctuation channel with automatic scale switching is studied and the test results are given. In this large-scale channel, the data processing method is analogue. - To become independent of the problems generated by analogue processing of the fluctuation signal, a digital data processing method is tested and its validity is demonstrated. The results obtained on a test system built according to this method are given and a preliminary plan for further research is defined. [fr]

  8. A coordination model for ultra-large scale systems of systems

    Directory of Open Access Journals (Sweden)

    Manuela L. Bujorianu

    2013-11-01

    Full Text Available Ultra-large multi-agent systems are becoming increasingly popular due to the rapid decline of individual production costs and the potential for speeding up the solution of complex problems. Examples include nano-robots, systems of nano-satellites for dangerous meteorite detection, or cultures of stem cells for organ regeneration or nerve repair. The topics associated with these systems are usually dealt with within the theories of intelligent swarms or biologically inspired computation systems. Stochastic models play an important role and are based on various formulations of statistical mechanics. In these cases, the main assumption is that the swarm elements have a simple behaviour and that some average properties can be deduced for the entire swarm. In contrast, complex systems in areas like aeronautics are formed by elements with sophisticated behaviour, which are even autonomous. In situations like this, a new approach to swarm coordination is necessary. We present a stochastic model where the swarm elements are communicating autonomous systems, the coordination is separated from the components' autonomous activity, and the entire swarm can be abstracted as a piecewise deterministic Markov process, which constitutes one of the most popular models in stochastic control. Keywords: ultra large multi-agent systems, system of systems, autonomous systems, stochastic hybrid systems.

  9. Unisys' experience in software quality and productivity management of an existing system

    Science.gov (United States)

    Munson, John B.

    1988-01-01

    A summary of Quality Improvement techniques, implementation, and results in the maintenance, management, and modification of large software systems for the Space Shuttle Program's ground-based systems is provided.

  10. A Topology Visualization Early Warning Distribution Algorithm for Large-Scale Network Security Incidents

    Directory of Open Access Journals (Sweden)

    Hui He

    2013-01-01

    Full Text Available Research on early warning systems for large-scale network security incidents is of great significance. Such a system can improve the network's emergency response capabilities, mitigate the damage of cyber attacks, and strengthen the system's counterattack ability. A comprehensive early warning system that combines active measurement and anomaly detection is presented in this paper. The key visualization algorithm and technology of the system are the main focus. The planar visualization of the large-scale network system is realized following a divide-and-conquer approach. First, the topology of the large-scale network is divided into several small-scale networks by the MLkP/CR algorithm. Second, the subgraph planar visualization algorithm is applied to each small-scale network. Finally, the small-scale networks' topologies are combined into one topology by an automatic force-analysis distribution algorithm. As the algorithm transforms the large-scale network topology visualization problem into a series of small-scale visualization and placement problems, it has higher parallelism and is able to handle the display of ultra-large-scale network topologies.
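
    The divide-and-conquer layout idea is easy to prototype with off-the-shelf graph tooling: partition the graph, lay out each part independently, then offset the parts on a grid. The sketch below does exactly that with NetworkX, substituting greedy modularity communities for the MLkP/CR partitioning and a plain spring layout for the paper's force-analysis placement.

    ```python
    import math
    import networkx as nx

    # Divide-and-conquer layout sketch: partition the graph, lay out each part
    # independently, then place the parts on a grid. Greedy modularity communities and
    # spring_layout stand in for the MLkP/CR and force-analysis algorithms of the paper.
    G = nx.barabasi_albert_graph(300, 2, seed=7)          # stand-in "large" topology

    communities = nx.algorithms.community.greedy_modularity_communities(G)
    positions = {}
    grid = math.ceil(math.sqrt(len(communities)))

    for idx, nodes in enumerate(communities):
        sub = G.subgraph(nodes)
        local = nx.spring_layout(sub, seed=idx)            # independent layout per part
        dx, dy = 3.0 * (idx % grid), 3.0 * (idx // grid)   # offset each part on a grid
        for node, (x, y) in local.items():
            positions[node] = (x + dx, y + dy)

    print(f"{len(communities)} parts laid out, {len(positions)} node positions computed")
    ```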

  11. Mining the Mind Research Network: A Novel Framework for Exploring Large Scale, Heterogeneous Translational Neuroscience Research Data Sources

    Science.gov (United States)

    Bockholt, Henry J.; Scully, Mark; Courtney, William; Rachakonda, Srinivas; Scott, Adam; Caprihan, Arvind; Fries, Jill; Kalyanam, Ravi; Segall, Judith M.; de la Garza, Raul; Lane, Susan; Calhoun, Vince D.

    2009-01-01

    A neuroinformatics (NI) system is critical to brain imaging research in order to shorten the time between study conception and results. Such a NI system is required to scale well when large numbers of subjects are studied. Further, when multiple sites participate in research projects organizational issues become increasingly difficult. Optimized NI applications mitigate these problems. Additionally, NI software enables coordination across multiple studies, leveraging advantages potentially leading to exponential research discoveries. The web-based, Mind Research Network (MRN), database system has been designed and improved through our experience with 200 research studies and 250 researchers from seven different institutions. The MRN tools permit the collection, management, reporting and efficient use of large scale, heterogeneous data sources, e.g., multiple institutions, multiple principal investigators, multiple research programs and studies, and multimodal acquisitions. We have collected and analyzed data sets on thousands of research participants and have set up a framework to automatically analyze the data, thereby making efficient, practical data mining of this vast resource possible. This paper presents a comprehensive framework for capturing and analyzing heterogeneous neuroscience research data sources that has been fully optimized for end-users to perform novel data mining. PMID:20461147

  12. Mining the mind research network: a novel framework for exploring large scale, heterogeneous translational neuroscience research data sources.

    Directory of Open Access Journals (Sweden)

    Henry Jeremy Bockholt

    2010-04-01

    Full Text Available A neuroinformatics (NI system is critical to brain imaging research in order to shorten the time between study conception and results. Such a NI system is required to scale well when large numbers of subjects are studied. Further, when multiple sites participate in research projects organizational issues become increasingly difficult. Optimized NI applications mitigate these problems. Additionally, NI software enables coordination across multiple studies, leveraging advantages potentially leading to exponential research discoveries. The web-based, Mind Research Network (MRN, database system has been designed and improved through our experience with 200 research studies and 250 researchers from 7 different institutions. The MRN tools permit the collection, management, reporting and efficient use of large scale, heterogeneous data sources, e.g., multiple institutions, multiple principal investigators, multiple research programs and studies, and multimodal acquisitions. We have collected and analyzed data sets on thousands of research participants and have set up a framework to automatically analyze the data, thereby making efficient, practical data mining of this vast resource possible. This paper presents a comprehensive framework for capturing and analyzing heterogeneous neuroscience research data sources that has been fully optimized for end-users to perform novel data mining.

  13. Software Engineering and Swarm-Based Systems

    Science.gov (United States)

    Hinchey, Michael G.; Sterritt, Roy; Pena, Joaquin; Rouff, Christopher A.

    2006-01-01

    We discuss two software engineering aspects in the development of complex swarm-based systems. NASA researchers have been investigating various possible concept missions that would greatly advance future space exploration capabilities. The concept mission that we have focused on exploits the principles of autonomic computing as well as being based on the use of intelligent swarms, whereby a (potentially large) number of similar spacecraft collaborate to achieve mission goals. The intent is that such systems not only can be sent to explore remote and harsh environments but also are endowed with greater degrees of protection and longevity to achieve mission goals.

  14. LARGE SCALE DISTRIBUTED PARAMETER MODEL OF MAIN MAGNET SYSTEM AND FREQUENCY DECOMPOSITION ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    ZHANG,W.; MARNERIS, I.; SANDBERG, J.

    2007-06-25

    A large accelerator main magnet system consists of hundreds, even thousands, of dipole magnets. They are linked together in selected configurations to provide highly uniform dipole fields when powered. Distributed capacitance, insulation resistance, coil resistance, magnet inductance, and the coupling inductance of the upper and lower pancakes make each magnet a complex network. When all dipole magnets are chained together in a circle, they become a coupled pair of very-high-order complex ladder networks. In this study, a network of more than a thousand inductive, capacitive or resistive elements is used to model an actual system. The circuit is a large-scale network; its equivalent polynomial form has a degree of several hundred. Analysis of this high-order circuit and simulation of the response of any or all components is often computationally infeasible. We present methods that use a frequency decomposition approach to effectively simulate and analyze magnet configurations and power supply topologies.
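
    A feel for why such ladder networks become hard to analyze at high order can be had by sweeping the input impedance of even a simple series R-L / shunt C-R ladder over frequency. The component values and section count in the sketch below are invented and bear no relation to the actual magnet parameters.

    ```python
    import numpy as np

    # Minimal sketch (invented values, not the actual magnet parameters): input impedance
    # of an N-section ladder network, each section a series R-L element feeding a shunt
    # C branch in parallel with a leakage resistance.
    def ladder_input_impedance(freq_hz, n_sections=100, L=1e-3, R_series=0.01, C=1e-9, R_leak=1e7):
        w = 2j * np.pi * freq_hz
        z = np.inf                                        # far end left open-circuited
        for _ in range(n_sections):
            z_shunt = 1.0 / (w * C + 1.0 / R_leak)        # shunt C in parallel with leakage R
            z_par = z_shunt if np.isinf(z) else (z * z_shunt) / (z + z_shunt)
            z = R_series + w * L + z_par                  # add the next series R-L element
        return z

    freqs = np.logspace(1, 6, 200)
    zmag = np.abs([ladder_input_impedance(f) for f in freqs])
    for f, m in zip(freqs[::50], zmag[::50]):
        print(f"{f:10.1f} Hz  |Z| = {m:.3e} ohm")
    ```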

  15. Multi-function nuclear weight scale system

    International Nuclear Information System (INIS)

    Zheng Mingquan; Sun Jinhua; Jia Changchun; Wang Mingqian; Tang Ke

    1998-01-01

    The authors describe the design of the hardware and software of a Multi-function Nuclear Weight Scale System, based on an RS485-compliant communication protocol between a master (an industrial control computer, 386) and a slave (a single-chip 8098 microcontroller), and present its main functions.

  16. Software Radar Technology

    Directory of Open Access Journals (Sweden)

    Tang Jun

    2015-08-01

    Full Text Available In this paper, the definition and the key features of Software Radar, a new concept, are proposed and discussed. We consider the development of modern radar system technology to be divided into three stages: Digital Radar, Software Radar and Intelligent Radar, and the second stage is just commencing now. A Software Radar system should be a combination of various modern digital modular components conforming to certain software and hardware standards. Moreover, a Software Radar system with an open system architecture that decouples application software from low-level hardware can easily adopt a "user requirements-oriented" development methodology instead of the traditional "specific function-oriented" one. Compared with traditional Digital Radar, a Software Radar system can be easily reconfigured and scaled up or down to adapt to changes in requirements and technologies. A demonstration Software Radar signal processing system, RadarLab 2.0, developed by Tsinghua University, is introduced in this paper, and suggestions for the future development of Software Radar in China are given in the conclusion.

  17. Third generation participatory design in health informatics--making user participation applicable to large-scale information system projects.

    Science.gov (United States)

    Pilemalm, Sofie; Timpka, Toomas

    2008-04-01

    Participatory Design (PD) methods in the field of health informatics have mainly been applied to the development of small-scale systems with homogeneous user groups in local settings. Meanwhile, health service organizations are becoming increasingly large and complex in character, making it necessary to extend the scope of the systems that are used for managing data, information and knowledge. This study reports participatory action research on the development of a PD framework for large-scale system design. The research was conducted in a public health informatics project aimed at developing a system for 175,000 users. A renewed PD framework was developed in response to six major limitations experienced to be associated with the existing methods. The resulting framework preserves the theoretical grounding, but extends the toolbox to suit applications in networked health service organizations. Future research should involve evaluations of the framework in other health service settings where comprehensive HISs are developed.

  18. Survey of large-scale solar water heaters installed in Taiwan, China

    Energy Technology Data Exchange (ETDEWEB)

    Chang Keh-Chin; Lee Tsong-Sheng; Chung Kung-Ming [Cheng Kung Univ., Tainan (China); Lien Ya-Feng; Lee Chine-An [Cheng Kung Univ. Research and Development Foundation, Tainan (China)

    2008-07-01

    Almost all the solar collectors installed in Taiwan, China are used for the production of hot water for homeowners (residential systems), in which the area of solar collectors is less than 10 square meters. From 2001 to 2006, only 39 large-scale systems (defined as having a solar collector area of over 100 m²) were installed. They are used for rooming houses (dormitories), swimming pools, restaurants, and manufacturing processes. A comprehensive survey of those large-scale solar water heaters was conducted in 2006. The objectives of the survey were to assess the systems' performance and to obtain feedback from the individual users. It was found that lack of experience in system design and maintenance is a key factor limiting the reliable operation of a system. For further promotion of large-scale solar water heaters in Taiwan, a more comprehensive program on system design for manufacturing processes should be conducted. (orig.)

  19. Developing Large-Scale Bayesian Networks by Composition: Fault Diagnosis of Electrical Power Systems in Aircraft and Spacecraft

    Science.gov (United States)

    Mengshoel, Ole Jakob; Poll, Scott; Kurtoglu, Tolga

    2009-01-01

    In this paper, we investigate the use of Bayesian networks to construct large-scale diagnostic systems. In particular, we consider the development of large-scale Bayesian networks by composition. This compositional approach reflects how (often redundant) subsystems are architected to form systems such as electrical power systems. We develop high-level specifications, Bayesian networks, clique trees, and arithmetic circuits representing 24 different electrical power systems. The largest among these 24 Bayesian networks contains over 1,000 random variables. Another BN represents the real-world electrical power system ADAPT, which is representative of electrical power systems deployed in aerospace vehicles. In addition to demonstrating the scalability of the compositional approach, we briefly report on experimental results from the diagnostic competition DXC, where the ProADAPT team, using techniques discussed here, obtained the highest scores in both Tier 1 (among 9 international competitors) and Tier 2 (among 6 international competitors) of the industrial track. While we consider diagnosis of power systems specifically, we believe this work is relevant to other system health management problems, in particular in dependable systems such as aircraft and spacecraft. (See CASI ID 20100021910 for supplemental data disk.)
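
    The core of such a diagnostic Bayesian network is posterior inference over component health given observed sensor readings. The sketch below does that by brute-force enumeration on a deliberately tiny hand-built network whose structure and probabilities are invented for illustration and have nothing to do with the ADAPT model.

    ```python
    # Minimal sketch of diagnostic inference in a tiny hand-built Bayesian network
    # (battery -> sensor -> reading), using brute-force enumeration. The structure and
    # probabilities are illustrative assumptions, not the ADAPT model.

    # P(battery_ok), P(sensor_ok), and P(reading_ok | battery_ok, sensor_ok)
    p_batt = {True: 0.99, False: 0.01}
    p_sens = {True: 0.95, False: 0.05}

    def p_reading(reading_ok, batt_ok, sens_ok):
        if not sens_ok:
            return 0.5                      # a broken sensor reads arbitrarily
        p_ok = 0.98 if batt_ok else 0.05    # a healthy sensor tracks the battery state
        return p_ok if reading_ok else 1.0 - p_ok

    def posterior_battery_ok(reading_ok):
        """P(battery_ok | reading) by summing over the hidden sensor state."""
        joint = {b: sum(p_batt[b] * p_sens[s] * p_reading(reading_ok, b, s) for s in (True, False))
                 for b in (True, False)}
        return joint[True] / sum(joint.values())

    print("P(battery ok | low reading) =", round(posterior_battery_ok(False), 4))
    ```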

  20. Using CyberShake Workflows to Manage Big Seismic Hazard Data on Large-Scale Open-Science HPC Resources

    Science.gov (United States)

    Callaghan, S.; Maechling, P. J.; Juve, G.; Vahi, K.; Deelman, E.; Jordan, T. H.

    2015-12-01

    The CyberShake computational platform, developed by the Southern California Earthquake Center (SCEC), is an integrated collection of scientific software and middleware that performs 3D physics-based probabilistic seismic hazard analysis (PSHA) for Southern California. CyberShake integrates large-scale and high-throughput research codes to produce probabilistic seismic hazard curves for individual locations of interest and hazard maps for an entire region. A recent CyberShake calculation produced about 500,000 two-component seismograms for each of 336 locations, resulting in over 300 million synthetic seismograms in a Los Angeles-area probabilistic seismic hazard model. CyberShake calculations require a series of scientific software programs. Early computational stages produce data used as inputs by later stages, so we describe CyberShake calculations using a workflow definition language. Scientific workflow tools automate and manage the input and output data and enable remote job execution on large-scale HPC systems. To satisfy the requests of broad impact users of CyberShake data, such as seismologists, utility companies, and building code engineers, we successfully completed CyberShake Study 15.4 in April and May 2015, calculating a 1 Hz urban seismic hazard map for Los Angeles. We distributed the calculation between the NSF Track 1 system NCSA Blue Waters, the DOE Leadership-class system OLCF Titan, and USC's Center for High Performance Computing. This study ran for over 5 weeks, burning about 1.1 million node-hours and producing over half a petabyte of data. The CyberShake Study 15.4 results doubled the maximum simulated seismic frequency from 0.5 Hz to 1.0 Hz as compared to previous studies, representing a factor of 16 increase in computational complexity. We will describe how our workflow tools supported splitting the calculation across multiple systems. We will explain how we modified CyberShake software components, including GPU implementations and

  1. Large-scale educational telecommunications systems for the US: An analysis of educational needs and technological opportunities

    Science.gov (United States)

    Morgan, R. P.; Singh, J. P.; Rothenberg, D.; Robinson, B. E.

    1975-01-01

    The needs to be served, the subsectors in which the system might be used, the technology employed, and the prospects for future utilization of an educational telecommunications delivery system are described and analyzed. Educational subsectors are analyzed with emphasis on the current status and trends within each subsector. Issues which affect future development, and prospects for future use of media, technology, and large-scale electronic delivery within each subsector are included. Information on technology utilization is presented. Educational telecommunications services are identified and grouped into categories: public television and radio, instructional television, computer aided instruction, computer resource sharing, and information resource sharing. Technology based services, their current utilization, and factors which affect future development are stressed. The role of communications satellites in providing these services is discussed. Efforts to analyze and estimate future utilization of large-scale educational telecommunications are summarized. Factors which affect future utilization are identified. Conclusions are presented.

  2. New generation pharmacogenomic tools: a SNP linkage disequilibrium Map, validated SNP assay resource, and high-throughput instrumentation system for large-scale genetic studies.

    Science.gov (United States)

    De La Vega, Francisco M; Dailey, David; Ziegle, Janet; Williams, Julie; Madden, Dawn; Gilbert, Dennis A

    2002-06-01

    Since public and private efforts announced the first draft of the human genome last year, researchers have reported great numbers of single nucleotide polymorphisms (SNPs). We believe that the availability of well-mapped, quality SNP markers constitutes the gateway to a revolution in genetics and personalized medicine that will lead to better diagnosis and treatment of common complex disorders. A new generation of tools and public SNP resources for pharmacogenomic and genetic studies--specifically for candidate-gene, candidate-region, and whole-genome association studies--will form part of the new scientific landscape. This will only be possible through the greater accessibility of SNP resources and superior high-throughput instrumentation-assay systems that enable affordable, highly productive large-scale genetic studies. We are contributing to this effort by developing a high-quality linkage disequilibrium SNP marker map and an accompanying set of ready-to-use, validated SNP assays across every gene in the human genome. This effort incorporates both the public sequence and SNP data sources, and Celera Genomics' human genome assembly and enormous resource of physically mapped SNPs (approximately 4,000,000 unique records). This article discusses our approach and methodology for designing the map, choosing quality SNPs, designing and validating these assays, and obtaining population frequencies of the polymorphisms. We also discuss an advanced, high-performance SNP assay chemistry--a new generation of the TaqMan probe-based 5' nuclease assay--and a high-throughput instrumentation-software system for large-scale genotyping. We provide the new SNP map and validation information, validated SNP assays and reagents, and instrumentation systems as a novel resource for genetic discoveries.
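
    Linkage disequilibrium between two SNPs, the quantity underlying the map described above, is usually summarized by D, D' and r² computed from haplotype and allele frequencies. The sketch below computes them from made-up haplotype counts, purely as a generic illustration.

    ```python
    from collections import Counter

    # Generic illustration (made-up data): pairwise linkage disequilibrium (D, D', r^2)
    # between two biallelic SNPs from phased haplotypes coded as (allele1, allele2).
    haplotypes = [("A", "G")] * 46 + [("A", "T")] * 14 + [("C", "G")] * 9 + [("C", "T")] * 31

    counts = Counter(haplotypes)
    n = len(haplotypes)
    p_a = sum(c for (a, _), c in counts.items() if a == "A") / n   # freq of allele A at SNP 1
    p_g = sum(c for (_, b), c in counts.items() if b == "G") / n   # freq of allele G at SNP 2
    p_ag = counts[("A", "G")] / n                                  # freq of the A-G haplotype

    D = p_ag - p_a * p_g
    d_max = min(p_a * (1 - p_g), (1 - p_a) * p_g) if D > 0 else min(p_a * p_g, (1 - p_a) * (1 - p_g))
    r2 = D**2 / (p_a * (1 - p_a) * p_g * (1 - p_g))

    print(f"D = {D:.4f}, D' = {D / d_max:.3f}, r^2 = {r2:.3f}")
    ```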

  3. Software Quality Assurance for Nuclear Safety Systems

    International Nuclear Information System (INIS)

    Sparkman, D R; Lagdon, R

    2004-01-01

    The US Department of Energy has undertaken an initiative to improve the quality of software used to design and operate their nuclear facilities across the United States. One aspect of this initiative is to revise or create new directives and guides associated with quality practices for the safety software in its nuclear facilities. Safety software includes the safety structures, systems, and components software and firmware, support software, and design and analysis software used to ensure the safety of the facility. DOE nuclear facilities are unique when compared to commercial nuclear or other industrial activities in terms of the types and quantities of hazards that must be controlled to protect workers, the public and the environment. Because of these differences, DOE must develop an approach to software quality assurance that ensures appropriate risk mitigation by developing a framework of requirements that accomplishes the following goals: (1) ensures the software processes developed to address nuclear safety in the design, operation, construction and maintenance of its facilities are safe; (2) considers the larger system that uses the software and its impacts; and (3) ensures that software failures do not create unsafe conditions. Software designers for nuclear systems and processes must reduce risks in software applications by incorporating processes that recognize, detect, and mitigate software failure in safety-related systems. They must also ensure that fail-safe modes and component testing are incorporated into software design. For nuclear facilities, the consideration of risk is not necessarily sufficient to ensure safety. Systematic evaluation, independent verification and system safety analysis must be considered for software design, implementation, and operation. The software industry primarily uses risk analysis to determine the appropriate level of rigor applied to software practices. This risk-based approach distinguishes safety

  4. Implementing effect of energy efficiency supervision system for government office buildings and large-scale public buildings in China

    International Nuclear Information System (INIS)

    Zhao Jing; Wu Yong; Zhu Neng

    2009-01-01

    The Chinese central government released a document to initiate the construction of an energy efficiency supervision system for government office buildings and large-scale public buildings in 2007, which marks the overall start of energy efficiency management of existing buildings in China, with government office buildings and large-scale public buildings as a breakthrough. This paper focuses on the implementation effect in the demonstration regions across China after less than one year. It first introduces the target and path of the energy efficiency supervision system, then describes the achievements and problems during the implementation process in the first demonstration provinces and cities. Data from the energy efficiency public notices in some typical demonstration provinces and cities were analyzed statistically. It can be concluded that buildings with different functions have different energy consumption and that the average energy consumption of large-scale public buildings in China is too high compared with common public buildings and residential buildings. The obstacles to be overcome are summarized and prospects for future work are also put forward at the end.

  5. Implementing effect of energy efficiency supervision system for government office buildings and large-scale public buildings in China

    Energy Technology Data Exchange (ETDEWEB)

    Zhao Jing [School of Environmental Science and Engineering, Tianjin University, Tianjin 300072 (China)], E-mail: zhaojing@tju.edu.cn; Wu Yong [Department of Science and Technology, Ministry of Housing and Urban-Rural Development of the People's Republic of China, Beijing 100835 (China); Zhu Neng [School of Environmental Science and Engineering, Tianjin University, Tianjin 300072 (China)

    2009-06-15

    The Chinese central government released a document to initiate the construction of an energy efficiency supervision system for government office buildings and large-scale public buildings in 2007, which marks the overall start of energy efficiency management of existing buildings in China, with government office buildings and large-scale public buildings as a breakthrough. This paper focuses on the implementation effect in the demonstration regions across China after less than one year. It first introduces the target and path of the energy efficiency supervision system, then describes the achievements and problems during the implementation process in the first demonstration provinces and cities. Data from the energy efficiency public notices in some typical demonstration provinces and cities were analyzed statistically. It can be concluded that buildings with different functions have different energy consumption and that the average energy consumption of large-scale public buildings in China is too high compared with common public buildings and residential buildings. The obstacles to be overcome are summarized and prospects for future work are also put forward at the end.

  6. Implementing effect of energy efficiency supervision system for government office buildings and large-scale public buildings in China

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Jing; Zhu, Neng [School of Environmental Science and Engineering, Tianjin University, Tianjin 300072 (China); Wu, Yong [Department of Science and Technology, Ministry of Housing and Urban-Rural Development of the People's Republic of China, Beijing 100835 (China)

    2009-06-15

    The Chinese central government released a document in 2007 initiating the construction of an energy efficiency supervision system for government office buildings and large-scale public buildings, which marks the overall start of energy efficiency management for existing buildings in China, with government office buildings and large-scale public buildings as the breakthrough point. This paper focuses on the implementation effects in the demonstration regions across China over a period of less than one year. It first introduces the target and path of the energy efficiency supervision system, and then describes the achievements and problems encountered during implementation in the first demonstration provinces and cities. Data from the energy efficiency public notices in some typical demonstration provinces and cities were analyzed statistically. It can be concluded that buildings with different functions have different energy consumption, and that the average energy consumption of large-scale public buildings in China is too high compared with common public buildings and residential buildings. The obstacles that still need to be overcome are summarized and prospects for future work are put forward. (author)

  7. Across Space and Time: Social Responses to Large-Scale Biophysical Systems

    Science.gov (United States)

    Macmynowski, Dena P.

    2007-06-01

    The conceptual rubric of ecosystem management has been widely discussed and deliberated in conservation biology, environmental policy, and land/resource management. In this paper, I argue that two critical aspects of the ecosystem management concept require greater attention in policy and practice. First, although emphasis has been placed on the “space” of systems, the “time”—or rates of change—associated with biophysical and social systems has received much less consideration. Second, discussions of ecosystem management have often neglected the temporal disconnects between changes in biophysical systems and the response of social systems to management issues and challenges. The empirical basis of these points is a case study of the “Crown of the Continent Ecosystem,” an international transboundary area of the Rocky Mountains that surrounds Glacier National Park (USA) and Waterton Lakes National Park (Canada). This project assessed the experiences and perspectives of 1) middle- and upper-level government managers responsible for interjurisdictional cooperation, and 2) environmental nongovernment organizations with an international focus. I identify and describe 10 key challenges to increasing the extent and intensity of transboundary cooperation in land/resource management policy and practice. These issues are discussed in terms of their political, institutional, cultural, information-based, and perceptual elements. Analytic techniques include a combination of environmental history, semistructured interviews with 48 actors, and text analysis in a systematic qualitative framework. The central conclusion of this work is that the rates of response of human social systems must be better integrated with the rates of ecological change. This challenge is equal to or greater than the well-recognized need to adapt the spatial scale of human institutions to large-scale ecosystem processes and transboundary wildlife.

  8. Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2011-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: the Gyrokinetic Toroidal Code in magnetic fusion to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters. © 2011 IEEE.
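
    The modeling idea can be illustrated with a toy calculation. The sketch below is not the authors' model; it merely assumes a runtime split into compute time, a memory term in which cores on a socket share the measured STREAM bandwidth, and a Hockney-style latency/bandwidth communication term. All parameter names and values are illustrative.

```python
# Minimal sketch (not the authors' exact model): estimate hybrid MPI/OpenMP
# runtime as compute time plus memory-bandwidth contention plus communication.

def predict_runtime(work_per_core_flops, flops_per_sec,
                    bytes_per_core, stream_bw_per_socket, cores_per_socket,
                    msg_bytes, latency_s, bandwidth_bytes_s, n_msgs):
    """All inputs are per-process estimates; names are illustrative."""
    t_comp = work_per_core_flops / flops_per_sec
    # Cores on a socket share the measured STREAM bandwidth, so the effective
    # per-core bandwidth shrinks as more cores contend for it.
    effective_bw = stream_bw_per_socket / cores_per_socket
    t_mem = bytes_per_core / effective_bw
    # Simple latency/bandwidth (Hockney-style) communication term.
    t_comm = n_msgs * (latency_s + msg_bytes / bandwidth_bytes_s)
    return t_comp + t_mem + t_comm

if __name__ == "__main__":
    # Made-up numbers: 10 GFLOP of work at 2 GFLOP/s, 8 GB of traffic against
    # 10 GB/s per socket shared by 4 cores, and 200 messages of 1 MB each.
    print(predict_runtime(1e10, 2e9, 8e9, 1e10, 4, 1e6, 5e-6, 1e9, 200))
```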

  9. Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems

    KAUST Repository

    Wu, Xingfu

    2011-08-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: the Gyrokinetic Toroidal Code in magnetic fusion to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters. © 2011 IEEE.

  10. The economics of information systems and software

    CERN Document Server

    Veryard, Richard

    2014-01-01

    The Economics of Information Systems and Software focuses on the economic aspects of information systems and software, including advertising, evaluation of information systems, and software maintenance. The book first elaborates on value and values, software business, and scientific information as an economic category. Discussions focus on information products and information services, special economic properties of information, culture and convergence, hardware and software products, materiality and consumption, technological progress, and software flexibility. The text then takes a look at a

  11. DNSSM: A Large Scale Passive DNS Security Monitoring Framework

    OpenAIRE

    Marchal , Samuel; François , Jérôme; Wagner , Cynthia; State , Radu; Dulaunoy , Alexandre; Engel , Thomas; Festor , Olivier

    2012-01-01

    International audience; We present a monitoring approach and the supporting software architecture for passive DNS traffic. Monitoring DNS traffic can reveal essential network and system level activity profiles. Worm infected and botnet participating hosts can be identified and malicious backdoor communications can be detected. Any passive DNS monitoring solution needs to address several challenges that range from architectural approaches for dealing with large volumes of data up to specific D...

  12. Analytical methods for large-scale sensitivity analysis using GRESS [GRadient Enhanced Software System] and ADGEN [Automated Adjoint Generator]

    International Nuclear Information System (INIS)

    Pin, F.G.

    1988-04-01

    Sensitivity analysis is an established methodology used by researchers in almost every field to gain essential insight in design and modeling studies and in performance assessments of complex systems. Conventional sensitivity analysis methodologies, however, have not enjoyed the widespread use they deserve considering the wealth of information they can provide, partly because of their prohibitive cost or the large initial analytical investment they require. Automated systems have recently been developed at ORNL to eliminate these drawbacks. Compilers such as GRESS and ADGEN now allow automatic and cost effective calculation of sensitivities in FORTRAN computer codes. In this paper, these and other related tools are described and their impact and applicability in the general areas of modeling, performance assessment and decision making for radioactive waste isolation problems are discussed. 7 refs., 2 figs
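
    As a rough illustration of what such tools provide, the sketch below hand-codes the derivative of a toy decay model and compares it with the conventional finite-difference (perturbation) estimate that automated derivative generation replaces. The model and values are invented for illustration; GRESS and ADGEN operate on FORTRAN source, which is not shown here.

```python
# Illustrative sketch only: GRESS and ADGEN instrument FORTRAN sources to
# produce derivatives automatically; here a toy model's sensitivity is coded
# by hand and checked against the conventional finite-difference estimate.
import math

def model(k, t=1.0, c0=1.0):
    """Toy first-order decay model: c(t) = c0 * exp(-k * t)."""
    return c0 * math.exp(-k * t)

def analytic_sensitivity(k, t=1.0, c0=1.0):
    """Exact dc/dk, the kind of derivative an adjoint/forward tool generates."""
    return -t * c0 * math.exp(-k * t)

def finite_difference_sensitivity(k, h=1e-6, t=1.0, c0=1.0):
    """Conventional perturbation approach: one extra model run per parameter."""
    return (model(k + h, t, c0) - model(k, t, c0)) / h

k = 0.5
print(analytic_sensitivity(k), finite_difference_sensitivity(k))
```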

  13. Comparison of PV system design software packages for urban applications

    Energy Technology Data Exchange (ETDEWEB)

    Gharakhani Siraki, Arbi; Pillay, Pragasen

    2010-09-15

    A large number of software packages are available for solar resource evaluation and PV system design. However, few of them are suitable for urban applications. In this paper a comparison has been made between two specifically designed solar tools known as the Ecotect 2010 and the PVsyst 5.05. Conclusions have been made for proper use of these packages based on their specifications and privileges. Moreover, the calculations have been repeated with HOMER software package (which is a generic tool) for the same location. The results suggest that a generic solar software tool should not be used for an urban application.

  14. Instrument control software development process for the multi-star AO system ARGOS

    Science.gov (United States)

    Kulas, M.; Barl, L.; Borelli, J. L.; Gässler, W.; Rabien, S.

    2012-09-01

    The ARGOS project (Advanced Rayleigh guided Ground layer adaptive Optics System) will upgrade the Large Binocular Telescope (LBT) with an AO System consisting of six Rayleigh laser guide stars. This adaptive optics system integrates several control loops and many different components like lasers, calibration swing arms and slope computers that are dispersed throughout the telescope. The purpose of the instrument control software (ICS) is running this AO system and providing convenient client interfaces to the instruments and the control loops. The challenges for the ARGOS ICS are the development of a distributed and safety-critical software system with no defects in a short time, the creation of huge and complex software programs with a maintainable code base, the delivery of software components with the desired functionality and the support of geographically distributed project partners. To tackle these difficult tasks, the ARGOS software engineers reuse existing software like the novel middleware from LINC-NIRVANA, an instrument for the LBT, provide many tests at different functional levels like unit tests and regression tests, agree about code and architecture style and deliver software incrementally while closely collaborating with the project partners. Many ARGOS ICS components are already successfully in use in the laboratories for testing ARGOS control loops.

  15. Review of software tools for design and analysis of large scale MRM proteomic datasets.

    Science.gov (United States)

    Colangelo, Christopher M; Chung, Lisa; Bruce, Can; Cheung, Kei-Hoi

    2013-06-15

    Selective or Multiple Reaction monitoring (SRM/MRM) is a liquid-chromatography (LC)/tandem-mass spectrometry (MS/MS) method that enables the quantitation of specific proteins in a sample by analyzing precursor ions and the fragment ions of their selected tryptic peptides. Instrumentation software has advanced to the point that thousands of transitions (pairs of primary and secondary m/z values) can be measured in a triple quadrupole instrument coupled to an LC, by a well-designed scheduling and selection of m/z windows. The design of a good MRM assay relies on the availability of peptide spectra from previous discovery-phase LC-MS/MS studies. The tedious aspect of manually developing and processing MRM assays involving thousands of transitions has spurred the development of software tools to automate this process. Software packages have been developed for project management, assay development, assay validation, data export, peak integration, quality assessment, and biostatistical analysis. No single tool provides a complete end-to-end solution, thus this article reviews the current state and discusses future directions of these software tools in order to enable researchers to combine these tools for a comprehensive targeted proteomics workflow. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  16. WebViz:A Web-based Collaborative Interactive Visualization System for large-Scale Data Sets

    Science.gov (United States)

    Yuen, D. A.; McArthur, E.; Weiss, R. M.; Zhou, J.; Yao, B.

    2010-12-01

    WebViz is a web-based application designed to conduct collaborative, interactive visualizations of large data sets for multiple users, allowing researchers situated all over the world to utilize the visualization services offered by the University of Minnesota’s Laboratory for Computational Sciences and Engineering (LCSE). This ongoing project has been built upon over the last 3 1/2 years. The motivation behind WebViz lies primarily with the need to parse through an increasing amount of data produced by the scientific community as a result of larger and faster multicore and massively parallel computers coming to the market, including the use of general purpose GPU computing. WebViz allows these large data sets to be visualized online by anyone with an account. The application allows users to save time and resources by visualizing data ‘on the fly’, wherever he or she may be located. By leveraging AJAX via the Google Web Toolkit (http://code.google.com/webtoolkit/), we are able to provide users with a remote, web portal to LCSE's (http://www.lcse.umn.edu) large-scale interactive visualization system already in place at the University of Minnesota. LCSE’s custom hierarchical volume rendering software provides high resolution visualizations on the order of 15 million pixels and has been employed for visualizing data primarily from simulations in astrophysics to geophysical fluid dynamics. In the current version of WebViz, we have implemented a highly extensible back-end framework built around HTTP "server push" technology. The web application is accessible via a variety of devices including netbooks, iPhones, and other web and javascript-enabled cell phones. Features in the current version include the ability for users to (1) securely log in, (2) launch multiple visualizations, (3) conduct collaborative visualization sessions, (4) delegate control aspects of a visualization to others, and (5) engage in collaborative chats with other users within the user interface.
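
    The "server push" idea mentioned above can be sketched with Server-Sent Events using only the Python standard library. This illustrates the pattern, not WebViz's actual GWT-based implementation; the host, port and messages are made up.

```python
# Illustrative sketch of HTTP "server push" (Server-Sent Events) using only the
# Python standard library; WebViz's real GWT-based back end differs.
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class PushHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/event-stream")
        self.send_header("Cache-Control", "no-cache")
        self.end_headers()
        # Push a few status updates to the client over one open connection.
        for frame in range(3):
            self.wfile.write(f"data: rendered frame {frame}\n\n".encode())
            self.wfile.flush()
            time.sleep(1)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), PushHandler).serve_forever()
```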

  17. Parameter estimation in large-scale systems biology models: a parallel and self-adaptive cooperative strategy.

    Science.gov (United States)

    Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R

    2017-01-21

    The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reduction of computation times with respect to several previous state of the art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
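
    The underlying estimation problem can be illustrated with a minimal least-squares fit of a toy kinetic model, shown below. This is only the inner cost-function layer; saCeSS itself wraps a parallel, cooperative scatter search around such a cost, which is not reproduced here, and all model details are invented.

```python
# Minimal sketch of the underlying estimation problem: fit a decay-rate
# parameter of a toy kinetic model to noisy data by least squares.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def simulate(k, t_eval):
    """Integrate dy/dt = -k*y with y(0) = 1 and return y at the given times."""
    sol = solve_ivp(lambda t, y: -k * y, (0, t_eval[-1]), [1.0], t_eval=t_eval)
    return sol.y[0]

# Synthetic "measurements" generated with a known rate plus a little noise.
t_obs = np.linspace(0, 5, 20)
rng = np.random.default_rng(0)
data = simulate(0.7, t_obs) + rng.normal(0, 0.01, t_obs.size)

def residuals(theta):
    return simulate(theta[0], t_obs) - data

fit = least_squares(residuals, x0=[0.1], bounds=(0.0, 10.0))
print("estimated k:", fit.x[0])
```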

  18. Distributed inter process communication framework of BES III DAQ online software

    International Nuclear Information System (INIS)

    Li Fei; Liu Yingjie; Ren Zhenyu; Wang Liang; Chinese Academy of Sciences, Beijing; Chen Mali; Zhu Kejun; Zhao Jingwei

    2006-01-01

    The DAQ (Data Acquisition) system is an important part of BES III, the large-scale high-energy physics detector at the BEPC. Inter-process communication (IPC) for online software in distributed environments is pivotal to the design and implementation of the DAQ system. This article introduces a distributed inter-process communication framework, based on CORBA, that is used in the BES III DAQ online software. The article mainly presents the design and implementation of the IPC framework and applications based on the IPC. (authors)

  19. Personalized Opportunistic Computing for CMS at Large Scale

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    **Douglas Thain** is an Associate Professor of Computer Science and Engineering at the University of Notre Dame, where he designs large scale distributed computing systems to power the needs of advanced science and...

  20. Spaceport Command and Control System Software Development

    Science.gov (United States)

    Mahlin, Jonathan Nicholas

    2017-01-01

    There is an immense challenge in organizing personnel across a large agency such as NASA, or even over a subset of that, like a center's Engineering directorate. Workforce inefficiencies and challenges are bound to grow over time without oversight and management. It is also not always possible to hire new employees to fill workforce gaps; therefore, available resources must be utilized more efficiently. The goal of this internship was to develop software that improves organizational efficiency by aiding managers, making employee information viewable and editable in an intuitive manner. This semester I created an application for managers that aids in optimizing allocation of employee resources for a single division with the possibility of scaling upwards. My duties this semester consisted of developing frontend and backend software to complete this task. The application provides user-friendly information displays and documentation of the workforce to allow NASA to diligently track the status and skills of its workforce. This tool should be able to show whether current employees are being effectively utilized and whether new hires are necessary to fill skill gaps.

  1. A mixed-methods study of system-level sustainability of evidence-based practices in 12 large-scale implementation initiatives.

    Science.gov (United States)

    Scudder, Ashley T; Taber-Thomas, Sarah M; Schaffner, Kristen; Pemberton, Joy R; Hunter, Leah; Herschell, Amy D

    2017-12-07

    In recent decades, evidence-based practices (EBPs) have been broadly promoted in community behavioural health systems in the United States of America, yet reported EBP penetration rates remain low. Determining how to systematically sustain EBPs in complex, multi-level service systems has important implications for public health. This study examined factors impacting the sustainability of parent-child interaction therapy (PCIT) in large-scale initiatives in order to identify potential predictors of sustainment. A mixed-methods approach to data collection was used. Qualitative interviews and quantitative surveys examining sustainability processes and outcomes were completed by participants from 12 large-scale initiatives. Sustainment strategies fell into nine categories, including infrastructure, training, marketing, integration and building partnerships. Strategies involving integration of PCIT into existing practices and quality monitoring predicted sustainment, while financing also emerged as a key factor. The reported factors and strategies impacting sustainability varied across initiatives; however, integration into existing practices, monitoring quality and financing appear central to high levels of sustainability of PCIT in community-based systems. More detailed examination of the progression of specific activities related to these strategies may aid in identifying priorities to include in strategic planning of future large-scale initiatives. ClinicalTrials.gov ID NCT02543359; Protocol number PRO12060529.

  2. Geospatial Optimization of Siting Large-Scale Solar Projects

    Energy Technology Data Exchange (ETDEWEB)

    Macknick, Jordan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Quinby, Ted [National Renewable Energy Lab. (NREL), Golden, CO (United States); Caulfield, Emmet [Stanford Univ., CA (United States); Gerritsen, Margot [Stanford Univ., CA (United States); Diffendorfer, Jay [U.S. Geological Survey, Boulder, CO (United States); Haines, Seth [U.S. Geological Survey, Boulder, CO (United States)

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.

  3. Software engineering the mixed model for genome-wide association studies on large samples.

    Science.gov (United States)

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
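
    For reference, the single-marker mixed model reviewed here is commonly written as follows (generic notation, not tied to any particular software package): y are phenotypes, X the fixed effects including the tested marker, Z the incidence matrix for random polygenic effects u with kinship matrix K, and e the residual. Variance components are typically estimated (e.g., by REML) before the marker effect in beta is tested.

```latex
\begin{align}
  y &= X\beta + Zu + e, \\
  u &\sim \mathcal{N}\!\left(0,\, K\sigma_g^2\right), \qquad
  e \sim \mathcal{N}\!\left(0,\, I\sigma_e^2\right), \\
  \operatorname{Var}(y) &= Z K Z^{\top}\sigma_g^2 + I\sigma_e^2 .
\end{align}
```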

  4. Ruling by canal: Governance and system-level design characteristics of large scale irrigation infrastructure in India and Uzbekistan

    NARCIS (Netherlands)

    Mollinga, P.; Veldwisch, G.J.A.

    2016-01-01

    This paper explores the relationship between governance regime and large-scale irrigation system design by investigating three cases: 1) protective irrigation design in post-independent South India; 2) canal irrigation system design in Khorezm Province, Uzbekistan, as implemented in the USSR period,

  5. Sustainability of Open-Source Software Organizations as Underpinning for Sustainable Interoperability on Large Scales

    Science.gov (United States)

    Fulker, D. W.; Gallagher, J. H. R.

    2015-12-01

    OPeNDAP's Hyrax data server is an open-source framework fostering interoperability via easily-deployed Web services. Compatible with solutions listed in the (PA001) session description—federation, rigid standards and brokering/mediation—the framework can support tight or loose coupling, even with dependence on community-contributed software. Hyrax is a Web-services framework with a middleware-like design and a handler-style architecture that together reduce the interoperability challenge (for N datatypes and M user contexts) to an O(N+M) problem, similar to brokering. Combined with an open-source ethos, this reduction makes Hyrax a community tool for gaining interoperability. E.g., in its response to the Big Earth Data Initiative (BEDI), NASA references OPeNDAP-based interoperability. Assuming its suitability, the question becomes: how sustainable is OPeNDAP, a small not-for-profit that produces open-source software, i.e., has no software-sales? In other words, if geoscience interoperability depends on OPeNDAP and similar organizations, are those entities in turn sustainable? Jim Collins (in Good to Great) highlights three questions that successful companies can answer (paraphrased here): What is your passion? Where is your world-class excellence? What drives your economic engine? We attempt to shed light on OPeNDAP sustainability by examining these. Passion: OPeNDAP has a focused passion for improving the effectiveness of scientific data sharing and use, as deeply-cooperative community endeavors. Excellence: OPeNDAP has few peers in remote, scientific data access. Skills include computer science with experience in data science, (operational, secure) Web services, and software design (for servers and clients, where the latter vary from Web pages to standalone apps and end-user programs). Economic Engine: OPeNDAP is an engineering services organization more than a product company, despite software being key to OPeNDAP's reputation. In essence, provision of

  6. The Ragnarok Software Development Environment

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    1999-01-01

    Ragnarok is an experimental software development environment that focuses on enhanced support for managerial activities in large scale software development taking the daily work of the software developer as its point of departure. The main emphasis is support in three areas: management, navigation, and collaboration. The leitmotif is the software architecture, which is extended to handle managerial data in addition to source code; this extended software architecture is put under tight version- and configuration management control and furthermore used as basis for visualisation. Preliminary results of using...

  7. Software for nuclear data acquisition systems

    International Nuclear Information System (INIS)

    Christensen, P.

    1983-01-01

    The situation for experimenters and system designers needing software for instrumentation is described. It is stated that software for a data acquisition system can be divided into programmes described as the foundation software, the applications programme, and the analysis programme. Special attention is given to CAMAC. Two examples from Risoe describing data transportation and archiving are given. Finally the supply of software and the problems of documentation are described. (author)

  8. A large-scale RF-based Indoor Localization System Using Low-complexity Gaussian filter and improved Bayesian inference

    Directory of Open Access Journals (Sweden)

    L. Xiao

    2013-04-01

    Full Text Available The growing convergence between mobile computing devices and smart sensors boosts the development of ubiquitous computing and smart spaces, where localization is an essential part of realizing this vision. General localization methods based on GPS and cellular techniques are not suitable for tracking numerous small, power-limited objects indoors. In this paper, we propose and demonstrate a new localization method: an easy-to-set-up and cost-effective indoor localization system based on off-the-shelf active RFID technology. Our system is not only compatible with future smart spaces and ubiquitous computing systems, but also suitable for large-scale indoor localization. The use of a low-complexity Gaussian Filter (GF), a Wheel Graph Model (WGM) and a Probabilistic Localization Algorithm (PLA) makes the proposed algorithm robust against uncertainty, self-adaptive to varying indoor environments, and suitable for large-scale indoor positioning. Using MATLAB simulation, we study the system performance, especially its dependence on a number of system and environment parameters, and their statistical properties. The simulation results show that our proposed system is an accurate and cost-effective candidate for indoor localization.
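
    The filtering-plus-ranging idea can be sketched as follows: smooth raw RSSI readings with a Gaussian-weighted moving average and map the result to distance with a log-distance path-loss model. The constants and readings below are typical illustrative values, not parameters from the paper, and the paper's Wheel Graph Model and probabilistic localization stages are omitted.

```python
# Illustrative sketch: smooth raw RSSI readings with a Gaussian-weighted moving
# average, then map RSSI to distance with a log-distance path-loss model.
import numpy as np

def gaussian_filter(rssi, sigma=2.0, radius=4):
    """Low-complexity Gaussian smoothing implemented as a weighted moving average."""
    offsets = np.arange(-radius, radius + 1)
    weights = np.exp(-offsets**2 / (2 * sigma**2))
    weights /= weights.sum()
    return np.convolve(rssi, weights, mode="same")

def rssi_to_distance(rssi_dbm, rssi_at_1m=-45.0, path_loss_exponent=2.5):
    """Log-distance path-loss inversion; constants are typical indoor values."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exponent))

raw = np.array([-60, -62, -58, -75, -61, -59, -63, -60, -57, -62], dtype=float)
smoothed = gaussian_filter(raw)
print([round(rssi_to_distance(r), 2) for r in smoothed])
```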

  9. Managing Change in Software Process Improvement

    DEFF Research Database (Denmark)

    Mathiassen, Lars; Ngwenyama, Ojelanki K.; Aaen, Ivan

    2005-01-01

    When software managers initiate SPI, most are ill prepared for the scale and complexity of the organizational change involved. Although they typically know how to deal with large software projects, few managers have sufficient experience with projects that transform organizations. To succeed with...

  10. Organizational Influences on Interdisciplinary Interactions during Research and Design of Large-Scale Complex Engineered Systems

    Science.gov (United States)

    McGowan, Anna-Maria R.; Seifert, Colleen M.; Papalambros, Panos Y.

    2012-01-01

    The design of large-scale complex engineered systems (LaCES) such as an aircraft is inherently interdisciplinary. Multiple engineering disciplines, drawing from a team of hundreds to thousands of engineers and scientists, are woven together throughout the research, development, and systems engineering processes to realize one system. Though research and development (R&D) is typically focused in single disciplines, the interdependencies involved in LaCES require interdisciplinary R&D efforts. This study investigates the interdisciplinary interactions that take place during the R&D and early conceptual design phases in the design of LaCES. Our theoretical framework is informed by both engineering practices and social science research on complex organizations. This paper provides a preliminary perspective on some of the organizational influences on interdisciplinary interactions based on organization theory (specifically sensemaking), data from a survey of LaCES experts, and the authors' experience in research and design. The analysis reveals couplings between the engineered system and the organization that creates it. Survey respondents noted the importance of interdisciplinary interactions and their significant benefit to the engineered system, such as innovation and problem mitigation. Substantial obstacles to interdisciplinarity beyond engineering are uncovered, including communication and organizational challenges. Addressing these challenges may ultimately foster greater efficiencies in the design and development of LaCES and improved system performance by assisting with the collective integration of interdependent knowledge bases early in the R&D effort. This research suggests that organizational and human dynamics heavily influence and even constrain the engineering effort for large-scale complex systems.

  11. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

    ]; Peach et al., 1998; DeSante et al., 2001 are generally co-ordinated by ringing centres such as those that make up the membership of EURING. In some countries volunteer census work (often called Breeding Bird Surveys) is undertaken by the same organizations while in others different bodies may co-ordinate this aspect of the work. This session was concerned with the analysis of such extensive data sets and the approaches that are being developed to address the key theoretical and applied issues outlined above. The papers reflect the development of more spatially explicit approaches to analyses of data gathered at large spatial scales. They show that while the statistical tools that have been developed in recent years can be used to derive useful biological conclusions from such data, there is additional need for further developments. Future work should also consider how to best implement such analytical developments within future study designs. In his plenary paper Andy Royle (Royle, 2004) addresses this theme directly by describing a general framework for modelling spatially replicated abundance data. The approach is based on the idea that a set of spatially referenced local populations constitutes a metapopulation, within which local abundance is determined as a random process. This provides an elegant and general approach in which the metapopulation model as described above is combined with a data-generating model specific to the type of data being analysed to define a simple hierarchical model that can be analysed using conventional methods. It should be noted, however, that further software development will be needed if the approach is to be made readily available to biologists. The approach is well suited to dealing with sparse data and avoids the need for data aggregation prior to analysis. Spatial synchrony has received most attention in studies of species whose populations show cyclic fluctuations, particularly certain game birds and small mammals. However
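
    The hierarchical formulation for spatially replicated counts described above is, in its simplest form, the N-mixture model; it is reproduced here only as a reference for the modelling idea (standard notation: latent local abundance N_i at site i, counts y_ij on visit j, detection probability p).

```latex
\begin{align}
  N_i &\sim \operatorname{Poisson}(\lambda), \\
  y_{ij} \mid N_i &\sim \operatorname{Binomial}(N_i,\, p), \\
  L(\lambda, p) &= \prod_i \sum_{N_i = \max_j y_{ij}}^{\infty}
      \left[ \prod_j \binom{N_i}{y_{ij}} p^{y_{ij}} (1-p)^{N_i - y_{ij}} \right]
      \frac{\lambda^{N_i} e^{-\lambda}}{N_i!} .
\end{align}
```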

  12. Software architecture and engineering for patient records: current and future.

    Science.gov (United States)

    Weng, Chunhua; Levine, Betty A; Mun, Seong K

    2009-05-01

    During the "The National Forum on the Future of the Defense Health Information System," a track focusing on "Systems Architecture and Software Engineering" included eight presenters. These presenters identified three key areas of interest in this field, which include the need for open enterprise architecture and a federated database design, net centrality based on service-oriented architecture, and the need for focus on software usability and reusability. The eight panelists provided recommendations related to the suitability of service-oriented architecture and the enabling technologies of grid computing and Web 2.0 for building health services research centers and federated data warehouses to facilitate large-scale collaborative health care and research. Finally, they discussed the need to leverage industry best practices for software engineering to facilitate rapid software development, testing, and deployment.

  13. Large-scale Flow and Transport of Magnetic Flux in the Solar ...

    Indian Academy of Sciences (India)

    tribpo

    Abstract. Horizontal large-scale velocity field describes horizontal displacement of the photospheric magnetic flux in zonal and meridian directions. The flow systems of solar plasma, constructed according to the velocity field, create the large-scale cellular-like patterns with up-flow in the center and the down-flow on the ...

  14. Software Intensive Systems

    National Research Council Canada - National Science Library

    Horvitz, E; Katz, D. J; Rumpf, R. L; Shrobe, H; Smith, T. B; Webber, G. E; Williamson, W. E; Winston, P. H; Wolbarsht, James L

    2006-01-01

    .... Additionally, recommend that DoN invest in software engineering, particularly as it complements commercial industry developments and promotes the application of systems engineering methodology...

  15. First Mile Challenges for Large-Scale IoT

    KAUST Repository

    Bader, Ahmed

    2017-03-16

    The Internet of Things is large-scale by nature. This is not only manifested by the large number of connected devices, but also by the sheer scale of spatial traffic intensity that must be accommodated, primarily in the uplink direction. To that end, cellular networks are indeed a strong first mile candidate to accommodate the data tsunami to be generated by the IoT. However, IoT devices are required in the cellular paradigm to undergo random access procedures as a precursor to resource allocation. Such procedures impose a major bottleneck that hinders cellular networks' ability to support large-scale IoT. In this article, we shed light on the random access dilemma and present a case study based on experimental data as well as system-level simulations. Accordingly, a case is built for the latent need to revisit random access procedures. A call for action is motivated by listing a few potential remedies and recommendations.

  16. LES SOFTWARE FOR THE DESIGN OF LOW EMISSION COMBUSTION SYSTEMS FOR VISION 21 PLANTS

    Energy Technology Data Exchange (ETDEWEB)

    Clifford E. Smith; Steven M. Cannon; Virgil Adumitroaie; David L. Black; Karl V. Meredith

    2005-01-01

    In this project, an advanced computational software tool was developed for the design of low emission combustion systems required for Vision 21 clean energy plants. Vision 21 combustion systems, such as combustors for gas turbines, combustors for indirect fired cycles, furnaces and sequestration-ready combustion systems, will require innovative low emission designs and low development costs if Vision 21 goals are to be realized. The simulation tool will greatly reduce the number of experimental tests; this is especially desirable for gas turbine combustor design since high-pressure testing is extremely costly. In addition, the software will stimulate new ideas, will provide the capability of assessing and adapting low-emission combustors to alternate fuels, and will greatly reduce the development time cycle of combustion systems. The revolutionary combustion simulation software is able to accurately simulate the highly transient nature of gaseous-fueled (e.g. natural gas, low BTU syngas, hydrogen, biogas etc.) turbulent combustion and assess innovative concepts needed for Vision 21 plants. In addition, the software is capable of analyzing liquid-fueled combustion systems since that capability was developed under a concurrent Air Force Small Business Innovative Research (SBIR) program. The complex physics of the reacting flow field are captured using 3D Large Eddy Simulation (LES) methods, in which large scale transient motion is resolved by time-accurate numerics, while the small scale motion is modeled using advanced subgrid turbulence and chemistry closures. In this way, LES combustion simulations can model many physical aspects that, until now, were impossible to predict with 3D steady-state Reynolds Averaged Navier-Stokes (RANS) analysis, i.e. very low NOx emissions, combustion instability (coupling of unsteady heat and acoustics), lean blowout, flashback, autoignition, etc. LES methods are becoming more and more practical by linking together tens
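
    For reference, the LES decomposition the abstract refers to can be written schematically as follows (constant-density, incompressible form shown for brevity; the software itself handles reacting, variable-density flow). Filtering splits the velocity into resolved and subgrid parts, and filtering the momentum equation leaves an unclosed subgrid-scale stress that the turbulence and chemistry closures model.

```latex
\begin{align}
  u_i &= \bar{u}_i + u_i', \\
  \frac{\partial \bar{u}_i}{\partial t}
    + \frac{\partial}{\partial x_j}\!\left(\bar{u}_i \bar{u}_j\right)
  &= -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
    + \nu \frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j}
    - \frac{\partial \tau_{ij}}{\partial x_j}, \\
  \tau_{ij} &= \overline{u_i u_j} - \bar{u}_i \bar{u}_j .
\end{align}
```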

  17. Software Design Methods for Real-Time Systems

    Science.gov (United States)

    1989-12-01

    This module describes the concepts and methods used in the software design of real time systems . It outlines the characteristics of real time systems , describes...the role of software design in real time system development, surveys and compares some software design methods for real - time systems , and

  18. Research on unit commitment with large-scale wind power connected power system

    Science.gov (United States)

    Jiao, Ran; Zhang, Baoqun; Chi, Zhongjun; Gong, Cheng; Ma, Longfei; Yang, Bing

    2017-01-01

    Large-scale integration of wind power generators into the power grid brings severe challenges to power system economic dispatch due to its stochastic volatility. Unit commitment including wind farms is analyzed in terms of two aspects: modeling and solution methods. The structures and characteristics of existing formulations are summarized after classifying them according to their different objective functions and constraints. Finally, the issues still to be solved and possible directions of future research and development are discussed, so as to adapt to the requirements of the electricity market, energy-saving power generation dispatching and the smart grid, and to provide a reference for the research and practice of researchers and workers in this field.

  19. Hierarchical system for autonomous sensing-healing of delamination in large-scale composite structures

    International Nuclear Information System (INIS)

    Minakuchi, Shu; Sun, Denghao; Takeda, Nobuo

    2014-01-01

    This study combines our hierarchical fiber-optic-based delamination detection system with a microvascular self-healing material to develop the first autonomous sensing-healing system applicable to large-scale composite structures. In this combined system, embedded vascular modules are connected through check valves to a surface-mounted supply tube of a pressurized healing agent while fiber-optic-based sensors monitor the internal pressure of these vascular modules. When delamination occurs, the healing agent flows into the vascular modules breached by the delamination and infiltrates the damage for healing. At the same time, the pressure sensors identify the damaged modules by detecting internal pressure changes. This paper begins by describing the basic concept of the combined system and by discussing the advantages that arise from its hierarchical nature. The feasibility of the system is then confirmed through delamination infiltration tests. Finally, the hierarchical system is validated in a plate specimen by focusing on the detection and infiltration of the damage. Its self-diagnostic function is also demonstrated. (paper)

  20. Big Software for Big Data: Scaling Up Photometry for LSST (Abstract)

    Science.gov (United States)

    Rawls, M.

    2017-06-01

    (Abstract only) The Large Synoptic Survey Telescope (LSST) will capture mosaics of the sky every few nights, each containing more data than your computer's hard drive can store. As a result, the software to process these images is as critical to the science as the telescope and the camera. I discuss the algorithms and software being developed by the LSST Data Management team to handle such a large volume of data. All of our work is open source and available to the community. Once LSST comes online, our software will produce catalogs of objects and a stream of alerts. These will bring exciting new opportunities for follow-up observations and collaborations with LSST scientists.

  1. A Software Development Platform for Mechatronic Systems

    DEFF Research Database (Denmark)

    Guan, Wei

    Software has become increasingly determinative for development of mechatronic systems, which underscores the importance of demands for shortened time-to-market, increased productivity, higher quality, and improved dependability. As the complexity of systems is dramatically increasing, these demands present a challenge to practitioners who adopt a conventional software development approach. An effective approach towards industrial production of software for mechatronic systems is needed. This approach requires a disciplined engineering process that encompasses model-driven engineering and component-based software engineering, whereby we enable incremental software development using component models to address the essential design issues of real-time embedded systems. To this end, this dissertation presents a software development platform that provides an incremental model-driven development process based...

  2. Active self-testing noise measurement sensors for large-scale environmental sensor networks.

    Science.gov (United States)

    Domínguez, Federico; Cuong, Nguyen The; Reinoso, Felipe; Touhafi, Abdellah; Steenhaut, Kris

    2013-12-13

    Large-scale noise pollution sensor networks consist of hundreds of spatially distributed microphones that measure environmental noise. These networks provide historical and real-time environmental data to citizens and decision makers and are therefore a key technology to steer environmental policy. However, the high cost of certified environmental microphone sensors render large-scale environmental networks prohibitively expensive. Several environmental network projects have started using off-the-shelf low-cost microphone sensors to reduce their costs, but these sensors have higher failure rates and produce lower quality data. To offset this disadvantage, we developed a low-cost noise sensor that actively checks its condition and indirectly the integrity of the data it produces. The main design concept is to embed a 13 mm speaker in the noise sensor casing and, by regularly scheduling a frequency sweep, estimate the evolution of the microphone's frequency response over time. This paper presents our noise sensor's hardware and software design together with the results of a test deployment in a large-scale environmental network in Belgium. Our middle-range-value sensor (around €50) effectively detected all experienced malfunctions, in laboratory tests and outdoor deployments, with a few false positives. Future improvements could further lower the cost of our sensor below €10.
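
    The self-test concept can be sketched numerically: drive the embedded speaker with a swept sine, record it, and estimate the microphone's frequency response as the ratio of recorded to reference spectra. The synthetic signals and values below are placeholders, not the sensor's firmware.

```python
# Illustrative sketch of the self-test idea: excite the microphone with a
# frequency sweep and estimate its response as the ratio of recorded to
# reference spectra. Synthetic signals stand in for a real recording.
import numpy as np

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
# Linear chirp from 100 Hz to 6 kHz over 1 s as the reference sweep.
sweep = np.sin(2 * np.pi * (100 * t + (6000 - 100) / 2 * t**2))
# Pretend the microphone attenuates the signal and adds a little noise.
recorded = 0.7 * sweep + 0.01 * np.random.default_rng(0).normal(size=t.size)

ref_spec = np.abs(np.fft.rfft(sweep))
rec_spec = np.abs(np.fft.rfft(recorded))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > 100) & (freqs < 6000)
response_db = 20 * np.log10(rec_spec[band] / ref_spec[band])
print("mean in-band response: %.1f dB" % response_db.mean())
```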

  3. A fast and optimized dynamic economic load dispatch for large scale power systems

    International Nuclear Information System (INIS)

    Musse Mohamud Ahmed; Mohd Ruddin Ab Ghani; Ismail Hassan

    2000-01-01

    This paper presents a Lagrangian Multipliers (LM) and Linear Programming (LP) based dynamic economic load dispatch (DELD) solution for large-scale power system operations. The aim is to minimize the operating cost of power generation units subject to the considered constraints. After individual generator units are economically loaded and periodically dispatched, fast and optimized DELD is achieved. DELD over period intervals has been taken into consideration. The results obtained from the algorithm based on LM and LP techniques appear modest in both optimizing the operating cost and achieving fast computation. (author)
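
    The Lagrangian-multiplier building block of such a dispatch can be illustrated with the classic lambda-iteration for quadratic-cost units, sketched below. The unit data are invented, and the dynamic (multi-period) coupling and LP refinements of the paper are not reproduced.

```python
# Illustrative lambda-iteration economic dispatch for quadratic-cost units:
# cost_i(P) = a_i + b_i*P + c_i*P^2, so optimality requires
# dC_i/dP_i = b_i + 2*c_i*P_i = lambda for every unit not at a limit.
units = [  # (b, c, Pmin, Pmax) in $/MWh, $/MW^2h, MW, MW -- made-up data
    (7.0, 0.008, 100, 600),
    (6.3, 0.009, 100, 400),
    (6.8, 0.007, 50, 300),
]
demand = 850.0  # MW

def dispatch(lmbda):
    """Total generation when every unit follows the incremental-cost rule."""
    total = 0.0
    for b, c, pmin, pmax in units:
        p = (lmbda - b) / (2 * c)
        total += min(max(p, pmin), pmax)  # respect unit limits
    return total

lo, hi = 0.0, 100.0
for _ in range(60):  # bisect on lambda until generation matches demand
    mid = (lo + hi) / 2
    if dispatch(mid) < demand:
        lo = mid
    else:
        hi = mid
lmbda = (lo + hi) / 2
print("lambda = %.3f $/MWh, total = %.1f MW" % (lmbda, dispatch(lmbda)))
```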

  4. Emergent Semantics Interoperability in Large-Scale Decentralized Information Systems

    CERN Document Server

    Cudré-Mauroux, Philippe

    2008-01-01

    Peer-to-peer systems are evolving with new information-system architectures, leading to the idea that the principles of decentralization and self-organization will offer new approaches in informatics, especially for systems that scale with the number of users or for which central authorities do not prevail. This book describes a new way of building global agreements (semantic interoperability) based only on decentralized, self-organizing interactions.

  5. Test software for BESIII MDC electronics system

    International Nuclear Information System (INIS)

    Zhang Hongyu; Sheng Huayi; Zhu Haitao; Ji Xiaolu; Zhao Dongxu

    2006-01-01

    This paper presents the design of Test System Software for BESIII MDC Electronics. Two kinds of test systems, SBS VP7 based and PowerPC based systems, and their corresponding test software are introduced. The software is developed in LabVIEW 7.1 and Microsoft Visual C++ 6.0, some test functions of the software, as well as their user interfaces, are described in detail. The software has been applied in hardware debugging, performance test and long term stability test. (authors)

  6. Modeling Relief Demands in an Emergency Supply Chain System under Large-Scale Disasters Based on a Queuing Network

    Science.gov (United States)

    He, Xinhua

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in large-scale affected area of disasters. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered in several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources in different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model. PMID:24688367

  7. Modeling Relief Demands in an Emergency Supply Chain System under Large-Scale Disasters Based on a Queuing Network

    Directory of Open Access Journals (Sweden)

    Xinhua He

    2014-01-01

    Full Text Available This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in large-scale affected area of disasters. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered in several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources in different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model.

  8. Modeling relief demands in an emergency supply chain system under large-scale disasters based on a queuing network.

    Science.gov (United States)

    He, Xinhua; Hu, Wenfa

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in large-scale affected area of disasters. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered in several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources in different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model.
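
    One queuing building block such a model rests on is the expected waiting time at a multi-server depot, which can be computed with the Erlang C formula as sketched below; a search procedure such as the paper's genetic algorithm would then optimize allocations against this kind of response-time measure. The arrival and service rates are invented.

```python
# Illustrative M/M/c building block for a rescue depot: expected waiting time
# via the Erlang C formula. The allocation search itself is omitted here.
from math import factorial

def erlang_c_wait(arrival_rate, service_rate, servers):
    """Mean queue wait W_q for an M/M/c queue (requires utilization < 1)."""
    a = arrival_rate / service_rate          # offered load in Erlangs
    rho = a / servers                        # per-server utilization
    if rho >= 1:
        return float("inf")
    blocked_term = a**servers / (factorial(servers) * (1 - rho))
    p_wait = blocked_term / (
        sum(a**k / factorial(k) for k in range(servers)) + blocked_term)
    return p_wait / (servers * service_rate - arrival_rate)

# Example: 12 requests/hour, service rate 4/hour per team, 4 teams on duty.
print("expected wait (hours): %.3f" % erlang_c_wait(12, 4, 4))
```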

  9. ALFA: The new ALICE-FAIR software framework

    Science.gov (United States)

    Al-Turany, M.; Buncic, P.; Hristov, P.; Kollegger, T.; Kouzinopoulos, C.; Lebedev, A.; Lindenstruth, V.; Manafov, A.; Richter, M.; Rybalchenko, A.; Vande Vyvre, P.; Winckler, N.

    2015-12-01

    The commonalities between the ALICE and FAIR experiments and their computing requirements led to the development of large parts of a common software framework in an experiment-independent way. The FairRoot project has already shown the feasibility of such an approach for the FAIR experiments and for extending it beyond FAIR to experiments at other facilities [1, 2]. The ALFA framework is a joint development between the ALICE Online-Offline (O2) and FairRoot teams. ALFA is designed as a flexible, elastic system, which balances reliability and ease of development with performance using multi-processing and multithreading. A message-based approach has been adopted; such an approach will support the use of the software on different hardware platforms, including heterogeneous systems. Each process in ALFA assumes limited communication with and reliance on other processes. Such a design will add horizontal scaling (multiple processes) to the vertical scaling provided by multiple threads to meet computing and throughput demands. ALFA does not dictate any application protocols. Potentially, any content-based processor or any source can change the application protocol. The framework supports different serialization standards for data exchange between different hardware and software languages.
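
    The message-based, multi-process style described above can be sketched with the Python standard library: independent processes exchanging self-describing, serialized messages over a queue. This illustrates the pattern only; ALFA's actual transport, serialization and deployment layers are not reproduced here.

```python
# Illustrative sketch of a message-based, multi-process pipeline: independent
# processes exchange serialized messages over a queue, with the application
# free to choose its own message protocol.
import json
from multiprocessing import Process, Queue

def producer(queue):
    # A data source pushes self-describing messages downstream.
    for event_id in range(3):
        queue.put(json.dumps({"event": event_id, "payload": [1, 2, 3]}))
    queue.put(None)  # end-of-stream marker

def processor(queue):
    while True:
        msg = queue.get()
        if msg is None:
            break
        event = json.loads(msg)
        print("processed event", event["event"])

if __name__ == "__main__":
    q = Queue()
    workers = [Process(target=producer, args=(q,)),
               Process(target=processor, args=(q,))]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```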

  10. Evolutionary leap in large-scale flood risk assessment needed

    OpenAIRE

    Vorogushyn, Sergiy; Bates, Paul D.; de Bruijn, Karin; Castellarin, Attilio; Kreibich, Heidi; Priest, Sally J.; Schröter, Kai; Bagli, Stefano; Blöschl, Günter; Domeneghetti, Alessio; Gouldby, Ben; Klijn, Frans; Lammersen, Rita; Neal, Jeffrey C.; Ridder, Nina

    2018-01-01

    Current approaches for assessing large-scale flood risks contravene the fundamental principles of the flood risk system functioning because they largely ignore basic interactions and feedbacks between atmosphere, catchments, river-floodplain systems and socio-economic processes. As a consequence, risk analyses are uncertain and might be biased. However, reliable risk estimates are required for prioritizing national investments in flood risk mitigation or for appraisal and management of insura...

  11. Analysis of Large Genomic Data in Silico: The EPICNorfolk Study of Obesity

    DEFF Research Database (Denmark)

    Zhao, Jing Hua; Luan, Jian'an; Tan, Qihua

    In human genetics, large-scale data are now available with advances in genotyping technologies and international collaborative projects. Our ongoing study of obesity involves Affymetrix 500k genechips on approximately 7000 individuals from the European Prospective Investigation of Cancer (EPIC) Norfolk study. Although the scale of our data is well beyond the ability of many software systems, we have successfully performed the analysis using the statistical analysis system (SAS) software. Our implementation trades memory for computing time and requires a moderate hardware configuration. By using...

  12. The research of the test-class method based on interface object in the software integration test of the large container inspection system

    International Nuclear Information System (INIS)

    Sun Shaohua; Chen Zhiqiang; Zhang Li; Gao Wenhuan; Kang Kejun

    2000-01-01

    Software testing is an important stage in the software process. There is mature theory, method and model for unit testing in practice, but for integration testing there is no regular method to adhere to. The author presents a new method, developed during the development of the large container inspection system, named the test-class method based on interface objects. In this method, a set of basic test-classes based on the concept of a class in the object-oriented method is established, and the combination of the interface graph and the class set is used to describe the test process, so that strict control and scientific management of the test process are achieved. The concept of a test database is introduced in this method, thus improving the traceability and repeatability of the test process.

  13. The research of the test-class method based on interface object in the software integration test of the large container inspection system

    International Nuclear Information System (INIS)

    Sun Shaohua; Chen Zhiqiang; Zhang Li; Gao Wenhuan; Kang Kejun

    2001-01-01

    Software testing is an important stage in the software process. There is mature theory, method and model for unit testing in practice, but for integration testing there is no regular method to adhere to. The author presents a new method, developed during the development of the large container inspection system, named the test-class method based on interface object. A set of basic test-classes based on the concept of class in the object-oriented method is established, and the method of combining the interface graph and the class set is used to describe the test process. So strict control and scientific management of the test process are achieved. The concept of a test database is introduced in this method, thus the traceability and the repeatability of the test process are improved
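
    A minimal sketch of a reusable test-class bound to an interface object is given below (assumptions: the interface, its methods and the in-memory test database are invented for illustration; they are not part of the container inspection system).

```python
# Hypothetical sketch of a test-class bound to an interface object between two
# subsystems under integration. Interface and method names are invented.
import unittest

class AcquisitionInterface:
    """Stand-in interface object: a hand-off queue between two subsystems."""
    def __init__(self):
        self.frames = []
    def push_frame(self, frame):
        self.frames.append(frame)
    def pop_frame(self):
        return self.frames.pop(0) if self.frames else None

class InterfaceTestCase(unittest.TestCase):
    """Basic test-class: exercises one interface object and logs to a test database."""
    test_db = []                      # simple stand-in for the test database

    def setUp(self):
        self.iface = AcquisitionInterface()

    def test_frame_round_trip(self):
        self.iface.push_frame({"id": 1})
        self.assertEqual(self.iface.pop_frame(), {"id": 1})
        self.test_db.append(("test_frame_round_trip", "pass"))

    def test_empty_interface(self):
        self.assertIsNone(self.iface.pop_frame())
        self.test_db.append(("test_empty_interface", "pass"))

if __name__ == "__main__":
    unittest.main()
```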

  14. OffshoreDC DC grids for integration of large scale wind power

    DEFF Research Database (Denmark)

    Zeni, Lorenzo; Endegnanew, Atsede Gualu; Stamatiou, Georgios

    The present report summarizes the main findings of the Nordic Energy Research project “DC grids for large scale integration of offshore wind power – OffshoreDC”. The project was funded by Nordic Energy Research through the TFI programme and was active between 2011 and 2016. The overall...... objective of the project was to drive the development of the VSC-based HVDC technology for future large scale offshore grids, supporting a standardised and commercial development of the technology, and improving the opportunities for the technology to support power system integration of large scale offshore...

  15. Design study on sodium cooled large-scale reactor

    International Nuclear Information System (INIS)

    Murakami, Tsutomu; Hishida, Masahiko; Kisohara, Naoyuki

    2004-07-01

    In Phase 1 of the 'Feasibility Studies on Commercialized Fast Reactor Cycle Systems (F/S)', an advanced loop type reactor was selected as a promising concept of a sodium-cooled large-scale reactor that has the possibility to fulfill the design requirements of the F/S. In Phase 2, design improvement for further cost reduction and establishment of the plant concept has been performed. This report summarizes the results of the design study on the sodium-cooled large-scale reactor performed in JFY2003, which is the third year of Phase 2. In the JFY2003 design study, critical subjects related to safety, structural integrity and thermal hydraulics which were found in the last fiscal year have been examined and the plant concept has been modified. Furthermore, fundamental specifications of the main systems and components have been set and the economy has been evaluated. In addition, as the interim evaluation of the candidate concept of the FBR fuel cycle is to be conducted, cost effectiveness and achievability of the development goal were evaluated and the data for the three large-scale reactor candidate concepts were prepared. As a result of this study, the plant concept of the sodium-cooled large-scale reactor has been constructed, which has a prospect of satisfying the economic goal (construction cost: less than 200,000 yen/kWe, etc.) and of solving the critical subjects. From now on, reflecting the results of elemental experiments, the preliminary conceptual design of this plant will proceed toward the selection for narrowing down candidate concepts at the end of Phase 2. (author)

  16. Design study on sodium-cooled large-scale reactor

    International Nuclear Information System (INIS)

    Shimakawa, Yoshio; Nibe, Nobuaki; Hori, Toru

    2002-05-01

    In Phase 1 of the 'Feasibility Study on Commercialized Fast Reactor Cycle Systems (F/S)', an advanced loop type reactor was selected as a promising concept of a sodium-cooled large-scale reactor that has the possibility to fulfill the design requirements of the F/S. In Phase 2 of the F/S, it is planned to proceed with a preliminary conceptual design of a sodium-cooled large-scale reactor based on the design of the advanced loop type reactor. Through the design study, it is intended to construct a plant concept that can demonstrate its attractiveness and competitiveness as a commercialized reactor. This report summarizes the results of the design study on the sodium-cooled large-scale reactor performed in JFY2001, which is the first year of Phase 2. In the JFY2001 design study, a plant concept has been constructed based on the design of the advanced loop type reactor, and fundamental specifications of the main systems and components have been set. Furthermore, critical subjects related to safety, structural integrity, thermal hydraulics, operability, maintainability and economy have been examined and evaluated. As a result of this study, the plant concept of the sodium-cooled large-scale reactor has been constructed, which has a prospect of satisfying the economic goal (construction cost: less than 200,000 yen/kWe, etc.) and of solving the critical subjects. From now on, reflecting the results of elemental experiments, the preliminary conceptual design of this plant will proceed toward the selection for narrowing down candidate concepts at the end of Phase 2. (author)

  17. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    Directory of Open Access Journals (Sweden)

    A. F. Van Loon

    2012-11-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well large-scale models simulate the propagation from meteorological to hydrological drought. To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that participated in the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought).

    Drought characteristics simulated by large-scale models clearly reflected drought propagation; i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation, in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an

  18. Nuclear-pumped lasers for large-scale applications

    International Nuclear Information System (INIS)

    Anderson, R.E.; Leonard, E.M.; Shea, R.F.; Berggren, R.R.

    1989-05-01

    Efficient initiation of large-volume chemical lasers may be achieved by neutron induced reactions which produce charged particles in the final state. When a burst mode nuclear reactor is used as the neutron source, both a sufficiently intense neutron flux and a sufficiently short initiation pulse may be possible. Proof-of-principle experiments are planned to demonstrate lasing in a direct nuclear-pumped large-volume system; to study the effects of various neutron absorbing materials on laser performance; to study the effects of long initiation pulse lengths; to demonstrate the performance of large-scale optics and the beam quality that may be obtained; and to assess the performance of alternative designs of burst systems that increase the neutron output and burst repetition rate. 21 refs., 8 figs., 5 tabs

  19. Buffer provisioning for large-scale data-acquisition systems

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Froening, Holger; Vandelli, Wainer

    2018-01-01

    The data acquisition system of the ATLAS experiment, a major experiment of the Large Hadron Collider (LHC) at CERN, will go through a major upgrade in the next decade. The upgrade is driven by experimental physics requirements, calling for increased data rates on the order of 6 TB/s. By contrast, the data rate of the existing system is 160 GB/s. Among the changes in the upgraded system will be a very large buffer with a projected size on the order of 70 PB. The buffer's role will be to decouple data production from online data processing, storing data for periods of up to 24 hours until it can be analyzed by the event processing system. The larger buffer will allow a new data recording strategy, providing additional margins to handle variable data rates. At the same time it will provide sensible trade-offs between buffering space and online processing capabilities. This compromise between two resources will be possible since the data production cycle includes time periods where the experiment will not produ...
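
    The decoupling role of the buffer can be illustrated with a back-of-the-envelope occupancy sketch (assumptions: the duty cycle, the processing rate and the toy on/off pattern below are invented; only the 6 TB/s input figure comes from the abstract).

```python
# Back-of-the-envelope sketch of buffer occupancy under a variable input rate.
# Rates are in TB/s; the duty cycle and processing rate are assumptions only.
def simulate(hours=24, in_rate=6.0, proc_rate=2.5, duty_cycle=0.5, dt=60):
    """in_rate applies only while data taking is on (a toy on/off pattern)."""
    occupancy, peak = 0.0, 0.0
    for step in range(int(hours * 3600 / dt)):
        taking_data = (step * dt / 3600.0) % 2 < 2 * duty_cycle
        inflow = in_rate * dt if taking_data else 0.0
        outflow = proc_rate * dt
        occupancy = max(0.0, occupancy + inflow - outflow)   # buffer absorbs the excess
        peak = max(peak, occupancy)
    return peak

print(f"peak buffer occupancy ~ {simulate() / 1000:.1f} PB")
```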

  20. Scaling laws for perturbations in the ocean-atmosphere system following large CO2 emissions

    Science.gov (United States)

    Towles, N.; Olson, P.; Gnanadesikan, A.

    2015-07-01

    Scaling relationships are found for perturbations to atmosphere and ocean variables from large transient CO2 emissions. Using the Long-term Ocean-atmosphere-Sediment CArbon cycle Reservoir (LOSCAR) model (Zeebe et al., 2009; Zeebe, 2012b), we calculate perturbations to atmosphere temperature, total carbon, ocean temperature, total ocean carbon, pH, alkalinity, marine-sediment carbon, and carbon-13 isotope anomalies in the ocean and atmosphere resulting from idealized CO2 emission events. The peak perturbations in the atmosphere and ocean variables are then fit to power-law functions of the form γ·D^α·E^β, where D is the event duration, E is its total carbon emission, and γ is a coefficient. Good power-law fits are obtained for most system variables for E up to 50 000 PgC and D up to 100 kyr. Although all of the peak perturbations increase with emission rate E/D, we find no evidence of emission-rate-only scaling, α + β = 0. Instead, our scaling yields α + β ≃ 1 for total ocean and atmosphere carbon and 0 < α + β < 1 for most of the other system variables.
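
    A fit of the stated form can be reproduced in outline by ordinary least squares in log space (a generic sketch on synthetic data; the parameter ranges and the "true" exponents below are invented, not LOSCAR output).

```python
# Generic sketch of fitting peak perturbations to gamma * D**alpha * E**beta.
# Synthetic data; real values would come from emission-scenario model runs.
import numpy as np

rng = np.random.default_rng(0)
D = rng.uniform(1e2, 1e5, 200)        # event duration (yr), assumed range
E = rng.uniform(1e2, 5e4, 200)        # total emission (PgC), assumed range
peak = 0.3 * D**-0.2 * E**0.9 * rng.lognormal(0.0, 0.05, 200)   # fake "model output"

# log-linear least squares: log(peak) = log(gamma) + alpha*log(D) + beta*log(E)
X = np.column_stack([np.ones_like(D), np.log(D), np.log(E)])
coef, *_ = np.linalg.lstsq(X, np.log(peak), rcond=None)
log_gamma, alpha, beta = coef
print(f"gamma={np.exp(log_gamma):.3f}  alpha={alpha:.3f}  beta={beta:.3f}  "
      f"alpha+beta={alpha + beta:.3f}")
```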

  1. Scaling laws for perturbations in the ocean–atmosphere system following large CO2 emissions

    Directory of Open Access Journals (Sweden)

    N. Towles

    2015-07-01

    Scaling relationships are found for perturbations to atmosphere and ocean variables from large transient CO2 emissions. Using the Long-term Ocean-atmosphere-Sediment CArbon cycle Reservoir (LOSCAR) model (Zeebe et al., 2009; Zeebe, 2012b), we calculate perturbations to atmosphere temperature, total carbon, ocean temperature, total ocean carbon, pH, alkalinity, marine-sediment carbon, and carbon-13 isotope anomalies in the ocean and atmosphere resulting from idealized CO2 emission events. The peak perturbations in the atmosphere and ocean variables are then fit to power-law functions of the form γ·D^α·E^β, where D is the event duration, E is its total carbon emission, and γ is a coefficient. Good power-law fits are obtained for most system variables for E up to 50 000 PgC and D up to 100 kyr. Although all of the peak perturbations increase with emission rate E/D, we find no evidence of emission-rate-only scaling, α + β = 0. Instead, our scaling yields α + β ≃ 1 for total ocean and atmosphere carbon and 0 < α + β < 1 for most of the other system variables.

  2. Phylogenetic distribution of large-scale genome patchiness

    Directory of Open Access Journals (Sweden)

    Hackenberg Michael

    2008-04-01

    Background: The phylogenetic distribution of large-scale genome structure (i.e. mosaic compositional patchiness) has been explored mainly by analytical ultracentrifugation of bulk DNA. However, with the availability of large, good-quality chromosome sequences, and the recently developed computational methods to directly analyze patchiness on the genome sequence, an evolutionary comparative analysis can be carried out at the sequence level. Results: The local variations in the scaling exponent of the Detrended Fluctuation Analysis are used here to analyze large-scale genome structure and directly uncover the characteristic scales present in genome sequences. Furthermore, through shuffling experiments of selected genome regions, computationally identified, isochore-like regions were identified as the biological source for the uncovered large-scale genome structure. The phylogenetic distribution of short- and large-scale patchiness was determined in the best-sequenced genome assemblies from eleven eukaryotic genomes: mammals (Homo sapiens, Pan troglodytes, Mus musculus, Rattus norvegicus, and Canis familiaris), birds (Gallus gallus), fishes (Danio rerio), invertebrates (Drosophila melanogaster and Caenorhabditis elegans), plants (Arabidopsis thaliana) and yeasts (Saccharomyces cerevisiae). We found large-scale patchiness of genome structure, associated with in silico determined, isochore-like regions, throughout this wide phylogenetic range. Conclusion: Large-scale genome structure is detected by directly analyzing DNA sequences in a wide range of eukaryotic chromosome sequences, from human to yeast. In all these genomes, large-scale patchiness can be associated with the isochore-like regions, as directly detected in silico at the sequence level.
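
    Detrended Fluctuation Analysis of a one-dimensional numeric profile (for example a GC-content track along a chromosome) can be sketched as follows; this is a generic DFA implementation on synthetic data, not the paper's local scaling-exponent method.

```python
# Generic Detrended Fluctuation Analysis (DFA) sketch for a 1-D numeric profile.
# Synthetic white noise is used for illustration (expected exponent ~0.5).
import numpy as np

def dfa_exponent(x, scales):
    y = np.cumsum(x - np.mean(x))                # integrated profile
    flucts = []
    for s in scales:
        rms = []
        for w in range(len(y) // s):
            seg = y[w * s:(w + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrending
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    # scaling exponent = slope of log F(s) versus log s
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

signal = np.random.default_rng(1).normal(size=100_000)
print("alpha ~", round(dfa_exponent(signal, [16, 32, 64, 128, 256, 512]), 2))
```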

  3. Nearly incompressible fluids: Hydrodynamics and large scale inhomogeneity

    International Nuclear Information System (INIS)

    Hunana, P.; Zank, G. P.; Shaikh, D.

    2006-01-01

    A system of hydrodynamic equations in the presence of large-scale inhomogeneities for a high plasma beta solar wind is derived. The theory is derived under the assumption of low turbulent Mach number and is developed for the flows where the usual incompressible description is not satisfactory and a full compressible treatment is too complex for any analytical studies. When the effects of compressibility are incorporated only weakly, a new description, referred to as 'nearly incompressible hydrodynamics', is obtained. The nearly incompressible theory, was originally applied to homogeneous flows. However, large-scale gradients in density, pressure, temperature, etc., are typical in the solar wind and it was unclear how inhomogeneities would affect the usual incompressible and nearly incompressible descriptions. In the homogeneous case, the lowest order expansion of the fully compressible equations leads to the usual incompressible equations, followed at higher orders by the nearly incompressible equations, as introduced by Zank and Matthaeus. With this work we show that the inclusion of large-scale inhomogeneities (in this case time-independent and radially symmetric background solar wind) modifies the leading-order incompressible description of solar wind flow. We find, for example, that the divergence of velocity fluctuations is nonsolenoidal and that density fluctuations can be described to leading order as a passive scalar. Locally (for small lengthscales), this system of equations converges to the usual incompressible equations and we therefore use the term 'locally incompressible' to describe the equations. This term should be distinguished from the term 'nearly incompressible', which is reserved for higher-order corrections. Furthermore, we find that density fluctuations scale with Mach number linearly, in contrast to the original homogeneous nearly incompressible theory, in which density fluctuations scale with the square of Mach number. Inhomogeneous nearly

  4. A bridge role metric model for nodes in software networks.

    Directory of Open Access Journals (Sweden)

    Bo Li

    A bridge role metric model is put forward in this paper. Compared with previous metric models, our treatment of a large-scale object-oriented software system as a complex network is inherently more realistic. To acquire nodes and links in an undirected network, a new model is presented that captures the crucial connectivity of a module or hub, instead of only centrality as in previous metric models. Two previous metric models are described for comparison. In addition, the fitting curve between the Bre results and node degrees is well fitted by a power law. The model represents many realistic characteristics of actual software structures, and a hydropower simulation system is taken as an example. This paper makes additional contributions to an accurate understanding of the module design of software systems and is expected to be beneficial to software engineering practices.
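
    The bridge-role idea can be approximated with standard graph metrics (a sketch using betweenness centrality and articulation points as proxies on an invented module graph; this is not the paper's Bre metric, whose definition is not reproduced here).

```python
# Sketch: a module dependency structure treated as an undirected software network,
# with nodes ranked by bridge-like roles. Proxy metrics only, not the paper's Bre.
import networkx as nx

edges = [("ui", "core"), ("core", "db"), ("core", "net"),
         ("net", "proto"), ("db", "cache"), ("ui", "widgets")]   # hypothetical modules
G = nx.Graph(edges)

betweenness = nx.betweenness_centrality(G)      # how often a node sits on shortest paths
cut_nodes = set(nx.articulation_points(G))      # removing these disconnects the network

for node in sorted(G, key=betweenness.get, reverse=True):
    flag = "bridge-like" if node in cut_nodes else ""
    print(f"{node:8s} betweenness={betweenness[node]:.2f} {flag}")
```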

  5. Assessment of the technology required to develop photovoltaic power system for large scale national energy applications

    Science.gov (United States)

    Lutwack, R.

    1974-01-01

    A technical assessment of a program to develop photovoltaic power system technology for large-scale national energy applications was made by analyzing and judging the alternative candidate photovoltaic systems and development tasks. A program plan was constructed based on achieving the 10 year objective of a program to establish the practicability of large-scale terrestrial power installations using photovoltaic conversion arrays costing less than $0.50/peak W. Guidelines for the tasks of a 5 year program were derived from a set of 5 year objectives deduced from the 10 year objective. This report indicates the need for an early emphasis on the development of the single-crystal Si photovoltaic system for commercial utilization; a production goal of 5 × 10^8 peak W/year of $0.50 cells was projected for the year 1985. The developments of other photovoltaic conversion systems were assigned to longer range development roles. The status of the technology developments and the applicability of solar arrays in particular power installations, ranging from houses to central power plants, was scheduled to be verified in a series of demonstration projects. The budget recommended for the first 5 year phase of the program is $268.5M.

  6. Techniques to maximize software reliability in radiation fields

    International Nuclear Information System (INIS)

    Eichhorn, G.; Piercey, R.B.

    1986-01-01

    Microprocessor system failures due to memory corruption by single event upsets (SEUs) and/or latch-up in RAM or ROM memory are common in environments where there is high radiation flux. Traditional methods to harden microcomputer systems against SEUs and memory latch-up have usually involved expensive large-scale hardware redundancy. Such systems offer higher reliability, but they tend to be more complex and non-standard. At the Space Astronomy Laboratory the authors have developed general programming techniques for producing software which is resistant to such memory failures. These techniques, which may be applied to standard off-the-shelf hardware, as well as custom designs, include an implementation of the Maximally Redundant Software (MRS) model, error detection algorithms and memory verification and management
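
    The general flavour of software-level protection against memory corruption can be conveyed with a sketch of triple-copy storage with majority voting and scrubbing on read (an illustration of the idea only, not the MRS model as implemented at the Space Astronomy Laboratory).

```python
# Illustration of software-level redundancy: store three copies of each value and
# majority-vote on read, rewriting any corrupted copy. Not the actual MRS design.
class RedundantCell:
    def __init__(self, value):
        self.copies = [value, value, value]

    def read(self):
        a, b, c = self.copies
        # majority vote; if all three disagree there is no safe value to return
        if a == b or a == c:
            voted = a
        elif b == c:
            voted = b
        else:
            raise RuntimeError("uncorrectable corruption")
        self.copies = [voted, voted, voted]       # scrub: repair the corrupted copy
        return voted

cell = RedundantCell(42)
cell.copies[1] = 999          # simulate a single-event upset in one copy
assert cell.read() == 42      # the vote masks the upset and repairs the copy
```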

  7. Statistical reliability assessment of software-based systems

    International Nuclear Information System (INIS)

    Korhonen, J.; Pulkkinen, U.; Haapanen, P.

    1997-01-01

    Plant vendors nowadays propose software-based systems even for the most critical safety functions. The reliability estimation of safety critical software-based systems is difficult since the conventional modeling techniques do not necessarily apply to the analysis of these systems, and the quantification seems to be impossible. Due to lack of operational experience and due to the nature of software faults, the conventional reliability estimation methods cannot be applied. New methods are therefore needed for the safety assessment of software-based systems. In the research project Programmable automation systems in nuclear power plants (OHA), financed jointly by the Finnish Centre for Radiation and Nuclear Safety (STUK), the Ministry of Trade and Industry and the Technical Research Centre of Finland (VTT), various safety assessment methods and tools for software-based systems are developed and evaluated. This volume in the OHA report series deals with the statistical reliability assessment of software-based systems on the basis of dynamic test results and qualitative evidence from the system design process. Other reports to be published later in the OHA report series will handle the diversity requirements in safety critical software-based systems, generation of test data from operational profiles and handling of programmable automation in plant PSA-studies. (orig.) (25 refs.)
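
    One standard ingredient of such statistical assessments is bounding the probability of failure per demand from n failure-free dynamic tests. A minimal Bayesian sketch is shown below (the uniform Beta(1, 1) prior is an assumption made for illustration, not necessarily the choice made in the OHA project).

```python
# Sketch: upper confidence bound on the probability of failure per demand after
# n failure-free tests, using a Beta(1, 1) (uniform) prior -- an assumption here.
from scipy.stats import beta

def failure_prob_upper_bound(n_tests, n_failures=0, confidence=0.95):
    # posterior is Beta(1 + failures, 1 + successes)
    return beta.ppf(confidence, 1 + n_failures, 1 + n_tests - n_failures)

for n in (100, 1000, 10000):
    print(f"{n:6d} failure-free tests -> p_fail < {failure_prob_upper_bound(n):.2e} (95%)")
```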

  8. On Modeling Large-Scale Multi-Agent Systems with Parallel, Sequential and Genuinely Asynchronous Cellular Automata

    International Nuclear Information System (INIS)

    Tosic, P.T.

    2011-01-01

    We study certain types of Cellular Automata (CA) viewed as an abstraction of large-scale Multi-Agent Systems (MAS). We argue that the classical CA model needs to be modified in several important respects, in order to become a relevant and sufficiently general model for the large-scale MAS, so that the generalized model can capture many important MAS properties at the level of agent ensembles and their long-term collective behavior patterns. We specifically focus on the issue of inter-agent communication in CA, and propose sequential cellular automata (SCA) as the first step, and genuinely Asynchronous Cellular Automata (ACA) as the ultimate deterministic CA-based abstract models for large-scale MAS made of simple reactive agents. We first formulate deterministic and nondeterministic versions of sequential CA, and then summarize some interesting configuration space properties (i.e., possible behaviors) of a restricted class of sequential CA. In particular, we compare and contrast those properties of sequential CA with the corresponding properties of the classical (that is, parallel and perfectly synchronous) CA with the same restricted class of update rules. We analytically demonstrate failure of the studied sequential CA models to simulate all possible behaviors of perfectly synchronous parallel CA, even for a very restricted class of non-linear totalistic node update rules. The lesson learned is that the interleaving semantics of concurrency, when applied to sequential CA, is not refined enough to adequately capture the perfect synchrony of parallel CA updates. Last but not least, we outline what would be an appropriate CA-like abstraction for large-scale distributed computing insofar as the inter-agent communication model is concerned, and in that context we propose genuinely asynchronous CA. (author)
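
    The contrast between parallel (perfectly synchronous) and sequential update semantics can be made concrete with a short sketch on a one-dimensional totalistic rule (the rule and lattice size are chosen arbitrarily for illustration).

```python
# Sketch: the same totalistic rule updated synchronously vs. sequentially (fixed
# left-to-right order). The rule is arbitrary; the point is that the update
# semantics alone changes the reachable configurations.
import numpy as np

def rule(total):                      # totalistic update: depends only on the neighbourhood sum
    return 1 if total in (1, 2) else 0

def step_synchronous(state):
    new = state.copy()
    for i in range(len(state)):
        total = state[i - 1] + state[i] + state[(i + 1) % len(state)]
        new[i] = rule(total)
    return new

def step_sequential(state):
    state = state.copy()
    for i in range(len(state)):       # later nodes already see earlier updates
        total = state[i - 1] + state[i] + state[(i + 1) % len(state)]
        state[i] = rule(total)
    return state

init = np.zeros(16, dtype=int)
init[8] = 1
print("sync:", step_synchronous(init))
print("seq :", step_sequential(init))
```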

  9. Large scale structure and baryogenesis

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    2001-08-01

    We discuss a possible connection between large scale structure formation and baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale of 120 h^-1 Mpc in the distribution of the visible matter of the universe is provided. The possibility to generate a periodic distribution with the characteristic scale 120 h^-1 Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate formed at the inflationary stage. Moreover, for some model parameters a natural separation of matter superclusters from antimatter ones can be achieved. (author)

  10. Systems of Systems: Scaling Up the Development Process

    National Research Council Canada - National Science Library

    Humphrey, Watts

    2006-01-01

    ... of massive systems into system-of-systems structures. Section 3 points out how large-scale systems development efforts have typically failed because of project-management rather than technical problems...

  11. Computer systems and software engineering

    Science.gov (United States)

    Mckay, Charles W.

    1988-01-01

    The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.

  12. Computer software program for monitoring the availability of systems and components of electric power generating systems

    International Nuclear Information System (INIS)

    Petersen, T.A.; Hilsmeier, T.A.; Kapinus, D.M.

    1994-01-01

    As the availability of electric power generating station systems and components becomes more and more important from a financial, personnel safety, and regulatory requirements standpoint, it is evident that a comprehensive, yet simple and user-friendly program for system and component tracking and monitoring is needed to assist in effectively managing the large volume of systems and components with their large numbers of associated maintenance/availability records. A user-friendly computer software program for system and component availability monitoring has been developed that calculates, displays and monitors selected component and system availabilities. This is a Windows™-based graphical user interface (GUI) program that utilizes a system flow diagram for the data input screen, which also provides a visual representation of availability values and limits for the individual components and associated systems. This program can be customized to the user's plant-specific system and component selections and configurations. As will be discussed herein, this software program is well suited for availability monitoring and ultimately provides valuable information for improving plant performance and reducing operating costs
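
    The underlying availability calculation can be sketched as follows (assumptions: the outage-record layout, the series-system roll-up and the 0.95 alert limit are invented for illustration and are not taken from the described program).

```python
# Sketch: component availability from outage records and a simple series-system
# roll-up. Record layout and the 0.95 alert limit are assumptions for illustration.
from datetime import datetime

def availability(outages, period_start, period_end):
    """outages: list of (start, end) datetimes within the monitoring period."""
    period = (period_end - period_start).total_seconds()
    down = sum((end - start).total_seconds() for start, end in outages)
    return 1.0 - down / period

start, end = datetime(2024, 1, 1), datetime(2024, 12, 31)
pump_a = availability([(datetime(2024, 3, 1), datetime(2024, 3, 4))], start, end)
valve_b = availability([(datetime(2024, 7, 10), datetime(2024, 7, 11))], start, end)

system = pump_a * valve_b            # series configuration: all components required
for name, a in [("pump A", pump_a), ("valve B", valve_b), ("system", system)]:
    status = "OK" if a >= 0.95 else "ALERT"
    print(f"{name:8s} availability = {a:.4f}  {status}")
```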

  13. Automating software design system DESTA

    Science.gov (United States)

    Lovitsky, Vladimir A.; Pearce, Patricia D.

    1992-01-01

    'DESTA' is the acronym for the Dialogue Evolutionary Synthesizer of Turnkey Algorithms by means of a natural language (Russian or English) functional specification of algorithms or software being developed. DESTA represents the computer-aided and/or automatic artificial intelligence 'forgiving' system which provides users with software tools support for algorithm and/or structured program development. The DESTA system is intended to provide support for the higher levels and earlier stages of engineering design of software in contrast to conventional Computer Aided Design (CAD) systems which provide low level tools for use at a stage when the major planning and structuring decisions have already been taken. DESTA is a knowledge-intensive system. The main features of the knowledge are procedures, functions, modules, operating system commands, batch files, their natural language specifications, and their interlinks. The specific domain for the DESTA system is a high level programming language like Turbo Pascal 6.0. The DESTA system is operational and runs on an IBM PC computer.

  14. Collisionless magnetic reconnection in large-scale electron-positron plasmas

    International Nuclear Information System (INIS)

    Daughton, William; Karimabadi, Homa

    2007-01-01

    One of the most fundamental questions in reconnection physics is how the dynamical evolution will scale to macroscopic systems of physical relevance. This issue is examined for electron-positron plasmas using two-dimensional fully kinetic simulations with both open and periodic boundary conditions. The resulting evolution is complex and highly dynamic throughout the entire duration. The initial phase is distinguished by the coalescence of tearing islands to larger scale while the later phase is marked by the expansion of diffusion regions into elongated current layers that are intrinsically unstable to plasmoid generation. It appears that the repeated formation and ejection of plasmoids plays a key role in controlling the average structure of a diffusion region and preventing the further elongation of the layer. The reconnection rate is modulated in time as the current layers expand and new plasmoids are formed. Although the specific details of this evolution are affected by the boundary and initial conditions, the time averaged reconnection rate remains fast and is remarkably insensitive to the system size for sufficiently large systems. This dynamic scenario offers an alternative explanation for fast reconnection in large-scale systems

  15. A high-speed transmission method for large-scale marine seismic prospecting systems

    International Nuclear Information System (INIS)

    KeZhu, Song; Ping, Cao; JunFeng, Yang; FuMing, Ruan

    2012-01-01

    A marine seismic prospecting system is a kind of data acquisition and transmission system with large-scale coverage and synchronous multi-node acquisition. In this kind of system, data transmission is a fundamental and difficult technique. In this paper, a high-speed data-transmission method is proposed, its implications and limitations are discussed, and conclusions are drawn. The method we propose has obvious advantages over traditional techniques with respect to long-distance operation, high speed, and real-time transmission. A marine seismic system with four streamers, each 6000 m long and capable of supporting up to 1920 channels, was designed and built based on this method. The effective transmission baud rate of this system was found to reach up to 240 Mbps, while the minimum sampling interval time was as short as 0.25 ms. This system was found to achieve a good synchronization: 83 ns. Laboratory and in situ experiments showed that this marine-prospecting system could work correctly and robustly, which verifies the feasibility and validity of the method proposed in this paper. In addition to the marine seismic applications, this method can also be used in land seismic applications and certain other transmission applications such as environmental or engineering monitoring systems. (paper)

  16. A high-speed transmission method for large-scale marine seismic prospecting systems

    Science.gov (United States)

    KeZhu, Song; Ping, Cao; JunFeng, Yang; FuMing, Ruan

    2012-12-01

    A marine seismic prospecting system is a kind of data acquisition and transmission system with large-scale coverage and synchronous multi-node acquisition. In this kind of system, data transmission is a fundamental and difficult technique. In this paper, a high-speed data-transmission method is proposed, its implications and limitations are discussed, and conclusions are drawn. The method we propose has obvious advantages over traditional techniques with respect to long-distance operation, high speed, and real-time transmission. A marine seismic system with four streamers, each 6000 m long and capable of supporting up to 1920 channels, was designed and built based on this method. The effective transmission baud rate of this system was found to reach up to 240 Mbps, while the minimum sampling interval time was as short as 0.25 ms. This system was found to achieve a good synchronization: 83 ns. Laboratory and in situ experiments showed that this marine-prospecting system could work correctly and robustly, which verifies the feasibility and validity of the method proposed in this paper. In addition to the marine seismic applications, this method can also be used in land seismic applications and certain other transmission applications such as environmental or engineering monitoring systems.

  17. Requirements engineering for software and systems

    CERN Document Server

    Laplante, Phillip A

    2014-01-01

    Solid requirements engineering has increasingly been recognized as the key to improved, on-time and on-budget delivery of software and systems projects. This book provides practical teaching for graduate and professional systems and software engineers. It uses extensive case studies and exercises to help students grasp concepts and techniques. With a focus on software-intensive systems, this text provides a probing and comprehensive review of recent developments in intelligent systems, soft computing techniques, and their diverse applications in manufacturing. The second edition contains 100% revised content and approximately 30% new material

  18. Scaling-Up the Functional Diagnostic Systems

    International Nuclear Information System (INIS)

    Mohamed, A.H.

    2008-01-01

    Functional diagnostic systems have received a lot of attention in the last decade. They have proven powerful for diagnosing new faults in complex systems, but they still suffer from complexity in both representation and reasoning for large-scale systems. This paper introduces a new functional diagnostic system that can divide its small functions into main and auxiliary ones. This enables the diagnostic system to scale up the representation of the tested system and simplify the tasks of the diagnostic mechanism. Thus, it can reduce both representation and reasoning complexity, and it can decrease the required analysis, cost, and time. The proposed system can be applied to a wide range of large-scale systems, and its practical applicability has been demonstrated for complex real-time systems

  19. Virtual Exercise Training Software System

    Science.gov (United States)

    Vu, L.; Kim, H.; Benson, E.; Amonette, W. E.; Barrera, J.; Perera, J.; Rajulu, S.; Hanson, A.

    2018-01-01

    The purpose of this study was to develop and evaluate a virtual exercise training software system (VETSS) capable of providing real-time instruction and exercise feedback during exploration missions. A resistive exercise instructional system was developed using a Microsoft Kinect depth-camera device, which provides markerless 3-D whole-body motion capture at a small form factor and minimal setup effort. It was hypothesized that subjects using the newly developed instructional software tool would perform the deadlift exercise with more optimal kinematics and consistent technique than those without the instructional software. Following a comprehensive evaluation in the laboratory, the system was deployed for testing and refinement in the NASA Extreme Environment Mission Operations (NEEMO) analog.

  20. DEVELOPMENT AND ADAPTATION OF VORTEX REALIZABLE MEASUREMENT SYSTEM FOR BENCHMARK TEST WITH LARGE SCALE MODEL OF NUCLEAR REACTOR

    Directory of Open Access Journals (Sweden)

    S. M. Dmitriev

    2017-01-01

    The last decades of development of applied calculation methods for nuclear reactor thermal and hydraulic processes have been marked by the rapid growth of High Performance Computing (HPC), which contributes to the active introduction of Computational Fluid Dynamics (CFD). The use of such programs to justify technical and economic parameters, and especially the safety of nuclear reactors, requires comprehensive verification of mathematical models and CFD programs. The aim of the work was the development and adaptation of a measuring system having the characteristics necessary for its application on a verification test (experimental) facility. Its main objective is to study the processes of mixing of coolant flows with different physical properties (for example, the concentration of dissolved impurities) inside a large-scale reactor model. The basic method used for registration of the spatial concentration field in the mixing area is the method of spatial conductometry. In the course of the work, a measurement complex, including spatial conductometric sensors, a system of secondary converters and software, was created. Methods of calibration and normalization of measurement results were developed. Averaged concentration fields and non-stationary realizations of the measured local conductivity were obtained during the first experimental series, and spectral and statistical analyses of the realizations were carried out. The acquired data were compared with pretest CFD calculations performed with the ANSYS CFX program. A joint analysis of the obtained results made it possible to identify the main regularities of the process under study and to demonstrate the capability of the designed measuring system to provide the experimental data of the «CFD quality» required for verification. The adaptation of the spatial sensors allows a more extensive program of experimental tests to be conducted, on the basis of which a databank and the necessary generalizations will be created

  1. Scaling of Health Information Systems in Nigeria and Ethiopia

    DEFF Research Database (Denmark)

    Mengiste, Shegaw Anagaw; Shaw, Vincent; Braa, Jørn

    2007-01-01

    Systems Programme in Nigeria and Ethiopia, the interdependencies between three spheres are identified as being important in scaling health information systems. The three spheres that are explored are the volume of data collected, human resource factors and access to technology. We draw on concepts from...... the balance. Three flexible standards are identified as being critical strategies to global health information scaling initiatives, namely an essential data set, a scalable process of information systems collection and collation consisting of gateways between paper based systems and hardware and software...

  2. Hybrid Software and System Development in Practice: Waterfall, Scrum, and Beyond

    OpenAIRE

    Kuhrmann, Marco; Diebold, Philipp; Münch, Jürgen; Tell, Paolo; Garousi, Vahid; Felderer, Michael; Trektere, Kitija; McCaffery, Fergal; Linssen, Oliver; Hanser, Eckhart; Prause, Christian

    2017-01-01

    Software and system development faces numerous challenges of rapidly changing markets. To address such challenges, companies and projects design and adopt specific development approaches by combining well-structured comprehensive methods and flexible agile practices. Yet, the number of methods and practices is large, and available studies argue that the actual process composition is carried out in a fairly ad-hoc manner. The present paper reports on a survey on hybrid software development app...

  3. Large-Scale Cooperative Task Distribution on Peer-to-Peer Networks

    Science.gov (United States)

    2012-01-01

    ...disadvantages of ML-Chord are its fixed size (two layers) and limited scalability for large-scale systems. RC-Chord extends ML-Chord ... configurable before runtime. This can be improved by incorporating a distributed learning algorithm to tune the number and range of the DLoE tracking

  4. A Methodology for Integrating Maintainability Using Software Metrics

    OpenAIRE

    Lewis, John A.; Henry, Sallie M.

    1989-01-01

    Maintainability must be integrated into software early in the development process. But for practical use, the techniques used must be as unobtrusive to the existing software development process as possible. This paper defines a methodology for integrating maintainability into large-scale software and describes an experiment which implemented the methodology into a major commercial software development environment.

  5. State of the Art in Large-Scale Soil Moisture Monitoring

    Science.gov (United States)

    Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.

    2013-01-01

    Soil moisture is an essential climate variable influencing land atmosphere interactions, an essential hydrologic variable impacting rainfall runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.

  6. Development of the simulation package 'ELSES' for extra-large-scale electronic structure calculation

    International Nuclear Information System (INIS)

    Hoshi, T; Fujiwara, T

    2009-01-01

    An early-stage version of the simulation package 'ELSES' (extra-large-scale electronic structure calculation) is developed for simulating the electronic structure and dynamics of large systems, particularly nanometer-scale and ten-nanometer-scale systems (see www.elses.jp). Input and output files are written in the extensible markup language (XML) style for general users. Related pre-/post-simulation tools are also available. A practical workflow and an example are described. A test calculation for the GaAs bulk system is shown, to demonstrate that the present code can handle systems with more than one atom species. Several future aspects are also discussed.

  7. Certification of digital system software

    International Nuclear Information System (INIS)

    Waclo, J.; Cook, B.; Adomaitis, D.

    1991-01-01

    The successful application of digital systems to nuclear protection functions is not achieved through happenstance. At Westinghouse there has been a longstanding program to utilize state-of-the-art digital technology for protection system advancement, thereby gaining the advantages of increased system reliability, performance, ease of operation and reduced maintenance costs. This paper describes the Westinghouse background and experience in the safety system software development process, including Verification and Validation, and its application to protection system qualification and its successful use for licensing the Eagle 21 Digital Process Protection System Upgrade. In addition, the lessons learned from this experience are discussed from the perspective of improving the development process by applying feedback from the measurements made on the process and the software product quality. The goal of this process optimization is to produce the highest possible software quality while recognizing the real-world constraints of available resources, project schedule and the regulatory policies that are customary in the nuclear industry

  8. Origin of the large scale structures of the universe

    International Nuclear Information System (INIS)

    Oaknin, David H.

    2004-01-01

    We revise the statistical properties of the primordial cosmological density anisotropies that, at the time of matter-radiation equality, seeded the gravitational development of large scale structures in the otherwise homogeneous and isotropic Friedmann-Robertson-Walker flat universe. Our analysis shows that random fluctuations of the density field at the same instant of equality and with comoving wavelength shorter than the causal horizon at that time can naturally account, when globally constrained to conserve the total mass (energy) of the system, for the observed scale invariance of the anisotropies over cosmologically large comoving volumes. Statistical systems with similar features are generically known as glasslike or latticelike. Obviously, these conclusions conflict with the widely accepted understanding of the primordial structures reported in the literature, which requires an epoch of inflationary cosmology to precede the standard expansion of the universe. The origin of the conflict must be found in the widespread, but unjustified, claim that scale invariant mass (energy) anisotropies at the instant of equality over comoving volumes of cosmological size, larger than the causal horizon at the time, must be generated by fluctuations in the density field with comparably large comoving wavelength

  9. A study of software safety analysis system for safety-critical software

    International Nuclear Information System (INIS)

    Chang, H. S.; Shin, H. K.; Chang, Y. W.; Jung, J. C.; Kim, J. H.; Han, H. H.; Son, H. S.

    2004-01-01

    The core factors and requirements traced for the safety-critical software and the methodology adopted in each stage of the software life cycle are presented. In the concept phase, a Failure Modes and Effects Analysis (FMEA) of the system has been performed. The feasibility evaluation of the selected safety parameter was performed and a Preliminary Hazards Analysis list was prepared using the HAZOP (Hazard and Operability) technique. A checklist for management control has been produced via a walk-through technique. Based on the evaluation of the checklist, activities to be performed in the requirement phase have been determined. In the design phase, hazard analysis has been performed to check the safety capability of the system with regard to the safety software algorithm using Fault Tree Analysis (FTA). In the test phase, the test items based on FMEA have been checked for fitness, guided by an accident scenario. The pressurizer low pressure trip algorithm has been selected as a sample for applying the FTA method to software safety analysis. By applying a CASE tool, the requirements traceability of the safety-critical system has been enhanced throughout all software life cycle phases
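
    The FTA step can be illustrated with a tiny fault-tree evaluator (a generic sketch assuming independent basic events; the gate structure and probabilities are invented and are not those of the pressurizer low-pressure trip analysis).

```python
# Generic fault-tree sketch: top-event probability from AND/OR gates over basic
# events, assuming independence. Structure and numbers are invented for illustration.
def or_gate(*p):                        # at least one input fails
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def and_gate(*p):                       # all inputs must fail
    q = 1.0
    for pi in p:
        q *= pi
    return q

channel_fail  = 1e-3                               # single pressure channel (assumed)
sensors_fail  = and_gate(channel_fail, channel_fail)   # both redundant channels fail
logic_fail    = 1e-4                               # trip logic failure (assumed)
software_flaw = 1e-5                               # residual software fault (assumed)

top_event = or_gate(sensors_fail, logic_fail, software_flaw)
print(f"P(failure to trip on low pressure) ~ {top_event:.2e}")
```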

  10. Modularity and Architecture of PLC-based Software for Automated Production Systems: An analysis in industrial companies

    OpenAIRE

    B. Vogel-Heuser, J. Fischer, S. Feldmann, S. Ulewicz, S. Rösch

    2018-01-01

    Adaptive and flexible production systems require modular and reusable software especially considering their long-term life cycle of up to 50 years. SWMAT4aPS, an approach to measure Software Maturity for automated Production Systems is introduced. The approach identifies weaknesses and strengths of various companies’ solutions for modularity of software in the design of automated Production Systems (aPS). At first, a self-assessed questionnaire is used to evaluate a large number of companies ...

  11. Large-Scale Outflows in Seyfert Galaxies

    Science.gov (United States)

    Colbert, E. J. M.; Baum, S. A.

    1995-12-01

    Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (>~1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in >~1/4 of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble radio halos found in some edge-on starburst galaxies (e.g. M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g. a jet) or a circumnuclear starburst.

  12. Event-triggered decentralized robust model predictive control for constrained large-scale interconnected systems

    Directory of Open Access Journals (Sweden)

    Ling Lu

    2016-12-01

    This paper considers the problem of event-triggered decentralized model predictive control (MPC) for constrained large-scale linear systems subject to additive bounded disturbances. The constraint tightening method is utilized to formulate the MPC optimization problem. The local predictive control law for each subsystem is determined aperiodically by a relevant triggering rule, which allows a considerable reduction of the computational load. Then, the robust feasibility and closed-loop stability are proved and it is shown that every subsystem state will be driven into a robust invariant set. Finally, the effectiveness of the proposed approach is illustrated via numerical simulations.

  13. Large-scale impact cratering on the terrestrial planets

    International Nuclear Information System (INIS)

    Grieve, R.A.F.

    1982-01-01

    The crater densities on the earth and moon form the basis for a standard flux-time curve that can be used in dating unsampled planetary surfaces and constraining the temporal history of endogenic geologic processes. Abundant evidence is seen not only that impact cratering was an important surface process in planetary history but also that large impact events produced effects that were crustal in scale. By way of example, it is noted that the formation of multiring basins on the early moon was as important in defining the planetary tectonic framework as plate tectonics is on the earth. Evidence from several planets suggests that the effects of very-large-scale impacts go beyond the simple formation of an impact structure and serve to localize increased endogenic activity over an extended period of geologic time. Even though they no longer occur with the frequency and magnitude of early solar system history, large-scale impact events continue to affect the local geology of the planets. 92 references

  14. ANEMOS: Development of a next generation wind power forecasting system for the large-scale integration of onshore and offshore wind farms.

    Science.gov (United States)

    Kariniotakis, G.; Anemos Team

    2003-04-01

    offshore wind farms taking into account advances in marine meteorology (interaction between wind and waves, coastal effects). The benefits from the use of satellite radar images for modeling local weather patterns are investigated. A next-generation forecasting software, ANEMOS, will be developed to integrate the various models. The tool is enhanced by advanced Information and Communication Technology (ICT) functionality and can operate in stand-alone or remote mode, or be interfaced with standard Energy or Distribution Management Systems (EMS/DMS). Contribution: The project provides an advanced technology for wind resource forecasting applicable on a large scale: at the single wind farm, regional or national level, and for both interconnected and island systems. A major milestone is the on-line operation of the developed software by the participating utilities for onshore and offshore wind farms and the demonstration of the economic benefits. The outcome of the ANEMOS project will support increased wind integration on two levels: at the operational level, through better management of wind farms, and by contributing to an increase in the installed capacity of wind farms. This is because accurate prediction of the resource reduces the risk for wind farm developers, who are then more willing to undertake new wind farm installations, especially in a liberalized electricity market environment.

  15. A Common Software-Configuration Management System for CERN SPS and LEP Accelerators and Technical Services

    CERN Document Server

    Hatziangeli, Eugenia; Bragg, A E; Ninin, P; Patino, J; Sobczak, H

    2000-01-01

    Software-configuration management activities are crucial to ensure the integrity of current operational software and the quality of new software either being developed at CERN or outsourced. The functionality of the present management system became insufficient with large maintenance overheads. In order to improve our situation, a new software-configuration management system has been set up. It is based on Razor R, a commercial tool, which supports the management of file versions and operational software releases, along with integrated problem-reporting capabilities. In addition to the basic tool functionality, automated procedures were custom-made for the installation and distribution of operational software. The system ensures that, at all times, the status and location of all deliverable versions are known, the state of shared objects is carefully controlled and unauthorized changes prevented. This paper outlines the reasons for selecting the chosen tool, the implementation of the system and the final goal...

  16. Developing Large-Scale Bayesian Networks by Composition: Fault Diagnosis of Electrical Power Systems in Aircraft and Spacecraft

    Science.gov (United States)

    Mengshoel, Ole Jakob; Poll, Scott; Kurtoglu, Tolga

    2009-01-01

    This CD contains files that support the talk (see CASI ID 20100021404). There are 24 models that relate to the ADAPT system and 1 Excel worksheet. In the paper an investigation into the use of Bayesian networks to construct large-scale diagnostic systems is described. The high-level specifications, Bayesian networks, clique trees, and arithmetic circuits representing 24 different electrical power systems are described in the talk. The data in the CD are the models of the 24 different power systems.
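
    The diagnostic use of such networks can be illustrated with a hand-rolled two-node example, component health conditioning a sensor reading (the probabilities are invented and this is not one of the 24 ADAPT models).

```python
# Hand-rolled sketch of Bayesian diagnosis: infer P(component healthy | sensor reading)
# for a two-node network. Probabilities are invented, not taken from the ADAPT models.
P_healthy = 0.99                                  # prior on component health
P_reading_ok = {True: 0.98, False: 0.10}          # P(sensor reads nominal | healthy?)

def posterior_healthy(reading_ok: bool) -> float:
    likelihood = {h: (P_reading_ok[h] if reading_ok else 1 - P_reading_ok[h])
                  for h in (True, False)}
    joint_h = likelihood[True] * P_healthy
    joint_f = likelihood[False] * (1 - P_healthy)
    return joint_h / (joint_h + joint_f)          # Bayes' rule

print("P(healthy | nominal reading)     =", round(posterior_healthy(True), 4))
print("P(healthy | off-nominal reading) =", round(posterior_healthy(False), 4))
```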

  17. An Architecture, System Engineering, and Acquisition Approach for Space System Software Resiliency

    Science.gov (United States)

    Phillips, Dewanne Marie

    Software intensive space systems can harbor defects and vulnerabilities that may enable external adversaries or malicious insiders to disrupt or disable system functions, risking mission compromise or loss. Mitigating this risk demands a sustained focus on the security and resiliency of the system architecture including software, hardware, and other components. Robust software engineering practices contribute to the foundation of a resilient system so that the system "can take a hit to a critical component and recover in a known, bounded, and generally acceptable period of time". Software resiliency must be a priority and addressed early in the life cycle development to contribute a secure and dependable space system. Those who develop, implement, and operate software intensive space systems must determine the factors and systems engineering practices to address when investing in software resiliency. This dissertation offers methodical approaches for improving space system resiliency through software architecture design, system engineering, increased software security, thereby reducing the risk of latent software defects and vulnerabilities. By providing greater attention to the early life cycle phases of development, we can alter the engineering process to help detect, eliminate, and avoid vulnerabilities before space systems are delivered. To achieve this objective, this dissertation will identify knowledge, techniques, and tools that engineers and managers can utilize to help them recognize how vulnerabilities are produced and discovered so that they can learn to circumvent them in future efforts. We conducted a systematic review of existing architectural practices, standards, security and coding practices, various threats, defects, and vulnerabilities that impact space systems from hundreds of relevant publications and interviews of subject matter experts. We expanded on the system-level body of knowledge for resiliency and identified a new software

  18. Challenges in analysing and visualizing large-scale molecular dynamics simulations: domain and defect formation in lung surfactant monolayers

    International Nuclear Information System (INIS)

    Mendez-Villuendas, E; Baoukina, S; Tieleman, D P

    2012-01-01

    Molecular dynamics simulations have rapidly grown in size and complexity as computers have become more powerful and molecular dynamics software more efficient. Using coarse-grained models like MARTINI, system sizes of the order of 50 nm × 50 nm × 50 nm can be simulated on commodity clusters on microsecond time scales. For simulations of biological membranes and of monolayers mimicking lung surfactant, this enables large-scale transformations and complex mixtures of lipids and proteins. Here we use a simulation of a monolayer with three phospholipid components, cholesterol, lung surfactant proteins, water, and ions on a ten-microsecond time scale to illustrate some current challenges in analysis. In the simulation, phase separation occurs, followed by the formation of a bilayer fold in which lipids and lung surfactant protein form a highly curved structure in the aqueous phase. We use Voronoi analysis to obtain detailed physical properties of the different components and phases, and calculate local mean and Gaussian curvatures of the bilayer fold.
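
    The Voronoi analysis is not spelled out in the record; the sketch below illustrates the general idea, estimating a per-lipid area from 2D head-group positions with SciPy's Voronoi tessellation and the shoelace formula (the random coordinates, the flat 2D projection, and the handling of edge cells are simplifying assumptions, not the authors' actual pipeline, which would also account for periodic boundaries and the curved fold).

        import numpy as np
        from scipy.spatial import Voronoi

        def polygon_area(vertices):
            """Area of a convex 2D polygon: order vertices by angle, then shoelace."""
            centered = vertices - vertices.mean(axis=0)
            order = np.argsort(np.arctan2(centered[:, 1], centered[:, 0]))
            x, y = vertices[order, 0], vertices[order, 1]
            return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

        def per_lipid_areas(xy):
            """Voronoi cell area for each 2D head-group position.

            Cells that extend to infinity (at the edge of the patch) get NaN;
            a periodic monolayer would instead be tiled with its box images."""
            vor = Voronoi(xy)
            areas = np.full(len(xy), np.nan)
            for i, region_index in enumerate(vor.point_region):
                region = vor.regions[region_index]
                if region and -1 not in region:          # keep finite cells only
                    areas[i] = polygon_area(vor.vertices[region])
            return areas

        # Illustrative use on random positions standing in for lipid head groups.
        rng = np.random.default_rng(0)
        positions = rng.uniform(0.0, 50.0, size=(200, 2))   # toy 50 nm x 50 nm patch
        areas = per_lipid_areas(positions)
        print(f"mean area per lipid (interior cells): {np.nanmean(areas):.2f} nm^2")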

  19. Large-Scale Medical Image Analytics: Recent Methodologies, Applications and Future Directions.

    Science.gov (United States)

    Zhang, Shaoting; Metaxas, Dimitris

    2016-10-01

    Despite the ever-increasing amount and complexity of annotated medical image data, the development of large-scale medical image analysis algorithms has not kept pace with the need for methods that bridge the semantic gap between images and diagnoses. The goal of this position paper is to discuss and explore innovative, large-scale data science techniques in medical image analytics that will benefit clinical decision-making and facilitate efficient medical data management. In particular, we advocate that the scale of image retrieval systems be increased significantly, to the point at which interactive systems become effective for knowledge discovery in potentially large databases of medical images. For clinical relevance, such systems should return results in real time, incorporate expert feedback, and be able to cope with the size, quality, and variety of the medical images and their associated metadata for a particular domain. The design, development, and testing of such a framework can significantly impact interactive mining in medical image databases that are growing rapidly in size and complexity, and can enable novel methods of analysis at much larger scales in an efficient, integrated fashion. Copyright © 2016. Published by Elsevier B.V.
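
    The position paper proposes directions rather than a specific algorithm; as a minimal sketch of the retrieval step being advocated, the example below ranks a database of image feature vectors against a query by cosine similarity (the feature dimensionality, database size, and brute-force search are illustrative assumptions; a system at the scale argued for here would use approximate nearest-neighbour indexing and fold expert feedback back into the ranking).

        import numpy as np

        def retrieve(query_feature, database_features, k=5):
            """Return indices of the k database images whose feature vectors
            are closest to the query (cosine similarity, brute force)."""
            db = database_features / np.linalg.norm(database_features, axis=1, keepdims=True)
            q = query_feature / np.linalg.norm(query_feature)
            scores = db @ q
            return np.argsort(scores)[::-1][:k]

        # Toy example: 10,000 images described by 128-dimensional feature vectors.
        rng = np.random.default_rng(1)
        features = rng.normal(size=(10_000, 128)).astype(np.float32)
        query = features[42] + 0.1 * rng.normal(size=128)   # a slightly perturbed database image
        print("top matches:", retrieve(query, features))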

  20. Quantum Monte Carlo for large chemical systems: implementing efficient strategies for petascale platforms and beyond

    International Nuclear Information System (INIS)

    Scemama, Anthony; Caffarel, Michel; Oseret, Emmanuel; Jalby, William

    2013-01-01

    Various strategies for implementing quantum Monte Carlo (QMC) simulations efficiently for large chemical systems are presented. These include: (i) an efficient algorithm for calculating the computationally expensive Slater matrices, a novel scheme based on exploiting the highly localized character of the atomic Gaussian basis functions (rather than the molecular orbitals, as is usually done); (ii) the possibility of keeping the memory footprint minimal; (iii) a substantial enhancement of single-core performance when efficient optimization tools are used; and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC-Chem code developed at Toulouse and are illustrated with numerical applications on small peptides of increasing size (158, 434, 1056, and 1731 electrons). Using 10,000-80,000 computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC-Chem has been shown to be capable of running at the petascale level, demonstrating that a large part of this machine's peak performance can be achieved. Implementing large-scale QMC simulations on future exascale platforms with a comparable level of efficiency is expected to be feasible. (authors)
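
    The record only names the idea of exploiting the locality of the atomic Gaussian basis functions; the Python sketch below illustrates the principle at a toy level, skipping basis functions whose centre lies beyond a cutoff radius of each electron so that every Slater-matrix row is assembled from a short list of contributions (the cutoff, the s-type-only basis, and the random orbital coefficients are invented for illustration and are not the QMC-Chem algorithm).

        import numpy as np

        def slater_matrix_localized(electrons, centers, exponents, mo_coeffs, r_cut=6.0):
            """Toy Slater-matrix builder exploiting basis-function locality.

            electrons : (N, 3) electron positions
            centers   : (M, 3) Gaussian basis-function centres
            exponents : (M,)   Gaussian exponents
            mo_coeffs : (M, N) molecular-orbital coefficients over the basis
            A basis function is simply skipped when its centre is farther than
            r_cut from the electron, so each row is built from a short list of
            contributing functions instead of all M of them."""
            n_elec = len(electrons)
            S = np.zeros((n_elec, n_elec))
            for i, r in enumerate(electrons):
                d2 = np.sum((centers - r) ** 2, axis=1)
                near = np.where(d2 < r_cut ** 2)[0]                 # locality: few survive
                basis_vals = np.exp(-exponents[near] * d2[near])    # s-type Gaussians only
                S[i, :] = basis_vals @ mo_coeffs[near, :]
            return S

        # Toy system: 20 electrons, 200 basis functions spread over a 20-unit box.
        rng = np.random.default_rng(2)
        centers = rng.uniform(0, 20, size=(200, 3))
        electrons = rng.uniform(0, 20, size=(20, 3))
        exponents = rng.uniform(0.5, 2.0, size=200)
        mo_coeffs = rng.normal(size=(200, 20))
        print(slater_matrix_localized(electrons, centers, exponents, mo_coeffs).shape)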